Search Results

Search found 42428 results on 1698 pages for 'database query'.


  • Display database content which is last inserted using last inserted ID

    - by user2330772
    I am displaying the last inserted data after the form is submitted; the form is multipart/form-data. I collect the form data with jQuery and send it to a PHP file using an Ajax POST. In that PHP file I insert the data into a db table and get the ID of the inserted row. On success of the Ajax call I send that ID to another PHP file, which uses it to display the last inserted data.
    My form is: <form method="post" enctype="multipart/form-data" name="upload_form" id="data"> <select id="sel"> <option>Select the Project Stream</option> <option value="1">Computer Science</option> <option value="2">Mechanical</option> <option value="3">IT</option> <option value="4">Web Development</option> <option value="5">MCA</option> <option value="6">Civil</option> </select><br /> <input type="text" id="title" placeholder="Project Title"/><br /> <input type="text" id="vurl" placeholder="If You have any video about project write your video url path here" style="width:435px;"/><br /> <textarea id="prjdesc" name="prjdesc" rows="20" cols="80" style="border-style:groove;box-shadow: 10px 10px 10px 10px #888888;"placeholder="Please describe Your Project"></textarea> <label for="file">Filename:</label> <input type="file" name="file" id="file"/><br /> <button>Submit</button> </form>
    My js file: $("form#data").submit(function() { alert("update"); var sid=$("#sel").val(); alert(sid); var ttle = $("#title").val(); alert(ttle); var text = $("#prjdesc").val(); var vurl = $("#vurl").val(); /*var dataString = 'param='+text+'&param1='+vurl+'&param2='+ttle+'&param3='+id;*/ var formData = new FormData($(this)[0]); formData.append('param',text); formData.append('param1',vurl); formData.append('param2',ttle ); formData.append('param3',sid ); $.ajax({ type:'POST', data:formData, url:'insert.php', success:function(id) { alert(id); window.location ="another.php?id="+id; }, cache: false, contentType: false, processData: false }); return false; });
    insert.php: <?php print_r($_FILES); $desc = $_POST['param']; echo $desc; $video = $_POST['param1']; echo $video ; $title = $_POST['param2']; echo $title; $tech_id=$_POST['param3']; echo $tech_id; $host="localhost"; $username="root"; $password=""; $db_name="geny"; $tbl_name="project_details"; mysql_connect("$host", "$username", "$password")or die("cannot connect"); mysql_select_db("$db_name")or die("cannot select DB"); $allowedExts = array("gif", "jpeg", "jpg", "png"); $extension = end(explode(".", $_FILES["file"]["name"])); $url_dir = "C:/wamp/www/WebsiteTemplate4/upload/"; if ((($_FILES["file"]["type"] == "image/gif") || ($_FILES["file"]["type"] == "image/jpeg") || ($_FILES["file"]["type"] == "image/jpg") || ($_FILES["file"]["type"] == "image/pjpeg") || ($_FILES["file"]["type"] == "image/x-png") || ($_FILES["file"]["type"] == "image/png")) && ($_FILES["file"]["size"] < 50000) && in_array($extension, $allowedExts)) { if ($_FILES["file"]["error"] > 0) { echo "Return Code: " . $_FILES["file"]["error"] . "<br>"; } else { if (file_exists($url_dir . $_FILES["file"]["name"])) { echo $_FILES["file"]["name"] . " already exists. "; } else { move_uploaded_file($_FILES["file"]["tmp_name"],$url_dir. $_FILES["file"]["name"]); // echo "Stored in: " . "upload/" . $_FILES["file"]["name"];
    $tmp = "C:/wamp/www/WebsiteTemplate4/upload/" . $_FILES["file"]["name"]; $sql="INSERT INTO $tbl_name (title, content, img_path, video_url, project_tech_Id) VALUES ('$title','$desc','$tmp','$video','$tech_id')"; if(mysql_query($sql)) { echo mysql_insert_id(); } else { echo "Cannot Insert"; } } } } else { echo "Invalid file"; } ?>
    another.php: <?php $temp=$_GET['id']; echo $temp; $host="localhost"; $username="root"; $password=""; $db_name="geny"; $tbl_name="project_details"; mysql_connect("$host", "$username", "$password")or die("cannot connect"); mysql_select_db("$db_name")or die("cannot select DB"); $query = mysql_query("SELECT content FROM project_details WHERE id=". $temp); if (!$query) { echo 'Could not run query: ' . mysql_error(); exit; } $row = mysql_fetch_row($query); echo "<div id='uprjct' style='background:#336699;'> <p>$row[0]</p> </div>"; ?>
    But the ID that insert.php returns contains an array of elements; I don't want all of that, I want only the ID (which is a number). Please tell me what is wrong in my code.
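
    For reference, PHP's mysql_insert_id() corresponds to MySQL's LAST_INSERT_ID(), which returns only the numeric AUTO_INCREMENT value generated by the most recent insert on the current connection. A minimal SQL sketch using the project_details table from the post (the literal values are placeholders only):

        -- Assumes project_details has an AUTO_INCREMENT primary key named id
        INSERT INTO project_details (title, content, img_path, video_url, project_tech_Id)
        VALUES ('A title', 'A description', 'C:/tmp/img.png', 'http://example.com/video', 1);

        -- Returns only the numeric id generated by the INSERT above;
        -- it is scoped to the current connection, so other clients' inserts do not interfere
        SELECT LAST_INSERT_ID();

        -- The freshly inserted row can then be fetched by that id
        SELECT content FROM project_details WHERE id = LAST_INSERT_ID();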

    Read the article

  • Oracle on Windows / .NET (December 2010)

    - by Yusuke.Yamamoto
    Oracle Database on Windows Server / .NET: a roundup of Japanese-language seminars (December-January) and resources on running Oracle Database with Windows Server and .NET. The points that survive the character-encoding damage: Oracle Database is not only a Linux / UNIX product and has a large installed base as an RDBMS on Windows; it integrates with Windows Server features such as Active Directory, MSCS and VSS; there is seminar material on installing Oracle Database 11g on Windows Server 2008 and on building .NET applications against Oracle Database 11g with Oracle Data Provider for .NET. Linked resources include the Oracle Database 11g Release 2 downloads and documentation for Windows, the OTN Windows Server System Center pages, material on migrating from SQL Server to Oracle Database, the Oracle Data Access Components (ODAC) downloads, the OTN .NET Developer Center, guidance on using Oracle Database from Visual Studio, and the Oracle Direct Seminar schedule (also available as an RSS feed).

    Read the article

  • Oracle Open World starts on Sunday, Sept 30

    - by Mike Dietrich
    Oracle Open World 2012 starts on Sunday this week - and we are really looking forward to seeing you in one of our presentations, especially the "Database Upgrade on Steroids: Real Speed, Real Customers, Real Secrets" session on Monday, Oct 1, 12:15pm in Moscone South 307 (just skip the lunch - the boxed food is not healthy at all):
    Monday, Oct 1, 12:15 PM - 1:15 PM - Moscone South - 307: Database Upgrade on Steroids: Real Speed, Real Customers, Real Secrets. Mike Dietrich - Consulting Member Technical Staff, Oracle; Georg Winkens - Technical Manager, Amadeus Data Processing; Carol Tagliaferri - Senior Development Manager, Oracle. Looking to improve the performance of your database upgrade and learn about other ways to reduce upgrade time? Isn’t everyone? In this session, you will learn directly from Oracle’s Upgrade Development team about what you can do to speed things up. Find out about ways to reduce upgrade downtime such as using a transient logical standby database and/or Oracle GoldenGate, and get other hints and tips. Learn about new features that improve upgrade performance and reduce downtime. Hear Georg Winkens, DB Services technical manager from Amadeus, speak about his upgrade experience, and get real-life performance measurements and advice for a successful upgrade.
    And don't forget: we already start on Sunday, so if you'd like to learn about the SAP database upgrades at Deutsche Messe:
    Sunday, Sep 30, 11:15 AM - 12:00 PM - Moscone West - 2001: Oracle Database Upgrade to 11g Release 2 with SAP Applications. Andreas Ellerhoff - DBA, Deutsche Messe AG; Mike Dietrich - Consulting Member Technical Staff, Oracle; Jan Klokkers - Sr. Director SAP Development, Oracle. Deutsche Messe began to use Oracle6 Database at the end of the 1980s and has been using Oracle Database technology together with SAP applications successfully since 2002. At the end of 2010, it took the first steps of an upgrade to Oracle Database 11g Release 2 (11.2), and since mid-2011, all SAP production systems there run successfully with Oracle Database 11g. This presentation explains why Deutsche Messe uses Oracle Database together with SAP applications, discusses the many reasons for the upgrade to Release 11g, and focuses on the top operational aspects from a DBA perspective.
    And unfortunately the Hands-On Lab is sold out already ... We would like to apologize, but we have absolutely ZERO influence on either the number of runs or the number of available seats.
    Tuesday, Oct 2, 10:15 AM - 12:45 PM - Marriott Marquis - Salon 12/13: Hands On Lab: Upgrading an Oracle Database Instance, Using Best Practices. Roy Swonger - Senior Director, Software Development, Oracle; Carol Tagliaferri - Senior Development Manager, Oracle; Mike Dietrich - Consulting Member Technical Staff, Oracle; Cindy Lim - PMTS, Oracle; Carol Palmer - Principal Product Manager, Oracle. This hands-on lab gives participants the opportunity to work through a database upgrade from an older release of Oracle Database to the very latest Oracle Database release available. Participants will learn how the improved automation of the upgrade process and the generation of fix-up scripts can quickly help fix database issues prior to upgrading. The lab also uses the new parallel upgrade feature to improve performance of the upgrade, resulting in less downtime. Come get inside information about database upgrades from the Database Upgrade development team.
    See you soon

    Read the article

  • New channels for Exadata 11.2.3.1.1

    - by Rene Kundersma
    With the release of Exadata 11.2.3.1.0 back in April 2012, Oracle deprecated the minimal pack for the Exadata Database Servers (compute nodes). From that release on, the Linux Database Server updates are done using ULN and YUM. For the 11.2.3.1.0 release the ULN exadata_dbserver_11.2.3.1.0_x86_64_base channel was made available, and Exadata operators could subscribe their system to it via linux.oracle.com. With the new 11.2.3.1.1 release two additional channels are added: a 'latest' channel (exadata_dbserver_11.2_x86_64_latest) and a 'patch' channel (exadata_dbserver_11.2_x86_64_patch). The patch channel contains the packages that are new or updated in 11.2.3.1.1 relative to the base channel. The latest channel has all the packages from the 11.2.3.1.0 base and patch channels combined.
    There are three possible situations a Database Server can be in before it can be updated to 11.2.3.1.1: (1) the Database Server is on an Exadata release < 11.2.3.1.0, (2) the Database Server is patched to 11.2.3.1.0, or (3) the Database Server is freshly imaged to 11.2.3.1.0. In all three cases the same approach for updating can be used (using YUM), but there are some minor differences.
    For Database Servers on a release < 11.2.3.1.0, the high-level steps are: subscribe to el5_x86_64_addons, ol5_x86_64_latest and exadata_dbserver_11.2_x86_64_latest; create a local repository; point the Database Server to the local repository*; install the update (* during this process a one-time action needs to be done - details in the README). For Database Servers patched to 11.2.3.1.0: subscribe to the patch channel exadata_dbserver_11.2_x86_64_patch; create a local repository; point the Database Server to the local repository; update the system. For Database Servers freshly imaged to 11.2.3.1.0: subscribe to the patch channel exadata_dbserver_11.2_x86_64_patch; create a local repository; point the Database Server to the local repository; update the system.
    The difference between situation 2 (Database Server patched to 11.2.3.1.0) and situation 3 (Database Server freshly imaged to 11.2.3.1.0) is that in situation 2 the existing Exadata-computenode.repo file needs to be edited, while in situation 3 this file does not exist and needs to be created or copied. Another difference is that you will end up with more OFA packages installed in situation 2, because none are removed during the updating process.
    The YUM update functionality with the new channels is a great enhancement to the Database Server update procedure. As usual, the updates can be done in a rolling fashion, so no database service downtime is required. For detailed and up-to-date instructions always see the patch READMEs: 1466459.1, patch 13998727, 888828.1. Rene Kundersma

    Read the article

  • Problems with DNS propagation 10 days after a change was made

    - by runlevel6
    The engineering team I work with has been in the process of moving equipment from one datacenter to another. Ten days ago we moved one of our name servers authoritative for our client's domains (ns1.faithhiway.com) and updated its IP address with its respective DNS provider (register.com) to point to the new datacenter. All tests done show that this name server is correctly running at its new location and when queried, returning the correct response for any domains it is responsible for. The problem is that well after 72 hours had gone by we were still seeing more DNS activity at its old IP address than at the new. The good news is that we kept a name server responding on the old IP address for the time being so we are not seeing any issues with the domains our nameserver is responsible for but the goal is to retire that as soon as possible. As you can see from WhatsMyDNS.net, a decent amount of propagation has occurred over the last 10 days since we made this change, but still there are some locations reporting our original IP. Considering that the TTL is only 3600 with the name servers responsible for this domain, it does not make any sense to myself or the other engineers working with me that we are having this issue. Now if I run a DNS check using one of the Register.com DNS servers (direct nameservers for faithhiway.com), I get the following (correct) result: # dig @dns01.gpn.register.com ns1.faithhiway.com A ; <<>> DiG 9.3.6-P1-RedHat-9.3.6-4.P1.el5_5.3 <<>> @dns01.gpn.register.com. ns1.faithhiway.com A ; (1 server found) ;; global options: printcmd ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 43232 ;; flags: qr aa; QUERY: 1, ANSWER: 1, AUTHORITY: 5, ADDITIONAL: 5 ;; QUESTION SECTION: ;ns1.faithhiway.com. IN A ;; ANSWER SECTION: ns1.faithhiway.com. 3601 IN A 206.127.2.71 ;; AUTHORITY SECTION: faithhiway.com. 3600 IN NS dns01.gpn.register.com. faithhiway.com. 3600 IN NS dns02.gpn.register.com. faithhiway.com. 3600 IN NS dns03.gpn.register.com. faithhiway.com. 3600 IN NS dns04.gpn.register.com. faithhiway.com. 3600 IN NS dns05.gpn.register.com. ;; ADDITIONAL SECTION: dns01.gpn.register.com. 3600 IN A 98.124.192.1 dns02.gpn.register.com. 3600 IN A 98.124.197.1 dns03.gpn.register.com. 3600 IN A 98.124.193.1 dns04.gpn.register.com. 3600 IN A 69.64.145.225 dns05.gpn.register.com. 3600 IN A 98.124.196.1 ;; Query time: 50 msec ;; SERVER: 98.124.192.1#53(98.124.192.1) ;; WHEN: Thu Jan 27 15:16:57 2011 ;; MSG SIZE rcvd: 269 Just as a reference, here are the results when the same query is checked against a variety of Public DNS servers: Google: # dig @8.8.8.8 ns1.faithhiway.com A ; <<>> DiG 9.3.6-P1-RedHat-9.3.6-4.P1.el5_5.3 <<>> @8.8.8.8. ns1.faithhiway.com A ; (1 server found) ;; global options: printcmd ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 12773 ;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0 ;; QUESTION SECTION: ;ns1.faithhiway.com. IN A ;; ANSWER SECTION: ns1.faithhiway.com. 997 IN A 206.127.2.71 ;; Query time: 29 msec ;; SERVER: 8.8.8.8#53(8.8.8.8) ;; WHEN: Thu Jan 27 15:17:31 2011 ;; MSG SIZE rcvd: 52 Level 3: # dig @4.2.2.1 ns1.faithhiway.com A ; <<>> DiG 9.3.6-P1-RedHat-9.3.6-4.P1.el5_5.3 <<>> @4.2.2.1. ns1.faithhiway.com A ; (1 server found) ;; global options: printcmd ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 46505 ;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0 ;; QUESTION SECTION: ;ns1.faithhiway.com. IN A ;; ANSWER SECTION: ns1.faithhiway.com. 
2623 IN A 206.127.2.71 ;; Query time: 7 msec ;; SERVER: 4.2.2.1#53(4.2.2.1) ;; WHEN: Thu Jan 27 15:18:35 2011 ;; MSG SIZE rcvd: 52 Verizon: # dig @151.197.0.38 ns1.faithhiway.com A ; <<>> DiG 9.3.6-P1-RedHat-9.3.6-4.P1.el5_5.3 <<>> @151.197.0.38. ns1.faithhiway.com A ; (1 server found) ;; global options: printcmd ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 32658 ;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0 ;; QUESTION SECTION: ;ns1.faithhiway.com. IN A ;; ANSWER SECTION: ns1.faithhiway.com. 3601 IN A 206.127.2.71 ;; Query time: 81 msec ;; SERVER: 151.197.0.38#53(151.197.0.38) ;; WHEN: Thu Jan 27 15:19:15 2011 ;; MSG SIZE rcvd: 52 Cisco: # dig @64.102.255.44 ns1.faithhiway.com A ; <<>> DiG 9.3.6-P1-RedHat-9.3.6-4.P1.el5_5.3 <<>> @64.102.255.44. ns1.faithhiway.com A ; (1 server found) ;; global options: printcmd ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 39689 ;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 5, ADDITIONAL: 0 ;; QUESTION SECTION: ;ns1.faithhiway.com. IN A ;; ANSWER SECTION: ns1.faithhiway.com. 3601 IN A 206.127.2.71 ;; AUTHORITY SECTION: faithhiway.com. 3600 IN NS dns01.gpn.register.com. faithhiway.com. 3600 IN NS dns04.gpn.register.com. faithhiway.com. 3600 IN NS dns05.gpn.register.com. faithhiway.com. 3600 IN NS dns02.gpn.register.com. faithhiway.com. 3600 IN NS dns03.gpn.register.com. ;; Query time: 105 msec ;; SERVER: 64.102.255.44#53(64.102.255.44) ;; WHEN: Thu Jan 27 15:20:05 2011 ;; MSG SIZE rcvd: 165 OpenDNS: # dig @208.67.222.222 ns1.faithhiway.com A ; <<>> DiG 9.3.6-P1-RedHat-9.3.6-4.P1.el5_5.3 <<>> @208.67.222.222. ns1.faithhiway.com A ; (1 server found) ;; global options: printcmd ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 12328 ;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0 ;; QUESTION SECTION: ;ns1.faithhiway.com. IN A ;; ANSWER SECTION: ns1.faithhiway.com. 169507 IN A 207.200.19.162 ;; Query time: 6 msec ;; SERVER: 208.67.222.222#53(208.67.222.222) ;; WHEN: Thu Jan 27 15:19:29 2011 ;; MSG SIZE rcvd: 52 SpeakEasy: # dig @66.93.87.2 ns1.faithhiway.com A ; <<>> DiG 9.3.6-P1-RedHat-9.3.6-4.P1.el5_5.3 <<>> @66.93.87.2. ns1.faithhiway.com A ; (1 server found) ;; global options: printcmd ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 9342 ;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0 ;; QUESTION SECTION: ;ns1.faithhiway.com. IN A ;; ANSWER SECTION: ns1.faithhiway.com. 169323 IN A 207.200.19.162 ;; Query time: 69 msec ;; SERVER: 66.93.87.2#53(66.93.87.2) ;; WHEN: Thu Jan 27 15:19:51 2011 ;; MSG SIZE rcvd: 52 As you can see above, the majority of queries are returning the correct result. But a few (OpenDNS and SpeakEasy in the examples above) are still showing the old IP address. Considering the length of time that has gone by, it seems obvious to me that either we have made a mistake and not thoroughly handled the DNS changes on our end (likely) or there is a problem with either the DNS provider for this domain (Register) or with some of the DNS servers out in the wild (rather unlikely). Any advice on how I can proceed with this? UPDATE (January 31, 2011): First of all, I apologize for the length of both the original question and this update. I contemplated removing some of the excess from the original post but just in case this problem and its solution are helpful to someone else in the future I'm just going to leave everything as it is. 
Anyway, I've been doing some more research into this problem, and have discovered the following interesting occurrence. While running a check on the glue records for faithhiway.com always resolve correctly, if I go and check a client domain (where ns1.faithhiway.com is authoritative), I get a strange response. It looks like the root servers are returning nsX.faithhiway.com as their old IP addresses still (under Additional Section). Because we have a server still there responding to DNS queries, the trace finishes and returns the correct IP addresses as the final step (again, under Additional Section). The example below uses one of the domains that we use that uses ns1.faithhiway.com as its authoritative DNS server. # dig +trace +nosearch +all +norecurse ignitemail.com ; <<>> DiG 9.2.4 <<>> +trace +nosearch +all +norecurse ignitemail.com ;; global options: printcmd ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 46856 ;; flags: qr ra; QUERY: 1, ANSWER: 13, AUTHORITY: 0, ADDITIONAL: 0 ;; QUESTION SECTION: ;. IN NS ;; ANSWER SECTION: . 7986 IN NS a.root-servers.net. . 7986 IN NS b.root-servers.net. . 7986 IN NS c.root-servers.net. . 7986 IN NS d.root-servers.net. . 7986 IN NS e.root-servers.net. . 7986 IN NS f.root-servers.net. . 7986 IN NS g.root-servers.net. . 7986 IN NS h.root-servers.net. . 7986 IN NS i.root-servers.net. . 7986 IN NS j.root-servers.net. . 7986 IN NS k.root-servers.net. . 7986 IN NS l.root-servers.net. . 7986 IN NS m.root-servers.net. ;; Query time: 39 msec ;; SERVER: 8.8.8.8#53(8.8.8.8) ;; WHEN: Mon Jan 31 09:22:17 2011 ;; MSG SIZE rcvd: 228 ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 16325 ;; flags: qr; QUERY: 1, ANSWER: 0, AUTHORITY: 13, ADDITIONAL: 14 ;; QUESTION SECTION: ;ignitemail.com. IN A ;; AUTHORITY SECTION: com. 172800 IN NS h.gtld-servers.net. com. 172800 IN NS m.gtld-servers.net. com. 172800 IN NS i.gtld-servers.net. com. 172800 IN NS l.gtld-servers.net. com. 172800 IN NS c.gtld-servers.net. com. 172800 IN NS k.gtld-servers.net. com. 172800 IN NS d.gtld-servers.net. com. 172800 IN NS f.gtld-servers.net. com. 172800 IN NS b.gtld-servers.net. com. 172800 IN NS a.gtld-servers.net. com. 172800 IN NS e.gtld-servers.net. com. 172800 IN NS g.gtld-servers.net. com. 172800 IN NS j.gtld-servers.net. ;; ADDITIONAL SECTION: a.gtld-servers.net. 172800 IN A 192.5.6.30 a.gtld-servers.net. 172800 IN AAAA 2001:503:a83e::2:30 b.gtld-servers.net. 172800 IN A 192.33.14.30 b.gtld-servers.net. 172800 IN AAAA 2001:503:231d::2:30 c.gtld-servers.net. 172800 IN A 192.26.92.30 d.gtld-servers.net. 172800 IN A 192.31.80.30 e.gtld-servers.net. 172800 IN A 192.12.94.30 f.gtld-servers.net. 172800 IN A 192.35.51.30 g.gtld-servers.net. 172800 IN A 192.42.93.30 h.gtld-servers.net. 172800 IN A 192.54.112.30 i.gtld-servers.net. 172800 IN A 192.43.172.30 j.gtld-servers.net. 172800 IN A 192.48.79.30 k.gtld-servers.net. 172800 IN A 192.52.178.30 l.gtld-servers.net. 172800 IN A 192.41.162.30 ;; Query time: 64 msec ;; SERVER: 198.41.0.4#53(a.root-servers.net) ;; WHEN: Mon Jan 31 09:22:17 2011 ;; MSG SIZE rcvd: 504 ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 12860 ;; flags: qr; QUERY: 1, ANSWER: 0, AUTHORITY: 2, ADDITIONAL: 2 ;; QUESTION SECTION: ;ignitemail.com. IN A ;; AUTHORITY SECTION: ignitemail.com. 172800 IN NS ns1.faithhiway.com. ignitemail.com. 172800 IN NS ns2.faithhiway.com. ;; ADDITIONAL SECTION: ns1.faithhiway.com. 172800 IN A 207.200.19.162 ns2.faithhiway.com. 
172800 IN A 207.200.50.142 ;; Query time: 152 msec ;; SERVER: 192.54.112.30#53(h.gtld-servers.net) ;; WHEN: Mon Jan 31 09:22:17 2011 ;; MSG SIZE rcvd: 111 ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 43016 ;; flags: qr aa; QUERY: 1, ANSWER: 1, AUTHORITY: 2, ADDITIONAL: 2 ;; QUESTION SECTION: ;ignitemail.com. IN A ;; ANSWER SECTION: ignitemail.com. 3600 IN A 206.127.2.64 ;; AUTHORITY SECTION: ignitemail.com. 3600 IN NS ns1.faithhiway.com. ignitemail.com. 3600 IN NS ns2.faithhiway.com. ;; ADDITIONAL SECTION: ns1.faithhiway.com. 3600 IN A 206.127.2.71 ns2.faithhiway.com. 3600 IN A 206.127.2.72 ;; Query time: 25 msec ;; SERVER: 206.127.2.71#53(ns1.faithhiway.com) ;; WHEN: Mon Jan 31 09:22:18 2011 ;; MSG SIZE rcvd: 127 I really think this is a problem we have somewhere in our setup, but whether it is ignorance of something with DNS on my or my fellow engineer's end or just a dumb mistake we made, I have yet to find it.

    Read the article

  • Assertion failure when trying to write (INSERT, UPDATE) to sqlite database on iPhone.

    - by Mark McFarlane
    I have a really frustrating error that I've spent hours looking at and cannot fix. I can get data from my db no problem with this code, but inserting or updating gives me these errors: *** Assertion failure in +[Functions db_insert_answer:question_type:score:], /Volumes/Xcode/Kanji/Classes/Functions.m:129 *** Terminating app due to uncaught exception 'NSInternalInconsistencyException', reason: 'Error inserting: db_insert_answer:question_type:score:' Here is the code I'm using: [Functions db_insert_answer:[[dict_question objectForKey:@"JISDec"] intValue] question_type:@"kanji_meaning" score:arc4random() % 100]; //update EF, Next_question, n here [Functions db_update_EF:[dict_question objectForKey:@"question"] EF:EF]; To call these functions: +(sqlite3_stmt *)db_query:(NSString *)queryText{ sqlite3 *database = [self get_db]; sqlite3_stmt *statement; NSLog(queryText); if (sqlite3_prepare_v2(database, [queryText UTF8String], -1, &statement, nil) == SQLITE_OK) { } else { NSLog(@"HMM, COULDNT RUN QUERY: %s\n", sqlite3_errmsg(database)); } sqlite3_close(database); return statement; } +(void)db_insert_answer:(int)obj_id question_type:(NSString *)question_type score:(int)score{ sqlite3 *database = [self get_db]; sqlite3_stmt *statement; char *errorMsg; char *update = "INSERT INTO Answers (obj_id, question_type, score, date) VALUES (?, ?, ?, DATE())"; if (sqlite3_prepare_v2(database, update, -1, &statement, nil) == SQLITE_OK) { sqlite3_bind_int(statement, 1, obj_id); sqlite3_bind_text(statement, 2, [question_type UTF8String], -1, NULL); sqlite3_bind_int(statement, 3, score); } if (sqlite3_step(statement) != SQLITE_DONE){ NSAssert1(0, @"Error inserting: %s", errorMsg); } sqlite3_finalize(statement); sqlite3_close(database); NSLog(@"Answer saved"); } +(void)db_update_EF:(NSString *)kanji EF:(int)EF{ sqlite3 *database = [self get_db]; sqlite3_stmt *statement; //NSLog(queryText); char *errorMsg; char *update = "UPDATE Kanji SET EF = ? WHERE Kanji = '?'"; if (sqlite3_prepare_v2(database, update, -1, &statement, nil) == SQLITE_OK) { sqlite3_bind_int(statement, 1, EF); sqlite3_bind_text(statement, 2, [kanji UTF8String], -1, NULL); } else { NSLog(@"HMM, COULDNT RUN QUERY: %s\n", sqlite3_errmsg(database)); } if (sqlite3_step(statement) != SQLITE_DONE){ NSAssert1(0, @"Error updating: %s", errorMsg); } sqlite3_finalize(statement); sqlite3_close(database); NSLog(@"Update saved"); } +(sqlite3 *)get_db{ sqlite3 *database; NSFileManager *fileManager = [NSFileManager defaultManager]; NSString *copyFrom = [[[NSBundle mainBundle] resourcePath] stringByAppendingPathComponent:@"/kanji_training.sqlite"]; if([fileManager fileExistsAtPath:[self dataFilePath]]) { //NSLog(@"DB FILE ALREADY EXISTS"); } else { [fileManager copyItemAtPath:copyFrom toPath:[self dataFilePath] error:nil]; NSLog(@"COPIED DB TO DOCUMENTS BECAUSE IT DIDNT EXIST: NEW INSTALL"); } if (sqlite3_open([[self dataFilePath] UTF8String], &database) != SQLITE_OK) { sqlite3_close(database); NSAssert(0, @"Failed to open database"); NSLog(@"FAILED TO OPEN DB"); } else { if([fileManager fileExistsAtPath:[self dataFilePath]]) { //NSLog(@"DB PATH:"); //NSLog([self dataFilePath]); } } return database; } + (NSString *)dataFilePath { NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES); NSString *documentsDirectory = [paths objectAtIndex:0]; return [documentsDirectory stringByAppendingPathComponent:@"kanji_training.sqlite"]; } I really can't work it out! Can anyone help me? Many thanks.
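
    One SQLite detail worth noting alongside the code above: a question mark inside quotes in the SQL text is a literal one-character string, not a bind parameter, so sqlite3_bind_text() has nothing to bind to in that position. A small sketch of the two statement texts with unquoted placeholders (table and column names taken from the post):

        -- Placeholders bind by position only when they are unquoted
        INSERT INTO Answers (obj_id, question_type, score, date) VALUES (?, ?, ?, DATE());

        -- WHERE Kanji = '?' would compare against the literal string '?';
        -- the unquoted form below lets sqlite3_bind_text() supply the value
        UPDATE Kanji SET EF = ? WHERE Kanji = ?;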

    Read the article

  • Solving Slow Query

    - by Chris
    We are installing a new forum (yaf) for our site. One of the stored procedures is extremely slow - in fact it always times out in the browser. If I run it in MSSMS it takes nearly 10 minutes to complete. Is there a way to find out what part of this query if taking so long? The Query: DECLARE @BoardID int DECLARE @UserID int DECLARE @CategoryID int = null DECLARE @ParentID int = null SET @BoardID = 1 SET @UserID = 2 select a.CategoryID, Category = a.Name, ForumID = b.ForumID, Forum = b.Name, Description, Topics = [dbo].[yaf_forum_topics](b.ForumID), Posts = [dbo].[yaf_forum_posts](b.ForumID), Subforums = [dbo].[yaf_forum_subforums](b.ForumID, @UserID), LastPosted = t.LastPosted, LastMessageID = t.LastMessageID, LastUserID = t.LastUserID, LastUser = IsNull(t.LastUserName,(select Name from [dbo].[yaf_User] x where x.UserID=t.LastUserID)), LastTopicID = t.TopicID, LastTopicName = t.Topic, b.Flags, Viewing = (select count(1) from [dbo].[yaf_Active] x JOIN [dbo].[yaf_User] usr ON x.UserID = usr.UserID where x.ForumID=b.ForumID AND usr.IsActiveExcluded = 0), b.RemoteURL, x.ReadAccess from [dbo].[yaf_Category] a join [dbo].[yaf_Forum] b on b.CategoryID=a.CategoryID join [dbo].[yaf_vaccess] x on x.ForumID=b.ForumID left outer join [dbo].[yaf_Topic] t ON t.TopicID = [dbo].[yaf_forum_lasttopic](b.ForumID,@UserID,b.LastTopicID,b.LastPosted) where a.BoardID = @BoardID and ((b.Flags & 2)=0 or x.ReadAccess<>0) and (@CategoryID is null or a.CategoryID=@CategoryID) and ((@ParentID is null and b.ParentID is null) or b.ParentID=@ParentID) and x.UserID = @UserID order by a.SortOrder, b.SortOrder IO Statistics: Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0. Table 'yaf_Active'. Scan count 14, logical reads 28, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0. Table 'yaf_User'. Scan count 0, logical reads 3, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0. Table 'yaf_Topic'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0. Table 'yaf_Category'. Scan count 0, logical reads 28, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0. Table 'yaf_Forum'. Scan count 0, logical reads 488, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0. Table 'yaf_UserGroup'. Scan count 231, logical reads 693, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0. Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0. Table 'yaf_ForumAccess'. Scan count 1, logical reads 2, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0. Table 'yaf_AccessMask'. Scan count 1, logical reads 2, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0. Table 'yaf_UserForum'. Scan count 1, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0. 
Client Statistics: Client Execution Time 11:54:01 Query Profile Statistics Number of INSERT, DELETE and UPDATE statements 0 0.0000 Rows affected by INSERT, DELETE, or UPDATE statements 0 0.0000 Number of SELECT statements 8 8.0000 Rows returned by SELECT statements 19 19.0000 Number of transactions 0 0.0000 Network Statistics Number of server roundtrips 3 3.0000 TDS packets sent from client 3 3.0000 TDS packets received from server 34 34.0000 Bytes sent from client 3166 3166.0000 Bytes received from server 128802 128802.0000 Time Statistics Client processing time 156478 156478.0000 Total execution time 572009 572009.0000 Wait time on server replies 415531 415531.0000 Execution Plan
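
    As a general way to see where the time goes in a batch like this (not specific to YAF), SQL Server can report per-statement CPU and elapsed time, per-table I/O, and the plan it actually used. A minimal T-SQL sketch; the DECLARE/SELECT batch from the post would be pasted where indicated:

        -- Report CPU/elapsed time and per-table logical reads for each statement
        SET STATISTICS TIME ON;
        SET STATISTICS IO ON;
        -- Text form of the actual execution plan (or use "Include Actual Execution Plan" in SSMS)
        SET STATISTICS PROFILE ON;

        -- ... paste the DECLARE / SELECT batch from the post here ...

        SET STATISTICS PROFILE OFF;
        SET STATISTICS IO OFF;
        SET STATISTICS TIME OFF;

    Note that time spent inside scalar functions such as dbo.yaf_forum_topics is largely hidden from the plan and the I/O statistics, so temporarily commenting out those computed columns one at a time is another quick way to isolate which part dominates.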

    Read the article

  • Sensible Way to Pass Web Data in XML to a SQL Server Database

    - by Emtucifor
    After exploring several different ways to pass web data to a database for update purposes, I'm wondering if XML might be a good strategy. The database is currently SQL 2000. In a few months it will move to SQL 2005 and I will be able to change things if needed, but I need a SQL 2000 solution now. First of all, the database in question uses the EAV model. I know that this kind of database is generally highly frowned on, so for the purposes of this question, please just accept that this is not going to change. The current update method has the web server inserting values (that have all been converted first to their correct underlying types, then to sql_variant) to a temp table. A stored procedure is then run which expects the temp table to exist and it takes care of updating, inserting, or deleting things as needed. So far, only a single element has needed to be updated at a time. But now, there is a requirement to be able to edit multiple elements at once, and also to support hierarchical elements, each of which can have its own list of attributes. Here's some example XML I hand-typed to demonstrate what I'm thinking of. Note that in this database the Entity is Element and an ID of 0 signifies "create" aka an insert of a new item. <Elements> <Element ID="1234"> <Attr ID="221">Value</Attr> <Attr ID="225">287</Attr> <Attr ID="234"> <Element ID="99825"> <Attr ID="7">Value1</Attr> <Attr ID="8">Value2</Attr> <Attr ID="9" Action="delete" /> </Element> <Element ID="99826" Action="delete" /> <Element ID="0" Type="24"> <Attr ID="7">Value4</Attr> <Attr ID="8">Value5</Attr> <Attr ID="9">Value6</Attr> </Element> <Element ID="0" Type="24"> <Attr ID="7">Value7</Attr> <Attr ID="8">Value8</Attr> <Attr ID="9">Value9</Attr> </Element> </Attr> <Rel ID="3827" Action="delete" /> <Rel ID="2284" Role="parent"> <Element ID="3827" /> <Element ID="3829" /> <Attr ID="665">1</Attr> </Rel> <Rel ID="0" Type="23" Role="child"> <Element ID="3830" /> <Attr ID="67" </Rel> </Element> <Element ID="0" Type="87"> <Attr ID="221">Value</Attr> <Attr ID="225">569</Attr> <Attr ID="234"> <Element ID="0" Type="24"> <Attr ID="7">Value10</Attr> <Attr ID="8">Value11</Attr> <Attr ID="9">Value12</Attr> </Element> </Attr> </Element> <Element ID="1235" Action="delete" /> </Elements> Some Attributes are straight value types, such as AttrID 221. But AttrID 234 is a special "multi-value" type that can have a list of elements underneath it, and each one can have one or more values. Types only need to be presented when a new item is created, since the ElementID fully implies the type if it already exists. I'll probably support only passing in changed items (as detected by javascript). And there may be an Action="Delete" on Attr elements as well, since NULLs are treated as "unselected"--sometimes it's very important to know if a Yes/No question has intentionally been answered No or if no one's bothered to say Yes yet. There is also a different kind of data, a Relationship. At this time, those are updated through individual AJAX calls as things are edited in the UI, but I'd like to include those so that changes to relationships can be canceled (right now, once you change it, it's done). So those are really elements, too, but they are called Rel instead of Element. Relationships are implemented as ElementID1 and ElementID2, so the RelID 2284 in the XML above is in the database as: ElementID 2284 ElementID1 1234 ElementID2 3827 Having multiple children in one relationship isn't currently supported, but it would be nice later. 
Does this strategy and the example XML make sense? Is there a more sensible way? I'm just looking for some broad critique to help save me from going down a bad path. Any aspect that you'd like to comment on would be helpful. The web language happens to be Classic ASP, but that could change to ASP.Net at some point. A persistence engine like Linq or nHibernate is probably not acceptable right now--I just want to get this already working application enhanced without a huge amount of development time. I'll choose the answer that shows experience and has a balance of good warnings about what not to do, confirmations of what I'm planning to do, and recommendations about something else to do. I'll make it as objective as possible. P.S. I'd like to handle unicode characters as well as very long strings (10k +). UPDATE I have had this working for some time and I used the ADO Recordset Save-To-Stream trick to make creating the XML really easy. The result seems to be fairly fast, though if speed ever becomes a problem I may revisit this. In the meantime, my code works to handle any number of elements and attributes on the page at once, including updating, deleting, and creating new items all in one go. I settled on a scheme like so for all my elements: Existing data elements Example: input name e12345_a678 (element 12345, attribute 678), the input value is the value of the attribute. New elements Javascript copies a hidden template of the set of HTML elements needed for the type into the correct location on the page, increments a counter to get a new ID for this item, and prepends the number to the names of the form items. var newid = 0; function metadataAdd(reference, nameid, value) { var t = document.createElement('input'); t.setAttribute('name', nameid); t.setAttribute('id', nameid); t.setAttribute('type', 'hidden'); t.setAttribute('value', value); reference.appendChild(t); } function multiAdd(target, parentelementid, attrid, elementtypeid) { var proto = document.getElementById('a' + attrid + '_proto'); var instance = document.createElement('p'); target.parentNode.parentNode.insertBefore(instance, target.parentNode); var thisid = ++newid; instance.innerHTML = proto.innerHTML.replace(/{prefix}/g, 'n' + thisid + '_'); instance.id = 'n' + thisid; instance.className += ' new'; metadataAdd(instance, 'n' + thisid + '_p', parentelementid); metadataAdd(instance, 'n' + thisid + '_c', attrid); metadataAdd(instance, 'n' + thisid + '_t', elementtypeid); return false; } Example: Template input name _a678 becomes n1_a678 (a new element, the first one on the page, attribute 678). all attributes of this new element are tagged with the same prefix of n1. The next new item will be n2, and so on. Some hidden form inputs are created: n1_t, value is the elementtype of the element to be created n1_p, value is the parent id of the element (if it is a relationship) n1_c, value is the child id of the element (if it is a relationship) Deleting elements A hidden input is created in the form e12345_t with value set to 0. The existing controls displaying that attribute's values are disabled so they are not included in the form post. So "set type to 0" is treated as delete. With this scheme, every item on the page has a unique name and can be distinguished properly, and every action can be represented properly. 
When the form is posted, here's a sample of building one of the two recordsets used (classic ASP code): Set Data = Server.CreateObject("ADODB.Recordset") Data.Fields.Append "ElementID", adInteger, 4, adFldKeyColumn Data.Fields.Append "AttrID", adInteger, 4, adFldKeyColumn Data.Fields.Append "Value", adLongVarWChar, 2147483647, adFldIsNullable Or adFldMayBeNull Data.CursorLocation = adUseClient Data.CursorType = adOpenDynamic Data.Open This is the recordset for values, the other is for the elements themselves. I step through the posted form and for the element recordset use a Scripting.Dictionary populated with instances of a custom Class that has the properties I need, so that I can add the values piecemeal, since they don't always come in order. New elements are added as negative to distinguish them from regular elements (rather than requiring a separate column to indicate if it is new or addresses an existing element). I use regular expression to tear apart the form keys: "^(e|n)([0-9]{1,10})_(a|p|t|c)([0-9]{0,10})$" Then, adding an attribute looks like this. Data.AddNew ElementID.Value = DataID AttrID.Value = Integerize(Matches(0).SubMatches(3)) AttrValue.Value = Request.Form(Key) Data.Update ElementID, AttrID, and AttrValue are references to the fields of the recordset. This method is hugely faster than using Data.Fields("ElementID").Value each time. I loop through the Dictionary of element updates and ignore any that don't have all the proper information, adding the good ones to the recordset. Then I call my data-updating stored procedure like so: Set Cmd = Server.CreateObject("ADODB.Command") With Cmd Set .ActiveConnection = MyDBConn .CommandType = adCmdStoredProc .CommandText = "DataPost" .Prepared = False .Parameters.Append .CreateParameter("@ElementMetadata", adLongVarWChar, adParamInput, 2147483647, XMLFromRecordset(Element)) .Parameters.Append .CreateParameter("@ElementData", adLongVarWChar, adParamInput, 2147483647, XMLFromRecordset(Data)) End With Result.Open Cmd ' previously created recordset object with options set Here's the function that does the xml conversion: Private Function XMLFromRecordset(Recordset) Dim Stream Set Stream = Server.CreateObject("ADODB.Stream") Stream.Open Recordset.Save Stream, adPersistXML Stream.Position = 0 XMLFromRecordset = Stream.ReadText End Function Just in case the web page needs to know, the SP returns a recordset of any new elements, showing their page value and their created value (so I can see that n1 is now e12346 for example). Here are some key snippets from the stored procedure. 
Note this is SQL 2000 for now, though I'll be able to switch to 2005 soon: CREATE PROCEDURE [dbo].[DataPost] @ElementMetaData ntext, @ElementData ntext AS DECLARE @hdoc int --- snip --- EXEC sp_xml_preparedocument @hdoc OUTPUT, @ElementMetaData, '<xml xmlns:s="uuid:BDC6E3F0-6DA3-11d1-A2A3-00AA00C14882" xmlns:dt="uuid:C2F41010-65B3-11d1-A29F-00AA00C14882" xmlns:rs="urn:schemas-microsoft-com:rowset" xmlns:z="#RowsetSchema" />' INSERT #ElementMetadata (ElementID, ElementTypeID, ElementID1, ElementID2) SELECT * FROM OPENXML(@hdoc, '/xml/rs:data/rs:insert/z:row', 0) WITH ( ElementID int, ElementTypeID int, ElementID1 int, ElementID2 int ) ORDER BY ElementID -- orders negative items (new elements) first so they begin counting at 1 for later ID calculation EXEC sp_xml_removedocument @hdoc --- snip --- UPDATE E SET E.ElementTypeID = M.ElementTypeID FROM Element E INNER JOIN #ElementMetadata M ON E.ElementID = M.ElementID WHERE E.ElementID >= 1 AND M.ElementTypeID >= 1 The following query does the correlation of the negative new element ids to the newly inserted ones: UPDATE #ElementMetadata -- Correlate the new ElementIDs with the input rows SET NewElementID = Scope_Identity() - @@RowCount + DataID WHERE ElementID < 0 Other set-based queries do all the other work of validating that the attributes are allowed, are the correct data type, and inserting, updating, and deleting elements and attributes. I hope this brief run-down is useful to others some day! Converting ADO Recordsets to an XML stream was a huge winner for me as it saved all sorts of time and had a namespace and schema already defined that made the results come out correctly. Using a flatter XML format with 2 inputs was also much easier than sticking to some ideal about having everything in a single XML stream.
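
    Since the post mentions moving to SQL Server 2005 soon, the OPENXML shredding above has a rough equivalent using the xml data type's nodes()/value() methods, which avoids the sp_xml_preparedocument document handles. A hedged sketch against the same ADO rowset XML, reusing the column and temp-table names from the post (untested against the actual schema):

        -- SQL Server 2005+ sketch: shred the persisted-XML rowset without OPENXML.
        -- Assumes #ElementMetadata already exists as in the original procedure.
        DECLARE @ElementMetaData xml;  -- parameter would be xml instead of ntext
        -- SET @ElementMetaData = ... (passed in as before)

        WITH XMLNAMESPACES ('urn:schemas-microsoft-com:rowset' AS rs,
                            '#RowsetSchema' AS z)
        INSERT INTO #ElementMetadata (ElementID, ElementTypeID, ElementID1, ElementID2)
        SELECT  x.r.value('@ElementID',     'int'),
                x.r.value('@ElementTypeID', 'int'),
                x.r.value('@ElementID1',    'int'),
                x.r.value('@ElementID2',    'int')
        FROM    @ElementMetaData.nodes('/xml/rs:data/rs:insert/z:row') AS x(r)
        ORDER BY x.r.value('@ElementID', 'int');  -- negative (new) ElementIDs first, as in the original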

    Read the article

  • Problem creating a database with PHP PDO

    - by Leandro Alonso
    Hello guys, I'm having a problem with a SQL query in my PHP Application. When the user access it for the first time, the app executes this query to create all the database: CREATE TABLE `databases` ( `id` bigint(20) NOT NULL auto_increment, `driver` varchar(45) NOT NULL, `server` text NOT NULL, `user` text NOT NULL, `password` text NOT NULL, `database` varchar(200) NOT NULL, PRIMARY KEY (`id`) ) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=2 ; -- -------------------------------------------------------- -- -- Table structure for table `modules` -- CREATE TABLE `modules` ( `id` bigint(20) unsigned NOT NULL auto_increment, `title` varchar(100) NOT NULL, `type` varchar(150) NOT NULL, PRIMARY KEY (`id`) ) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=29 ; -- -------------------------------------------------------- -- -- Table structure for table `modules_data` -- CREATE TABLE `modules_data` ( `id` bigint(20) NOT NULL auto_increment, `module_id` bigint(20) unsigned NOT NULL, `key` varchar(150) NOT NULL, `value` tinytext, PRIMARY KEY (`id`), KEY `fk_modules_data_modules` (`module_id`) ) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=184 ; -- -------------------------------------------------------- -- -- Table structure for table `modules_position` -- CREATE TABLE `modules_position` ( `user_id` bigint(20) unsigned NOT NULL, `tab_id` bigint(20) unsigned NOT NULL, `module_id` bigint(20) unsigned NOT NULL, `column` smallint(1) default NULL, `line` smallint(1) default NULL, PRIMARY KEY (`user_id`,`tab_id`,`module_id`), KEY `fk_modules_order_users` (`user_id`), KEY `fk_modules_order_tabs` (`tab_id`), KEY `fk_modules_order_modules` (`module_id`) ) ENGINE=InnoDB DEFAULT CHARSET=latin1; -- -------------------------------------------------------- -- -- Table structure for table `tabs` -- CREATE TABLE `tabs` ( `id` bigint(20) unsigned NOT NULL auto_increment, `title` varchar(60) NOT NULL, `columns` smallint(1) NOT NULL, PRIMARY KEY (`id`) ) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=12 ; -- -------------------------------------------------------- -- -- Table structure for table `tabs_has_modules` -- CREATE TABLE `tabs_has_modules` ( `tab_id` bigint(20) unsigned NOT NULL, `module_id` bigint(20) unsigned NOT NULL, PRIMARY KEY (`tab_id`,`module_id`), KEY `fk_tabs_has_modules_tabs` (`tab_id`), KEY `fk_tabs_has_modules_modules` (`module_id`) ) ENGINE=InnoDB DEFAULT CHARSET=latin1; -- -------------------------------------------------------- -- -- Table structure for table `users` -- CREATE TABLE `users` ( `id` bigint(20) unsigned NOT NULL auto_increment, `login` varchar(60) NOT NULL, `password` varchar(64) NOT NULL, `email` varchar(100) NOT NULL, `name` varchar(250) default NULL, `user_level` bigint(20) unsigned NOT NULL, PRIMARY KEY (`id`), KEY `fk_users_user_levels` (`user_level`) ) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=4 ; -- -------------------------------------------------------- -- -- Table structure for table `users_has_tabs` -- CREATE TABLE `users_has_tabs` ( `user_id` bigint(20) unsigned NOT NULL, `tab_id` bigint(20) unsigned NOT NULL, `order` smallint(2) NOT NULL, `columns_width` varchar(255) default NULL, PRIMARY KEY (`user_id`,`tab_id`), KEY `fk_users_has_tabs_users` (`user_id`), KEY `fk_users_has_tabs_tabs` (`tab_id`) ) ENGINE=InnoDB DEFAULT CHARSET=latin1; -- -------------------------------------------------------- -- -- Table structure for table `user_levels` -- CREATE TABLE `user_levels` ( `id` bigint(20) unsigned NOT NULL auto_increment, `level` 
smallint(2) NOT NULL, PRIMARY KEY (`id`) ) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=3 ; -- -------------------------------------------------------- -- -- Table structure for table `user_meta` -- CREATE TABLE `user_meta` ( `id` bigint(20) unsigned NOT NULL auto_increment, `user_id` bigint(20) unsigned default NULL, `key` varchar(150) NOT NULL, `value` longtext NOT NULL, PRIMARY KEY (`id`), KEY `fk_user_meta_users` (`user_id`) ) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=4 ; -- -- Constraints for dumped tables -- -- -- Constraints for table `modules_data` -- ALTER TABLE `modules_data` ADD CONSTRAINT `fk_modules_data_modules` FOREIGN KEY (`module_id`) REFERENCES `modules` (`id`) ON DELETE CASCADE ON UPDATE NO ACTION; -- -- Constraints for table `modules_position` -- ALTER TABLE `modules_position` ADD CONSTRAINT `fk_modules_order_modules` FOREIGN KEY (`module_id`) REFERENCES `modules` (`id`) ON DELETE CASCADE ON UPDATE NO ACTION, ADD CONSTRAINT `fk_modules_order_tabs` FOREIGN KEY (`tab_id`) REFERENCES `tabs` (`id`) ON DELETE CASCADE ON UPDATE NO ACTION, ADD CONSTRAINT `fk_modules_order_users` FOREIGN KEY (`user_id`) REFERENCES `users` (`id`) ON DELETE CASCADE ON UPDATE NO ACTION; -- -- Constraints for table `users` -- ALTER TABLE `users` ADD CONSTRAINT `fk_users_user_levels` FOREIGN KEY (`user_level`) REFERENCES `user_levels` (`id`) ON DELETE NO ACTION ON UPDATE NO ACTION; -- -- Constraints for table `user_meta` -- ALTER TABLE `user_meta` ADD CONSTRAINT `fk_user_meta_users` FOREIGN KEY (`user_id`) REFERENCES `users` (`id`) ON DELETE CASCADE ON UPDATE NO ACTION; INSERT INTO `user_levels` VALUES(1, 10); INSERT INTO `user_levels` VALUES(2, 1); INSERT INTO `users` VALUES(1, 'admin', 'password', '[email protected]', NULL, 1); INSERT INTO `user_meta` VALUES (NULL, 1, 'last_tab', 1); In some environments i get this error: SQLSTATE[HY000]: General error: 1005 Can't create table 'dms.databases' (errno: 150) I tried everything that I could find on Google but nothing works. The strange part is that if I run this query in PhpMyAdmin he creates my database, without any error.
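    Errno 150 is InnoDB's foreign-key error code, and the server keeps more detail about it than the SQLSTATE message shows. A hedged diagnostic sketch, run on the same connection the app uses (the table and constraint names are taken from the dump above, nothing else is known about the failing environments):
        -- Run immediately after the failing statement to see which constraint InnoDB rejected
        SHOW ENGINE INNODB STATUS;   -- look at the "LATEST FOREIGN KEY ERROR" section
        -- A typical culprit: both sides of a FK must match exactly in type and signedness, e.g.
        -- `modules_data`.`module_id` is bigint(20) unsigned, so `modules`.`id` must be bigint(20) unsigned too.
        SHOW CREATE TABLE `modules_data`;
        SHOW CREATE TABLE `modules`;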

    Read the article

  • Unable to execute native SQL query

    - by Renjith
    I am developing an application with Spring and hibernate. In the DAO class, I was trying to execute a native sql as follows: SELECT * FROM product ORDER BY unitprice ASC LIMIT 6 OFFSET 0 But the system throws an exception. org.hibernate.HibernateException: No Hibernate Session bound to thread, and configuration does not allow creation of non-transactional one here org.springframework.orm.hibernate3.SpringSessionContext.currentSession(SpringSessionContext.java:63) org.hibernate.impl.SessionFactoryImpl.getCurrentSession(SessionFactoryImpl.java:544) com.dao.ProductDAO.listProducts(ProductDAO.java:15) com.dataobjects.impl.ProductDoImpl.listProducts(ProductDoImpl.java:26) com.action.ProductAction.showProducts(ProductAction.java:53) sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) application-context.xml is show below <bean id="propertyConfigurer" class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer" p:location="/WEB-INF/jdbc.properties" /> <bean id="dataSource" class="org.springframework.jdbc.datasource.DriverManagerDataSource" p:driverClassName="${jdbc.driverClassName}" p:url="${jdbc.url}" p:username="${jdbc.username}" p:password="${jdbc.password}" /> <!-- Hibernate SessionFactory --> <!-- class="org.springframework.orm.hibernate3.LocalSessionFactoryBean"--> <bean id="sessionFactory" class="org.springframework.orm.hibernate3.LocalSessionFactoryBean"> <property name="dataSource"> <ref local="dataSource"/> </property> <property name="configLocation"> <value>WEB-INF/classes/hibernate.cfg.xml</value> </property> <property name="configurationClass"> <value>org.hibernate.cfg.AnnotationConfiguration</value> </property> <!-- <property name="annotatedClasses"> <list> <value>com.pojo.Product</value> <value>com.pojo.User</value> <value>com.pojo.UserLogin</value> </list> </property> --> <property name="hibernateProperties"> <props> <prop key="hibernate.dialect">${hibernate.dialect}</prop> <prop key="hibernate.show_sql">true</prop> </props> </property> </bean> <!-- User Bean definitions --> <bean name="/logincheck" class="com.action.LoginAction"> <property name="userDo" ref="userDo" /> </bean> <bean id="userDo" class="com.dataobjects.impl.UserDoImpl" > <property name="userDAO" ref="userDAO" /> </bean> <bean id="userDAO" class="com.dao.UserDAO" > <property name="sessionFactory" ref="sessionFactory" /> </bean> <bean name="/listproducts" class="com.action.ProductAction"> <property name="productDo" ref="productDo" /> </bean> <bean id="productDo" class="com.dataobjects.impl.ProductDoImpl" > <property name="productDAO" ref="productDAO" /> </bean> <bean id="productDAO" class="com.dao.ProductDAO" > <property name="sessionFactory" ref="sessionFactory" /> </bean> And DAO class is public class ProductDAO extends HibernateDaoSupport{ public List listProducts(int startIndex, int incrementor) { org.hibernate.Session session = getHibernateTemplate().getSessionFactory().getCurrentSession(); String queryString = "SELECT * FROM product ORDER BY unitprice ASC LIMIT 6 OFFSET 0"; List list = null; try{ session.beginTransaction(); org.hibernate.Query query = session.createQuery(queryString); list = query.list(); session.getTransaction().commit(); } catch(Exception e) { e.printStackTrace(); } finally { session.close(); } return list; } public List getProductCount() { String queryString = "SELECT COUNT(*) FROM Product"; return getHibernateTemplate().find(queryString); } } Any thoughts to fix it up?

    Read the article

  • Magento - Data is not inserted into database, but the id is autoincremented

    - by Joseph
    I am working on a new payment module for Magento and have come across an issue that I cannot explain. The following code runs after the credit card is verified:
        $table_prefix = Mage::getConfig()->getTablePrefix();
        $tableName = $table_prefix.'authorizecim_magento_id_link';
        $resource = Mage::getSingleton('core/resource');
        $writeconnection = $resource->getConnection('core_write');
        $acPI = $this->_an_customerProfileId;
        $acAI = $this->_an_customerAddressId;
        $acPPI = $this->_an_customerPaymentProfileId;
        $sql = "insert into {$tableName} values ('', '$customerId', '$acPI', '$acPI', '3')";
        $writeconnection->query($sql);
        $sql = "insert into {$tableName} (magCID, anCID, anOID, anObjectType) values ('$customerId', '$acPI', '$acAI', '2')";
        $writeconnection->query($sql);
        $sql = "insert into {$tableName} (magCID, anCID, anOID, anObjectType) values ('$customerId', '$acPI', '$acPPI', '1')";
        $writeconnection->query($sql);
    I have verified using Firebug and FirePHP that the SQL queries are syntactically correct and no errors are returned. The odd thing here is that I have checked the database, and the autoincrement value is incremented on every run of the code. However, no rows are inserted into the database. I have verified this by adding a die(); statement directly after the first write. Any ideas why this would be occurring? The relevant portion of config.xml is this:
        <config>
          <global>
            <models>
              <authorizecim>
                <class>CPAP_AuthorizeCim_Model</class>
              </authorizecim>
              <authorizecim_mysql4>
                <class>CPAP_AuthorizeCim_Model_Mysql4</class>
                <entities>
                  <anlink>
                    <table>authorizecim_magento_id_link</table>
                  </anlink>
                </entities>
                <entities>
                  <antypes>
                    <table>authorizecim_magento_types</table>
                  </antypes>
                </entities>
              </authorizecim_mysql4>
            </models>
            <resources>
              <authorizecim_setup>
                <setup>
                  <module>CPAP_AuthorizeCim</module>
                  <class>CPAP_AuthorizeCim_Model_Resource_Mysql4_Setup</class>
                </setup>
                <connection>
                  <use>core_setup</use>
                </connection>
              </authorizecim_setup>
              <authorizecim_write>
                <connection>
                  <use>core_write</use>
                </connection>
              </authorizecim_write>
              <authorizecim_read>
                <connection>
                  <use>core_read</use>
                </connection>
              </authorizecim_read>
            </resources>
          </global>
        </config>
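    The symptom described (the auto-increment counter advances but no rows appear) is exactly what happens when inserts run inside a transaction that is later rolled back, because InnoDB does not roll back auto-increment counters; that is only one possible explanation here, and it would apply if Magento wraps the payment step in a transaction that something later aborts. A minimal sketch of the behaviour, with made-up values (only the table and column names come from the code above):
        START TRANSACTION;
        INSERT INTO authorizecim_magento_id_link (magCID, anCID, anOID, anObjectType)
        VALUES (1, 100, 200, 1);
        ROLLBACK;
        -- The row is gone, but the table's AUTO_INCREMENT counter has still advanced:
        SHOW TABLE STATUS LIKE 'authorizecim_magento_id_link';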

    Read the article

  • Query a recordset

    - by Dion
    I'm trying to work out how to move the $sql_pay_det query outside of the $sql_pay loop (so in effect query on $rs_pay_det rather than creating a new $rs_pay_det for each iteration of the loop) At the moment my code looks like: //Get payroll transactions $sql_pay = 'SELECT Field1, DescriptionandURLlink, Field5, Amount, Field4, PeriodName FROM tblgltransactionspayroll WHERE CostCentreCode = "'.$cc.'" AND SubjectiveCode = "'.$subj.'" AND PeriodNo = "'.$month.'"'; $rs_pay = mysql_query($sql_pay); $row_pay = mysql_fetch_assoc($rs_pay); while($row_pay = mysql_fetch_array($rs_pay)) { $employee_name = $row_pay['Field1']; $assign_no = $row_pay['DescriptionandURLlink']; $pay_period = $row_pay['Field5']; $mth_name = $row_pay['PeriodName']; $amount = $row_pay['Amount']; $total_amount = $total_amount + $amount; $amount = my_number_format($amount, 2, ".", ","); $sql_pay_det = 'SELECT ElementDesc, Amount, WTEWorked, WTEPaid, WTEContract, Payscale FROM tblpayrolldetail WHERE CostCentreCode = "'.$cc.'" AND SubjectiveCode = "'.$subj.'" AND AccountingPeriod = "'.$mth_name.'" AND EmployeeRef = "'.$assign_no.'"'; $rs_pay_det = mysql_query($sql_pay_det); $row_pay_det = mysql_fetch_assoc($rs_pay_det); while($row_pay_det = mysql_fetch_array($rs_pay_det)) { $element_det = $row_pay_det['ElementDesc']; $amount_det = $row_pay_det['Amount']; $wte_worked = $row_pay_det['WTEWorked']; $wte_paid = $row_pay_det['WTEPaid']; $wte_cont = $row_pay_det['WTEContract']; $payscale = $row_pay_det['Payscale']; //Get band/point and annual salary where element is basic pay if ($element_det =="3959#Basic Pay"){ $sql_payscale = 'SELECT txtPayscaleName, Salary FROM tblpayscalemapping WHERE txtPayscale = "'.$payscale.'"'; $rs_payscale = mysql_query($sql_payscale); $row_payscale = mysql_fetch_assoc($rs_payscale); $grade = $row_payscale['txtPayscaleName']; $salary = "£" . 
my_number_format($row_payscale['Salary'], 0, ".", ","); } } } I've tried doing this: //Get payroll transactions $sql_pay = 'SELECT Field1, DescriptionandURLlink, Field5, Amount, Field4, PeriodName FROM tblgltransactionspayroll WHERE CostCentreCode = "'.$cc.'" AND SubjectiveCode = "'.$subj.'" AND PeriodNo = "'.$month.'"'; $rs_pay = mysql_query($sql_pay); $row_pay = mysql_fetch_assoc($rs_pay); //Get payroll detail recordset $sql_pay_det = 'SELECT ElementDesc, Amount, WTEWorked, WTEPaid, WTEContract, Payscale, EmployeeRef FROM tblpayrolldetail WHERE CostCentreCode = "'.$cc.'" AND SubjectiveCode = "'.$subj.'" AND AccountingPeriod = "'.$mth_name.'"'; $rs_pay_det = mysql_query($sql_pay_det); while($row_pay = mysql_fetch_array($rs_pay)) { $employee_name = $row_pay['Field1']; $assign_no = $row_pay['DescriptionandURLlink']; $pay_period = $row_pay['Field5']; $mth_name = $row_pay['PeriodName']; $amount = $row_pay['Amount']; $total_amount = $total_amount + $amount; $amount = my_number_format($amount, 2, ".", ","); //Query $rs_pay_det $sql_pay_det2 = 'SELECT ElementDesc, Amount, WTEWorked, WTEPaid, WTEContract, Payscale FROM "'.$rs_pay_det.'" WHERE EmployeeRef = "'.$assign_no.'"'; $rs_pay_det2 = mysql_query($sql_pay_det2); $row_pay_det2 = mysql_fetch_assoc($rs_pay_det2); while($row_pay_det = mysql_fetch_array($rs_pay_det)) { $element_det = $row_pay_det['ElementDesc']; $amount_det = $row_pay_det['Amount']; $wte_worked = $row_pay_det['WTEWorked']; $wte_paid = $row_pay_det['WTEPaid']; $wte_cont = $row_pay_det['WTEContract']; $payscale = $row_pay_det['Payscale']; //Get band/point and annual salary where element is basic pay if ($element_det =="3959#Basic Pay"){ $sql_payscale = 'SELECT txtPayscaleName, Salary FROM tblpayscalemapping WHERE txtPayscale = "'.$payscale.'"'; $rs_payscale = mysql_query($sql_payscale); $row_payscale = mysql_fetch_assoc($rs_payscale); $grade = $row_payscale['txtPayscaleName']; $salary = "£" . my_number_format($row_payscale['Salary'], 0, ".", ","); } } } But I get an error on $row_pay_det2 saying that "mysql_fetch_assoc(): supplied argument is not a valid MySQL result resource"
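    One way to avoid running the detail query once per payroll row is to join the two tables in a single statement and loop over the combined result. A sketch only: the join keys below are an assumption taken from how the inner query filters above (EmployeeRef against DescriptionandURLlink, AccountingPeriod against PeriodName, plus cost centre and subjective code), and the placeholder values stand in for $cc, $subj and $month:
        SELECT p.Field1, p.DescriptionandURLlink, p.Field5, p.Amount, p.PeriodName,
               d.ElementDesc, d.Amount AS DetailAmount, d.WTEWorked, d.WTEPaid, d.WTEContract, d.Payscale
        FROM tblgltransactionspayroll p
        JOIN tblpayrolldetail d
          ON d.EmployeeRef      = p.DescriptionandURLlink
         AND d.AccountingPeriod = p.PeriodName
         AND d.CostCentreCode   = p.CostCentreCode
         AND d.SubjectiveCode   = p.SubjectiveCode
        WHERE p.CostCentreCode = 'CC'      -- placeholder for $cc
          AND p.SubjectiveCode = 'SUBJ'    -- placeholder for $subj
          AND p.PeriodNo       = 'MONTH';  -- placeholder for $month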

    Read the article

  • Get the latest change from a SELECT query in the condition given below

    - by OM The Eternity
    I have a table with the structure: id, trackid, table_name, operation, oldvalue, newvalue, field, changedonetime. If I have 3 rows with the same "trackid" and the same "field", how can I select the latest of the three? For example:
        id = 100, trackid = 152, table_name = jos_menu, operation = UPDATE, oldvalue = IPL, newvalue = IPLcccc, field = name, live = 0, changedonetime = 2010-04-30 17:54:39
    and
        id = 101, trackid = 152, table_name = jos_menu, operation = UPDATE, oldvalue = IPLcccc, newvalue = IPL2222, field = name, live = 0, changedonetime = 2010-04-30 18:54:39
    As you can see above, the second entry is the latest change. What query should I use to get only the single latest row out of many such rows?
        $distupdqry = "select DISTINCT trackid,table_name from jos_audittrail where live = 0 AND operation = 'UPDATE'";
        $disupdsel = mysql_query($distupdqry);
        $t_ids = array();
        $t_table = array();
        while($row3 = mysql_fetch_array($disupdsel)) {
            $t_ids[] = $row3['trackid'];
            $t_table[] = $row3['table_name'];
            //$t_table[] = $row3['table_name'];
        }
        //echo "<pre>";print_r($t_table);echo "<pre>";
        //exit;
        for($n=0;$n<count($t_ids);$n++) {
            $qupd = "SELECT * FROM jos_audittrail WHERE operation = 'UPDATE' AND trackid=$t_ids[$n] order by changedone DESC ";
            $seletupdaudit = mysql_query($qupd);
            $row4 = array();
            $audit3 = array();
            while($row4 = mysql_fetch_array($seletupdaudit)) {
                $audit3[] = $row4;
            }
            $updatefield = '';
            for($j=0;$j<count($audit3);$j++) {
                if($j == 0) {
                    if($audit3[$j]['operation'] == "UPDATE") {
                        //$insqry .= $audit2[$i]['operation']." ";
                        //echo "<br>";
                        $updatefield .= "UPDATE `".$audit3[$j]['table_name']."` SET ";
                    }
                }
                if($audit3[$j]['operation'] == "UPDATE") {
                    $updatefield .= $audit3[$j]['field']." = '".$audit3[$j]['newvalue']."', ";
                }
            }
            /*echo "<pre>"; print_r($audit3); exit;*/
            $primarykey = "SHOW INDEXES FROM `".$t_table[$n]."` WHERE Key_name = 'PRIMARY'";
            $prime = mysql_query($primarykey);
            $pkey = mysql_fetch_array($prime);
            $updatefield .= "]";
            echo $updatefield = str_replace(", ]"," WHERE ".$pkey['Column_name']." = '".$t_ids[$n]."'",$updatefield);
        }
    In the above code I am fetching the distinct track IDs on which an UPDATE operation has been done, and then a query is fired for each of them to get all the changes made to the different fields of those IDs. Here I am building the UPDATE query by fetching the records from the table described at the start, which is the audittrail table. Therefore I need the last change made to each field, so that only the latest change is selected in the SELECT queries I have used. Please go through the code and see how I can make the change I need.
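    A common way to get just the newest audit row per trackid and field is to join the table against a grouped copy of itself on the maximum changedonetime. This is only a sketch built from the columns described above, and it assumes changedonetime is the timestamp to order by (the code above orders by "changedone", so adjust the name if that is the real column):
        SELECT a.*
        FROM jos_audittrail a
        INNER JOIN (
            SELECT trackid, field, MAX(changedonetime) AS latest_change
            FROM jos_audittrail
            WHERE live = 0 AND operation = 'UPDATE'
            GROUP BY trackid, field
        ) m ON m.trackid = a.trackid
           AND m.field = a.field
           AND m.latest_change = a.changedonetime
        WHERE a.live = 0 AND a.operation = 'UPDATE';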

    Read the article

  • How to use value from primary accessdatasource control as parameter in select query for secondary ac

    - by weedave
    Hi, I'm trying to display all orders placed and I have a primary accessdatasource control that has a select query to get the customer information and the orderID. I want to use the orderID value from this first query as a parameter for the secondary accessdatasource control that selects the product information of the products in the order. In plain english, I want to:- select product info from product table where orderID = ? (where ? is the orderID value from the first query) I have tried the <%#Eval("OrderID")% but I get a "server tag not well formed" error, but I do get results returned when I just type the order ID in, but obviously every result (order) just contains the same product info... <asp:Repeater ID="Repeater1" runat="server" DataSourceID="AccessDataSource1"> <ItemTemplate> <asp:AccessDataSource ID="AccessDataSource2" runat="server" DataFile="~/App_Data/project.mdb" SelectCommand="SELECT orderDetails.OrderID, album.Artist, album.Album, album.Cost, album.ImageURL, orderDetails.Quantity, orderDetails.Total FROM (album INNER JOIN orderDetails ON album.AlbumID = orderDetails.AlbumID) WHERE (orderDetails.OrderID = ? )"> <SelectParameters> // Error is on this line <asp:Parameter Name="OrderID" DefaultValue="<%#Eval ("OrderID")%>" /> </SelectParameters> </asp:AccessDataSource> <div class="viewAllOrdersOrderArea"> <div class="viewAllOrdersOrderSummary"> <p><b>Order ID: </b><%#Eval("OrderID")%></p> <h4>Shipping Details</h4> <p><b>Shipping Address: </b><%#Eval("ShippingName")%>, <%#Eval("ShippingAddress")%>, <%#Eval("ShippingTown")%>, <%#Eval("ShippingPostcode")%></p> <h4>Payment Details</h4> <p><b>Cardholder's Address: </b><%#Eval("CardHolder")%>, <%#Eval("BillingAddress")%>, <%#Eval("BillingTown")%>, <%#Eval("BillingPostcode")%></p> <p><b>Payment Method: </b><%#Eval("CardType")%></p> <p><b>Card Number: </b><%#Eval("CardNumber")%></p> <p><b>Start Date: </b><%#Eval("StartDate")%>, Expiry Date: <%#Eval("ExpiryDate")%></p> <p><b>Security Digits: </b><%#Eval("SecurityDigits")%></p> <h4>Ordered items:</h4> <asp:Repeater ID="Repeater2" runat="server" DataSourceID="AccessDataSource2"> <ItemTemplate> <div style="display: block; float: left;"> <div class="viewAllOrdersProductImage"> <img width="70px" height="70px" alt="<%# Eval("Artist") %> - <%# Eval("Album") %>" src="assets/images/thumbs/<%# Eval("ImageURL") %>" /> </div> <div style="display:block; float:left; padding-top:15px; padding-right:20px;"><p><b><%# Eval("Artist") %> - <%# Eval("Album") %></b></p> <p>£<%# Eval("Cost") %> x <%# Eval("Quantity") %> = £<%#Eval("Total")%></p></div> </div> </ItemTemplate> </asp:Repeater> </div> </div> </ItemTemplate> </asp:Repeater>

    Read the article

  • Oracle Database 11g Release 2 is SAP certified for Unix and Linux platforms

    - by ?? ?
    From the US blog: Oracle Database 11g Release 2 is SAP certified for Unix and Linux platforms. SAP has now certified Oracle Database 11g R2 for SAP environments. The certified UNIX and Linux platforms are: Linux x86 and x86-64, AIX, HP-UX IA64, and Solaris SPARC and x64. The following features can now also be used: Advanced Compression Option (table, RMAN backup, expdp, DG Network), Real Application Testing, Oracle Database 11g Release 2 Database Vault, Oracle Database 11g Release 2 RAC, Advanced Encryption for tablespaces, RMAN backups, expdp, DG Network, Direct NFS, Deferred Segments, Online Patching. For details, please see SAP Note 1398634.

    Read the article

  • Microsoft 2003 DNS sometimes can't query for some A pointers when their TTL expires

    - by Bq
    Warning: long question :) We have a Win 2003 server with a DNS server, and every now and then it can't provide us with some A pointers for a specific domain. I have a small script running which asks for the SOA, NS and A records for the domain in question, and sometimes when the TTL expires the DNS fails to get the A records again; a cache clear fixes the problem. Have a look. Here it worked when the TTL expired:
        Thu Apr 29 15:24:20 METDST 2010
        dig basefarm.net soa
        basefarm.net. 64908 IN SOA ns01.osl.basefarm.net. hostmaster.basefarm.net. 2010042613 86400 3600 2419200 600
        ns01.osl.basefarm.net. 299 IN A 81.93.160.4
        dig basefarm.net ns
        basefarm.net. 64908 IN NS ns01.sth.basefarm.net.
        basefarm.net. 64908 IN NS ns01.osl.basefarm.net.
        ns01.sth.basefarm.net. 299 IN A 80.76.149.76
        ns01.osl.basefarm.net. 299 IN A 81.93.160.4
        dig ns01.sth.basefarm.net a
        ns01.sth.basefarm.net. 299 IN A 80.76.149.76
    The TTL expired for ns01.sth.basefarm.net and ns01.osl.basefarm.net, but the DNS managed to get the new values (TTL 3600):
        Thu Apr 29 15:29:20 METDST 2010
        dig basefarm.net soa
        basefarm.net. 64608 IN SOA ns01.osl.basefarm.net. hostmaster.basefarm.net. 2010042613 86400 3600 2419200 600
        ns01.osl.basefarm.net. 3600 IN A 81.93.160.4
        dig basefarm.net ns
        basefarm.net. 64608 IN NS ns01.sth.basefarm.net.
        basefarm.net. 64608 IN NS ns01.osl.basefarm.net.
        ns01.sth.basefarm.net. 3600 IN A 80.76.149.76
        ns01.osl.basefarm.net. 3600 IN A 81.93.160.4
        dig ns01.sth.basefarm.net a
        ns01.sth.basefarm.net. 3600 IN A 80.76.149.76
    But then another time it fails, and we need to clear the DNS cache for it to start working again:
        Thu Apr 29 17:24:23 METDST 2010
        dig basefarm.net soa
        basefarm.net. 57705 IN SOA ns01.osl.basefarm.net. hostmaster.basefarm.net. 2010042613 86400 3600 2419200 600
        ns01.osl.basefarm.net. 299 IN A 81.93.160.4
        dig basefarm.net ns
        basefarm.net. 57705 IN NS ns01.sth.basefarm.net.
        basefarm.net. 57705 IN NS ns01.osl.basefarm.net.
        ns01.sth.basefarm.net. 299 IN A 80.76.149.76
        ns01.osl.basefarm.net. 299 IN A 81.93.160.4
        dig ns01.sth.basefarm.net a
        ns01.sth.basefarm.net. 299 IN A 80.76.149.76
    The TTL expires but the DNS can't get the IP addresses for ns01.sth.basefarm.net and ns01.osl.basefarm.net:
        Thu Apr 29 17:29:23 METDST 2010
        dig basefarm.net soa
        basefarm.net. 57405 IN SOA ns01.osl.basefarm.net. hostmaster.basefarm.net. 2010042613 86400 3600 2419200 600
        ns01.osl.basefarm.net. 3600 IN A 81.93.160.4
        dig basefarm.net ns
        basefarm.net. 57405 IN NS ns01.sth.basefarm.net.
        basefarm.net. 57405 IN NS ns01.osl.basefarm.net.
        dig ns01.sth.basefarm.net a
        Lookup failed
    I'm really lost on this one and have tried asking Google, but to no avail.

    Read the article

  • How to organize SQL script files

    - by Mehper C. Palavuzlar
    We have an Oracle 10g database (a huge one) in our company, and I provide employees with data upon request. My problem is that I save almost every SQL query I've written, and now my list has grown too long. I want to organize and rename these .sql files so that I can find the one I want easily. At the moment I'm using folders named Sales Dept, Field Team, Planning Dept, Special, etc., and under those folders there are .sql files like Delivery_sales_1, Delivery_sales_2, ... Sent_sold_lostsales_endpoints, ... Sales_provinces_period, Returnrates_regions_bymonths, ... Jack_1, Steve_1, Steve_2, ... I try to name the files according to their content, but this makes the file names longer and does not completely meet my needs. Sometimes someone comes and demands a special report, and I name the file after him, but this is also not so good. I know duplicates or very similar files are accumulating over time, but I don't have control over them. Can you point me in the right direction to rename all these files and folders and organize my queries for easier and better control? TIA.
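    One low-tech convention that helps with this kind of archive, regardless of the folder layout: give every saved query a short header comment so a text search across the folders can find it by requester, department or subject even when the file name is terse. A sketch only; the file name, table and bind variable below are hypothetical:
        -- file    : sales/delivery_by_province_monthly.sql
        -- for     : Sales Dept / Jack
        -- purpose : delivered vs. sold vs. lost sales per endpoint, by month
        -- created : 2010-05-12, last run 2010-06-01
        SELECT /* delivery_by_province_monthly */ *
        FROM   deliveries
        WHERE  period_month = :month;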

    Read the article

  • Expose DB2 data as XML / Query DB2 via XML

    - by Anthony Gatlin
    I have a client who has a sort of data warehouse stored in DB2. For a variety of reasons, the data must remain on this platform. The client is considering implementing an open-source CMS (Drupal) which runs in MySQL. The client needs to be able to execute a bunch of pre-defined queries against the DB2 database from the remote application. Drupal appears to interact well with XML data from other systems. It was suggested that we use something like XML-RPC to execute the queries against DB2. I am very familiar with SQL Server and pretty familiar with MySQL, but I have no experience with DB2 and no understanding of its capabilities or limitations. Is there any way that we can use something like XML, XML-RPC, or even http to initiate queries against a DB2 database? Any ideas are appreciated! Thank you!

    Read the article

  • Performance analytics via DBMS "plugins", or other solution

    - by Polynomial
    I'm working on a systems monitoring product that currently focuses on performance at the system level. We're expanding out to monitoring database systems. Right now we can fetch simple performance information from a selection of DBMS, like connection count, disk IO rates, lock wait times, etc. However, we'd really like a way to measure the execution time of every query going into a DBMS, without requiring the client to implement monitoring in their application code. Some potential solutions might be: Some sort of proxy that sits between client and server. SSL might be an issue here, plus it requires us to reverse engineer and implement the network protocol for each DBMS. Plugin for each DBMS system that automatically records performance information when a query comes in. Other problems include "anonymising" the SQL, i.e. taking something like SELECT * FROM products WHERE price > 20 AND name LIKE "%disk%" and producing SELECT * FROM products WHERE price > ? AND name LIKE "%?%", though this shouldn't be too difficult with some clever parsing and regex. We're mainly focusing on: MySQL MSSQL Oracle Redis mongodb memcached Are there any plugin-style mechanisms we can utilise for any of these? Or is there a simpler solution?
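    For the MySQL case at least, recent servers can already do the per-statement timing and the query "anonymising" on the server side, which may remove the need for a proxy or plugin there. A sketch, assuming MySQL 5.6+ with performance_schema enabled; the other DBMS in the list would each need their own mechanism, and logging every statement (long_query_time = 0) adds noticeable overhead:
        -- Option 1: log every statement with its execution time in the slow query log
        SET GLOBAL slow_query_log = 'ON';
        SET GLOBAL long_query_time = 0;
        -- Option 2: aggregate by normalised statement text (literals already replaced with ?)
        SELECT DIGEST_TEXT, COUNT_STAR, AVG_TIMER_WAIT / 1e12 AS avg_seconds  -- timer values are picoseconds
        FROM performance_schema.events_statements_summary_by_digest
        ORDER BY AVG_TIMER_WAIT DESC
        LIMIT 10;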

    Read the article

  • Sharepoint bug database - any good?

    - by jkohlhepp
    We've been using Sharepoint as a poor man's bug tracking database for the last couple of projects that we did. No one is really happy with the solution so I'm looking for alternatives. I happened to stumble upon the Bug Database Template for Sharepoint. If it is halfway decent it might be a good choice for us since the transition would be smooth as the team is already used to Sharepoint. Anyone have any experience using this template? Any major problems? Any major missing features? Is there any documentation out there beyond that download page? Thanks for the help.

    Read the article

  • Windows based Apache/PHP/Perl/database stacks

    - by GreenMatt
    Does anyone know if XAMPP's Perl also provides Tk support? Does anyone know of a similar stack which provides PostgreSQL in addition to, or instead of, MySQL? I'd like this to run on Win 7 and XP. Edit: I do know I could install the other components I want "by hand", for example getting Tk from CPAN; I'd just prefer to get everything at once if possible.

    Read the article

  • SQLite vs Firebird

    - by rwallace
    The scenario I'm looking at is "This program uses Postgres. Oh, you want to just use it single-user for the moment, and put off having to deal with installing a database server? Okay, in the meantime you can use it with the embedded single-user database." The question is then which embedded database is best. As I understand it, the two main contenders are SQLite and Firebird; so which is better? Criteria: Full SQL support, or as close as reasonably possible. Full text search. Easy to call from C# Locks, or allows you to lock, the database file to make sure nobody tries to run it multiuser and ends up six months down the road with intermittent data corruption in all their backups. Last but far from least, reliability. As I understand it, the disadvantages of SQLite are, No right outer join. Workaround: use left outer join instead. Not much integrity checking. Workaround: be really careful in the application code. No decimal numbers. Workaround: lots of aspirin. None of the above are showstoppers. Are there any others I'm missing? (I know it doesn't support some administrative and code-within-database SQL features, that aren't relevant for this kind of use case.) I don't know anything much about Firebird. What are its disadvantages?
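    On the right-outer-join point: the workaround mentioned above really is just swapping the table order in a LEFT JOIN, which is mechanical to apply. A tiny sketch with made-up table names:
        -- Intended, but not supported by SQLite:
        --   SELECT c.id, c.name, o.id FROM orders o RIGHT JOIN customers c ON o.customer_id = c.id;
        -- Equivalent LEFT JOIN with the tables swapped:
        SELECT c.id, c.name, o.id AS order_id
        FROM customers c
        LEFT JOIN orders o ON o.customer_id = c.id;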

    Read the article

  • Complex SQL Query similar to a z order problem

    - by AaronLS
    I have a complex SQL problem in MS SQL Server, and in drawing it on a piece of paper I realized that I could think of it as a single bar filled with rectangles, each rectangle having segments with different Z orders. In reality it has nothing to do with z order or graphics at all, but more to do with some complex business rules that would be difficult to explain. However, if anyone has ideas on how to solve the problem below, that will give me my solution. I have the following data:
        ObjectID, PercentOfBar, ZOrder (where smaller is closer)
        A, 100, 6
        B, 50, 5
        B, 50, 4
        C, 30, 3
        C, 70, 6
    The result I want from my query is this, in any order:
        PercentOfBar, ZOrder
        50, 5
        20, 4
        30, 3
    Think of it like this: if I drew rectangle A, it would fill 100% of the bar and have a z order of 6.
        66666666666
        AAAAAAAAAAA
    If I then laid out rectangle B, consisting of two segments, both segments would cover up rectangle A, resulting in the following rendering:
        4444455555
        BBBBBBBBBB
    As a rule of thumb, for a given rectangle, its segments should be laid out such that the highest z order is to the right of the lower z orders. Finally, rectangle C would cover up only portions of rectangle B with its 30% segment that is z order 3, which would be on the left. You can hopefully see how this is represented in the output dataset I listed above:
        3334455555
        CCCBBBBBBB
    Now to make things more complicated, I actually have a 4th column such that this grouping occurs for each key:
    Input:
        SomeKey, ObjectID, PercentOfBar, ZOrder (where smaller is closer)
        X, A, 100, 6
        X, B, 50, 5
        X, B, 50, 4
        X, C, 30, 3
        X, C, 70, 6
        Y, A, 100, 6
        Z, B, 50, 2
        Z, B, 50, 6
        Z, C, 100, 5
    Output:
        SomeKey, PercentOfBar, ZOrder
        X, 50, 5
        X, 20, 4
        X, 30, 3
        Y, 100, 6
        Z, 50, 2
        Z, 50, 5
    Notice that in the output, the PercentOfBar for each SomeKey adds up to 100%. This is one I know I'm going to be thinking about when I go to bed tonight. Just to be explicit and have a question: what would be a query that would produce the results described above?
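    A sketch of the first step only, not the full answer: turn PercentOfBar into start/end offsets per segment, laying same-object segments left to right in increasing ZOrder as described above. Bars is a hypothetical table holding the input rows. From these offsets, the remaining work is to split each key's bar at every distinct offset, keep the minimum ZOrder covering each slice, and sum the slice widths per ZOrder:
        SELECT b.SomeKey, b.ObjectID, b.ZOrder, b.PercentOfBar,
               COALESCE(SUM(prev.PercentOfBar), 0)                    AS StartPct,
               COALESCE(SUM(prev.PercentOfBar), 0) + b.PercentOfBar   AS EndPct
        FROM Bars b
        LEFT JOIN Bars prev
               ON prev.SomeKey  = b.SomeKey
              AND prev.ObjectID = b.ObjectID
              AND prev.ZOrder   < b.ZOrder
        GROUP BY b.SomeKey, b.ObjectID, b.ZOrder, b.PercentOfBar;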

    Read the article

  • StackOverflow in VB.NET SQLite query

    - by Majgel
    I have a StackOverflowException in one of my DB functions that I don't know how to deal with. I have a SQLite database with one table, "tblEmployees", that holds a record for each employee (several thousand records), and usually this function runs without any problem. But sometimes, after the function has been called a thousand times, it breaks with a StackOverflowException at the line "ReturnTable.Load(reader)" with the message: An unhandled exception of type 'System.StackOverflowException' occurred in System.Data.SQLite.dll. If I restart the application it has no problem continuing with the exact same record it last crashed on. I can also make the exact same DB call from SQLite Admin at the time of the crash without any problems. Here is the code:
        Public Function GetNextEmployeeInQueue() As String
            Dim NextEmployeeInQueue As String = Nothing
            Dim query As [String] = "SELECT FirstName FROM tblEmployees WHERE Checked=0 LIMIT 1;"
            Try
                Dim ReturnTable As New DataTable()
                Dim mycommand As New SQLiteCommand(cnn)
                mycommand.CommandText = query
                Dim reader As SQLiteDataReader = mycommand.ExecuteReader()
                ReturnTable.Load(reader)
                reader.Close()
                If ReturnTable.Rows.Count > 0 Then
                    NextEmployeeInQueue = ReturnTable.Rows(0)("FirstName").ToString()
                Else
                    MsgBox("No more employees found in queue")
                End If
            Catch fail As Exception
                MessageBox.Show("Error: " & fail.Message.ToString())
            End Try
            If NextEmployeeInQueue IsNot Nothing Then
                Return NextEmployeeInQueue
            Else
                Return "No more records in queue"
            End If
        End Function
    When it crashes, the reader shows "Property evaluation failed." in all values. I assume there is some problem with allocated memory that isn't released correctly, but I can't figure out which object it is about. The DB connection opens when the DB class object is created and closes on the main form's Dispose. Should I maybe dispose the mycommand object every time in a Finally block? But wouldn't that result in a closed DB connection?

    Read the article
