Search Results

Search found 28900 results on 1156 pages for 'sql 2005'.


  • Complex MySQL table select/join with pre-condition

    - by Howard
    Hello, I have the schema below:

        CREATE TABLE `vocabulary` (
          `vid` int(10) unsigned NOT NULL auto_increment,
          `name` varchar(255),
          PRIMARY KEY vid (`vid`)
        );
        CREATE TABLE `term` (
          `tid` int(10) unsigned NOT NULL auto_increment,
          `vid` int(10) unsigned NOT NULL default '0',
          `name` varchar(255),
          PRIMARY KEY tid (`tid`)
        );
        CREATE TABLE `article` (
          `aid` int(10) unsigned NOT NULL auto_increment,
          `body` text,
          PRIMARY KEY aid (`aid`)
        );
        CREATE TABLE `article_index` (
          `nid` int(10) unsigned NOT NULL default '0',
          `tid` int(10) unsigned NOT NULL default '0'
        );

        INSERT INTO `vocabulary` values (1, 'vocabulary 1');
        INSERT INTO `vocabulary` values (2, 'vocabulary 2');
        INSERT INTO `term` values (1, 1, 'term v1 t1');
        INSERT INTO `term` values (2, 1, 'term v1 t2');
        INSERT INTO `term` values (3, 2, 'term v2 t3');
        INSERT INTO `term` values (4, 2, 'term v2 t4');
        INSERT INTO `term` values (5, 2, 'term v2 t5');
        INSERT INTO `article` values (1, '');
        INSERT INTO `article` values (2, '');
        INSERT INTO `article` values (3, '');
        INSERT INTO `article` values (4, '');
        INSERT INTO `article` values (5, '');
        INSERT INTO `article_index` values (1, 1);
        INSERT INTO `article_index` values (1, 3);
        INSERT INTO `article_index` values (2, 2);
        INSERT INTO `article_index` values (3, 1);
        INSERT INTO `article_index` values (3, 3);
        INSERT INTO `article_index` values (4, 3);
        INSERT INTO `article_index` values (5, 3);
        INSERT INTO `article_index` values (5, 4);

    Example: select the terms of a defined vocabulary (with a non-zero article index), e.g. vid = 2:

        select a.tid, count(*) as article_count
        from term t
        JOIN article_index a ON t.tid = a.tid
        where t.vid = 2
        group by t.tid;

        +-----+---------------+
        | tid | article_count |
        +-----+---------------+
        |   3 |             4 |
        |   4 |             1 |
        +-----+---------------+

    Question: select terms
    a. of a defined vocabulary (with a non-zero article index), e.g. vid = 1 gives terms {1, 2};
    b. given that those terms are linked with articles which are in turn linked with terms under vid = 2, e.g. the result is {1}; the term with tid = 2 is excluded since it has no linkage to terms under vid = 2.

    What should the SQL be? Any idea? Expected result:

        +-----+---------------+
        | tid | article_count |
        +-----+---------------+
        |   1 |             2 |
        +-----+---------------+
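
    One possible answer (an untested sketch; the aliases a2 and t2 are my own): join article_index back onto itself so that each counted article is required to also carry at least one vid = 2 term.

        select t.tid, count(distinct a.nid) as article_count
        from term t
        join article_index a  on a.tid  = t.tid   -- articles tagged with the vid = 1 term
        join article_index a2 on a2.nid = a.nid   -- other tags on the same article
        join term t2          on t2.tid = a2.tid
                             and t2.vid = 2       -- keep only articles that also have a vid = 2 term
        where t.vid = 1
        group by t.tid;

    Against the sample data above this returns tid = 1 with article_count = 2, matching the expected result.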

  • How to write PHP code to take a JSON string and insert it into SQL Server

    - by Romi
    I am trying to output a JSON string from the phone and get it uploaded to the SQL Server I have. I do not know how to receive the output JSON and write the PHP code... I tried many methods but couldn't find a solution.

        public void post(String string) {
            HttpClient httpclient = new DefaultHttpClient();
            HttpPost httppost = new HttpPost(
                    "http://www.hopscriber.com/xoxoxox/testphp.php");
            try {
                List<NameValuePair> nameValuePairs = new ArrayList<NameValuePair>();
                nameValuePairs.add(new BasicNameValuePair("myJson", string));
                httppost.setEntity(new UrlEncodedFormEntity(nameValuePairs));
                HttpResponse response = httpclient.execute(httppost);
                String str = inputStreamToString(response.getEntity().getContent())
                        .toString();
                Log.w("SENCIDE", str);
            } catch (Exception e) {
                Toast.makeText(getBaseContext(), "notwork", Toast.LENGTH_LONG)
                        .show();
            }
        }

        private Object inputStreamToString(InputStream is) {
            // TODO Auto-generated method stub
            String line = "";
            StringBuilder total = new StringBuilder();
            // Wrap a BufferedReader around the InputStream
            BufferedReader rd = new BufferedReader(new InputStreamReader(is));
            // Read response until the end
            try {
                while ((line = rd.readLine()) != null) {
                    total.append(line);
                }
            } catch (IOException e) {
                e.printStackTrace();
            }
            // Return full string
            return total;
        }

    It outputs a JSON string such as:

        [myJson=[{"name":"FriendTracker","user":"amjgp000000000000000","pack":"org.siislab.tutorial.friendtracker","perm":"org.siislab.tutorial.permission.READ_FRIENDS","level":"Normal"},
        {"name":"FriendTracker","user":"amjgp000000000000000","pack":"org.siislab.tutorial.friendtracker","perm":"org.siislab.tutorial.permission.WRITE_FRIENDS","level":"Normal"},
        {"name":"FriendTracker","user":"amjgp000000000000000","pack":"org.siislab.tutorial.friendtracker","perm":"org.siislab.tutorial.permission.FRIEND_SERVICE","level":"Normal"},
        {"name":"FriendTracker","user":"amjgp000000000000000","pack":"org.siislab.tutorial.friendtracker","perm":"org.siislab.tutorial.permission.FRIEND_NEAR","level":"Dangerous"},
        {"name":"FriendTracker","user":"amjgp000000000000000","pack":"org.siislab.tutorial.friendtracker","perm":"org.siislab.tutorial.permission.BROADCAST_FRIEND_NEAR","level":"Normal"},
        {"name":"FriendTracker","user":"amjgp000000000000000","pack":"org.siislab.tutorial.friendtracker","perm":"android.permission.RECEIVE_BOOT_COMPLETED","level":"Normal"},
        {"name":"FriendTracker","user":"amjgp000000000000000","pack":"org.siislab.tutorial.friendtracker","perm":"android.permission.READ_CONTACTS","level":"Dangerous"},
        {"name":"FriendTracker","user":"amjgp000000000000000","pack":"org.siislab.tutorial.friendtracker","perm":"android.permission.ACCESS_FINE_LOCATION","level":"Dangerous"},
        {"name":"FriendTracker","user":"amjgp000000000000000","pack":"org.siislab.tutorial.friendtracker","perm":"android.permission.WRITE_EXTERNAL_STORAGE","level":"Dangerous"},
        {"name":"FriendTracker","user":"amjgp000000000000000","pack":"org.siislab.tutorial.friendtracker","perm":"android.permission.READ_PHONE_STATE","level":"Dangerous"},
        {"name":"Tesing","user":"amjgp000000000000000","pack":"com.example.tesing","perm":"null","level":"null"},
        {"name":"Action Bar","user":"amjgp000000000000000","pack":"name.brucephillips.actionbarexample","perm":"null","level":"null"},.......
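
    On the database side, a minimal sketch of a receiving table for the fields visible in that JSON (the table name and column types are my own assumption); the PHP script would run one parameterized INSERT per decoded array element:

        CREATE TABLE AppPermissions (
            name    NVARCHAR(100),
            [user]  NVARCHAR(50),
            pack    NVARCHAR(200),
            perm    NVARCHAR(200),
            [level] NVARCHAR(20)
        );

        -- executed once per element of the decoded JSON array
        INSERT INTO AppPermissions (name, [user], pack, perm, [level])
        VALUES (@name, @user, @pack, @perm, @level);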

  • Different data for different dates

    - by nsw1475
    I am making a web page which will allow users to input and view data for different dates. To change the date, there are two buttons: one displays the previous day and one shows the next day. I know that I can do this by submitting forms and reloading the page every time one of these buttons is pressed, but I would rather use JavaScript and not have to submit a form, and I am having trouble getting it to work. Currently I have the two buttons, and the date stored in a PHP variable, as shown in my code below:

        <script>
        function init() {
            <? $nutrDate = $this->parseDate(date('m/d/Y')); ?>
        }
        function nutrPrevDay() {
            <? $nutrDate = mktime(0, 0, 0, date('m',$nutrDate), date('d',$nutrDate)-1, date('Y',$nutrDate)); ?>
            alert("<? echo(date("m/d/y", $nutrDate)) ?>");
        }
        function nutrNextDay() {
            <? $nutrDate = mktime(0, 0, 0, date('m',$nutrDate), date('d',$nutrDate)+1, date('Y',$nutrDate)); ?>
        }
        window.onload = init;
        </script>
        <p style="text-align:center; font-size:14px; color:#03F">
            <button onclick="nutrPrevDay()" style="width:200px">< Show Previous Day</button>
            <? echo(date('m/d/Y', $nutrDate)) ?>
            <button onclick="nutrNextDay()" style="width:200px">Show Next Day ></button>
        </p>

    I have the alert in nutrPrevDay() only for debugging. What happens is that when I click the button, the alert shows the day correctly decreased (for example, from May 17 to May 16), but it only ever decreases one day, not one day for every click. Also, I do not know how to make the date text on the page (produced by the echo line) change to display the new date after a button is clicked. So here are my questions:

    1) Is it possible to dynamically change data (such as text and, in the future, SQL queries) on a page using JavaScript without having to reload the page when clicking a button?
    2) If it is possible, how can I make those changes?
    3) How can I fix this so that it increments and decrements through the dates every time a button is clicked?

  • Database advantages? Access, MySQL, MSSQL, or any others?

    - by JimZ
    Dear all Stackoverflowers, I just started to learn programming and now I'm putting this question online based on a quote: no question is silly. My work needs to develop a web-based order system, which wants a database system. Having used Excel for years as a general office user, I naturally turned to Access. However, most people say Access is very limited compared to MySQL, MSSQL, or any other more professional database system. But after developing some functions for my company's order system, I really find that Access can fulfill my requests. I also tried developing with MSSQL, which I found not quite convenient to use. I have searched on Stack Overflow and found no general answer to my doubt. Now I am sincerely hoping some experienced and professional developers could clear my doubts. Here are some Access advantages which I don't think other database systems have. I hope you could help me also find these advantages in others.

    1. Access is portable; I can just copy an xxx.accdb file to my company and continue with development.
    2. Access makes it easy to generate a helpful table; for example, it will automatically generate a field that counts automatically, which can be used as the primary key value.
    3. It is more compatible with Excel for displaying and filtering data.
    4. Importantly, it needs nearly no environment setup; it just needs MS Office to be installed.
    ...others

    However, I also find some points where MSSQL has the advantage:

    1. Security reasons.
    2. Easy to back up (just use the BACKUP ... SQL statement to do it).
    3. Stored procedures can be edited to save some functions to the database.
    ...others

    Specifically, I wish some friends could tell me how to make other databases portable, since I usually work both at home and in the office. It's a headache to move MSSQL work to my office, since the versions of MSSQL are not the same. Thank you all and best regards, :)
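
    For reference, the T-SQL backup the poster alludes to really is a one-liner; the database name and path below are illustrative only.

        -- full backup of a hypothetical OrderSystem database to a local file
        BACKUP DATABASE OrderSystem
        TO DISK = 'C:\Backups\OrderSystem.bak'
        WITH INIT;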

  • Backup solution, or, how Duplicati duped me

    - by blarghmaster
    TL/DR version: Mono + Duplicati.commandline.exe restore etc. etc. spits this out for several files regardless of what I try. I am able to list sets, list files in said sets, even do a verify, but each time I do a restore of any kind, I get errors to the effect of:

        Failed to restore file: "snapshot/blahblah/2005-11-07.tar.gz",
        Error message: The partial file record for snapshot/blahblah/2005-11-07.tar.gz does not match the file

    Any advice here, or an idea of where to look for a better solution?

    FULL STORY: I've recently put together a nice, clean, friendly backup solution for several servers, predominantly Linux, but occasionally a Windows box is added too. The solution as is meets all my requirements and does it well... save one: cross-compatibility. The solution is based on a combination of several elements, but eventually comes down to using Duplicity and Duplicati for the actual storage of files. The entire solution was ready to go before I realized that Duplicati does not, in fact, allow me to restore my files to a Linux box, regardless of what the command line under Mono might tell you. It just spits out errors on random zip and image files, for apparently no good reason, as I have tried several options to get it to restore, and several versions of Mono, including installing it pretty much lib-for-lib. There is no effective log file for the reasons for these errors, and even the "--debug-output=true" flag does nothing. I am able to list sets, list files in said sets, even do a verify, but each time I do a restore of any kind, I get errors like the one quoted above. Now I could most likely use the friendly instructions on Duplicati's site and script a bash equivalent of the restore, but that's not exactly ideal. Any advice on this? Or possibly an alternative solution that presents the same benefits of Duplicati/Duplicity but actually works across platforms?

  • "No such file or directory"?

    - by user1509541
    Ok, so I have a VDS laying around, and I thought I would turn it into a TF2 game server. I connect to my server through PuTTY and use wget to download the package "hldsupdatetool.bin" from steampowered.com. When I go to run it, it says "No such file or directory". When I use "ls" to see what files are in the directory, it lists "hldsupdatetool.bin" as being there. So why is it saying it's not there? This has been a headache for the past 2 days. It's returning:

        root@10004:~# wget http://www.steampowered.com/download/hldsupdatetool.bin
        --2012-07-08 06:04:49--  http://www.steampowered.com/download/hldsupdatetool.bin
        Resolving www.steampowered.com... 208.64.202.68
        Connecting to www.steampowered.com|208.64.202.68|:80... connected.
        HTTP request sent, awaiting response... 200 OK
        Length: 3513408 (3.4M) [application/octet-stream]
        Saving to: “hldsupdatetool.bin.3”

        100%[======================================>] 3,513,408   2.45M/s   in 1.4s

        2012-07-08 06:04:51 (2.45 MB/s) - “hldsupdatetool.bin.3” saved [3513408/3513408]

        root@10004:~# chmod +x hldsupdatetool.bin.3
        root@10004:~# ./hldsupdatetool.bin.3
        -bash: ./hldsupdatetool.bin.3: No such file or directory
        root@10004:~#

    More:

        root@10004:~# ls
        ffmpeg-packages     hldsupdatetool.bin.1  hldsupdatetool.bin.3
        hldsupdatetool.bin  hldsupdatetool.bin.2  setup.sh
        root@10004:~# ls -la
        total 13828
        drwx------  4 root root    4096 Jul  8 06:04 .
        drwxr-xr-x 21 root root    4096 Jul  8 05:57 ..
        -rw-------  1 root root    8799 Jul  8 06:26 .bash_history
        -rw-r--r--  1 root root     570 Jan 31  2010 .bashrc
        -rw-r--r--  1 root root       4 Jul  2 19:39 .custombuild
        drwxr-xr-x  2 root root    4096 Jul  4 18:49 ffmpeg-packages
        ---x--xrwx  1 root root 3513408 Sep  2  2005 hldsupdatetool.bin
        -rwxr-xr-x  1 root root 3513408 Sep  2  2005 hldsupdatetool.bin.1
        -rw-r--r--  1 root root 3513408 Sep  2  2005 hldsupdatetool.bin.2
        -rwxr-xr-x  1 root root 3513408 Sep  2  2005 hldsupdatetool.bin.3
        -rw-r--r--  1 root root     140 Nov 19  2007 .profile
        -rw-------  1 root root    1024 Jul  2 19:49 .rnd
        -rwxr-xr-x  1 root root   38866 May 23 22:02 setup.sh
        drwxr-xr-x  2 root root    4096 Jul  2 19:44 .ssh
        root@10004:~#

  • Setting up Rails to work with sqlserver

    - by FortunateDuke
    Ok, I followed the steps for setting up Ruby and Rails on my Vista machine and I am having a problem connecting to the database. Contents of database.yml:

        development:
          adapter: sqlserver
          database: APPS_SETUP
          Host: WindowsVT06\SQLEXPRESS
          Username: se
          Password: paswd

    Running rake db:migrate from the myapp directory gives:

        rake aborted!
        no such file to load -- deprecated

    I have dbi 0.4.0 installed and have created the ADO folder in C:\Ruby\lib\ruby\site_ruby\1.8\DBD\ADO; I got the ado.rb from dbi 0.2.2. What else should I be looking at to fix the issue connecting to the database? Please don't tell me to use MySql or Sqlite or Postgres.

    UPDATE: I have installed the activerecord-sqlserver-adapter gem from --source=http://gems.rubyonrails.org. Still not working. I have verified that I can connect to the database by logging into SQL Management Studio with the credentials. rake db:migrate --trace:

        PS C:\Inetpub\wwwroot\myapp> rake db:migrate --trace
        (in C:/Inetpub/wwwroot/myapp)
        ** Invoke db:migrate (first_time)
        ** Invoke environment (first_time)
        ** Execute environment
        ** Execute db:migrate
        rake aborted!
        no such file to load -- deprecated
        C:/Ruby/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:27:in `gem_original_require'
        C:/Ruby/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:27:in `require'
        C:/Ruby/lib/ruby/gems/1.8/gems/activesupport-2.1.1/lib/active_support/dependencies.rb:510:in `require'
        C:/Ruby/lib/ruby/gems/1.8/gems/activesupport-2.1.1/lib/active_support/dependencies.rb:355:in `new_constants_in'
        C:/Ruby/lib/ruby/gems/1.8/gems/activesupport-2.1.1/lib/active_support/dependencies.rb:510:in `require'
        C:/Ruby/lib/ruby/site_ruby/1.8/dbi.rb:48
        C:/Ruby/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:27:in `gem_original_require'
        C:/Ruby/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:27:in `require'
        C:/Ruby/lib/ruby/gems/1.8/gems/activesupport-2.1.1/lib/active_support/dependencies.rb:510:in `require'
        C:/Ruby/lib/ruby/gems/1.8/gems/activesupport-2.1.1/lib/active_support/dependencies.rb:355:in `new_constants_in'
        C:/Ruby/lib/ruby/gems/1.8/gems/activesupport-2.1.1/lib/active_support/dependencies.rb:510:in `require'
        C:/Ruby/lib/ruby/gems/1.8/gems/activesupport-2.1.1/lib/active_support/core_ext/kernel/requires.rb:7:in `require_library_or_gem'
        C:/Ruby/lib/ruby/gems/1.8/gems/activesupport-2.1.1/lib/active_support/core_ext/kernel/reporting.rb:11:in `silence_warnings'
        C:/Ruby/lib/ruby/gems/1.8/gems/activesupport-2.1.1/lib/active_support/core_ext/kernel/requires.rb:5:in `require_library_or_gem'
        C:/Ruby/lib/ruby/gems/1.8/gems/activerecord-sqlserver-adapter-1.0.0.9250/lib/active_record/connection_adapters/sqlserver_adapter.rb:29:in `sqlserver_connection'
        C:/Ruby/lib/ruby/gems/1.8/gems/activerecord-2.1.1/lib/active_record/connection_adapters/abstract/connection_specification.rb:292:in `send'
        C:/Ruby/lib/ruby/gems/1.8/gems/activerecord-2.1.1/lib/active_record/connection_adapters/abstract/connection_specification.rb:292:in `connection='
        C:/Ruby/lib/ruby/gems/1.8/gems/activerecord-2.1.1/lib/active_record/connection_adapters/abstract/connection_specification.rb:260:in `retrieve_connection'
        C:/Ruby/lib/ruby/gems/1.8/gems/activerecord-2.1.1/lib/active_record/connection_adapters/abstract/connection_specification.rb:78:in `connection'
        C:/Ruby/lib/ruby/gems/1.8/gems/activerecord-2.1.1/lib/active_record/migration.rb:408:in `initialize'
        C:/Ruby/lib/ruby/gems/1.8/gems/activerecord-2.1.1/lib/active_record/migration.rb:373:in `new'
        C:/Ruby/lib/ruby/gems/1.8/gems/activerecord-2.1.1/lib/active_record/migration.rb:373:in `up'
        C:/Ruby/lib/ruby/gems/1.8/gems/activerecord-2.1.1/lib/active_record/migration.rb:356:in `migrate'
        C:/Ruby/lib/ruby/gems/1.8/gems/rails-2.1.1/lib/tasks/databases.rake:99
        C:/Ruby/lib/ruby/gems/1.8/gems/rake-0.8.2/lib/rake.rb:621:in `call'
        C:/Ruby/lib/ruby/gems/1.8/gems/rake-0.8.2/lib/rake.rb:621:in `execute'
        C:/Ruby/lib/ruby/gems/1.8/gems/rake-0.8.2/lib/rake.rb:616:in `each'
        C:/Ruby/lib/ruby/gems/1.8/gems/rake-0.8.2/lib/rake.rb:616:in `execute'
        C:/Ruby/lib/ruby/gems/1.8/gems/rake-0.8.2/lib/rake.rb:582:in `invoke_with_call_chain'
        C:/Ruby/lib/ruby/1.8/monitor.rb:242:in `synchronize'
        C:/Ruby/lib/ruby/gems/1.8/gems/rake-0.8.2/lib/rake.rb:575:in `invoke_with_call_chain'
        C:/Ruby/lib/ruby/gems/1.8/gems/rake-0.8.2/lib/rake.rb:568:in `invoke'
        C:/Ruby/lib/ruby/gems/1.8/gems/rake-0.8.2/lib/rake.rb:2031:in `invoke_task'
        C:/Ruby/lib/ruby/gems/1.8/gems/rake-0.8.2/lib/rake.rb:2009:in `top_level'
        C:/Ruby/lib/ruby/gems/1.8/gems/rake-0.8.2/lib/rake.rb:2009:in `each'
        C:/Ruby/lib/ruby/gems/1.8/gems/rake-0.8.2/lib/rake.rb:2009:in `top_level'
        C:/Ruby/lib/ruby/gems/1.8/gems/rake-0.8.2/lib/rake.rb:2048:in `standard_exception_handling'
        C:/Ruby/lib/ruby/gems/1.8/gems/rake-0.8.2/lib/rake.rb:2003:in `top_level'
        C:/Ruby/lib/ruby/gems/1.8/gems/rake-0.8.2/lib/rake.rb:1982:in `run'
        C:/Ruby/lib/ruby/gems/1.8/gems/rake-0.8.2/lib/rake.rb:2048:in `standard_exception_handling'
        C:/Ruby/lib/ruby/gems/1.8/gems/rake-0.8.2/lib/rake.rb:1979:in `run'
        C:/Ruby/lib/ruby/gems/1.8/gems/rake-0.8.2/bin/rake:31
        C:/Ruby/bin/rake:19:in `load'
        C:/Ruby/bin/rake:19
        PS C:\Inetpub\wwwroot\myapp>

  • SMO ConnectionContext.StatementTimeout setting is ignored

    - by Woody
    I am successfully using PowerShell with SMO to back up most databases. However, I have several large databases on which I receive a timeout error: "System.Data.SqlClient.SqlException: Timeout expired". The timeout consistently occurs at 10 minutes. I have tried setting ConnectionContext.StatementTimeout to 0, to 6000, and to [System.Int32]::MaxValue. The setting made no difference. I have found a number of Google references which indicate that setting it to 0 makes it unlimited. No matter what I try, the timeouts consistently occur at 10 minutes. I even set Remote Query Timeout on the server to 0 (via Studio Manager) to no avail. Below is my SMO connection, where I set the timeout, and the actual backup function. Further below is the output from my script.

    UPDATE: Interestingly enough, I wrote the backup function in C# using VS 2008 and the timeout override does work within that environment. I am in the process of incorporating that C# process into my PowerShell script until I can find out why the timeout override does not work with just PowerShell. This is extremely annoying!

        function New-SMOconnection {
            Param (
                $server,
                $ApplicationName = "PowerShell SMO",
                [int]$StatementTimeout = 0
            )
            # Write-Debug "Function: New-SMOconnection $server $connectionname $commandtimeout"
            if (test-path variable:\conn) {
                $conn.connectioncontext.disconnect()
            } else {
                $conn = New-Object('Microsoft.SqlServer.Management.Smo.Server') $server
            }
            $conn.connectioncontext.applicationName = $applicationName
            $conn.ConnectionContext.StatementTimeout = $StatementTimeout
            $conn.connectioncontext.Connect()
            $conn
        }

        $smo = New-SMOConnection -server $server
        if ($smo.connectioncontext.isopen -eq $false) {
            Throw "Could not connect to server $($server)."
        }

        Function Backup-Database {
            Param([string]$dbname)
            $db = $smo.Databases.get_Item($dbname)
            if (!$db) {"Database $dbname was not found"; Return}
            $sqldir = $smo.Settings.BackupDirectory + "\$($smo.name -replace ("\\", "$"))"
            $s = ($server.Split('\'))[0]
            $basedir = "\\$s\" + $($sqldir -replace (":", "$"))
            $dt = get-date -format yyyyMMdd-HHmmss
            $dbbk = new-object ('Microsoft.SqlServer.Management.Smo.Backup')
            $dbbk.Action = 'Database'
            $dbbk.BackupSetDescription = "Full backup of " + $dbname
            $dbbk.BackupSetName = $dbname + " Backup"
            $dbbk.Database = $dbname
            $dbbk.MediaDescription = "Disk"
            $target = "$basedir\$dbname\FULL"
            if (-not(Test-Path $target)) { New-Item $target -ItemType directory | Out-Null}
            $device = "$sqldir\$dbname\FULL\" + $($server -replace("\\", "$")) + "_" + $dbname + "_FULL_" + $dt + ".bak"
            $dbbk.Devices.AddDevice($device, 'File')
            $dbbk.Initialize = $True
            $dbbk.Incremental = $false
            $dbbk.LogTruncation = [Microsoft.SqlServer.Management.Smo.BackupTruncateLogType]::Truncate
            If (!$copyonly) {
                If ($kill) {$smo.KillAllProcesses($dbname)}
                $dbbk.SqlBackupAsync($server)
            }
            $dbbk
        }

    Output:

        Started SQL backups for server LCFSQLxxx\SQLxxx at 05/06/2010 15:33:16
        Statement TimeOut value set to 0.

        DatabaseName    : OperationsManagerDW
        StartBackupTime : 5/6/2010 3:33:16 PM
        EndBackupTime   : 5/6/2010 3:43:17 PM
        StartCopyTime   : 1/1/0001 12:00:00 AM
        EndCopyTime     : 1/1/0001 12:00:00 AM
        CopiedFiles     :
        Status          : Failed
        ErrorMessage    : System.Data.SqlClient.SqlException: Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding. The backup or restore was aborted. 10 percent processed. 20 percent processed. 30 percent processed. 40 percent processed. 50 percent processed. 60 percent processed. 70 percent processed.
           at System.Data.SqlClient.SqlConnection.OnError(SqlException exception, Boolean breakConnection)
           at System.Data.SqlClient.SqlInternalConnection.OnError(SqlException exception, Boolean breakConnection)
           at System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject stateObj)
           at System.Data.SqlClient.TdsParser.Run(RunBehavior runBehavior, SqlCommand cmdHandler, SqlDataReader dataStream, BulkCopySimpleResultSet bulkCopyHandler, TdsParserStateObject stateObj)
           at System.Data.SqlClient.SqlCommand.RunExecuteNonQueryTds(String methodName, Boolean async)
           at System.Data.SqlClient.SqlCommand.InternalExecuteNonQuery(DbAsyncResult result, String methodName, Boolean sendToPipe)
           at System.Data.SqlClient.SqlCommand.ExecuteNonQuery()
           at Microsoft.SqlServer.Management.Common.ServerConnection.ExecuteNonQuery(String sqlCommand, ExecutionTypes executionType)

        Ended backups at 05/06/2010 15:43:23
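
    As an aside, the server-level setting the poster changed through Studio Manager can also be set in T-SQL; note, though, that "remote query timeout" governs queries the server itself issues as a client (for example, against linked servers), so it would not be expected to cure a local backup timeout:

        EXEC sp_configure 'remote query timeout', 0;
        RECONFIGURE;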

  • Oracle performance problem

    - by jreid42
    We are using an Oracle 11G machine that is very powerful and has redundant storage, etc. It's a beast, from what I have been told. We just got this DB for a tool that had around 20 users when I first came on as a co-op; now it's upwards of 150 people. I am the only one working on it :(

    We currently have a system in place that distributes Perl scripts across our entire data center, essentially giving us a sort of "grid" computing power. The Perl scripts run a sort of simulation and report the results back to the database. They do selects and inserts. The load is not very high for each script, but it could be happening across 20-50 systems at the same time. We then have multiple data centers and users all hitting the same database with this same approach.

    Our main problem with this is that our database is getting overloaded with connections and having to drop some. We sometimes have upwards of 500 connections. These are old Perl scripts and they do not handle this well. Essentially they fail and the results are lost. I would rather avoid having to rewrite a lot of these, as they are poorly written and a headache to even look at. The database itself is not overloaded; just the connection overhead is too high. We open a connection, make a quick query, and then drop the connection. Very short connections, but many of them. The database team has basically said we need to lower the number of connections or they are going to ignore us.

    Because this is distributed across our farm, we can't implement persistent connections. I do this with our web server, but it's on a fixed system. The other ones are Perl scripts that get opened and closed by the distribution tool and thus aren't always running. What would be my best approach to resolving this issue? The scripts themselves can wait for a connection to be open; they do not need to act immediately. Some sort of queueing system?

    It has been suggested that I set up a few instances of a tool called "SQL Relay", maybe one in each data center. How reliable is this tool? How good is this approach? Would it work for what we need? We could have one for each data center and relay requests through it to our main database, keeping a pipeline of open persistent connections. Does this make sense? Are there any other suggestions or ideas you can offer? Any help would be greatly appreciated.

    Sadly, I am just a co-op student working for a very big company, and somehow all of this has landed on my shoulders (there is literally nobody to ask for help; it's a hardware company, everybody is a hardware engineer, and the database team is useless and in India), and I am quite lost as to what the best approach would be. I am extremely overworked, and this problem is interfering with ongoing progress and basically needs to be resolved as quickly as possible, preferably without rewriting the whole system, purchasing hardware (not gonna happen), or shooting myself in the foot. HELP LOL!
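
    To put numbers on the connection pressure before picking a pooling or relay solution, the standard Oracle dynamic views make it easy to see who is holding sessions (requires a suitably privileged account):

        -- current sessions grouped by the client program that opened them
        SELECT program, COUNT(*) AS session_count
        FROM   v$session
        GROUP  BY program
        ORDER  BY session_count DESC;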

  • NHibernate won't persist DateTime SqlDateTime overflow

    - by chris raethke
    I am working on an ASP.NET MVC project with NHibernate as the backend and am having some trouble getting some dates to write back to my SQL Server database tables. These date fields are NOT nullable, so the many answers here about how to set up nullable datetimes have not helped. Basically, when I try to save the entity, which has DateAdded and LastUpdated fields, I am getting a SqlDateTime overflow exception. I have had a similar problem in the past where I was trying to write a datetime field into a smalldatetime column; updating the type on the column appeared to fix the problem. My gut feeling is that it's going to be some problem with the table definition or some kind of incompatible data types, and the overflow exception is a bit of a bum steer. I have attached an example of the table definition and the query that NHibernate is trying to run. Any help or suggestions would be greatly appreciated.

        CREATE TABLE [dbo].[CustomPages](
            [ID] [uniqueidentifier] NOT NULL,
            [StoreID] [uniqueidentifier] NOT NULL,
            [DateAdded] [datetime] NOT NULL,
            [AddedByID] [uniqueidentifier] NOT NULL,
            [LastUpdated] [datetime] NOT NULL,
            [LastUpdatedByID] [uniqueidentifier] NOT NULL,
            [Title] [nvarchar](150) NOT NULL,
            [Term] [nvarchar](150) NOT NULL,
            [Content] [ntext] NULL
        )

        exec sp_executesql N'INSERT INTO CustomPages (Title, Term, Content, LastUpdated, DateAdded, StoreID, LastUpdatedById, AddedById, ID)
        VALUES (@p0, @p1, @p2, @p3, @p4, @p5, @p6, @p7, @p8)',
        N'@p0 nvarchar(21),@p1 nvarchar(21),@p2 nvarchar(33),@p3 datetime,@p4 datetime,@p5 uniqueidentifier,@p6 uniqueidentifier,@p7 uniqueidentifier,@p8 uniqueidentifier',
        @p0=N'Size and Colour Chart',
        @p1=N'size-and-colour-chart',
        @p2=N'This is the size and colour chart',
        @p3=''2009-03-14 14:29:37:000'',
        @p4=''2009-03-14 14:29:37:000'',
        @p5='48315F9F-0E00-4654-A2C0-62FB466E529D',
        @p6='1480221A-605A-4D72-B0E5-E1FE72C5D43C',
        @p7='1480221A-605A-4D72-B0E5-E1FE72C5D43C',
        @p8='1E421F9E-9A00-49CF-9180-DCD22FCE7F55'

    In response to the answers/comments: I am using Fluent NHibernate, and the generated mapping is below.

        public CustomPageMap()
        {
            WithTable("CustomPages");
            Id(x => x.ID, "ID")
                .WithUnsavedValue(Guid.Empty)
                .GeneratedBy.Guid();
            References(x => x.Store, "StoreID");
            Map(x => x.DateAdded, "DateAdded");
            References(x => x.AddedBy, "AddedById");
            Map(x => x.LastUpdated, "LastUpdated");
            References(x => x.LastUpdatedBy, "LastUpdatedById");
            Map(x => x.Title, "Title");
            Map(x => x.Term, "Term");
            Map(x => x.Content, "Content");
        }

        <?xml version="1.0" encoding="utf-8"?>
        <hibernate-mapping xmlns="urn:nhibernate-mapping-2.2" default-lazy="false" assembly="MyNamespace.Core" namespace="MyNamespace.Core">
          <class name="CustomPage" table="CustomPages" xmlns="urn:nhibernate-mapping-2.2">
            <id name="ID" column="ID" type="Guid" unsaved-value="00000000-0000-0000-0000-000000000000"><generator class="guid" /></id>
            <property name="Title" column="Title" length="100" type="String"><column name="Title" /></property>
            <property name="Term" column="Term" length="100" type="String"><column name="Term" /></property>
            <property name="Content" column="Content" length="100" type="String"><column name="Content" /></property>
            <property name="LastUpdated" column="LastUpdated" type="DateTime"><column name="LastUpdated" /></property>
            <property name="DateAdded" column="DateAdded" type="DateTime"><column name="DateAdded" /></property>
            <many-to-one name="Store" column="StoreID" />
            <many-to-one name="LastUpdatedBy" column="LastUpdatedById" />
            <many-to-one name="AddedBy" column="AddedById" />
          </class>
        </hibernate-mapping>
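
    For context (an assumption, not something the post confirms): the usual trigger for a SqlDateTime overflow is an unassigned .NET DateTime, which defaults to 0001-01-01 and falls below the floor of the T-SQL datetime type. The bounds are easy to check:

        -- datetime accepts 1753-01-01 through 9999-12-31
        SELECT CAST('1753-01-01' AS datetime) AS min_datetime,
               CAST('9999-12-31 23:59:59' AS datetime) AS max_datetime;

        -- uncommenting this line reproduces the out-of-range conversion error
        -- SELECT CAST('0001-01-01' AS datetime);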

  • LINQ Generic Query with inherited base class?

    - by sah302
    I am trying to write some generic LINQ queries for my entities, but am having issues doing the more complex things. Right now I am using an EntityDao class that has all my generics, and each of my object class DAOs (such as AccomplishmentDao) inherits it. An example:

        using LCFVB.ObjectsNS;
        using LCFVB.EntityNS;

        namespace AccomplishmentNS
        {
            public class AccomplishmentDao : EntityDao<Accomplishment>{}
        }

    Now my EntityDao has the following code:

        using LCFVB.ObjectsNS;
        using LCFVB.LinqDataContextNS;

        namespace EntityNS
        {
            public abstract class EntityDao<ImplementationType> where ImplementationType : Entity
            {
                public ImplementationType getOneByValueOfProperty(string getProperty, object getValue)
                {
                    ImplementationType entity = null;
                    if (getProperty != null && getValue != null)
                    {
                        //Nhibernate Example:
                        //ImplementationType entity = default(ImplementationType);
                        //entity = Me.session.CreateCriteria(Of ImplementationType)().Add(Expression.Eq(getProperty, getValue)).UniqueResult(Of InterfaceType)()
                        LCFDataContext lcfdatacontext = new LCFDataContext();

                        //Generic LINQ Query Here
                        lcfdatacontext.GetTable<ImplementationType>();

                        lcfdatacontext.SubmitChanges();
                        lcfdatacontext.Dispose();
                    }
                    return entity;
                }

                public bool insertRow(ImplementationType entity)
                {
                    if (entity != null)
                    {
                        //Nhibernate Example:
                        //Me.session.Save(entity, entity.Id)
                        //Me.session.Flush()
                        LCFDataContext lcfdatacontext = new LCFDataContext();

                        //Generic LINQ Query Here
                        lcfdatacontext.GetTable<ImplementationType>().InsertOnSubmit(entity);

                        lcfdatacontext.SubmitChanges();
                        lcfdatacontext.Dispose();
                        return true;
                    }
                    else
                    {
                        return false;
                    }
                }
            }
        }

    I have gotten the insertRow function working, but I am not even sure how to go about doing getOneByValueOfProperty. The closest thing I could find on this site was: http://stackoverflow.com/questions/2157560/generic-linq-to-sql-query

    How can I pass in the column name and the value I am checking against generically using my current set-up? It seems, from that link, like it's impossible using a where predicate, because the entity class doesn't know what any of the properties are until I pass them in.

    Lastly, I need some way of setting a new object as the return type, set to the implementation type. In NHibernate (which I am trying to convert from), it was simply this line that did it:

        ImplementationType entity = default(ImplementationType);

    However, default is an NHibernate command; how would I do this for LINQ?

    EDIT: getOne doesn't seem to work even when just going off the base class (this is a partial class of the auto-generated LINQ classes). I even removed the generics. I tried:

        namespace ObjectsNS
        {
            public partial class Accomplishment
            {
                public Accomplishment getOneByWhereClause(Expression<Action<Accomplishment, bool>> singleOrDefaultClause)
                {
                    Accomplishment entity = new Accomplishment();
                    if (singleOrDefaultClause != null)
                    {
                        LCFDataContext lcfdatacontext = new LCFDataContext();

                        //Generic LINQ Query Here
                        entity = lcfdatacontext.Accomplishments.SingleOrDefault(singleOrDefaultClause);

                        lcfdatacontext.Dispose();
                    }
                    return entity;
                }
            }
        }

    I get the following error:

        Error 1 Overload resolution failed because no accessible 'SingleOrDefault' can be called with these arguments:
        Extension method 'Public Function SingleOrDefault(predicate As System.Linq.Expressions.Expression(Of System.Func(Of Accomplishment, Boolean))) As Accomplishment' defined in 'System.Linq.Queryable':
        Value of type 'System.Action(Of System.Func(Of LCFVB.ObjectsNS.Accomplishment, Boolean))' cannot be converted to 'System.Linq.Expressions.Expression(Of System.Func(Of LCFVB.ObjectsNS.Accomplishment, Boolean))'.
        Extension method 'Public Function SingleOrDefault(predicate As System.Func(Of Accomplishment, Boolean)) As Accomplishment' defined in 'System.Linq.Enumerable':
        Value of type 'System.Action(Of System.Func(Of LCFVB.ObjectsNS.Accomplishment, Boolean))' cannot be converted to 'System.Func(Of LCFVB.ObjectsNS.Accomplishment, Boolean)'. 14 LCF

    Okay, no problem. I changed:

        public Accomplishment getOneByWhereClause(Expression<Action<Accomplishment, bool>> singleOrDefaultClause)

    to:

        public Accomplishment getOneByWhereClause(Expression<Func<Accomplishment, bool>> singleOrDefaultClause)

    The error goes away. Alright, but now when I try to call the method via:

        Accomplishment accomplishment = new Accomplishment();
        var result = accomplishment.getOneByWhereClause(x => x.Id = 4)

    it doesn't work; it says x is not declared. I also tried getOne, and various other Expression =(

  • Slow queries in Rails - not sure if my indexes are being used

    - by Max Williams
    I'm doing a quite complicated find with lots of includes, which Rails is splitting into a sequence of discrete queries rather than doing a single big join. The queries are really slow; my dataset isn't massive, with none of the tables having more than a few thousand records. I have indexed all of the fields which are examined in the queries, but I'm worried that the indexes aren't helping for some reason: I installed a plugin called "query_reviewer" which looks at the queries used to build a page and lists problems with them. It states that indexes AREN'T being used, and it features the results of calling 'explain' on the query, which lists various problems. Here's an example find call:

        Question.paginate(:all, {:page=>1,
          :include=>[:answers, :quizzes, :subject, {:taggings=>:tag}, {:gradings=>[:age_group, :difficulty]}],
          :conditions=>["((questions.subject_id = ?) or (questions.subject_id = ? and tags.name = ?))", "1", 19, "English"],
          :order=>"subjects.name, (gradings.difficulty_id is null), gradings.age_group_id, gradings.difficulty_id",
          :per_page=>30})

    And here are the generated SQL queries:

        SELECT DISTINCT `questions`.id
        FROM `questions`
        LEFT OUTER JOIN `taggings` ON `taggings`.taggable_id = `questions`.id AND `taggings`.taggable_type = 'Question'
        LEFT OUTER JOIN `tags` ON `tags`.id = `taggings`.tag_id
        LEFT OUTER JOIN `subjects` ON `subjects`.id = `questions`.subject_id
        LEFT OUTER JOIN `gradings` ON gradings.question_id = questions.id
        WHERE (((questions.subject_id = '1') or (questions.subject_id = 19 and tags.name = 'English')))
        ORDER BY subjects.name, (gradings.difficulty_id is null), gradings.age_group_id, gradings.difficulty_id
        LIMIT 0, 30

        SELECT `questions`.`id` AS t0_r0 <..etc...>
        FROM `questions`
        LEFT OUTER JOIN `answers` ON answers.question_id = questions.id
        LEFT OUTER JOIN `quiz_questions` ON (`questions`.`id` = `quiz_questions`.`question_id`)
        LEFT OUTER JOIN `quizzes` ON (`quizzes`.`id` = `quiz_questions`.`quiz_id`)
        LEFT OUTER JOIN `subjects` ON `subjects`.id = `questions`.subject_id
        LEFT OUTER JOIN `taggings` ON `taggings`.taggable_id = `questions`.id AND `taggings`.taggable_type = 'Question'
        LEFT OUTER JOIN `tags` ON `tags`.id = `taggings`.tag_id
        LEFT OUTER JOIN `gradings` ON gradings.question_id = questions.id
        LEFT OUTER JOIN `age_groups` ON `age_groups`.id = `gradings`.age_group_id
        LEFT OUTER JOIN `difficulties` ON `difficulties`.id = `gradings`.difficulty_id
        WHERE (((questions.subject_id = '1') or (questions.subject_id = 19 and tags.name = 'English')))
          AND `questions`.id IN (602, 634, 666, 698, 730, 762, 613, 645, 677, 709, 741, 592, 624, 656, 688, 720, 752, 603, 635, 667, 699, 731, 763, 614, 646, 678, 710, 742, 593, 625)
        ORDER BY subjects.name, (gradings.difficulty_id is null), gradings.age_group_id, gradings.difficulty_id

        SELECT count(DISTINCT `questions`.id) AS count_all
        FROM `questions`
        LEFT OUTER JOIN `answers` ON answers.question_id = questions.id
        LEFT OUTER JOIN `quiz_questions` ON (`questions`.`id` = `quiz_questions`.`question_id`)
        LEFT OUTER JOIN `quizzes` ON (`quizzes`.`id` = `quiz_questions`.`quiz_id`)
        LEFT OUTER JOIN `subjects` ON `subjects`.id = `questions`.subject_id
        LEFT OUTER JOIN `taggings` ON `taggings`.taggable_id = `questions`.id AND `taggings`.taggable_type = 'Question'
        LEFT OUTER JOIN `tags` ON `tags`.id = `taggings`.tag_id
        LEFT OUTER JOIN `gradings` ON gradings.question_id = questions.id
        LEFT OUTER JOIN `age_groups` ON `age_groups`.id = `gradings`.age_group_id
        LEFT OUTER JOIN `difficulties` ON `difficulties`.id = `gradings`.difficulty_id
        WHERE (((questions.subject_id = '1') or (questions.subject_id = 19 and tags.name = 'English')))

    Actually, looking at these all nicely formatted here, there's a crazy amount of joining going on. This can't be optimal, surely. Anyway, it looks like I have two questions.

    1) I have an index on each of the ids and foreign key fields referred to here. The second of the above queries is the slowest, and calling explain on it (directly in MySQL) gives me the following:

        | id | select_type | table          | type   | possible_keys                                                                    | key                                             | key_len | ref                                            | rows | Extra                                        |
        |  1 | SIMPLE      | questions      | range  | PRIMARY,index_questions_on_subject_id                                            | PRIMARY                                         | 4       | NULL                                           |   30 | Using where; Using temporary; Using filesort |
        |  1 | SIMPLE      | answers        | ref    | index_answers_on_question_id                                                     | index_answers_on_question_id                    | 5       | millionaire_development.questions.id           |    2 |                                              |
        |  1 | SIMPLE      | quiz_questions | ref    | index_quiz_questions_on_question_id                                              | index_quiz_questions_on_question_id             | 5       | millionaire_development.questions.id           |    1 |                                              |
        |  1 | SIMPLE      | quizzes        | eq_ref | PRIMARY                                                                          | PRIMARY                                         | 4       | millionaire_development.quiz_questions.quiz_id |    1 |                                              |
        |  1 | SIMPLE      | subjects       | eq_ref | PRIMARY                                                                          | PRIMARY                                         | 4       | millionaire_development.questions.subject_id   |    1 |                                              |
        |  1 | SIMPLE      | taggings       | ref    | index_taggings_on_taggable_id_and_taggable_type,index_taggings_on_taggable_type  | index_taggings_on_taggable_id_and_taggable_type | 263     | millionaire_development.questions.id,const     |    1 |                                              |
        |  1 | SIMPLE      | tags           | eq_ref | PRIMARY                                                                          | PRIMARY                                         | 4       | millionaire_development.taggings.tag_id        |    1 | Using where                                  |
        |  1 | SIMPLE      | gradings       | ref    | index_gradings_on_question_id                                                    | index_gradings_on_question_id                   | 5       | millionaire_development.questions.id           |    2 |                                              |
        |  1 | SIMPLE      | age_groups     | eq_ref | PRIMARY                                                                          | PRIMARY                                         | 4       | millionaire_development.gradings.age_group_id  |    1 |                                              |
        |  1 | SIMPLE      | difficulties   | eq_ref | PRIMARY                                                                          | PRIMARY                                         | 4       | millionaire_development.gradings.difficulty_id |    1 |                                              |

    The query_reviewer plugin has this to say about it; it lists several problems:

        Table questions: Using temporary table, Long key length (263), Using filesort
        MySQL must do an extra pass to find out how to retrieve the rows in sorted order.
        To resolve the query, MySQL needs to create a temporary table to hold the result.
        The key used for the index was rather long, potentially affecting indices in memory.

    2) It looks like Rails isn't splitting this find up in a very optimal way. Is it, do you think? Am I better off doing several find queries manually rather than one big combined one?

    Grateful for any advice, max
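
    One experiment worth trying (a sketch only, not verified against this schema): MySQL often handles an OR that spans two different access paths poorly, and rewriting the first query's condition as a UNION of two simpler queries can let each branch use its own index. Comparing EXPLAIN output for both forms would show whether it helps here.

        SELECT DISTINCT questions.id
        FROM questions
        WHERE questions.subject_id = 1
        UNION
        SELECT DISTINCT questions.id
        FROM questions
        JOIN taggings ON taggings.taggable_id = questions.id
                     AND taggings.taggable_type = 'Question'
        JOIN tags ON tags.id = taggings.tag_id
        WHERE questions.subject_id = 19
          AND tags.name = 'English';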

  • Accessing Oracle 6i and 9i/10g Databases using C#

    - by Mike M
    Hi all, I am making two build files using NAnt. The first aims to automatically compile Oracle 6i forms and reports, and the second aims to compile Oracle 9i/10g forms and reports. Within the NAnt task is a C# script which prompts the developer for database credentials (username, password, database) in order to compile the forms and reports. I want to then run these credentials against the relevant database to ensure the credentials entered are correct and, if they are not, prompt the user to re-enter them. My script currently looks as follows:

        class GetInput
        {
            public static void ScriptMain(Project project)
            {
                Console.Clear();
                Console.WriteLine("===================================================================");
                Console.WriteLine("Welcome to the Compile and Deploy Oracle Forms and Reports Facility");
                Console.WriteLine("===================================================================");
                Console.WriteLine();
                Console.WriteLine("Please enter the acronym of the project to work on from the following list:");
                Console.WriteLine();
                Console.WriteLine("--------");
                Console.WriteLine("- BCS");
                Console.WriteLine("- COPEN");
                Console.WriteLine("- FCDD");
                Console.WriteLine("--------");
                Console.WriteLine();
                Console.Write("Selection: ");
                project.Properties["project.type"] = Console.ReadLine();
                Console.WriteLine();
                Console.Write("Please enter username: ");
                string username = Console.ReadLine();
                project.Properties["username"] = username;
                string password = ReturnPassword();
                project.Properties["password"] = password;
                Console.WriteLine();
                Console.Write("Please enter database: ");
                string database = Console.ReadLine();
                project.Properties["database"] = database;
                Console.WriteLine();

                //Call method to verify user credentials

                Console.WriteLine();
                Console.WriteLine("Compiling files...");
            }

            public static string ReturnPassword()
            {
                Console.Write("Please enter password: ");
                string password = "";
                ConsoleKeyInfo nextKey = Console.ReadKey(true);
                while (nextKey.Key != ConsoleKey.Enter)
                {
                    if (nextKey.Key == ConsoleKey.Backspace)
                    {
                        if (password.Length > 0)
                        {
                            password = password.Substring(0, password.Length - 1);
                            Console.Write(nextKey.KeyChar);
                            Console.Write(" ");
                            Console.Write(nextKey.KeyChar);
                        }
                    }
                    else
                    {
                        password += nextKey.KeyChar;
                        Console.Write("*");
                    }
                    nextKey = Console.ReadKey(true);
                }
                return password;
            }
        }

    Having done a bit of research, I find that you can connect to Oracle databases using the System.Data.OracleClient namespace. However, Microsoft is discontinuing support for this, so it is not a desirable solution. I have also found that Oracle provides its own classes for connecting to Oracle databases. However, these only seem to support connecting to Oracle 9 or newer databases, so they are not a feasible solution either, as I also need to connect to Oracle 6i databases. I could achieve this by calling a bat script from within the C# script, but I would much prefer to have a single build file for simplicity. Ideally, I would like to run a series of commands such as is contained in the following .bat script:

        rem -- Set Database SID --
        set ORACLE_SID=%DBSID%

        sqlplus -s %nameofuser%/%password%@%dbsid%
        set cmdsep on
        set cmdsep '"'; --"
        set term on
        set echo off
        set heading off
        select '========================================' || CHR(10) ||
               'Have checked and found both Password and ' || chr(10) ||
               'Database Identifier are valid, continuing ...' || CHR(10) ||
               '========================================'
        from dual;
        exit;

    This requires me to set the ORACLE_SID environment variable and then run sqlplus in silent mode (-s), followed by a series of SQL set commands (set x), the actual select statement, and an exit command. Can I achieve this within a C# script without calling a bat script, or am I forced to call a bat script? Thanks in advance!

  • Data won't save to SQL database, getting error "close() was never explicitly called on database"

    - by SnowLeppard
    I have a save button in the activity where the user enters data, which does this:

        String subjectName = etName.getText().toString();
        String subjectColour = etColour.getText().toString();
        SQLDatabase entrySubject = new SQLDatabase(AddSubject.this);
        entrySubject.open();
        entrySubject.createSubjectEntry(subjectName, subjectColour);
        entrySubject.close();

    This refers to the following SQL database class:

        package com.***.schooltimetable;

        import android.content.ContentValues;
        import android.content.Context;
        import android.database.Cursor;
        import android.database.SQLException;
        import android.database.sqlite.SQLiteDatabase;
        import android.database.sqlite.SQLiteOpenHelper;

        public class SQLDatabase {
            public static final String KEY_SUBJECTS_ROWID = "_id";
            public static final String KEY_SUBJECTNAME = "name";
            public static final String KEY_COLOUR = "colour";

            private static final String DATABASE_NAME = "Database";
            private static final String DATABASE_TABLE_SUBJECTS = "tSubjects";
            private static final int DATABASE_VERSION = 1;

            private DbHelper ourHelper;
            private final Context ourContext;
            private SQLiteDatabase ourDatabase;

            private static class DbHelper extends SQLiteOpenHelper {

                public DbHelper(Context context) {
                    super(context, DATABASE_NAME, null, DATABASE_VERSION);
                    // TODO Auto-generated constructor stub
                }

                @Override
                public void onCreate(SQLiteDatabase db) {
                    // TODO Auto-generated method stub
                    db.execSQL("CREATE TABLE " + DATABASE_TABLE_SUBJECTS + " ("
                            + KEY_SUBJECTS_ROWID + " INTEGER PRIMARY KEY AUTOINCREMENT, "
                            + KEY_SUBJECTNAME + " TEXT NOT NULL, "
                            + KEY_COLOUR + " TEXT NOT NULL);");
                }

                @Override
                public void onUpgrade(SQLiteDatabase db, int oldVersion, int newVersion) {
                    // TODO Auto-generated method stub
                    db.execSQL("DROP TABLE IF EXISTS " + DATABASE_TABLE_SUBJECTS);
                    onCreate(db);
                }
            }

            public SQLDatabase(Context c) {
                ourContext = c;
            }

            public SQLDatabase open() throws SQLException {
                ourHelper = new DbHelper(ourContext);
                ourDatabase = ourHelper.getWritableDatabase();
                return this;
            }

            public void close() {
                ourHelper.close();
            }

            public long createSubjectEntry(String subjectName, String subjectColour) {
                // TODO Auto-generated method stub
                ContentValues cv = new ContentValues();
                cv.put(KEY_SUBJECTNAME, subjectName);
                cv.put(KEY_COLOUR, subjectColour);
                return ourDatabase.insert(DATABASE_TABLE_SUBJECTS, null, cv);
            }

            public String[][] getSubjects() {
                // TODO Auto-generated method stub
                String[] Columns = new String[] { KEY_SUBJECTNAME, KEY_COLOUR };
                Cursor c = ourDatabase.query(DATABASE_TABLE_SUBJECTS, Columns, null, null, null, null, null);
                String[][] Result = new String[1][];
                // int iRow = c.getColumnIndex(KEY_LESSONS_ROWID);
                int iName = c.getColumnIndex(KEY_SUBJECTNAME);
                int iColour = c.getColumnIndex(KEY_COLOUR);
                for (c.moveToFirst(); !c.isAfterLast(); c.moveToNext()) {
                    Result[0][c.getPosition()] = c.getString(iName);
                    Result[1][c.getPosition()] = c.getString(iColour);
                    Settings.subjectCount = c.getPosition();
                    TimetableEntry.subjectCount = c.getPosition();
                }
                return Result;
            }
        }

    This class has other variables and other variations of the same methods for multiple tables in the database; I've cut out the irrelevant ones. I'm not sure what I need to close and where; I've got the entrySubject.close() in my activity. I used the methods for the database from the NewBoston tutorials. Can anyone see what I've done wrong, or where my problem is? Thanks.

  • "Row not found or changed" Problem

    - by winston schröder
    Hi there, I'm working on a SQL CE database and am getting the "Row not found or changed" exception. The exception only occurs when I try to update. On the first run after the insert, it shows a MemberChangeConflict which says that my Created_at column has the same value in all three slots (current, original, database). But on a second attempt it doesn't appear anymore. The DataContext is instantiated on startup and freed on exit of my local(!) application. I use a sqlmetal-generated mapping and code file. In the map I added some associations and set the timestamp column's UpdateCheck property to Always, while all other columns have the setting Never. The timestamp column is marked as isVersion="true", the Id column as primary key. Since I don't dispose the DataContext, I expected to be using implicit transactions when I run the SubmitChanges method within a TransactionScope. Can anyone tell me how I can update the timestamp within the code?

    I know about the problems one has to deal with if you dispose the DataContext, so I decided not to do this, since I use a single-user local DB cache file. (I did already use a version where I disposed the DataContext after every usage, but that version had a really bad performance and error rate, so I chose the other variant.)

        LibDB.Client.Vehicles tmp = null;
        try
        {
            tmp = e.Parameter as LibDB.Client.Vehicles;
            if (tmp == null) return;
            if (!this._dc.Vehicles.Contains(tmp))
            {
                this._dc.Vehicles.Attach(tmp);
            }
            this.ShowChangesReport(this._dc.GetChangeSet());
            using (TransactionScope ts = new TransactionScope())
            {
                try
                {
                    this._dc.SubmitChanges();
                    ts.Complete();
                }
                catch (ChangeConflictException cce)
                {
                    Console.WriteLine("Optimistic concurrency error.");
                    Console.WriteLine(cce.Message);
                    Console.ReadLine();
                    foreach (ObjectChangeConflict occ in this._dc.ChangeConflicts)
                    {
                        MetaTable metatable = this._dc.Mapping.GetTable(occ.Object.GetType());
                        LibDB.Client.Vehicles entityInConflict = (LibDB.Client.Vehicles)occ.Object;
                        Console.WriteLine("Table name: {0}", metatable.TableName);
                        Console.Write("Vin: ");
                        Console.WriteLine(entityInConflict.Vin);
                        foreach (MemberChangeConflict mcc in occ.MemberConflicts)
                        {
                            object currVal = mcc.CurrentValue;
                            object origVal = mcc.OriginalValue;
                            object databaseVal = mcc.DatabaseValue;
                            MemberInfo mi = mcc.Member;
                            Console.WriteLine("Member: {0}", mi.Name);
                            Console.WriteLine("current value: {0}", currVal);
                            Console.WriteLine("original value: {0}", origVal);
                            Console.WriteLine("database value: {0}", databaseVal);
                        }
                        throw cce;
                    }
                }
                catch (Exception ex)
                {
                    this.ShowChangeConflicts(this._dc.ChangeConflicts);
                    Console.WriteLine(ex.Message);
                }
            }
            this.ShowChangesReport(this._dc.GetChangeSet());
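
    For what it's worth, a timestamp/rowversion column is maintained entirely by the engine; client code never writes it, it simply re-reads the new value after an update. A quick illustration (the table name is invented):

        CREATE TABLE Demo (Id INT PRIMARY KEY, Name NVARCHAR(50), Ver rowversion);
        INSERT INTO Demo (Id, Name) VALUES (1, 'a');
        SELECT Ver FROM Demo WHERE Id = 1;   -- initial version

        UPDATE Demo SET Name = 'b' WHERE Id = 1;
        SELECT Ver FROM Demo WHERE Id = 1;   -- the engine has advanced Ver automatically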

  • Compress file to bytes for uploading to SQL Server

    - by Chris
    I am trying to zip files to a SQL Server database table. I can't ensure that the user of the tool has write privileges on the source file folder, so I want to load the file into memory, compress it to an array of bytes, and insert it into my database. The code below does not work.

        class ZipFileToSql
        {
            public event MessageHandler Message;

            protected virtual void OnMessage(string msg)
            {
                if (Message != null)
                {
                    MessageHandlerEventArgs args = new MessageHandlerEventArgs();
                    args.Message = msg;
                    Message(this, args);
                }
            }

            private int sourceFileId;
            private SqlConnection Conn;
            private string PathToFile;
            private bool isExecuting;

            public bool IsExecuting
            {
                get { return isExecuting; }
            }

            public int SourceFileId
            {
                get { return sourceFileId; }
            }

            public ZipFileToSql(string pathToFile, SqlConnection conn)
            {
                isExecuting = false;
                PathToFile = pathToFile;
                Conn = conn;
            }

            public void Execute()
            {
                isExecuting = true;
                byte[] data;
                byte[] cmpData;
                //create temp zip file
                OnMessage("Reading file to memory");
                FileStream fs = File.OpenRead(PathToFile);
                data = new byte[fs.Length];
                ReadWholeArray(fs, data);
                OnMessage("Zipping file to memory");
                MemoryStream ms = new MemoryStream();
                GZipStream zip = new GZipStream(ms, CompressionMode.Compress, true);
                zip.Write(data, 0, data.Length);
                cmpData = new byte[ms.Length];
                ReadWholeArray(ms, cmpData);
                OnMessage("Saving file to database");
                using (SqlCommand cmd = Conn.CreateCommand())
                {
                    cmd.CommandText = @"MergeFileUploads";
                    cmd.CommandType = CommandType.StoredProcedure;
                    //cmd.Parameters.Add("@File", SqlDbType.VarBinary).Value = data;
                    cmd.Parameters.Add("@File", SqlDbType.VarBinary).Value = cmpData;
                    SqlParameter p = new SqlParameter();
                    p.ParameterName = "@SourceFileId";
                    p.Direction = ParameterDirection.Output;
                    p.SqlDbType = SqlDbType.Int;
                    cmd.Parameters.Add(p);
                    cmd.ExecuteNonQuery();
                    sourceFileId = (int)p.Value;
                }
                OnMessage("File Saved");
                isExecuting = false;
            }

            private void ReadWholeArray(Stream stream, byte[] data)
            {
                int offset = 0;
                int remaining = data.Length;
                float Step = data.Length / 100;
                float NextStep = data.Length - Step;
                while (remaining > 0)
                {
                    int read = stream.Read(data, offset, remaining);
                    if (read <= 0)
                        throw new EndOfStreamException(
                            String.Format("End of stream reached with {0} bytes left to read", remaining));
                    remaining -= read;
                    offset += read;
                    if (remaining < NextStep)
                    {
                        NextStep -= Step;
                    }
                }
            }
        }
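
    Two observations on the snippet, offered as likely culprits rather than a verified diagnosis: the GZipStream is never closed, so some compressed data may still be buffered, and ms is read from its current position (the end of the stream) without first rewinding via ms.Position = 0, so ReadWholeArray hits end-of-stream immediately. On the database side, a minimal sketch of the contract the code implies for the stored procedure (the target table and body here are my own assumption):

        CREATE PROCEDURE dbo.MergeFileUploads
            @File         VARBINARY(MAX),
            @SourceFileId INT OUTPUT
        AS
        BEGIN
            -- hypothetical target table with an IDENTITY key and a VARBINARY(MAX) payload
            INSERT INTO dbo.SourceFiles (FileData) VALUES (@File);
            SET @SourceFileId = SCOPE_IDENTITY();
        END;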

  • Performing Aggregate Functions on Multi-Million Row Tables

    - by Daniel Short
    I'm having some serious performance issues with a multi-million row table that I feel I should be able to get results from fairly quick. Here's a run down of what I have, how I'm querying it, and how long it's taking: I'm running SQL Server 2008 Standard, so Partitioning isn't currently an option I'm attempting to aggregate all views for all inventory for a specific account over the last 30 days. All views are stored in the following table: CREATE TABLE [dbo].[LogInvSearches_Daily]( [ID] [bigint] IDENTITY(1,1) NOT NULL, [Inv_ID] [int] NOT NULL, [Site_ID] [int] NOT NULL, [LogCount] [int] NOT NULL, [LogDay] [smalldatetime] NOT NULL, CONSTRAINT [PK_LogInvSearches_Daily] PRIMARY KEY CLUSTERED ( [ID] ASC )WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, FILLFACTOR = 90) ON [PRIMARY] ) ON [PRIMARY] This table has 132,000,000 records, and is over 4 gigs. A sample of 10 rows from the table: ID Inv_ID Site_ID LogCount LogDay -------------------- ----------- ----------- ----------- ----------------------- 1 486752 48 14 2009-07-21 00:00:00 2 119314 51 16 2009-07-21 00:00:00 3 313678 48 25 2009-07-21 00:00:00 4 298863 0 1 2009-07-21 00:00:00 5 119996 0 2 2009-07-21 00:00:00 6 463777 534 7 2009-07-21 00:00:00 7 339976 503 2 2009-07-21 00:00:00 8 333501 570 4 2009-07-21 00:00:00 9 453955 0 12 2009-07-21 00:00:00 10 443291 0 4 2009-07-21 00:00:00 (10 row(s) affected) I have the following index on LogInvSearches_Daily: /****** Object: Index [IX_LogInvSearches_Daily_LogDay] Script Date: 05/12/2010 11:08:22 ******/ CREATE NONCLUSTERED INDEX [IX_LogInvSearches_Daily_LogDay] ON [dbo].[LogInvSearches_Daily] ( [LogDay] ASC ) INCLUDE ( [Inv_ID], [LogCount]) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, IGNORE_DUP_KEY = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY] I need to pull inventory only from the Inventory for a specific account id. I have an index on the Inventory as well. I'm using the following query to aggregate the data and give me the top 5 records. 
    This query is currently taking 24 seconds to return the 5 rows:

        SELECT TOP 5
            Sum(LogCount) AS Views,
            DENSE_RANK() OVER(ORDER BY Sum(LogCount) DESC, Inv_ID DESC) AS Rank,
            Inv_ID
        FROM LogInvSearches_Daily D (NOLOCK)
        WHERE LogDay > DateAdd(d, -30, getdate())
          AND EXISTS( SELECT NULL
                      FROM propertyControlCenter.dbo.Inventory (NOLOCK)
                      WHERE Acct_ID = 18731
                        AND Inv_ID = D.Inv_ID )
        GROUP BY Inv_ID

        (1 row(s) affected)

        StmtText
        ----------------------------------------
        |--Top(TOP EXPRESSION:((5)))
        |--Sequence Project(DEFINE:([Expr1007]=dense_rank))
        |--Segment
        |--Segment
        |--Sort(ORDER BY:([Expr1006] DESC, [D].[Inv_ID] DESC))
        |--Stream Aggregate(GROUP BY:([D].[Inv_ID]) DEFINE:([Expr1006]=SUM([LOALogs].[dbo].[LogInvSearches_Daily].[LogCount] as [D].[LogCount])))
        |--Sort(ORDER BY:([D].[Inv_ID] ASC))
        |--Nested Loops(Inner Join, OUTER REFERENCES:([D].[Inv_ID]))
        |--Nested Loops(Inner Join, OUTER REFERENCES:([Expr1011], [Expr1012], [Expr1010]))
        | |--Compute Scalar(DEFINE:(([Expr1011],[Expr1012],[Expr1010])=GetRangeWithMismatchedTypes(dateadd(day,(-30),getdate()),NULL,(6))))
        | | |--Constant Scan
        | |--Index Seek(OBJECT:([LOALogs].[dbo].[LogInvSearches_Daily].[IX_LogInvSearches_Daily_LogDay] AS [D]), SEEK:([D].[LogDay] > [Expr1011] AND [D].[LogDay] < [Expr1012]) ORDERED FORWARD)
        |--Index Seek(OBJECT:([propertyControlCenter].[dbo].[Inventory].[IX_Inventory_Acct_ID]), SEEK:([propertyControlCenter].[dbo].[Inventory].[Acct_ID]=(18731) AND [propertyControlCenter].[dbo].[Inventory].[Inv_ID]=[LOA

        (13 row(s) affected)

    I tried using a CTE to pick up the rows first and aggregate them, but that didn't run any faster and gives me essentially the same execution plan.
        (1 row(s) affected)

        StmtText
        ----------------------------------------
        --SET SHOWPLAN_TEXT ON;
        WITH getSearches AS (
            SELECT LogCount
                -- , DENSE_RANK() OVER(ORDER BY Sum(LogCount) DESC, Inv_ID DESC) AS Rank
                , D.Inv_ID
            FROM LogInvSearches_Daily D (NOLOCK)
            INNER JOIN propertyControlCenter.dbo.Inventory I (NOLOCK)
                ON Acct_ID = 18731 AND I.Inv_ID = D.Inv_ID
            WHERE LogDay > DateAdd(d, -30, getdate())
            -- GROUP BY Inv_ID
        )
        SELECT Sum(LogCount) AS Views, Inv_ID
        FROM getSearches
        GROUP BY Inv_ID

        (1 row(s) affected)

        StmtText
        ----------------------------------------
        |--Stream Aggregate(GROUP BY:([D].[Inv_ID]) DEFINE:([Expr1004]=SUM([LOALogs].[dbo].[LogInvSearches_Daily].[LogCount] as [D].[LogCount])))
        |--Sort(ORDER BY:([D].[Inv_ID] ASC))
        |--Nested Loops(Inner Join, OUTER REFERENCES:([D].[Inv_ID]))
        |--Nested Loops(Inner Join, OUTER REFERENCES:([Expr1008], [Expr1009], [Expr1007]))
        | |--Compute Scalar(DEFINE:(([Expr1008],[Expr1009],[Expr1007])=GetRangeWithMismatchedTypes(dateadd(day,(-30),getdate()),NULL,(6))))
        | | |--Constant Scan
        | |--Index Seek(OBJECT:([LOALogs].[dbo].[LogInvSearches_Daily].[IX_LogInvSearches_Daily_LogDay] AS [D]), SEEK:([D].[LogDay] > [Expr1008] AND [D].[LogDay] < [Expr1009]) ORDERED FORWARD)
        |--Index Seek(OBJECT:([propertyControlCenter].[dbo].[Inventory].[IX_Inventory_Acct_ID] AS [I]), SEEK:([I].[Acct_ID]=(18731) AND [I].[Inv_ID]=[LOALogs].[dbo].[LogInvSearches_Daily].[Inv_ID] as [D].[Inv_ID]) ORDERED FORWARD)

        (8 row(s) affected)

        (1 row(s) affected)

    So given that I'm getting good index seeks in my execution plan, what can I do to get this running faster? Thanks, Dan
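
    One avenue the post doesn't try (an assumption on my part - it trades a little freshness for speed): since the data is already day-grained, a 30-day rollup can be precomputed so the per-request query touches at most one small row per Inv_ID instead of millions of detail rows. A sketch, reusing the table and column names from the post:

        -- Sketch: a rollup rebuilt once a day (e.g. from a SQL Agent job),
        -- keyed the way the hot query groups (Inv_ID first).
        CREATE TABLE dbo.LogInvSearches_30Day (
            Inv_ID int NOT NULL PRIMARY KEY,
            Views  int NOT NULL
        );

        TRUNCATE TABLE dbo.LogInvSearches_30Day;
        INSERT INTO dbo.LogInvSearches_30Day (Inv_ID, Views)
        SELECT Inv_ID, SUM(LogCount)
        FROM dbo.LogInvSearches_Daily
        WHERE LogDay > DATEADD(d, -30, GETDATE())
        GROUP BY Inv_ID;

    The TOP 5 query then joins this rollup to Inventory on Inv_ID, which avoids sorting and aggregating the detail rows on every request.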

    Read the article

  • How do I code this relationship in SQLAlchemy?

    - by Martin Del Vecchio
    I am new to SQLAlchemy (and SQL, for that matter). I can't figure out how to code the idea I have in my head. I am creating a database of performance-test results. A test run consists of a test type and a number (this is class TestRun below). A test suite consists of the version string of the software being tested, and one or more TestRun objects (this is class TestSuite below). A test version consists of all test suites with the given version name. Here is my code, as simple as I can make it:

        from sqlalchemy import *
        from sqlalchemy.ext.declarative import declarative_base
        from sqlalchemy.orm import relationship, backref, sessionmaker

        Base = declarative_base()

        class TestVersion(Base):
            __tablename__ = 'versions'
            id = Column(Integer, primary_key=True)
            version_name = Column(String)

            def __init__(self, version_name):
                self.version_name = version_name

        class TestRun(Base):
            __tablename__ = 'runs'
            id = Column(Integer, primary_key=True)
            suite_directory = Column(String, ForeignKey('suites.directory'))
            suite = relationship('TestSuite', backref=backref('runs', order_by=id))
            test_type = Column(String)
            rate = Column(Integer)

            def __init__(self, test_type, rate):
                self.test_type = test_type
                self.rate = rate

        class TestSuite(Base):
            __tablename__ = 'suites'
            directory = Column(String, primary_key=True)
            version_id = Column(Integer, ForeignKey('versions.id'))
            version_ref = relationship('TestVersion', backref=backref('suites', order_by=directory))
            version_name = Column(String)

            def __init__(self, directory, version_name):
                self.directory = directory
                self.version_name = version_name

        # Create a v1.0 suite
        suite1 = TestSuite('dir1', 'v1.0')
        suite1.runs.append(TestRun('test1', 100))
        suite1.runs.append(TestRun('test2', 200))

        # Create another v1.0 suite
        suite2 = TestSuite('dir2', 'v1.0')
        suite2.runs.append(TestRun('test1', 101))
        suite2.runs.append(TestRun('test2', 201))

        # Create another suite
        suite3 = TestSuite('dir3', 'v2.0')
        suite3.runs.append(TestRun('test1', 102))
        suite3.runs.append(TestRun('test2', 202))

        # Create the in-memory database
        engine = create_engine('sqlite://')
        Session = sessionmaker(bind=engine)
        session = Session()
        Base.metadata.create_all(engine)

        # Add the suites in
        version1 = TestVersion(suite1.version_name)
        version1.suites.append(suite1)
        session.add(suite1)

        version2 = TestVersion(suite2.version_name)
        version2.suites.append(suite2)
        session.add(suite2)

        version3 = TestVersion(suite3.version_name)
        version3.suites.append(suite3)
        session.add(suite3)

        session.commit()

        # Query the suites
        for suite in session.query(TestSuite).order_by(TestSuite.directory):
            print "\nSuite directory %s, version %s has %d test runs:" % (suite.directory, suite.version_name, len(suite.runs))
            for run in suite.runs:
                print "  Test '%s', result %d" % (run.test_type, run.rate)

        # Query the versions
        for version in session.query(TestVersion).order_by(TestVersion.version_name):
            print "\nVersion %s has %d test suites:" % (version.version_name, len(version.suites))
            for suite in version.suites:
                print "  Suite directory %s, version %s has %d test runs:" % (suite.directory, suite.version_name, len(suite.runs))
                for run in suite.runs:
                    print "    Test '%s', result %d" % (run.test_type, run.rate)

    The output of this program:

        Suite directory dir1, version v1.0 has 2 test runs:
          Test 'test1', result 100
          Test 'test2', result 200

        Suite directory dir2, version v1.0 has 2 test runs:
          Test 'test1', result 101
          Test 'test2', result 201

        Suite directory dir3, version v2.0 has 2 test runs:
          Test 'test1', result 102
          Test 'test2', result 202

        Version v1.0 has 1 test suites:
          Suite directory dir1, version v1.0 has 2 test runs:
            Test 'test1', result 100
            Test 'test2', result 200

        Version v1.0 has 1 test suites:
          Suite directory dir2, version v1.0 has 2 test runs:
            Test 'test1', result 101
            Test 'test2', result 201

        Version v2.0 has 1 test suites:
          Suite directory dir3, version v2.0 has 2 test runs:
            Test 'test1', result 102
            Test 'test2', result 202

    This is not correct, since there are two TestVersion objects with the name 'v1.0'. I hacked my way around this by adding a private list of TestVersion objects, and a function to find a matching one:

        versions = []

        def find_or_create_version(version_name):
            # Find existing
            for version in versions:
                if version.version_name == version_name:
                    return version
            # Create new
            version = TestVersion(version_name)
            versions.append(version)
            return version

    Then I modified my code that adds the records to use it:

        # Add the suites in
        version1 = find_or_create_version(suite1.version_name)
        version1.suites.append(suite1)
        session.add(suite1)

        version2 = find_or_create_version(suite2.version_name)
        version2.suites.append(suite2)
        session.add(suite2)

        version3 = find_or_create_version(suite3.version_name)
        version3.suites.append(suite3)
        session.add(suite3)

    Now the output is what I want:

        Suite directory dir1, version v1.0 has 2 test runs:
          Test 'test1', result 100
          Test 'test2', result 200

        Suite directory dir2, version v1.0 has 2 test runs:
          Test 'test1', result 101
          Test 'test2', result 201

        Suite directory dir3, version v2.0 has 2 test runs:
          Test 'test1', result 102
          Test 'test2', result 202

        Version v1.0 has 2 test suites:
          Suite directory dir1, version v1.0 has 2 test runs:
            Test 'test1', result 100
            Test 'test2', result 200
          Suite directory dir2, version v1.0 has 2 test runs:
            Test 'test1', result 101
            Test 'test2', result 201

        Version v2.0 has 1 test suites:
          Suite directory dir3, version v2.0 has 2 test runs:
            Test 'test1', result 102
            Test 'test2', result 202

    This feels wrong to me; it doesn't feel right that I am manually keeping track of the unique version names, and manually adding the suites to the appropriate TestVersion objects. Is this code even close to being correct? And what happens when I'm not building the entire database from scratch, as in this example? If the database already exists, do I have to query the database's TestVersion table to discover the unique version names? Thanks in advance. I know this is a lot of code to wade through, and I appreciate the help.
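
    For what it's worth, the usual way to avoid the hand-rolled versions list is to ask the session for an existing row before creating one. This also answers the "database already exists" case, since the lookup goes through the database. A sketch under the same model definitions (the helper name is my own):

        def get_or_create_version(session, version_name):
            # Check the database first; with autoflush on, pending (uncommitted)
            # TestVersion objects added via session.add() are found as well.
            version = session.query(TestVersion).filter_by(version_name=version_name).first()
            if version is None:
                version = TestVersion(version_name)
                session.add(version)
            return version

    The suite-adding code then becomes e.g. get_or_create_version(session, suite1.version_name).suites.append(suite1), with no module-level bookkeeping.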

    Read the article

  • MVC Persist Collection ViewModel (Update, Delete, Insert)

    - by Riccardo Bassilichi
    In order to create a more elegant solution, I'm curious to know your suggestions for persisting a collection. I have a collection stored in the DB. This collection goes to a webpage in a view model. When it comes back from the webpage to the controller, I need to persist the modified collection to the same DB. The simple solution is to delete the stored collection and recreate all rows. I need a more elegant solution that merges the collections: delete records that are no longer present, update matching records, and insert new rows. These are my models and view models:

        public class CustomerModel
        {
            public virtual string Id { get; set; }
            public virtual string Name { get; set; }
            public virtual IList<PreferredAirportModel> PreferedAirports { get; set; }
        }

        public class AirportModel
        {
            public virtual string Id { get; set; }
            public virtual string AirportName { get; set; }
        }

        public class PreferredAirportModel
        {
            public virtual AirportModel Airport { get; set; }
            public virtual int CheckInMinutes { get; set; }
        }

        // ViewModels
        public class CustomerViewModel
        {
            [Required]
            public virtual string Id { get; set; }
            public virtual string Name { get; set; }
            public virtual IList<PreferredAirporViewtModel> PreferedAirports { get; set; }
        }

        public class PreferredAirporViewtModel
        {
            [Required]
            public virtual string AirportId { get; set; }
            [Required]
            public virtual int CheckInMinutes { get; set; }
        }

    And this is the controller with the inelegant solution:

        public class CustomerController
        {
            public ActionResult Save(string id, CustomerViewModel viewModel)
            {
                var session = SessionFactory.CurrentSession;
                var customer = session.Query<CustomerModel>().SingleOrDefault(el => el.Id == id);
                customer.Name = viewModel.Name;

                // How can I merge collections, handling deletes, updates and inserts?
                var modifiedPreferedAirports = new List<PreferredAirportModel>();
                var modifiedPreferedAirportsVm = new List<PreferredAirporViewtModel>();

                // Update every common Airport
                foreach (var airport in viewModel.PreferedAirports)
                {
                    foreach (var custPa in customer.PreferedAirports)
                    {
                        if (custPa.Airport.Id == airport.AirportId)
                        {
                            modifiedPreferedAirports.Add(custPa);
                            modifiedPreferedAirportsVm.Add(airport);
                            custPa.CheckInMinutes = airport.CheckInMinutes;
                        }
                    }
                }

                // Remove common airports from ViewModel
                modifiedPreferedAirportsVm.ForEach(el => viewModel.PreferedAirports.Remove(el));

                // Remove deleted airports from model
                var toDelete = customer.PreferedAirports.Except(modifiedPreferedAirports);
                toDelete.ForEach(el => customer.PreferedAirports.Remove(el));

                // Add new Airports
                var toAdd = viewModel.PreferedAirports.Select(el => new PreferredAirportModel
                {
                    Airport = session.Query<AirportModel>().SingleOrDefault(a => a.Id == el.AirportId),
                    CheckInMinutes = el.CheckInMinutes
                });
                toAdd.ForEach(el => customer.PreferedAirports.Add(el));

                session.Save(customer);
                return View();
            }
        }

    My environment is ASP.NET MVC 4, NHibernate, AutoMapper, SQL Server. Thank you!!
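
    One common way to tidy this up (a sketch only - MergeCollections is a made-up helper, not part of any of the libraries above) is to key both sides by id once and diff the key sets, which also cuts the nested loops down to a single pass:

        // Sketch: diff an existing collection against incoming view-model items by key.
        // Requires System.Linq.
        static void MergeCollections<TModel, TVm, TKey>(
            IList<TModel> existing, IList<TVm> incoming,
            Func<TModel, TKey> modelKey, Func<TVm, TKey> vmKey,
            Action<TModel, TVm> update, Func<TVm, TModel> create)
        {
            var incomingByKey = incoming.ToDictionary(vmKey);

            // Update matches, delete leftovers (iterate a copy so we can remove)
            foreach (var model in existing.ToList())
            {
                TVm vm;
                if (incomingByKey.TryGetValue(modelKey(model), out vm))
                {
                    update(model, vm);
                    incomingByKey.Remove(modelKey(model)); // handled
                }
                else
                {
                    existing.Remove(model); // no longer present -> delete
                }
            }

            // Whatever is left in the dictionary is new
            foreach (var vm in incomingByKey.Values)
                existing.Add(create(vm));
        }

    Assuming the NHibernate mapping uses cascade="all-delete-orphan" on the collection, removing an item from customer.PreferedAirports is enough to delete its row on flush, so no explicit delete statements are needed.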

    Read the article

  • Is asp.net caching my sql results?

    - by Christian W
    I have the following method in an App_Code/Globals.cs file:

        public static XmlDataSource getXmlSourceFromOrgid(int orgid)
        {
            XmlDataSource xds = new XmlDataSource();
            var ctx = new SensusDataContext();
            SqlConnection c = new SqlConnection(ctx.Connection.ConnectionString);
            c.Open();
            SqlCommand cmd = new SqlCommand(String.Format("select orgid, tekst, dbo.GetOrgTreeXML({0}) as Subtree from tblOrg where OrgID = {0}", orgid), c);
            var rdr = cmd.ExecuteReader();
            rdr.Read();
            StringBuilder sb = new StringBuilder();
            sb.AppendFormat("<node orgid=\"{0}\" tekst=\"{1}\">", rdr.GetInt32(0), rdr.GetString(1));
            sb.Append(rdr.GetString(2));
            sb.Append("</node>");
            xds.Data = sb.ToString();
            xds.ID = "treedata";
            rdr.Close();
            c.Close();
            return xds;
        }

    This gives me an XML structure to use with the ASP.NET TreeView control (I also use the CssFriendly adapter to get nicer markup). My problem is that if I log on from my PC with a code that gives me access at a lower level in the tree hierarchy (it's an organization hierarchy), it somehow "remembers" what level I logged on at. So when my coworker tests from her computer with another code, giving access to another place in the tree, she gets the same tree as me. (The tree is supposed to show your own level and down.) I have added an HTML comment to show what orgid is passed to the function, and the orgid passed is correct. So either the TreeView caches something server-side, or the SQL query caches its result somehow... Any ideas? SQL function:

        ALTER function [dbo].[GetOrgTreeXML](@orgid int) returns XML
        begin
            RETURN (select org.orgid as '@orgid',
                           org.tekst as '@tekst',
                           [dbo].GetOrgTreeXML(org.orgid)
                    from tblOrg org
                    where (@orgid is null and Eier is null) or Eier=@orgid
                    for XML PATH('NODE'), TYPE)
        end

    Extra code as requested:

        int orgid = int.Parse(Session["org"].ToString());
        string orgname = context.Orgs.Where(q => q.OrgID == orgid).First().Tekst;
        debuglit.Text = String.Format("<!-- Id: {0} \n name: {1} -->", orgid, orgname);
        var orgxml = Globals.getXmlSourceFromOrgid(orgid);
        tvNavtree.DataSource = orgxml;
        tvNavtree.DataBind();

    where "debuglit" is an asp:Literal in the aspx file.

    EDIT: I have narrowed it down. All functions return correct values; it just doesn't bind to them. I suspected the CssFriendly adapter had something to do with it, but I disabled the CssFriendly adapter and the problem persists... Stepping through it in debug, everything is correct all the way; with the stepper standing on "tvNavtree.DataBind();" I can hover over tvNavtree.DataSource and see that it actually has the correct data. So something must be faulting in the binding process...
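
    One thing worth ruling out (an assumption on my part - the post doesn't show it being configured either way): XmlDataSource caches its parsed Data server-side. EnableCaching defaults to true, and since this method always gives the control the same ID ("treedata"), every user can end up served the first user's cached XML. A minimal change to test that theory:

        // Sketch: disable the XmlDataSource server-side cache so identical IDs
        // cannot hand one user another user's tree data.
        XmlDataSource xds = new XmlDataSource();
        xds.EnableCaching = false;
        xds.ID = "treedata";
        xds.Data = sb.ToString();

    If that fixes it, the alternative is to keep caching but give the control a per-orgid ID so the cache entries no longer collide.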

    Read the article

  • java.sql.SQLException: No suitable driver found for jdbc:db2:

    - by Celia
    I'm using Hibernate to connect to my DB2 database. I get java.sql.SQLException: No suitable driver found for jdbc:db2://ldild4268.mycompany.com:55000/myDB. I have db2jcc.jar, db2jcc_javax.jar, db2jcc_license_cu.jar, db2policy.jar, db2ggjava.jar and db2umplugin.jar added to my Java Build Path. I am able to connect to my database through SQuirreL. database.properties:

        jdbc.driverClassName=com.ibm.db2.jcc.DB2Driver
        jdbc.url=jdbc:db2://ldild4268.mycompany.com:55000/myDB
        jdbc.username=uname
        jdbc.password=pwd

    datasource.xml:

        <bean class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
            <property name="location">
                <value>/WEB-INF/database.properties</value>
            </property>
        </bean>

        <bean id="dataSource" class="org.springframework.jdbc.datasource.DriverManagerDataSource">
            <property name="driverClassName" value="${jdbc.driverClassName}" />
            <property name="url" value="${jdbc.url}" />
            <property name="username" value="${jdbc.username}" />
            <property name="password" value="${jdbc.password}" />
        </bean>

    hibernate.xml:

        <bean id="sessionFactory" class="org.springframework.orm.hibernate3.LocalSessionFactoryBean">
            <property name="dataSource">
                <ref bean="dataSource" />
            </property>
            <property name="hibernateProperties">
                <props>
                    <prop key="hibernate.dialect">org.hibernate.dialect.DB2Dialect</prop>
                    <prop key="hibernate.show_sql">true</prop>
                </props>
            </property>
            <property name="mappingResources">
                <list>
                    <value>/myModel.hbm.xml</value>
                </list>
            </property>
        </bean>

    myModel.hbm.xml:

        <hibernate-mapping>
            <class name="com.myCompany.model.myModel" table="table1" catalog="">
                <composite-id>
                    <key-property name="key1" column="key1" length="10"/>
                    <key-property name="key2" column="key2" length="19"/>
                </composite-id>
                <property name="name" type="string">
                    <column name="Name" length="50"/>
                </property>
            </class>
        </hibernate-mapping>

    myModelDaoImpl:

        @Repository("myModelDao")
        public class myModelDaoImpl extends PortfolioHibernateDaoSupport implements myModelDao {

            private SessionFactory sessionFactory;

            public List<Date> getKey1() {
                return this.sessionFactory.getCurrentSession()
                    .createQuery("select pn.key1 from com.myCompany.model.myModel pn")
                    .list();
            }

            public String getPs() {
                String query = "select pn.name from com.myCompany.model.myModel pn where pn.key1='2011-09-30' and pn.key2=1049764";
                List list = getHibernateTemplate().find(query);
            }
        }

    Also, the method getKey1 throws a NullPointerException. How can I use createQuery instead of HibernateTemplate? Thanks in advance!
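
    "No suitable driver" usually means the driver class never got loaded at runtime: an IDE "Java Build Path" entry is not the same as the deployed webapp's classpath (the jars must end up in WEB-INF/lib or the container's lib directory). A minimal standalone check, reusing the URL and credentials from database.properties (an illustration, not the poster's code):

        import java.sql.Connection;
        import java.sql.DriverManager;

        public class Db2Check {
            public static void main(String[] args) throws Exception {
                // ClassNotFoundException here => db2jcc.jar is not on the runtime classpath
                Class.forName("com.ibm.db2.jcc.DB2Driver");
                Connection c = DriverManager.getConnection(
                        "jdbc:db2://ldild4268.mycompany.com:55000/myDB", "uname", "pwd");
                System.out.println("Connected: " + c.getMetaData().getDatabaseProductName());
                c.close();
            }
        }

    Separately, the NullPointerException in getKey1 is consistent with the private sessionFactory field never being injected - nothing in the snippet wires it, whereas getHibernateTemplate() gets its factory from the superclass. That's an observation from the code shown, not something the post confirms.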

    Read the article

  • Unable to execute native sql query

    - by Renjith
    I am developing an application with Spring and Hibernate. In the DAO class, I was trying to execute a native SQL query as follows:

        SELECT * FROM product ORDER BY unitprice ASC LIMIT 6 OFFSET 0

    But the system throws an exception:

        org.hibernate.HibernateException: No Hibernate Session bound to thread, and configuration does not allow creation of non-transactional one here
            org.springframework.orm.hibernate3.SpringSessionContext.currentSession(SpringSessionContext.java:63)
            org.hibernate.impl.SessionFactoryImpl.getCurrentSession(SessionFactoryImpl.java:544)
            com.dao.ProductDAO.listProducts(ProductDAO.java:15)
            com.dataobjects.impl.ProductDoImpl.listProducts(ProductDoImpl.java:26)
            com.action.ProductAction.showProducts(ProductAction.java:53)
            sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)

    application-context.xml is shown below:

        <bean id="propertyConfigurer"
              class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer"
              p:location="/WEB-INF/jdbc.properties" />

        <bean id="dataSource"
              class="org.springframework.jdbc.datasource.DriverManagerDataSource"
              p:driverClassName="${jdbc.driverClassName}"
              p:url="${jdbc.url}"
              p:username="${jdbc.username}"
              p:password="${jdbc.password}" />

        <!-- Hibernate SessionFactory -->
        <!-- class="org.springframework.orm.hibernate3.LocalSessionFactoryBean"-->
        <bean id="sessionFactory" class="org.springframework.orm.hibernate3.LocalSessionFactoryBean">
            <property name="dataSource">
                <ref local="dataSource"/>
            </property>
            <property name="configLocation">
                <value>WEB-INF/classes/hibernate.cfg.xml</value>
            </property>
            <property name="configurationClass">
                <value>org.hibernate.cfg.AnnotationConfiguration</value>
            </property>
            <!--
            <property name="annotatedClasses">
                <list>
                    <value>com.pojo.Product</value>
                    <value>com.pojo.User</value>
                    <value>com.pojo.UserLogin</value>
                </list>
            </property>
            -->
            <property name="hibernateProperties">
                <props>
                    <prop key="hibernate.dialect">${hibernate.dialect}</prop>
                    <prop key="hibernate.show_sql">true</prop>
                </props>
            </property>
        </bean>

        <!-- User Bean definitions -->
        <bean name="/logincheck" class="com.action.LoginAction">
            <property name="userDo" ref="userDo" />
        </bean>
        <bean id="userDo" class="com.dataobjects.impl.UserDoImpl">
            <property name="userDAO" ref="userDAO" />
        </bean>
        <bean id="userDAO" class="com.dao.UserDAO">
            <property name="sessionFactory" ref="sessionFactory" />
        </bean>
        <bean name="/listproducts" class="com.action.ProductAction">
            <property name="productDo" ref="productDo" />
        </bean>
        <bean id="productDo" class="com.dataobjects.impl.ProductDoImpl">
            <property name="productDAO" ref="productDAO" />
        </bean>
        <bean id="productDAO" class="com.dao.ProductDAO">
            <property name="sessionFactory" ref="sessionFactory" />
        </bean>

    And the DAO class is:

        public class ProductDAO extends HibernateDaoSupport {

            public List listProducts(int startIndex, int incrementor) {
                org.hibernate.Session session = getHibernateTemplate().getSessionFactory().getCurrentSession();
                String queryString = "SELECT * FROM product ORDER BY unitprice ASC LIMIT 6 OFFSET 0";
                List list = null;
                try {
                    session.beginTransaction();
                    org.hibernate.Query query = session.createQuery(queryString);
                    list = query.list();
                    session.getTransaction().commit();
                } catch(Exception e) {
                    e.printStackTrace();
                } finally {
                    session.close();
                }
                return list;
            }

            public List getProductCount() {
                String queryString = "SELECT COUNT(*) FROM Product";
                return getHibernateTemplate().find(queryString);
            }
        }

    Any thoughts on how to fix it?
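
    Two things stand out (both my reading of the fragments, not confirmed by the post): getCurrentSession() only works when a session is bound to the thread, which needs Spring-managed transactions or a hibernate.current_session_context_class setting; and a raw SELECT * is native SQL, so it needs createSQLQuery() rather than createQuery(), which parses HQL. Since the DAO already extends HibernateDaoSupport, one way to sidestep both is to let the template manage the session (a sketch; Product is assumed to be the mapped com.pojo.Product entity):

        public List listProducts(int startIndex, int incrementor) {
            return getHibernateTemplate().executeFind(new HibernateCallback() {
                public Object doInHibernate(org.hibernate.Session session)
                        throws org.hibernate.HibernateException {
                    // createSQLQuery, not createQuery: this statement is native SQL
                    return session
                        .createSQLQuery("SELECT * FROM product ORDER BY unitprice ASC LIMIT 6 OFFSET 0")
                        .addEntity(Product.class)
                        .list();
                }
            });
        }

    The template opens/closes the session itself, so the explicit beginTransaction/commit/close calls go away as well.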

    Read the article

  • JDBC Lock a row using SELECT FOR UPDATE, doesn't work

    - by Rachid
    I am having issues with MySQL's SELECT .. FOR UPDATE. Here is the query I am trying to run:

        SELECT * FROM tableName WHERE HostName='UnknownHost'
        ORDER BY UpdateTimestamp asc limit 1 FOR UPDATE

    After this, the thread concerned will do an UPDATE and change the HostName, which should then unlock the row. I am running a multi-threaded Java application, so three threads are running this SQL statement, but when thread 1 runs it, it doesn't lock its results from threads 2 & 3. Therefore threads 2 & 3 get the same results and could update the same row. Also, each thread is on its own MySQL connection. I'm using InnoDB with transaction-isolation = READ-COMMITTED, and autocommit is off before executing the select for update. Am I missing something? Or perhaps there is a better solution? Thanks a lot. Code:

        public BasicJDBCDemo() {
            Le_Thread newThread1 = new Le_Thread();
            Le_Thread newThread2 = new Le_Thread();
            newThread1.start();
            newThread2.start();
        }

    Thread:

        class Le_Thread extends Thread {
            public void run() {
                String name = Thread.currentThread().getName();
                System.out.println(name + ": Debut.");
                long oid = Util.doSelectLockTest(name);
                Util.doUpdateTest(oid, name);
            }
        }

    Select:

        public static long doSelectLockTest(String threadName) {
            System.out.println("[OUTPUT FROM SELECT Lock ]...threadName=" + threadName);
            PreparedStatement pst = null;
            ResultSet rs = null;
            Connection conn = null;
            long oid = 0;
            try {
                String query = "SELECT * FROM table WHERE Host=? ORDER BY Timestamp asc limit 1 FOR UPDATE";
                conn = getNewConnection();
                pst = conn.prepareStatement(query);
                pst.setString(1, DbProperties.UnknownHost);
                System.out.println("pst=" + threadName + "__" + pst);
                rs = pst.executeQuery();
                if (rs.first()) {
                    String s = rs.getString("HostName");
                    oid = rs.getLong("OID");
                    System.out.println("oid_oldest/host/threadName==" + oid + "/" + s + "/" + threadName);
                }
            } catch (SQLException ex) {
                ex.printStackTrace();
            } finally {
                DBUtil.close(pst);
                DBUtil.close(rs);
                DBUtil.close(conn);
            }
            return oid;
        }

    Please help. Result:

        Thread-1: Debut.
        Thread-2: Debut.
        [OUTPUT FROM SELECT Lock ]...threadName=Thread-1
        New connection..
        [OUTPUT FROM SELECT Lock ]...threadName=Thread-2
        New connection..
        pst=Thread-2: SELECT * FROM b2biCheckPoint WHERE HostName='UnknownHost' ORDER BY UpdateTimestamp asc limit 1 FOR UPDATE
        pst=Thread-1: SELECT * FROM b2biCheckPoint WHERE HostName='UnknownHost' ORDER BY UpdateTimestamp asc limit 1 FOR UPDATE
        oid_oldest/host/threadName==1/UnknownHost/Thread-2
        oid_oldest/host/threadName==1/UnknownHost/Thread-1
        [Performing UPDATE] ... oid = 1, thread=Thread-2
        New connection..
        [Performing UPDATE] ... oid = 1, thread=Thread-1
        pst_threadname=Thread-2: UPDATE b2bicheckpoint SET HostName='1_host_Thread-2',UpdateTimestamp=1294940161838 where OID = 1
        New connection..
        pst_threadname=Thread-1: UPDATE b2bicheckpoint SET HostName='1_host_Thread-1',UpdateTimestamp=1294940161853 where OID = 1
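
    One concrete problem is visible in the code itself: doSelectLockTest() closes its connection in the finally block (and the output shows "New connection.." again before each UPDATE), and closing the connection ends the transaction and releases the FOR UPDATE row lock before doUpdateTest() ever runs. The select, the update and the commit have to happen in one transaction on one connection, roughly like this (a sketch reusing getNewConnection() and the table/column names from the output):

        Connection conn = getNewConnection();
        try {
            conn.setAutoCommit(false); // hold the row lock until commit
            PreparedStatement sel = conn.prepareStatement(
                "SELECT OID FROM b2biCheckPoint WHERE HostName = ? " +
                "ORDER BY UpdateTimestamp ASC LIMIT 1 FOR UPDATE");
            sel.setString(1, "UnknownHost");
            ResultSet rs = sel.executeQuery();
            if (rs.next()) {
                long oid = rs.getLong("OID");
                PreparedStatement upd = conn.prepareStatement(
                    "UPDATE b2biCheckPoint SET HostName = ?, UpdateTimestamp = ? WHERE OID = ?");
                upd.setString(1, threadName + "_host");
                upd.setLong(2, System.currentTimeMillis());
                upd.setLong(3, oid);
                upd.executeUpdate();
            }
            conn.commit(); // lock released here, after the update
        } catch (SQLException e) {
            try { conn.rollback(); } catch (SQLException ignored) {}
        } finally {
            try { conn.close(); } catch (SQLException ignored) {}
        }

    With that in place, thread 2's identical SELECT ... FOR UPDATE blocks until thread 1 commits, and then no longer matches the row because the HostName has changed.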

    Read the article

  • Normalizing a table

    - by Alex
    I have a legacy table, which I can't change. The values in it can be modified from a legacy application (the application also can't be changed). Due to a lot of access to the table from a new application (new requirement), I'd like to create a temporary table, which would hopefully speed up the queries. The actual requirement is to calculate the number of business days from X to Y; for example, give me all business days from Jan 1st 2001 until Dec 24th 2004. (The table is used to mark which days are off, as different companies may have different days off; it isn't just Saturday + Sunday.) The temporary table would be created from a .NET program each time the user enters the screen for this query (the user may run the query multiple times with different values; the table is created once), so I'd like it to be as fast as possible. The approach below runs in under a second, but I only tested it with a small dataset, and it still takes close to half a second, which isn't great for UI, even though it's just the overhead for the first query. The legacy table looks like this:

        CREATE TABLE [business_days](
            [country_code] [char](3),
            [state_code] [varchar](4),
            [calendar_year] [int],
            [calendar_month] [varchar](31),
            [calendar_month2] [varchar](31),
            [calendar_month3] [varchar](31),
            [calendar_month4] [varchar](31),
            [calendar_month5] [varchar](31),
            [calendar_month6] [varchar](31),
            [calendar_month7] [varchar](31),
            [calendar_month8] [varchar](31),
            [calendar_month9] [varchar](31),
            [calendar_month10] [varchar](31),
            [calendar_month11] [varchar](31),
            [calendar_month12] [varchar](31),
            misc.
        )

    Each month has 31 characters, and any day off (Saturday + Sunday + holiday) is marked with X. Each half day is marked with an 'H'. For example, if a month starts on a Thursday, then it will look like this (Thursday + Friday workdays, Saturday + Sunday marked with X):

        '  XX  XX ..'

    I'd like the new table to look like this:

        create table #Temp (country varchar(3), state varchar(4), date datetime, hours int)

    And I'd like it to only have rows for days which are off (marked with X or H in the legacy table). What I ended up doing so far is this: I create a temporary intermediate table, which looks like this:

        create table #Temp_2 (country_code varchar(3), state_code varchar(4), calendar_year int, calendar_month varchar(31), month_code int)

    To populate it, I have a union which basically unions calendar_month, calendar_month2, calendar_month3, etc. Then I have a loop which goes through all the rows in #Temp_2; after each row is processed, it is removed from #Temp_2. To process a row, there is a loop from 1 to 31, and substring(calendar_month, counter, 1) is checked for either X or H, in which case there is an insert into the #Temp table.
    [edit: added code]

        Declare @country_code char(3)
        Declare @state_code varchar(4)
        Declare @calendar_year int
        Declare @calendar_month varchar(31)
        Declare @month_code int
        Declare @calendar_date datetime
        Declare @day_code int

        WHILE EXISTS(SELECT * From #Temp_2) -- where processed = 0
        BEGIN
            Select Top 1
                @country_code = t2.country_code,
                @state_code = t2.state_code,
                @calendar_year = t2.calendar_year,
                @calendar_month = t2.calendar_month,
                @month_code = t2.month_code
            From #Temp_2 t2 -- where processed = 0

            set @day_code = 1
            while @day_code <= 31
            begin
                if substring(@calendar_month, @day_code, 1) = 'X'
                begin
                    set @calendar_date = convert(datetime, (cast(@month_code as varchar) + '/' + cast(@day_code as varchar) + '/' + cast(@calendar_year as varchar)))
                    insert into #Temp (country, state, date, hours)
                    values (@country_code, @state_code, @calendar_date, 8)
                end
                if substring(@calendar_month, @day_code, 1) = 'H'
                begin
                    set @calendar_date = convert(datetime, (cast(@month_code as varchar) + '/' + cast(@day_code as varchar) + '/' + cast(@calendar_year as varchar)))
                    insert into #Temp (country, state, date, hours)
                    values (@country_code, @state_code, @calendar_date, 4)
                end
                set @day_code = @day_code + 1
            end

            delete from #Temp_2
            where @country_code = country_code
              AND @state_code = state_code
              AND @calendar_year = calendar_year
              AND @calendar_month = calendar_month
              AND @month_code = month_code

            --update #Temp_2 set processed = 1
            --where @country_code = country_code AND @state_code = state_code
            --  AND @calendar_year = calendar_year AND @calendar_month = calendar_month
            --  AND @month_code = month_code
        END

    I am not an expert in SQL, so I'd like to get some input on my approach, and maybe even a much better suggestion. After having the temp table, I'm planning to do this (the dates would be coming from a table):

        select cast(convert(datetime, ('01/31/2012'), 101) - convert(datetime, ('01/17/2012'), 101) as int)
             - ((select sum(hours) from #Temp
                 where date between convert(datetime, ('01/17/2012'), 101)
                                and convert(datetime, ('01/31/2012'), 101)) / 8)

    Besides the solution of normalizing the table, the other solution I implemented for now is a function which does all this logic of getting the business days by scanning the current table. It runs pretty fast, but I'm hesitant to call a function if I can instead add a simpler query to get the result. (I'm currently trying this on MSSQL, but I would need to do the same for Sybase ASE and Oracle.)
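
    A set-based alternative to the row-by-row loop (a sketch; it reuses the #Temp/#Temp_2 layouts from above, and the master..spt_values numbers source is SQL Server-specific, so Sybase ASE and Oracle would need their own 1-31 numbers table) is to join each month string against the numbers 1 to 31 and let SUBSTRING do the unpivoting in one statement:

        -- Numbers 1..31, built once (any tally-table technique works here)
        CREATE TABLE #Nums (n int PRIMARY KEY);
        INSERT INTO #Nums (n)
        SELECT number FROM master..spt_values
        WHERE type = 'P' AND number BETWEEN 1 AND 31;

        -- One pass: every (row, position) pair whose character is X or H
        INSERT INTO #Temp (country, state, date, hours)
        SELECT t2.country_code,
               t2.state_code,
               CONVERT(datetime,
                       CAST(t2.month_code AS varchar) + '/' +
                       CAST(nums.n AS varchar) + '/' +
                       CAST(t2.calendar_year AS varchar)),
               CASE SUBSTRING(t2.calendar_month, nums.n, 1) WHEN 'X' THEN 8 ELSE 4 END
        FROM #Temp_2 t2
        JOIN #Nums nums
          ON SUBSTRING(t2.calendar_month, nums.n, 1) IN ('X', 'H');

    Positions past the end of a short month come back as neither 'X' nor 'H', so they are filtered out just as they are in the loop, and no per-row DELETE pass is needed.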

    Read the article

  • referencing part of the composite primary key

    - by Zavael
    I have problems with setting up a reference on a database table. I have the following structure:

        CREATE TABLE club(
            id INTEGER NOT NULL,
            name_short VARCHAR(30),
            name_full VARCHAR(70) NOT NULL
        );
        CREATE UNIQUE INDEX club_uix ON club(id);
        ALTER TABLE club ADD CONSTRAINT club_pk PRIMARY KEY (id);

        CREATE TABLE team(
            id INTEGER NOT NULL,
            club_id INTEGER NOT NULL,
            team_name VARCHAR(30)
        );
        CREATE UNIQUE INDEX team_uix ON team(id, club_id);
        ALTER TABLE team ADD CONSTRAINT team_pk PRIMARY KEY (id, club_id);
        ALTER TABLE team ADD FOREIGN KEY (club_id) REFERENCES club(id);

        CREATE TABLE person(
            id INTEGER NOT NULL,
            first_name VARCHAR(20),
            last_name VARCHAR(20) NOT NULL
        );
        CREATE UNIQUE INDEX person_uix ON person(id);
        ALTER TABLE person ADD PRIMARY KEY (id);

        CREATE TABLE contract(
            person_id INTEGER NOT NULL,
            club_id INTEGER NOT NULL,
            wage INTEGER
        );
        CREATE UNIQUE INDEX contract_uix on contract(person_id);
        ALTER TABLE contract ADD CONSTRAINT contract_pk PRIMARY KEY (person_id);
        ALTER TABLE contract ADD FOREIGN KEY (club_id) REFERENCES club(id);
        ALTER TABLE contract ADD FOREIGN KEY (person_id) REFERENCES person(id);

        CREATE TABLE player(
            person_id INTEGER NOT NULL,
            team_id INTEGER,
            height SMALLINT,
            weight SMALLINT
        );
        CREATE UNIQUE INDEX player_uix on player(person_id);
        ALTER TABLE player ADD CONSTRAINT player_pk PRIMARY KEY (person_id);
        ALTER TABLE player ADD FOREIGN KEY (person_id) REFERENCES person(id);
        -- ALTER TABLE player ADD FOREIGN KEY (team_id) REFERENCES team(id); --this is not working

    It gives me this error:

        Error code -5529, SQL state 42529: a UNIQUE constraint does not exist on referenced columns: TEAM
        in statement [ALTER TABLE player ADD FOREIGN KEY (team_id) REFERENCES team(id)]

    As you can see, the team table has a composite primary key (club_id + id), and person references club through contract. Person has some common attributes for players and other staff types. One club can have multiple teams. An employed person has to have a contract with a club. A player is a specialization of person and, if employed, can be assigned to one of the club's teams. Is there a better way to design my structure? I thought about excluding club_id from team's primary key, but I would like to know if this is the only way. Thanks.

    UPDATE 1: I would like id to identify a team only within its club, so multiple teams can have equal ids as long as they belong to different clubs. Is that possible?

    UPDATE 2: I updated the naming convention as advised by Philip. Some business rules to better understand the structure:

    One club can have 1..n teams (Main squad, Reserve squad, Youth squad, or Team A, Team B... only a team can play a match, not a club). One team belongs to one club only. A player is a type of person (other types (staff) are scouts, coaches etc., so they do not need to belong to a specific team, just to the club, if employed). A person can have 0..1 contracts with 1 club (that means he is employed or unemployed). A player (if employed) belongs to one team of the club.

    Now thinking about it: moving team_id from player to contract would solve my problem, and it could hold the condition "Player (if employed) belongs to one team of the club", but it would be redundant for other staff types. What do you think?
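
    Given Update 1 (team ids are only unique within a club), the usual fix is to carry club_id on the player row as well, so the foreign key can cover the whole composite key. A sketch against the DDL above (the added column name is my choice):

        -- Player carries both halves of team's composite key, so the pair
        -- can reference team(id, club_id) directly.
        ALTER TABLE player ADD club_id INTEGER;
        ALTER TABLE player ADD FOREIGN KEY (team_id, club_id) REFERENCES team(id, club_id);

    The alternative is what the question already hints at: make team.id globally unique (drop club_id from the primary key, keeping a plain foreign key and a UNIQUE constraint on (id, club_id) if needed), after which the original single-column foreign key works unchanged.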

    Read the article

< Previous Page | 585 586 587 588 589 590 591 592 593 594 595 596  | Next Page >