Search Results

Search found 30375 results on 1215 pages for 'database smell'.


  • OpenVPN: log connecting client IPs

    - by TossUser
    I'm looking for the best way to log the IP address of every client that connects to my OpenVPN server, to either a text file or a database. By IP I mean the public WAN address on the internet that the client is connecting from. A hack would be to make the OpenVPN server log to a separate logfile and run logtail periodically to extract the necessary information. The table I want to build would look like:
      Client_Name | Client_IP   | Connection_date
      roadwarr1   | 72.84.99.11 | 03/04/14 - 22:44:00 Sat
    Please don't recommend the commercial OpenVPN Access Server; that's not a real solution here. If the disconnection time could be determined as well, that would be even better, so I could see how long a client was connected and from where! Thank you
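    To make it concrete, this is roughly the client-connect/client-disconnect hook I had in mind, sketched in Python (the environment variable names are the ones OpenVPN passes to such scripts, but treat them and the SQLite path as assumptions to verify):

      #!/usr/bin/env python
      # Sketch of an OpenVPN --client-connect / --client-disconnect hook.
      # OpenVPN exports client details as environment variables; the exact set
      # depends on the OpenVPN version, so these names are assumptions.
      import os
      import sqlite3
      import sys
      import time

      DB_PATH = "/var/log/openvpn/connections.db"  # placeholder path

      def main(event):
          conn = sqlite3.connect(DB_PATH)
          conn.execute("""CREATE TABLE IF NOT EXISTS connections (
                              client_name  TEXT,
                              client_ip    TEXT,
                              connected    TEXT,
                              disconnected TEXT)""")
          name = os.environ.get("common_name", "unknown")
          wan_ip = os.environ.get("trusted_ip", "unknown")  # public address the client came from
          now = time.strftime("%Y-%m-%d %H:%M:%S")
          if event == "connect":
              conn.execute("INSERT INTO connections VALUES (?, ?, ?, NULL)",
                           (name, wan_ip, now))
          else:
              # mark the most recent open session for this client as closed
              conn.execute("""UPDATE connections SET disconnected = ?
                              WHERE rowid = (SELECT MAX(rowid) FROM connections
                                             WHERE client_name = ? AND disconnected IS NULL)""",
                           (now, name))
          conn.commit()
          conn.close()

      if __name__ == "__main__":
          main(sys.argv[1] if len(sys.argv) > 1 else "connect")

    The idea would be to wire it up with script-security 2 plus client-connect and client-disconnect directives pointing at this script with "connect"/"disconnect" as the first argument - untested, so corrections welcome.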

    Read the article

  • Search desktop files using a list of keywords stored in a text file

    - by Tod1d
    I have a list of 1285 keywords (database object names) compiled into a TXT file, one keyword per line. I would like to search a directory of files (most have a .aspx or .cs extension) using this list of keywords. My main goal is to find out which of the 1285 database objects are being referenced in these files. Can anyone recommend a tool that could accomplish this? Otherwise, I will just create my own. Thanks.
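    For reference, this is about the level of tool I'd write myself if nothing ready-made exists - a quick Python sketch (the keyword file name, search root and extensions below are placeholders):

      # Report which keywords from keywords.txt appear in any .aspx/.cs file under SEARCH_ROOT.
      import os

      KEYWORD_FILE = "keywords.txt"          # one database object name per line
      SEARCH_ROOT = r"C:\projects\mysite"    # placeholder directory
      EXTENSIONS = (".aspx", ".cs")

      with open(KEYWORD_FILE) as f:
          keywords = [line.strip() for line in f if line.strip()]

      found = {}  # keyword -> list of files that reference it
      for dirpath, _dirnames, filenames in os.walk(SEARCH_ROOT):
          for name in filenames:
              if not name.lower().endswith(EXTENSIONS):
                  continue
              path = os.path.join(dirpath, name)
              with open(path, errors="ignore") as src:
                  text = src.read()
              for kw in keywords:
                  if kw in text:
                      found.setdefault(kw, []).append(path)

      for kw in keywords:
          print("%s: %d file(s)" % (kw, len(found.get(kw, []))))

    Case sensitivity and one object name being a substring of another would still need handling, which is why an existing tool would be preferable.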

    Read the article

  • PHP/Oracle Connectivity randomly "drops out"

    - by user20555
    Hi! Here's the current situation - I have two web servers (for now named A and B) and two database servers (named C and D). The web servers are quite old and are running an early version of Apache 2 + PHP4, while the DB servers are running Oracle 9i and 10g respectively. We're experiencing a strange problem connecting (via PHP code) to one of the database servers, but only from web server B. Web server A has no issues at all... Randomly, web server B will report a "Not connected to Oracle" error (ORA-3114). I can't see a real pattern to this, but refreshing a few times seems to fix the issue. Apparently there are no drop-outs on the network interface, which leads me to believe there's some misconfiguration between PHP/Apache and Oracle (which uses connection pooling). We're running SunOS 5.8... Any ideas?

    Read the article

  • Suspicious activity in access logs - someone trying to find phpmyadmin dir - should I worry?

    - by undefined
    I was looking over the access logs for a server that we are running on Amazon Web Services. I noticed that someone was obviously trying to find the phpMyAdmin directory - they (or a bot) were trying different paths, e.g. admin/phpmyadmin/, db_admin, ... and the list goes on. There isn't actually a database on this server, so this was not a problem and they were never going to find it, but should I be worried about such snooping? Is this just a really basic attempt at getting into our system? Our actual database is held on another managed server, which I assume is protected from such intrusions. What are your views on such sneaky activity?
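    For context, this is roughly how I pulled those requests out of the logs - a throwaway Python sketch (the log path and the probe list are just examples):

      # Count requests that look like admin-panel probing in an Apache-style access log.
      import re

      LOG_FILE = "/var/log/httpd/access_log"   # placeholder path
      PROBE_PATTERNS = ["phpmyadmin", "pma", "db_admin", "dbadmin", "myadmin"]

      hits = {}
      with open(LOG_FILE, errors="ignore") as log:
          for line in log:
              match = re.search(r'"(?:GET|POST) ([^ ]+)', line)
              if not match:
                  continue
              path = match.group(1).lower()
              if any(p in path for p in PROBE_PATTERNS):
                  ip = line.split(" ", 1)[0]
                  hits[ip] = hits.get(ip, 0) + 1

      for ip, count in sorted(hits.items(), key=lambda kv: -kv[1]):
          print(ip, count)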

    Read the article

  • Realtime website slowness (PHP/MYSQL)

    - by 3s2ng
    We run a website with realtime transactions. For the past few days we have experienced slowness during certain periods, usually lasting about 5 minutes. During the slowness we also noticed that sometimes we cannot connect to SSH and FTP, or those connections are very slow. We are currently trying to identify the issue. We have already set up a database monitoring tool and are about to sign up with pingdom.com. My questions are: if the website is slow because of the database (table or row locks), will that also affect other services like SSH and FTP? And does ping correlate with page load time for the connection between my PC and the server? Thanks, Mark
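    Until pingdom is in place, I'm thinking of running something like the following from an outside machine, to see whether page-load time and SSH reachability degrade at the same moments (a rough Python sketch; the URL and host are placeholders):

      # Log HTTP response time and SSH TCP connect time side by side, once a minute.
      import socket
      import time
      import urllib.request

      URL = "http://www.example.com/"            # placeholder
      HOST, SSH_PORT = "www.example.com", 22     # placeholder

      while True:
          start = time.time()
          try:
              urllib.request.urlopen(URL, timeout=30).read()
              http_ms = (time.time() - start) * 1000
          except Exception:
              http_ms = -1  # request failed or timed out

          start = time.time()
          try:
              socket.create_connection((HOST, SSH_PORT), timeout=10).close()
              ssh_ms = (time.time() - start) * 1000
          except Exception:
              ssh_ms = -1

          print("%s  http=%.0fms  ssh=%.0fms" % (time.strftime("%H:%M:%S"), http_ms, ssh_ms))
          time.sleep(60)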

    Read the article

  • Windows Service stops on Server Patch

    - by Carel
    I'm a developer with a Windows service that runs on a production server; it sends emails that are entered into a database on a separate database server. Although the service is set to start automatically, whenever the web server gets patched (which happens every other week) the service fails to start and various emails don't get sent. I don't have access to the server myself, so I have to ask a build administrator to start the service. What I want to know is whether there is any reason for the service to fail to start when the server is patched.

    Read the article

  • Need help configuring NAT

    - by QuinnFTW
    First of all, the router I am using is a Cisco WRVS4400N. My company runs software which manages the MySQL database of all of their products. The software now has an e-commerce module, so I have to set up a secure tunnel from our network to the server that will be hosting our e-commerce site, so that when the database is updated, the site will also be updated. The technician completing the job says there is an IP conflict and has asked me to NAT 192.168.0.0/24 to 192.168.115.0/24. I am not really sure how to do this, and they want to charge $150 an hour to do it for me. Can anyone help?

    Read the article

  • We have a Solaris 9 server running Oracle 10G and have been getting memory consumption errors for a few weeks now

    - by another_netadmin
    We recently upgraded our enterprise application and everything worked fine until a weekend server reboot; ever since then we have been running into memory errors. The server has 4GB of physical memory installed and the kernel parameters (/etc/system) are set to the following. I'm not an Oracle guy, so I'm not sure where to start looking, but any information is greatly appreciated. Thanks in advance. There are two databases running on this server: one is a production database and the other is a pre-production database.
      [root@bandb /]# cat /etc/system | grep seminfo
      set semsys:seminfo_semmni=100
      set semsys:seminfo_semmns=2048
      set semsys:seminfo_semmsl=400
      set semsys:seminfo_semopm=100
      set semsys:seminfo_semvmx=32767
      [root@bandb /]# cat /etc/system | grep shminfo
      set shmsys:shminfo_shmmax=4294967295
      set shmsys:shminfo_shmmin=1
      set shmsys:shminfo_shmmni=100
      set shmsys:shminfo_shmseg=10
      [root@bandb /]#

    Read the article

  • Spring security and MySQL under CentOS

    - by user223268
    I'm trying to connect to MySQL using Spring Security; Spring should access the database and check the user and password using direct SQL. The problem is that when I use localhost to access my local database, nothing happens - no exceptions, nothing - but the login fails. If I change the host to the IP address of one of my team's machines, the login succeeds. The only difference is that I'm using CentOS 6.5 and my team is using Windows. How can I make sure I'm configuring MySQL correctly, and what privileges should I grant to my users to be able to finish this? Note: I'm a newcomer to Linux and MySQL server administration.
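    To take Spring out of the equation first, I was going to verify the account itself with a small Python check (just a sketch; it assumes the mysql-connector-python package, and the user, password and schema names below are placeholders):

      # Try the same credentials over TCP (127.0.0.1) and see exactly what MySQL reports.
      import mysql.connector

      try:
          conn = mysql.connector.connect(
              host="127.0.0.1",        # force TCP instead of the local socket
              user="springuser",       # placeholder
              password="secret",       # placeholder
              database="security_db",  # placeholder schema holding the users/authorities tables
          )
          cur = conn.cursor()
          cur.execute("SELECT CURRENT_USER()")
          print("Connected as:", cur.fetchone()[0])
          cur.execute("SHOW GRANTS")
          for (grant,) in cur.fetchall():
              print(grant)
          conn.close()
      except mysql.connector.Error as err:
          print("Connection failed:", err)

    If this connects but Spring still fails against localhost, my guess would be that the MySQL user was only created for some other host (e.g. 'springuser'@'<team-IP>') and not for 'localhost'/'127.0.0.1', but I'd like confirmation.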

    Read the article

  • Should Production Windows Web Servers (IIS & SQL) be in a domain?

    - by tlianza
    We have a few web servers and a few database servers. To date, they've been standalone machines that are not part of a domain. The web servers don't talk to each other, and the web servers talk to the database servers via SQL auth. My concerns with putting the machines in a domain together were: added complexity - it's one more "thing" running, and doing "things" that could go wrong; and risk - if a domain controller fails, am I now putting other machines at risk? However, in certain scenarios it does seem convenient for them to be on a domain, sharing credentials. For example, if I want to give the "Services" control on one machine access to another machine (because Remote Desktop craps out), I need to go in and assign privileges on multiple machines - something that I believe Active Directory and domain accounts are meant to simplify. My question: I'm sure there are things I'm not considering here. Is there a best practice?

    Read the article

  • sqlserver.exe uses 100% CPU

    - by Markus
    I've created an application (ASP.NET) that syncs an entire database from XML files once a day. The sync first creates a transaction, then clears the database's tables, and then starts parsing and inserting the new rows into the database. When all the parsing is complete it commits the transaction. This works fine on SQL Server 2005 (on another machine), but on SQL Server 2005 Express the process starts to use 100% CPU after a while, and as I log the inserts being made I can see that it just stops inserting. No exception, it just stops inserting. Anyone got any idea what this may be? I've previously run the synchronization against another SQL Server 2005 Express instance (also on another computer), and that worked. The server has only 2GB RAM - could this be the problem?
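    For what it's worth, the sync is logically equivalent to the sketch below (written in Python/pyodbc purely to illustrate the shape of it - the real code is ASP.NET, and the connection string, table and XML layout are placeholders). One thing I'm considering is committing in batches like this instead of holding one huge transaction open, since Express is more constrained:

      # Illustration of the nightly sync: clear the table, then insert every row
      # parsed from the XML file, committing in batches rather than one big transaction.
      import pyodbc
      import xml.etree.ElementTree as ET

      BATCH_SIZE = 1000
      conn = pyodbc.connect("DRIVER={SQL Server};SERVER=.\\SQLEXPRESS;"
                            "DATABASE=SyncDb;Trusted_Connection=yes")  # placeholder
      cur = conn.cursor()

      cur.execute("DELETE FROM Products")  # placeholder table
      conn.commit()

      pending = 0
      for item in ET.parse("products.xml").getroot().iter("product"):  # placeholder file/element
          cur.execute("INSERT INTO Products (Id, Name) VALUES (?, ?)",
                      item.get("id"), item.findtext("name"))
          pending += 1
          if pending >= BATCH_SIZE:
              conn.commit()   # keep the transaction (and log growth) small
              pending = 0

      conn.commit()
      conn.close()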

    Read the article

  • I am trying to link my PHP form to MySQL but am having difficulties

    - by user1912599
    I am not sure what I am doing wrong as far as my php goes but I can't get my form to link with my sql. Here are the codes for my form and php code for my link to sql <?php echo displayform(); function displayForm() { $r = ''; //build it $r .='<form action="database.php" method="post">'; //table $r .=displayNiceFormBegin(); $r .=displayRow('FirstName:', '<input type="text" name="fname" id="fname"/>'); $r .=displayRow('LastName:', '<input type="text" name="lname" id="lname"/>'); $r .=displayRow('Address:', '<input type="text" name="address" id ="address"/>'); $r .=displayRow('Phone:', '<input type="text" name="phone" id ="phone"/>'); $r .=displayRow('Deparment:', '<input type="text" name="department"id="department"/>'); $r .=displayRow('', '<input type="submit" value="Submit Registration" />'); $r .=displayNiceFormEnd(); $r .='</form>'; return $r; } function displayRow($left, $right) { $r .= ''; //build it $r .='<tr>'; $r .= '<td>' . $left . '</td>'; $r .= '<td>' . $right . '</td>'; $r .='</tr>'; return $r; } function displayNiceFormBegin(){ $r .=''; //build it $r .= '<table style="background-color: beige; border: 1px dashed #999"><tr><td>'; $r .='<table style="margin:10px">'; return $r; } function displayNiceFormENd() { $r .=''; //build it $r .='</table>'; $r .='</td></tr><table>'; return $r; } ?> <?php $host="localhost"; // Host name $username="695788_ogems"; // Mysql username $password="opd69715"; // Mysql password $db_name="ottawaglandorfems_zzl_ogems"; // Database name $tbl_name=".*"; // Table name // Connect to server and select database. mysql_connect("$host", "$username", "$password")or die("cannot connect"); mysql_select_db("$db_name")or die("cannot select DB"); // Get values from form $fname=$_POST['fname']; $lname=$_POST['lname']; $address=$_POST['address']; $phone=$_POST['phone']; $department=$_POST['deparment']; // Insert data into mysql $sql="INSERT INTO $tbl_name(FirstName,LastName,Address,Phone,Department)VALUES('$fname', '$lname', '$address','$phone','$deparment')"; $result=mysql_query($sql); // if successfully insert data into database, displays message "Successful". if($result){ echo "Successful"; echo "<BR>"; echo "<a href='ottawa-glandorfems.org/form3.php'>Back to main page</a>"; } else { echo "ERROR"; } ?> <?php // close connection mysql_close(); ?> I keep getting an error. Thank you!!!!

    Read the article

  • SMO ConnectionContext.StatementTimeout setting is ignored

    - by Woody
    I am successfully using Powershell with SMO to backup most databases. However, I have several large databases in which I receive a "timeout" error "System.Data.SqlClient.SqlException: Timeout expired". The timout consistently occurs at 10 minutes. I have tried setting ConnectionContext.StatementTimeout to 0, 6000, and to [System.Int32]::MaxValue. The setting made no difference. I have found a number of Google references which indicate setting it to 0 makes it unlimited. No matter what I try, the timeouts consistently occur at 10 minutes. I even set Remote Query Timeout on the server to 0 (via Studio Manager) to no avail. Below is my SMO connection where I set the time out and the actual backup function. Further below is the output from my script. UPDATE Interestingly enough, I wrote the backup function in C# using VS 2008 and the timeout override does work within that environment. I am in the process of incorporating that C# process into my Powershell Script until I can find out why the timeout override does not work with just Powershell. This is extremely annoying! function New-SMOconnection { Param ($server, $ApplicationName= "PowerShell SMO", [int]$StatementTimeout = 0 ) # Write-Debug "Function: New-SMOconnection $server $connectionname $commandtimeout" if (test-path variable:\conn) { $conn.connectioncontext.disconnect() } else { $conn = New-Object('Microsoft.SqlServer.Management.Smo.Server') $server } $conn.connectioncontext.applicationName = $applicationName $conn.ConnectionContext.StatementTimeout = $StatementTimeout $conn.connectioncontext.Connect() $conn } $smo = New-SMOConnection -server $server if ($smo.connectioncontext.isopen -eq $false) { Throw "Could not connect to server $($server)." } Function Backup-Database { Param([string]$dbname) $db = $smo.Databases.get_Item($dbname) if (!$db) {"Database $dbname was not found"; Return} $sqldir = $smo.Settings.BackupDirectory + "\$($smo.name -replace ("\\", "$"))" $s = ($server.Split('\'))[0] $basedir = "\\$s\" + $($sqldir -replace (":", "$")) $dt = get-date -format yyyyMMdd-HHmmss $dbbk = new-object ('Microsoft.SqlServer.Management.Smo.Backup') $dbbk.Action = 'Database' $dbbk.BackupSetDescription = "Full backup of " + $dbname $dbbk.BackupSetName = $dbname + " Backup" $dbbk.Database = $dbname $dbbk.MediaDescription = "Disk" $target = "$basedir\$dbname\FULL" if (-not(Test-Path $target)) { New-Item $target -ItemType directory | Out-Null} $device = "$sqldir\$dbname\FULL\" + $($server -replace("\\", "$")) + "_" + $dbname + "_FULL_" + $dt + ".bak" $dbbk.Devices.AddDevice($device, 'File') $dbbk.Initialize = $True $dbbk.Incremental = $false $dbbk.LogTruncation = [Microsoft.SqlServer.Management.Smo.BackupTruncateLogType]::Truncate If (!$copyonly) { If ($kill) {$smo.KillAllProcesses($dbname)} $dbbk.SqlBackupAsync($server) } $dbbk } Started SQL backups for server LCFSQLxxx\SQLxxx at 05/06/2010 15:33:16 Statement TimeOut value set to 0. DatabaseName : OperationsManagerDW StartBackupTime : 5/6/2010 3:33:16 PM EndBackupTime : 5/6/2010 3:43:17 PM StartCopyTime : 1/1/0001 12:00:00 AM EndCopyTime : 1/1/0001 12:00:00 AM CopiedFiles : Status : Failed ErrorMessage : System.Data.SqlClient.SqlException: Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding. The backup or restore was aborted. 10 percent processed. 20 percent processed. 30 percent processed. 40 percent processed. 50 percent processed. 60 percent processed. 70 percent processed. 
at System.Data.SqlClient.SqlConnection.OnError(SqlException exception, Boolean breakConnection) at System.Data.SqlClient.SqlInternalConnection.OnError(SqlException exception, Boolean breakConnection) at System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject stateObj) at System.Data.SqlClient.TdsParser.Run(RunBehavior runBehavior, SqlCommand cmdHandler, SqlDataReader dataStream, BulkCopySimpleResultSet bulkCopyHandler, TdsParserStateObject stateObj) at System.Data.SqlClient.SqlCommand.RunExecuteNonQueryTds(String methodName, Boolean async) at System.Data.SqlClient.SqlCommand.InternalExecuteNonQuery(DbAsyncResult result, String methodName, Boolean sendToPipe) at System.Data.SqlClient.SqlCommand.ExecuteNonQuery() at Microsoft.SqlServer.Management.Common.ServerConnection.ExecuteNonQuery(String sqlCommand, ExecutionTypes executionType) Ended backups at 05/06/2010 15:43:23

    Read the article

  • Issue with Autofac 2 and MVC2 using HttpRequestScoped

    - by Page Brooks
    I'm running into an issue with Autofac2 and MVC2. The problem is that I am trying to resolve a series of dependencies where the root dependency is HttpRequestScoped. When I try to resolve my UnitOfWork (which is Disposable), Autofac fails because the internal disposer is trying to add the UnitOfWork object to an internal disposal list which is null. Maybe I'm registering my dependencies with the wrong lifetimes, but I've tried many different combinations with no luck. The only requirement I have is that MyDataContext lasts for the entire HttpRequest. I've posted a demo version of the code for download here. Autofac modules are set up in web.config Global.asax.cs protected void Application_Start() { string connectionString = "something"; var builder = new ContainerBuilder(); builder.Register(c => new MyDataContext(connectionString)).As<IDatabase>().HttpRequestScoped(); builder.RegisterType<UnitOfWork>().As<IUnitOfWork>().InstancePerDependency(); builder.RegisterType<MyService>().As<IMyService>().InstancePerDependency(); builder.RegisterControllers(Assembly.GetExecutingAssembly()); _containerProvider = new ContainerProvider(builder.Build()); IoCHelper.InitializeWith(new AutofacDependencyResolver(_containerProvider.RequestLifetime)); ControllerBuilder.Current.SetControllerFactory(new AutofacControllerFactory(ContainerProvider)); AreaRegistration.RegisterAllAreas(); RegisterRoutes(RouteTable.Routes); } AutofacDependencyResolver.cs public class AutofacDependencyResolver { private readonly ILifetimeScope _scope; public AutofacDependencyResolver(ILifetimeScope scope) { _scope = scope; } public T Resolve<T>() { return _scope.Resolve<T>(); } } IoCHelper.cs public static class IoCHelper { private static AutofacDependencyResolver _resolver; public static void InitializeWith(AutofacDependencyResolver resolver) { _resolver = resolver; } public static T Resolve<T>() { return _resolver.Resolve<T>(); } } UnitOfWork.cs public interface IUnitOfWork : IDisposable { void Commit(); } public class UnitOfWork : IUnitOfWork { private readonly IDatabase _database; public UnitOfWork(IDatabase database) { _database = database; } public static IUnitOfWork Begin() { return IoCHelper.Resolve<IUnitOfWork>(); } public void Commit() { System.Diagnostics.Debug.WriteLine("Commiting"); _database.SubmitChanges(); } public void Dispose() { System.Diagnostics.Debug.WriteLine("Disposing"); } } MyDataContext.cs public interface IDatabase { void SubmitChanges(); } public class MyDataContext : IDatabase { private readonly string _connectionString; public MyDataContext(string connectionString) { _connectionString = connectionString; } public void SubmitChanges() { System.Diagnostics.Debug.WriteLine("Submiting Changes"); } } MyService.cs public interface IMyService { void Add(); } public class MyService : IMyService { private readonly IDatabase _database; public MyService(IDatabase database) { _database = database; } public void Add() { // Use _database. } } HomeController.cs public class HomeController : Controller { private readonly IMyService _myService; public HomeController(IMyService myService) { _myService = myService; } public ActionResult Index() { // NullReferenceException is thrown when trying to // resolve UnitOfWork here. // Doesn't always happen on the first attempt. using(var unitOfWork = UnitOfWork.Begin()) { _myService.Add(); unitOfWork.Commit(); } return View(); } public ActionResult About() { return View(); } }

    Read the article

  • Sharepoint (active directory account creation mode) - Using STSADM

    - by vivek m
    This question is regarding using STSADM command to create new site collection in Active Directory Account creation mode. My setup is like this- I have 2 virtual PCs in a Windows XP Pro SP3 host. Both VPCs are Windows Server 2003 R2. One VPC acts as the DC, DNS Server, DHCP server, has Active Directory installed and is also the Database Server. The other VPC is the domain member and it is the IIS web server, POP/SMTP server and it has WSS 3.0 installed. I created a new site using the GUI in Central Admin page. For creating a site collection under the newly created site, I needed to use the STSADM command line tool since it cannot be done from Central Admin page in Active Directory Account creation mode. Thats where i got into a problem- stsadm.exe -o createsite -url http://vivek-c5ba48dca:1111/sites/Sales -owneremail [email protected] -sitetemplate STS#1 The format of the specified domain name is invalid. (Exception from HRESULT: 0x800704BC) The following is the output from the SHarepoint LOG- * stsadm: Running createsite 9e7d Medium Initializing the configuration database connection. 95kp High Creating site http://vivek-c5ba48dca:1111/sites/Sales in content database WSS_Content_Sharepoint_1111 95kq High Creating top level site at http://vivek-c5ba48dca:1111/sites/Sales 72jz Medium Creating site: URL "/sites/Sales" 72e1 High Unable to get domain DNS or forest DNS for domain sharepointsvc.com. ErrorCode=1212 8jvc Warning #1e0046: Adding user "spsalespadmin" to OU "sharepoint_ou" in domain "sharepointsvc.com" FAILED with HRESULT -2147023684. 72k1 High Cannot create site: "http://vivek-c5ba48dca:1111/sites/Sales" for owner "@\@", Error: , 0x800704bc 8e2s Medium Unknown SPRequest error occurred. More information: 0x800704bc 95ks Critical The site /sites/Sales could not be created. The following exception occured: The format of the specified domain name is invalid. (Exception from HRESULT: 0x800704BC). 72ju High stsadm: The format of the specified domain name is invalid. (Exception from HRESULT: 0x800704BC) Callstack: at Microsoft.SharePoint.Library.SPRequest.CreateSite(Guid gApplicationId, String bstrUrl, Int32 lZone, Guid gSiteId, Guid gDatabaseId, String bstrDatabaseServer, String bstrDatabaseName, String bstrDatabaseUsername, String bstrDatabasePassword, String bstrTitle, String bstrDescription, UInt32 nLCID, String bstrWebTemplate, String bstrOwnerLogin, String bstrOwnerUserKey, String bstrOwnerName, String bstrOwnerEmail, String bstrSecondaryContactLogin, String bstrSecondaryContactUserKey, String bstrSecondaryContactName, String bstrSecondaryContactEmail, Boolean bADAccountMode, Boolean bHostHeaderIsSiteName) at Microsoft.SharePoint.Administration.SPSiteCollection.Add(SPContentDataba... 72ju High ...se database, String siteUrl, String title, String description, UInt32 nLCID, String webTemplate, String ownerLogin, String ownerName, String ownerEmail, String secondaryContactLogin, String secondaryContactName, String secondaryContactEmail, String quotaTemplate, String sscRootWebUrl, Boolean useHostHeaderAsSiteName) at Microsoft.SharePoint.Administration.SPSiteCollection.Add(String siteUrl, String title, String description, UInt32 nLCID, String webTemplate, String ownerLogin, String ownerName, String ownerEmail, String secondaryContactLogin, String secondaryContactName, String secondaryContactEmail, Boolean useHostHeaderAsSiteName) at Microsoft.SharePoint.StsAdmin.SPCreateSite.Run(StringDictionary keyValues) at Microsoft.SharePoint.StsAdmin.SPStsAdmin.RunOperation(SPGlobalAdmi... 
72ju High ...n globalAdmin, String strOperation, StringDictionary keyValues, SPParamCollection pars) 8wsw High Now terminating ULS (STSADM.EXE, onetnative.dll) * Seems to me that the trouble started with this - Unable to get domain DNS or forest DNS for domain sharepointsvc.com. ErrorCode=1212 Network connection to the sharepointsvc.com domain seems to be fine. C:\Program Files\Common Files\Microsoft Shared\web server extensions\12\BIN>stsadm -o getproperty -pn ADAccountDomain <Property Exist="Yes" Value="sharepointsvc.com" /> C:\Program Files\Common Files\Microsoft Shared\web server extensions\12\BIN>stsadm -o getproperty -pn ADAccountOU <Property Exist="Yes" Value="sharepoint_ou" /> C:\Program Files\Common Files\Microsoft Shared\web server extensions\12\BIN>nslookup sharepointsvc.com Server: vm-winsrvr2003.sharepointsvc.com Address: 192.168.0.5 Name: sharepointsvc.com Addresses: 192.168.0.21, 192.168.0.5 Is there any way of checking the domain connection from within Sharepoint (like using some getproperty of the STSADM tool) Does anyone have any clue about this ? (any pointers would be very helpful) Thanks.

    Read the article

  • Thinking Sphinx not working in test mode

    - by J. Pablo Fernández
    I'm trying to get Thinking Sphinx to work in test mode in Rails. Basically this: ThinkingSphinx::Test.init ThinkingSphinx::Test.start freezes and never comes back. My test and devel configuration is the same for test and devel: dry_setting: &dry_setting adapter: mysql host: localhost encoding: utf8 username: rails password: blahblah development: <<: *dry_setting database: proj_devel socket: /tmp/mysql.sock # sphinx requires it test: <<: *dry_setting database: proj_test socket: /tmp/mysql.sock # sphinx requires it and sphinx.yml development: enable_star: 1 min_infix_len: 2 bin_path: /opt/local/bin test: enable_star: 1 min_infix_len: 2 bin_path: /opt/local/bin production: enable_star: 1 min_infix_len: 2 The generated config files, config/development.sphinx.conf and config/test.sphinx.conf only differ in database names, directories and similar things; nothing functional. Generating the index for devel goes without an issue $ rake ts:in (in /Users/pupeno/proj) default config Generating Configuration to /Users/pupeno/proj/config/development.sphinx.conf Sphinx 0.9.8.1-release (r1533) Copyright (c) 2001-2008, Andrew Aksyonoff using config file '/Users/pupeno/proj/config/development.sphinx.conf'... indexing index 'user_core'... collected 7 docs, 0.0 MB collected 0 attr values sorted 0.0 Mvalues, 100.0% done sorted 0.0 Mhits, 99.8% done total 7 docs, 422 bytes total 0.098 sec, 4320.80 bytes/sec, 71.67 docs/sec indexing index 'user_delta'... collected 0 docs, 0.0 MB collected 0 attr values sorted 0.0 Mvalues, nan% done total 0 docs, 0 bytes total 0.010 sec, 0.00 bytes/sec, 0.00 docs/sec distributed index 'user' can not be directly indexed; skipping. but when I try to do it for test it freezes: $ RAILS_ENV=test rake ts:in (in /Users/pupeno/proj) DEPRECATION WARNING: require "activeresource" is deprecated and will be removed in Rails 3. Use require "active_resource" instead.. (called from /Users/pupeno/.rvm/gems/ruby-1.8.7-p249/gems/activeresource-2.3.5/lib/activeresource.rb:2) default config Generating Configuration to /Users/pupeno/proj/config/test.sphinx.conf Sphinx 0.9.8.1-release (r1533) Copyright (c) 2001-2008, Andrew Aksyonoff using config file '/Users/pupeno/proj/config/test.sphinx.conf'... indexing index 'user_core'... It's been there for more than 10 minutes, the user table has 4 records. The database directory look quite diferently, but I don't know what to make of it: $ ls -l db/sphinx/development/ total 96 -rw-r--r-- 1 pupeno staff 196 Mar 11 18:10 user_core.spa -rw-r--r-- 1 pupeno staff 4982 Mar 11 18:10 user_core.spd -rw-r--r-- 1 pupeno staff 417 Mar 11 18:10 user_core.sph -rw-r--r-- 1 pupeno staff 3067 Mar 11 18:10 user_core.spi -rw-r--r-- 1 pupeno staff 84 Mar 11 18:10 user_core.spm -rw-r--r-- 1 pupeno staff 6832 Mar 11 18:10 user_core.spp -rw-r--r-- 1 pupeno staff 0 Mar 11 18:10 user_delta.spa -rw-r--r-- 1 pupeno staff 1 Mar 11 18:10 user_delta.spd -rw-r--r-- 1 pupeno staff 417 Mar 11 18:10 user_delta.sph -rw-r--r-- 1 pupeno staff 1 Mar 11 18:10 user_delta.spi -rw-r--r-- 1 pupeno staff 0 Mar 11 18:10 user_delta.spm -rw-r--r-- 1 pupeno staff 1 Mar 11 18:10 user_delta.spp $ ls -l db/sphinx/test/ total 0 -rw-r--r-- 1 pupeno staff 0 Mar 11 18:11 user_core.spl -rw-r--r-- 1 pupeno staff 0 Mar 11 18:11 user_core.tmp0 -rw-r--r-- 1 pupeno staff 0 Mar 11 18:11 user_core.tmp1 -rw-r--r-- 1 pupeno staff 0 Mar 11 18:11 user_core.tmp2 -rw-r--r-- 1 pupeno staff 0 Mar 11 18:11 user_core.tmp7 Nothing gets added to a log when this happens. Any ideas where to go from here? 
I can run the command line manually: /opt/local/bin/indexer --config config/test.sphinx.conf --all which generates the output as the rake ts:in, so no help there.

    Read the article

  • Forms Authentication works on dev server but not production server (same SQL db)

    - by Desmond
    Hi, I have the same problem as a previously solved question however, this solution did not help me. I have posted the previous question and answer below: http://stackoverflow.com/questions/2215963/forms-authentication-works-on-dev-server-but-not-production-server-same-sql-db/2963985#2963985 Question: I've never had this problem before, I'm at a total loss. I have a SQL Server 2008 database with ASP.NET Forms Authentication, profiles and roles created and is functional on the development workstation. I can login using the created users without problem. I back up the database on the development computer and restore it on the production server. I xcopy the DLLs and ASP.NET files to the server. I make the necessary changes in the web.config, changing the SQL connection strings to point to the production server database and upload it. I've made sure to generate a machine key and it is the same on both the development web.config and the production web.config. And yet, when I try to login on the production server, the same user that I'm able to login successfully with on the development computer, fails on the production server. There is other content in the database, the schema generated by FluentNHibernate. This content is able to be queried successfully on both development and production servers. This is mind boggling, I believe I've verified everything, but obviously it is still not working and I must have missed something. Please, any ideas? Answer: I ran into a problem with similar symptoms at one point by forgetting to set the applicationName attribute in the web.config under the membership providers element. Users are associated to a specific application. Since I didn't set the applicationName, it defaulted to the application path (something like "/MyApplication"). When it was moved to production, the path changed (for example to "/WebSiteFolder/SomeSubFolder /MyApplication"), so the application name defaulted to the new production path and an association could not be made to the original user accounts that were set up in development. Could your issues possibly be the same as mine? I have this already in my web.config but still get the issue. Any ideas? 
<membership> <providers> <clear/> <add name="AspNetSqlMembershipProvider" type="System.Web.Security.SqlMembershipProvider, System.Web, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" connectionStringName="ApplicationServices" enablePasswordRetrieval="false" enablePasswordReset="true" requiresQuestionAndAnswer="false" requiresUniqueEmail="false" passwordFormat="Hashed" maxInvalidPasswordAttempts="5" minRequiredPasswordLength="6" minRequiredNonalphanumericCharacters="0" passwordAttemptWindow="10" passwordStrengthRegularExpression="" applicationName="/"/> </providers> </membership> <profile> <providers> <clear/> <add name="AspNetSqlProfileProvider" type="System.Web.Profile.SqlProfileProvider, System.Web, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" connectionStringName="ApplicationServices" applicationName="/"/> </providers> </profile> <roleManager enabled="false"> <providers> <clear/> <add connectionStringName="ApplicationServices" applicationName="/" name="AspNetSqlRoleProvider" type="System.Web.Security.SqlRoleProvider, System.Web, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a"/> <add applicationName="/" name="AspNetWindowsTokenRoleProvider" type="System.Web.Security.WindowsTokenRoleProvider, System.Web, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a"/> </providers> </roleManager> Any help is greatly appriciated.

    Read the article

  • Mobile App Data Synchronization

    - by Matt Rogish
    Let's say I have a mobile app that uses HTML5 SQLite DB (and/or the HTML5 key-value store). Assets (media files, PDFs, etc.) are stored locally on the mobile device. Luckily enough, the mobile device is a read-only copy of the "centralized" storage, so the mobile device won't have to propagate changes upstream. However, as the server changes assets (creates new ones, modifies existing, deletes old ones) I need to propagate those changes back to the mobile app. Assume that server changes are grouped into changesets (version number n) that contain some information (added element XYZ, deleted id = 45, etc.) and that the mobile device has limited CPU/bandwidth, so most of the processing has to take place on the server. I can think of a couple of methods to do this. All have trade-offs and at this point, I'm unsure which is the right course of action... Method 1: For change set n, store the "diff" of the current n and previous n-1. When a client with version y asks if there have been any changes, send the change sets from version y up to the current version. e.g. added item 334, contents: xxx. Deleted picture 44. Deleted PDF 11. Changed 33. added picture 99. Characteristics: Diffs take up space, although in theory would be kept small. However, all diffs must be kept around indefinitely (should a v1 app have not been updated for a year, must apply v2..v100). High latency devices (mobile apps) will incur a penalty to send lots of small files (assume cannot be zipped or tarr'd up into one file) Very few server CPU resources required, as all it does is send the client a list of files "Dumb" - if I change an item in change set 3, and change it to something else in 4, the client is going to perform both actions, even though #3 is rendered moot by #4. Or, if an asset is added in #4 and removed in #5 - the client will download a file just to delete it later. Method 2: Very similar to method 1 except on the server, do some sort of a diff between the change sets represented by the app version and server version. Package that up and send that single change set to the client. Characteristics: Client-efficient: The client only has to process one file, duplicate or irrelevant changes are stripped out. Server CPU/space intensive. The change sets must be diff'd and then written out to a file that is then sent to the client. Makes diff server scalability an issue. Possibly ways to cache the results and re-use them, but in the wild there's likely to be a lot of different versions so the diff re-use has a limit Diff algorithm is complicated. The change sets must be structured in such a way that an efficient and effective diff can be performed. Method 3: Instead of keeping diffs, write out the entire versioned asset collection to a mobile-database import file. When client requests an update, send the entire database to client and have them update their assets appropriately. Characteristics: Conceptually simple -- easy to develop and deploy Very inefficient as the client database is restored every update. If only one new thing was added, the whole database is refreshed. Server space and CPU efficient. Only the latest version DB needs kept around and the server just throws the file to the client. Others?? Thoughts? Thanks!!
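    To make Method 2 a little more concrete, this is the sort of server-side consolidation I have in mind, sketched in Python (the change-set structure here is invented purely for illustration):

      # Collapse change sets y+1..n into a single net change set for one client.
      # Each change set is a list of (action, asset_id, payload) tuples; later
      # operations on the same asset supersede earlier ones.
      def consolidate(changesets):
          net = {}  # asset_id -> ("add" | "update" | "delete", payload)
          for changeset in changesets:
              for action, asset_id, payload in changeset:
                  previous = net.get(asset_id)
                  if action == "delete" and previous and previous[0] == "add":
                      # added and removed since the client's version: nothing to send
                      del net[asset_id]
                  elif action in ("add", "update") and previous and previous[0] == "add":
                      # still new to the client, just send the latest contents
                      net[asset_id] = ("add", payload)
                  else:
                      net[asset_id] = (action, payload)
          return net

      # Example: asset 44 deleted, asset 99 added then changed, asset 7 added then deleted.
      history = [
          [("delete", 44, None), ("add", 99, "pic-v1"), ("add", 7, "tmp")],
          [("update", 99, "pic-v2"), ("delete", 7, None)],
      ]
      print(consolidate(history))
      # -> {44: ('delete', None), 99: ('add', 'pic-v2')}

    This is the "smarter" behavior I'd want from Method 2: the client never downloads an asset just to delete it, at the cost of the server doing the diff work.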

    Read the article

  • SQLAlchemy session management in long-running process

    - by codeape
    Scenario: A .NET-based application server (Wonderware IAS/System Platform) hosts automation objects that communicate with various equipment on the factory floor. CPython is hosted inside this application server (using Python for .NET). The automation objects have scripting functionality built-in (using a custom, .NET-based language). These scripts call Python functions. The Python functions are part of a system to track Work-In-Progress on the factory floor. The purpose of the system is to track the produced widgets along the process, ensure that the widgets go through the process in the correct order, and check that certain conditions are met along the process. The widget production history and widget state is stored in a relational database, this is where SQLAlchemy plays its part. For example, when a widget passes a scanner, the automation software triggers the following script (written in the application server's custom scripting language): ' wiget_id and scanner_id provided by automation object ' ExecFunction() takes care of calling a CPython function retval = ExecFunction("WidgetScanned", widget_id, scanner_id); ' if the python function raises an Exception, ErrorOccured will be true ' in this case, any errors should cause the production line to stop. if (retval.ErrorOccured) then ProductionLine.Running = False; InformationBoard.DisplayText = "ERROR: " + retval.Exception.Message; InformationBoard.SoundAlarm = True end if; The script calls the WidgetScanned python function: # pywip/functions.py from pywip.database import session from pywip.model import Widget, WidgetHistoryItem from pywip import validation, StatusMessage from datetime import datetime def WidgetScanned(widget_id, scanner_id): widget = session.query(Widget).get(widget_id) validation.validate_widget_passed_scanner(widget, scanner) # raises exception on error widget.history.append(WidgetHistoryItem(timestamp=datetime.now(), action=u"SCANNED", scanner_id=scanner_id)) widget.last_scanner = scanner_id widget.last_update = datetime.now() return StatusMessage("OK") # ... there are a dozen similar functions My question is: How do I best manage SQLAlchemy sessions in this scenario? The application server is a long-running process, typically running months between restarts. The application server is single-threaded. Currently, I do it the following way: I apply a decorator to the functions I make avaliable to the application server: # pywip/iasfunctions.py from pywip import functions def ias_session_handling(func): def _ias_session_handling(*args, **kwargs): try: retval = func(*args, **kwargs) session.commit() return retval except: session.rollback() raise return _ias_session_handling # ... actually I populate this module with decorated versions of all the functions in pywip.functions dynamically WidgetScanned = ias_session_handling(functions.WidgetScanned) Question: Is the decorator above suitable for handling sessions in a long-running process? Should I call session.remove()? The SQLAlchemy session object is a scoped session: # pywip/database.py from sqlalchemy.orm import scoped_session, sessionmaker session = scoped_session(sessionmaker()) I want to keep the session management out of the basic functions. For two reasons: There is another family of functions, sequence functions. The sequence functions call several of the basic functions. One sequence function should equal one database transaction. I need to be able to use the library from other environments. a) From a TurboGears web application. 
In that case, session management is done by TurboGears. b) From an IPython shell. In that case, commit/rollback will be explicit. (I am truly sorry for the long question. But I felt I needed to explain the scenario. Perhaps not necessary?)
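    For comparison, the other shape I considered instead of the decorator is an explicit context manager around each entry point - just a sketch, not what is currently running:

      # pywip/session_scope.py - sketch of an alternative to the ias_session_handling decorator
      from contextlib import contextmanager

      from pywip.database import session  # the scoped_session defined in database.py

      @contextmanager
      def session_scope():
          """Commit on success, roll back on error, and remove the session either way."""
          try:
              yield session
              session.commit()
          except:
              session.rollback()
              raise
          finally:
              # scoped_session.remove() closes the underlying session and returns the
              # connection to the pool, so the long-running host process does not keep
              # one session (and its identity map) alive for months.
              session.remove()

      # Usage inside the functions exposed to the application server:
      # def WidgetScanned(widget_id, scanner_id):
      #     with session_scope() as s:
      #         widget = s.query(Widget).get(widget_id)
      #         ...

    The part I'm least sure about is whether the session.remove() in the finally block is the right thing for a single-threaded host that lives for months, which is really the core of the question.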

    Read the article

  • maven sonar problem

    - by senzacionale
    I want to use sonar for analysis but i can't get any data in localhost:9000 <?xml version="1.0" encoding="UTF-8"?> <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd"> <modelVersion>4.0.0</modelVersion> <artifactId>KIS</artifactId> <groupId>KIS</groupId> <version>1.0</version> <build> <plugins> <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-antrun-plugin</artifactId> <version>1.4</version> <executions> <execution> <id>compile</id> <phase>compile</phase> <configuration> <tasks> <property name="compile_classpath" refid="maven.compile.classpath"/> <property name="runtime_classpath" refid="maven.runtime.classpath"/> <property name="test_classpath" refid="maven.test.classpath"/> <property name="plugin_classpath" refid="maven.plugin.classpath"/> <ant antfile="${basedir}/build.xml"> <target name="maven-compile"/> </ant> </tasks> </configuration> <goals> <goal>run</goal> </goals> </execution> </executions> </plugin> </plugins> </build> </project> output when running sonar: jar file is empty [INFO] Executed tasks [INFO] [resources:testResources {execution: default-testResources}] [WARNING] Using platform encoding (Cp1250 actually) to copy filtered resources, i.e. build is platform dependent! [INFO] skip non existing resourceDirectory J:\ostalo_6i\KIS deploy\ANT\src\test\resources [INFO] [compiler:testCompile {execution: default-testCompile}] [INFO] No sources to compile [INFO] [surefire:test {execution: default-test}] [INFO] No tests to run. [INFO] [jar:jar {execution: default-jar}] [WARNING] JAR will be empty - no content was marked for inclusion! [INFO] Building jar: J:\ostalo_6i\KIS deploy\ANT\target\KIS-1.0.jar [INFO] [install:install {execution: default-install}] [INFO] Installing J:\ostalo_6i\KIS deploy\ANT\target\KIS-1.0.jar to C:\Documents and Settings\MitjaG\.m2\repository\KIS\KIS\1.0\KIS-1.0.jar [INFO] ------------------------------------------------------------------------ [INFO] Building Unnamed - KIS:KIS:jar:1.0 [INFO] task-segment: [sonar:sonar] (aggregator-style) [INFO] ------------------------------------------------------------------------ [INFO] [sonar:sonar {execution: default-cli}] [INFO] Sonar host: http://localhost:9000 [INFO] Sonar version: 2.1.2 [INFO] [sonar-core:internal {execution: default-internal}] [INFO] Database dialect class org.sonar.api.database.dialect.Oracle [INFO] ------------- Analyzing Unnamed - KIS:KIS:jar:1.0 [INFO] Selected quality profile : KIS, language=java [INFO] Configure maven plugins... [INFO] Sensor SquidSensor... [INFO] Sensor SquidSensor done: 16 ms [INFO] Sensor JavaSourceImporter... [INFO] Sensor JavaSourceImporter done: 0 ms [INFO] Sensor AsynchronousMeasuresSensor... [INFO] Sensor AsynchronousMeasuresSensor done: 15 ms [INFO] Sensor SurefireSensor... [INFO] parsing J:\ostalo_6i\KIS deploy\ANT\target\surefire-reports [INFO] Sensor SurefireSensor done: 47 ms [INFO] Sensor ProfileSensor... [INFO] Sensor ProfileSensor done: 16 ms [INFO] Sensor ProjectLinksSensor... [INFO] Sensor ProjectLinksSensor done: 0 ms [INFO] Sensor VersionEventsSensor... [INFO] Sensor VersionEventsSensor done: 31 ms [INFO] Sensor CpdSensor... [INFO] Sensor CpdSensor done: 0 ms [INFO] Sensor Maven dependencies... [INFO] Sensor Maven dependencies done: 16 ms [INFO] Execute decorators... [INFO] ANALYSIS SUCCESSFUL, you can browse http://localhost:9000 [INFO] Database optimization... 
[INFO] Database optimization done: 172 ms [INFO] ------------------------------------------------------------------------ [INFO] BUILD SUCCESSFUL [INFO] ------------------------------------------------------------------------ [INFO] Total time: 6 minutes 16 seconds [INFO] Finished at: Fri Jun 11 08:28:26 CEST 2010 [INFO] Final Memory: 24M/43M [INFO] ------------------------------------------------------------------------ any idea why, i successfully compile with maven ant plugin java project.

    Read the article

  • Opening an SQL CE file at runtime with Entity Framework 4

    - by David Veeneman
    I am getting started with Entity Framework 4, and I an creating a demo app as a learning exercise. The app is a simple documentation builder, and it uses a SQL CE store. Each documentation project has its own SQL CE data file, and the user opens one of these files to work on a project. The EDM is very simple. A documentation project is comprised of a list of subjects, each of which has a title, a description, and zero or more notes. So, my entities are Subject, which contains Title and Text properties, and Note, which has Title and Text properties. There is a one-to-many association from Subject to Note. I am trying to figure out how to open an SQL CE data file. A data file must match the schema of the SQL CE database created by EF4's Create Database Wizard, and I will implement a New File use case elsewhere in the app to implement that requirement. Right now, I am just trying to get an existing data file open in the app. I have reproduced my existing 'Open File' code below. I have set it up as a static service class called File Services. The code isn't working quite yet, but there is enough to show what I am trying to do. I am trying to hold the ObjectContext open for entity object updates, disposing it when the file is closed. So, here is my question: Am I on the right track? What do I need to change to make this code work with EF4? Is there an example of how to do this properly? Thanks for your help. My existing code: public static class FileServices { #region Private Fields // Member variables private static EntityConnection m_EntityConnection; private static ObjectContext m_ObjectContext; #endregion #region Service Methods /// <summary> /// Opens an SQL CE database file. /// </summary> /// <param name="filePath">The path to the SQL CE file to open.</param> /// <param name="viewModel">The main window view model.</param> public static void OpenSqlCeFile(string filePath, MainWindowViewModel viewModel) { // Configure an SQL CE connection string var sqlCeConnectionString = string.Format("Data Source={0}", filePath); // Configure an EDM connection string var builder = new EntityConnectionStringBuilder(); builder.Metadata = "res://*/EF4Model.csdl|res://*/EF4Model.ssdl|res://*/EF4Model.msl"; builder.Provider = "System.Data.SqlServerCe"; builder.ProviderConnectionString = sqlCeConnectionString; var entityConnectionString = builder.ToString(); // Connect to the model m_EntityConnection = new EntityConnection(entityConnectionString); m_EntityConnection.Open(); // Create an object context m_ObjectContext = new Model1Container(); // Get all Subject data IQueryable<Subject> subjects = from s in Subjects orderby s.Title select s; // Set view model data property viewModel.Subjects = new ObservableCollection<Subject>(subjects); } /// <summary> /// Closes an SQL CE database file. /// </summary> public static void CloseSqlCeFile() { m_EntityConnection.Close(); m_ObjectContext.Dispose(); } #endregion }

    Read the article

  • JPA IndirectSet changes not reflected in Spring frontend

    - by Jon
    I'm having an issue with Spring JPA and IndirectSets. I have two entities, Parent and Child, defined below. I have a Spring form in which I'm trying to create a new Child and link it to an existing Parent, then have everything reflected in the database and in the web interface. What's happening is that it gets put into the database, but the UI doesn't seem to agree. The two entities that are linked to each other in a OneToMany relationship like so: @Entity @Table(name = "parent", catalog = "myschema", uniqueConstraints = @UniqueConstraint(columnNames = "ChildLinkID")) public class Parent { private Integer id; private String childLinkID; private Set<Child> children = new HashSet<Child>(0); @Id @GeneratedValue(strategy = IDENTITY) @Column(name = "id", unique = true, nullable = false) public Integer getId() { return this.id; } public void setId(Integer id) { this.id = id; } @Column(name = "ChildLinkID", unique = true, nullable = false, length = 6) public String getChildLinkID() { return this.childLinkID; } public void setChildLinkID(String childLinkID) { this.childLinkID = childLinkID; } @OneToMany(cascade = CascadeType.ALL, fetch = FetchType.LAZY, mappedBy = "parent") public Set<Child> getChildren() { return this.children; } public void setChildren(Set<Child> children) { this.children = children; } } @Entity @Table(name = "child", catalog = "myschema") public class Child extends private Integer id; private Parent parent; @Id @GeneratedValue(strategy = IDENTITY) @Column(name = "id", unique = true, nullable = false) public Integer getId() { return this.id; } public void setId(Integer id) { this.id = id; } @ManyToOne(fetch = FetchType.LAZY) @JoinColumn(name = "ChildLinkID", referencedColumnName = "ChildLinkID", nullable = false) public Parent getParent() { return this.parent; } public void setParent(Parent parent) { this.parent = parent; } } And of course, assorted simple properties on each of them. Now, the problem is that when I edit those simple properties from my Spring interface, everything works beautifully. I can persist new entities of these types and they'll appear when using the JPATemplate to do a find on, say, all Parents (getJpaTemplate().find("select p from Parent p")) or on individual entities by ID or another property. The problem I'm running into is that now, I'm trying to create a new Child linked to an existing Parent through a link from the Parent's page. Here's the important bits of the Controller (note that I've placed the JPA foo in the controller here to make it clearer; the actual JpaDaoSupport is actually in another class, appropriately tiered): protected Object formBackingObject(HttpServletRequest request) throws Exception { String parentArg = request.getParameter("parent"); int parentId = Integer.parseInt(parentArg); Parent parent = getJpaTemplate().find(Parent.class, parentId); Child child = new Child(); child.setParent(parent); NewChildCommand command = new NewChildCommand(); command.setChild(child); return command; } protected ModelAndView onSubmit(Object cmd) throws Exception { NewChildCommand command = (NewChildCommand)cmd; Child child = command.getChild(); child.getParent().getChildren().add(child); getJpaTemplate().merge(child); return new ModelAndView(new RedirectView(getSuccessView())); } Like I said, I can run through the form and fill in the new values for the Child -- the Parent's details aren't even displayed. When it gets back to the controller, it goes through and saves it to the underlying database, but the interface never reflects it. 
Once I restart the app, it's all there and populated appropriately. What can I do to clear this up? I've tried to call extra merges, tried refreshes (which gave a transaction exception), everything short of just writing my own database access code. I've made sure that every class has an appropriate equals() and hashCode(), have full JPA debugging on to see that it's making appropriate SQL calls (it doesn't seem to make any new calls to the Child table) and stepped through in the debugger (it's all in IndirectSets, as expected, and between saving and displaying the Parent the object takes on a new memory address). What's my next step?

    Read the article

  • Oracle Unicode problem when NLS_CHARACTERSET is WE8ISO8859P1 and NLS_NCHAR_CHARACTERSET is AL16UTF16, using ColdFusion as the programming language

    - by tsurahman
    I have 2 Oracle 10g database, XE and Enterprise XE Enterprise and this are the data type I've use in the test table and then I tried to test to insert some Unicode char from http://www.sustainablegis.com/unicode/ and the results are XE Enterprise for this test, I use ColdFusion 9 developer edition <cfprocessingDirective pageencoding="utf-8"> <cfset setEncoding("form","utf-8")> <form action="" method="post"> Unicode : <br> <textarea name="txaUnicode" id="txaUnicode" cols="50" rows="10"></textarea> <br><br> Language : <br> <input type="Text" name="txtLanguage" id="txtLanguage"> <br><br> <input type="Submit"> </form> <cfset dsn = "theDSN"> <cfif StructKeyExists(FORM, "FIELDNAMES")> <cfquery name="qryInsert" datasource="#dsn#"> INSERT INTO UNICODE ( C_VARCHAR2, C_CHAR, C_CLOB, C_NVARCHAR2, LANGUAGE ) VALUES ( <cfqueryparam cfsqltype="CF_SQL_VARCHAR" value="#FORM.TXAUNICODE#">, <cfqueryparam cfsqltype="CF_SQL_CHAR" value="#FORM.TXAUNICODE#">, <cfqueryparam cfsqltype="CF_SQL_LONGVARCHAR" value="#FORM.TXAUNICODE#">, <cfqueryparam cfsqltype="CF_SQL_VARCHAR" value="#FORM.TXAUNICODE#">, <cfqueryparam cfsqltype="CF_SQL_VARCHAR" value="#FORM.TXTLANGUAGE#"> ) </cfquery> </cfif> <cfquery name="qryUnicode" datasource="#dsn#"> SELECT * FROM UNICODE ORDER BY LANGUAGE </cfquery> <table border="1"> <thead> <tr> <th>LANGUAGE</th> <th>C_VARCHAR2</th> <th>C_CHAR</th> <th>C_CLOB</th> <th>C_NVARCHAR2</th> </tr> </thead> <tbody> <cfoutput query="qryUnicode"> <tr> <td>#qryUnicode.LANGUAGE#</td> <td>#qryUnicode.C_VARCHAR2#</td> <td>#qryUnicode.C_CHAR#</td> <td>#qryUnicode.C_CLOB#</td> <td>#qryUnicode.C_NVARCHAR2#</td> </tr> </cfoutput> </tbody> </table> from this guide http://www.stanford.edu/dept/itss/docs/oracle/10g/server.101/b10749/ch6unicode.htm#i1007297 I think for my Enterprise database it should produce same thing as XE (at least for NVARCHAR2 column) since the typical solution from that guide said: Use NCHAR and NVARCHAR2 datatypes to store Unicode characters Keep WE8ISO8859P1 as the database character set Use AL16UTF16 as the national character set So, how to make it works too in my Enterprise database? Thank you :)

    Read the article

  • Difference between SQL 2005 and SQL 2008 for inserting multiple rows with XML

    - by Sam Dahan
    I am using the following SQL code for inserting multiple rows of data in a table. The data is passed to the stored procedure using an XML variable : INSERT INTO MyTable SELECT SampleTime = T.Item.value('SampleTime[1]', 'datetime'), Volume1 = T.Item.value('Volume1[1]', 'float'), Volume2 = T.Item.value('Volume2[1]', 'float') FROM @xml.nodes('//Root/MyRecord') T(item) I have a whole bunch of unit tests to verify that I am inserting the right information, the right number of records, etc.. when I call the stored procedure. All fine and dandy - that is, until we began to monkey around with the compatibility level of the database. The code above worked beautifully as long as we kept the compatibility level of the DB at 90 (SQL 2005). When we set the compatibility level at 100 (SQL 2008), the unit tests failed, because the stored procedure using the code above times out. The unit tests are dropping the database, re-creating it from scripts, and running the tests on the brand new DB, so it's not - I think - a question of the 'old compatibility level' sticking around. Using the SQL Management studio, I made up a quick test SQL script. Using the same XML chunk, I alter the DB compat level , truncate the table, then use the code above to insert 650 rows. When the level is 90 (SQL 2005), it runs in milliseconds. When the level is 100 (SQL 2008) it sometimes takes over a minute, sometimes runs in milliseconds. I'd appreciate any insight anyone might have into that. EDIT The script takes over a minute to run with my actual data, which has more rows than I show here, is a real table, and has an index. With the following example code, the difference goes between milliseconds and around 5 seconds. --use [master] --ALTER DATABASE MyDB SET compatibility_level =100 use [MyDB] declare @xml xml set @xml = '<?xml version="1.0"?> <Root xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema"> <Record> <SampleTime>2009-01-24T00:00:00</SampleTime> <Volume1>0</Volume1> <Volume2>0</Volume2> </Record> ..... 653 records, sample time spaced out 4 hours ........ </Root>' DECLARE @myTable TABLE( ID int IDENTITY(1,1) NOT NULL, [SampleTime] [datetime] NOT NULL, [Volume1] [float] NULL, [Volume2] [float] NULL) INSERT INTO @myTable select T.Item.value('SampleTime[1]', 'datetime') as SampleTime, Volume1 = T.Item.value('Volume1[1]', 'float'), Volume2 = T.Item.value('Volume2[1]', 'float') FROM @xml.nodes('//Root/Record') T(item) I uncomment the 2 lines at the top, select them and run just that (the ALTER DATABASE statement), then comment the 2 lines, deselect any text and run the whole thing. When I change from 90 to 100, it runs all the time in 5 seconds (I change the level once, but I run the series several times to see if I have consistent results). When I change from 100 to 90, it runs in milliseconds all the time. Just so you can play with it too. I am using SQL Server 2008 R2 standard edition.

    Read the article
