Search Results

Search found 32538 results on 1302 pages for 'restore database'.


  • Using db4o with multiple application instances under medium trust

    - by Anders Fjeldstad
    I recently stumbled over the object database engine db4o which I think looks really interesting. I would like to use it in an ASP.NET MVC application that will be deployed to a shared hosting environment under medium trust. Because of the trust level, I'm restricted to using db4o in embedded/in-process mode. That in itself should be no problem, but the hosting provider also transparently runs each web application in multiple (load-balanced) server instances with shared storage, which I would say is normally a very nice feature for a $10/month hoster. However, since an instance of a db4o server with write access (whether in-process or networked) locks the underlying database file, having multiple instances of the application using the same file won't work (or at least I can't see how it would). So the question is: is it possible to use db4o in this specific environment? I have considered letting each application have its own database which is synchronized with a master database using replication (dRS), but that approach will most likely end up with very frequent bi-directional replication (read master changes at the beginning of each request, write to master after each change) which I don't think will be very efficient. Summary of the web application/environment characteristics:
    - Read-intensive (but not entirely read-only)
    - Some delay (a few seconds) is acceptable between the time that a change is made and the time when the change shows up in all the application instances' data
    - Must run in medium trust
    - No guarantee that the load-balancer uses "sticky sessions"
    All suggestions are much appreciated!

    Read the article

  • Enabling Service Broker in SQL Server 2008

    - by Truegilly
    Hello, I am integrating SqlCacheDependency to use in my LinqToSQL datacontext. I am using an extension class for Linq queries found here - http://code.msdn.microsoft.com/linqtosqlcache I have wired up the code and when I open the page I get this exception - "The SQL Server Service Broker for the current database is not enabled, and as a result query notifications are not supported. Please enable the Service Broker for this database if you wish to use notifications." It's coming from this event in the global.asax protected void Application_Start() { RegisterRoutes(RouteTable.Routes); //In Application Start Event System.Data.SqlClient.SqlDependency.Start(new dataContextDataContext().Connection.ConnectionString); } My question is: how do I enable Service Broker in my SQL Server 2008 database? I have tried to run this query: ALTER DATABASE tablename SET ENABLE_BROKER but it never ends and runs forever, I have to manually stop it. Once I have this set in SQL Server 2008, will it filter down to my DataContext, or do I need to configure something there too? Thanks for any help, Truegilly
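    A minimal T-SQL sketch of the usual fix, with a placeholder database name: ALTER DATABASE takes the database name (not a table name), and it hangs whenever other connections hold locks on that database, so WITH ROLLBACK IMMEDIATE is added to roll those sessions back and let the statement complete.

    ```sql
    -- Placeholder database name. WITH ROLLBACK IMMEDIATE terminates blocking
    -- sessions so the ALTER does not wait on their locks indefinitely.
    ALTER DATABASE MyAppDb SET ENABLE_BROKER WITH ROLLBACK IMMEDIATE;

    -- Confirm the setting took effect.
    SELECT name, is_broker_enabled FROM sys.databases WHERE name = 'MyAppDb';
    ```

    Once is_broker_enabled reports 1, SqlDependency.Start should be able to register notifications; the DataContext itself should not need broker-specific configuration.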

    Read the article

  • JavaScript variable to ColdFusion variable

    - by Alexander
    I have a tricky one. By means of a <cfoutput query="…"> I list some records in the page from a SQL Server database. At the end of each displayed row I try to insert it as a record into a MySQL database. As you see, it is simple, because I can use the exact variables from the output query in my new INSERT INTO statement. BUT: the rsPick.name comes from a database with a different character set and the only way to get it right into my new database is to read it from the web page and not from the value that came in the output query. So I read this value with that little JavaScript I made and put it in the myValue variable and then I want ColdFusion to read that variable in order to place it in my SQL statement. <cfoutput query="rsPick"> <tr> <td>#rsPick.ABBREVIATION#</td> <td id="square"> #rsPick.name# </td> <td>#rsPick.Composition#</td> <td> Transaction done... <script type="text/javascript"> var myvalue = document.getElementById("square").innerHTML </script> </td> <cfquery datasource="#Request.Order#"> INSERT INTO products (iniid, abbreviation, clsid, cllid, dfsid, dflid, szsid, szlid, gross, retail, netvaluebc, composition, name) VALUES ( #rsPick.ID#, '#rsPick.ABBREVIATION#', #rsPick.CLSID#, #rsPick.CLLID#, #rsPick.DFSID#, #rsPick.DFLID#, #rsPick.SZSID#, #rsPick.SZLID#, #rsPick.GROSSPRICE#, #rsPick.RETAILPRICE#, #rsPick.NETVALUEBC#, '#rsPick.COMPOSITION#','#MYVALUE#' ) </cfquery> </tr> </cfoutput>

    Read the article

  • WCF fault logging and SQL Exception 4060 error.

    - by Bill
    I have been attempting to compile/run a sample WCF application from Juval Lowy's website (author of Programming WCF Services & founder of IDesign) for several days. The example app utilizes Juval's ServiceModelEx library which logs faults/errors to a "WCFLogbook" SQL database. Unfortunately, when the sample app faults, I get the following error: SQL Exception 4060: "Cannot open database \"WCFLogbook\" requested by the login. The login failed.\r\nLogin failed for user 'Bill-PC\Bill'." I confirmed that the SQL WCFLogbook database has been created and have granted all of the appropriate permissions for my (Bill-PC\Bill) access to the database. Additionally port 8006 and port 1433 have been opened in the Firewall. TCP/IP has been enabled and "Allow remote connections to this server" has been checked. I am using the following endpoint within the App.Config file: <client> <endpoint name="LogbookTCP" address="net.tcp://Bill-PC:8006/LogbookManager" binding="netTcpBinding" contract="ILogbookManager" /> </client> Unfortunately SQL is a 'world' that I hadn't needed to venture into before now and I am terribly frustrated with my lack of success. Would anyone have any other suggestions on how to get this working? Have I missed anything?
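    A hedged sketch of the grant that typically clears SQL error 4060 ("cannot open database requested by the login"), assuming the WCFLogbook database already exists and the Windows account from the error message simply has no user mapped inside it; the role choices below are an assumption, not something the sample itself prescribes.

    ```sql
    -- Run as an administrator on the SQL Server instance.
    USE [master];
    IF NOT EXISTS (SELECT 1 FROM sys.server_principals WHERE name = N'Bill-PC\Bill')
        CREATE LOGIN [Bill-PC\Bill] FROM WINDOWS;
    GO

    USE [WCFLogbook];
    CREATE USER [Bill-PC\Bill] FOR LOGIN [Bill-PC\Bill];
    EXEC sp_addrolemember N'db_datareader', N'Bill-PC\Bill';
    EXEC sp_addrolemember N'db_datawriter', N'Bill-PC\Bill';
    GO
    ```

    If ServiceModelEx logs through stored procedures in WCFLogbook, the same user may additionally need EXECUTE permission on them.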

    Read the article

  • Map/Reduce on an array of hashes in CouchDB

    - by sebastiangeiger
    Hello everyone, I am looking for a map/reduce function to calculate the status in a Design Document. Below you can see an example document from my current database. { "_id": "0238f1414f2f95a47266ca43709a6591", "_rev": "22-24a741981b4de71f33cc70c7e5744442", "status": "retrieved image urls", "term": "Lucas Winter", "urls": [ { "status": "retrieved", "url": "http://...." }, { "status": "retrieved", "url": "http://..." } ], "search_depth": 1, "possible_labels": { "gender": "male" }, "couchrest-type": "SearchTerm" } I'd like to get rid of the status key and rather calculate it from the statuses of the urls. My current by_status view looks like the following: function(doc) { if (doc['status']) { emit(doc['status'], null); } } I tried some things but nothing actually works. Right now my Map Function looks like this: function(doc) { if(doc.urls){ emit(doc._id, doc.urls) } } And my Reduce Function function(key, value, rereduce){ var reduced_status = "retrieved" for(var url in value){ if(url.status=="new"){ reduced_status = "new"; } } return reduced_status; } The result is that I get retrieved everywhere which is definitely not right. I tried to narrow down the problem and it seems to be that value is no array, when I use the following Reduce Function I get length 1 everywhere, which is impossible because I have 12 documents in my database, each containing between 20 to 200 urls function(key, value, rereduce){ return value.length; } What am I doing wrong? (I know I want you to write code for me and I'm feeling guilty, but right now I do the calculation of the statuses in ruby after getting the data from the database. It would be nice to already get the right data from the database)

    Read the article

  • SQLCMD.EXE generates ugly report. How to format it?

    - by Juri Bogdanov
    I made a batch file to run a SQL query like use [AxDWH_Central_Reporting] GO EXEC sp_spaceused @updateusage = N'TRUE' GO It displays 2 tables and generates an ugly report with unneeded 'P' characters... See below Changed database context to 'AxDWH_Central_Reporting'. database_name Pdatabase_size Punallocated space --------------------------------------------------------------------------------------------------------------------------------P------------------P------------------ AxDWH_Central_Reporting P10485.69 MB P7436.85 MB reserved Pdata Pindex_size Punused ------------------P------------------P------------------P------------------ 3121176 KB P3111728 KB P7744 KB P1704 KB ---------------------------------------------------------------- I also tried to generate one table from this procedure with the next query declare @dbname sysname, @dbsize bigint, @logsize bigint, @reservedpages bigint select @reservedpages = sum(a.total_pages) from sys.partitions p join sys.allocation_units a on p.partition_id = a.container_id left join sys.internal_tables it on p.object_id = it.object_id select @dbsize = sum(convert(bigint,case when status & 64 = 0 then size else 0 end)), @logsize = sum(convert(bigint,case when status & 64 <> 0 then size else 0 end)) from dbo.sysfiles select 'database name' = db_name(), 'database size' = ltrim(str((convert (dec (15,2),@dbsize) + convert (dec (15,2),@logsize)) * 8192 / 1048576,15,2) + ' MB'), 'unallocated space' = ltrim(str((case when @dbsize >= @reservedpages then (convert (dec (15,2),@dbsize) - convert (dec (15,2),@reservedpages)) * 8192 / 1048576 else 0 end),15,2) + ' MB') But I got a similar ugly report: database name Pdatabase size Punallocated space --------------------------------------------------------------------------------------------------------------------------------P------------------P------------------ master P5.75 MB P1.52 MB (1 rows affected) Is it possible to change the layout formatting of the report? To make it more beautiful?
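    Two things usually tame this kind of output. On the client side, sqlcmd's own switches help (for example -W to strip trailing padding and -s to choose an explicit column separator). On the query side, giving each output expression an explicitly narrow type keeps the dashed header underline short; a hedged T-SQL sketch of that idea, reusing the variables already declared in the second query (the varchar widths are arbitrary):

    ```sql
    -- Sketch only. db_name() is nvarchar(128), so without the CAST the first
    -- column (and its dashed underline) is padded out to 128 characters.
    SELECT
        CAST(db_name() AS varchar(40)) AS [database name],
        CAST(ltrim(str((convert(dec(15,2), @dbsize) + convert(dec(15,2), @logsize))
                   * 8192 / 1048576, 15, 2)) + ' MB' AS varchar(20)) AS [database size];
    ```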

    Read the article

  • Call to undefined function query() - using PDO in PHP

    - by webworm
    I have written a script in PHP that reads data from a MySQL database. I am new to PHP and I have been using the PDO library. I have been developing on a Windows machine using the WAMP Server 2 and all has been working well. However, when I uploaded my script to the LINUX server where it will be used I get the following error when I run my script. Fatal error: Call to undefined function query() This is the line where the error is occurring ... foreach($dbconn->query($sql) as $row) The variable $dbconn is first defined in my dblogin.php include file which I am listing below. <?php // Login info for the database $db_hostname = 'localhost'; $db_database = 'MY_DATABASE_NAME'; $db_username = 'MY_DATABASE_USER'; $db_password = 'MY_DATABASE_PASSWORD'; $dsn = 'mysql:host=' . $db_hostname . ';dbname=' . $db_database . ';'; try { $dbconn = new PDO($dsn, $db_username, $db_password); $dbconn->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION); } catch (PDOException $e) { echo 'Error connecting to database: ' . $e->getMessage(); } ?> Inside the function where the error occurs I have the database connection defined as a superglobal as such ... global $dbconn; I am a bit confused as to what is happening since all worked well on my development machine. I was wondering if the PDO library was installed, but I thought it was installed by default as part of PHP v5. The script is running on an Ubuntu (5.0.51a-3ubuntu5.4) machine and the PHP version is 5.2.4. Thank you for any suggestions. I am really lost on this.

    Read the article

  • VB.NET error: Microsoft.ACE.OLEDB.12.0 provider is not registered [RESOLVED]

    - by Azim
    I have a Visual Studio 2008 solution with two projects (a Word-Template project and a VB.Net console application for testing). Both projects reference a database project which opens a connection to an MS-Access 2007 database file and have references to System.Data.OleDb. In the database project I have a function which retrieves a data table as follows private class AdminDatabase ' stores the connection string which is set in the New() method dim strAdminConnection as string public sub New() ... adminName = dlgopen.FileName conAdminDB = New OleDbConnection conAdminDB.ConnectionString = "Data Source='" + adminName + "';" + _ "Provider=Microsoft.ACE.OLEDB.12.0" ' store the connection string in strAdminConnection strAdminConnection = conAdminDB.ConnectionString.ToString() My.Settings.SetUserOverride("AdminConnectionString", strAdminConnection) ... End Sub ' retrieves data from the database Public Function getDataTable(ByVal sqlStatement As String) As DataTable Dim ds As New DataSet Dim dt As New DataTable Dim da As New OleDbDataAdapter Dim localCon As New OleDbConnection localCon.ConnectionString = strAdminConnection Using localCon Dim command As OleDbCommand = localCon.CreateCommand() command.CommandText = sqlStatement localCon.Open() da.SelectCommand = command da.Fill(dt) getDataTable = dt End Using End Function End Class When I call this function from my Word 2007 Template project everything works fine; no errors. But when I run it from the console application it throws the following exception ex = {"The 'Microsoft.ACE.OLEDB.12.0' provider is not registered on the local machine."} Both projects have the same reference and the console application did work when I first wrote it (a while ago) but now it has stopped working. I must be missing something but I don't know what. Any ideas? Thanks Azim

    Read the article

  • ASP.NET MVC: what mechanic returns ViewModel objects?

    - by Dr. Zim
    As I understand it, Domain Models are classes that only describe the data (aggregate roots). They are POCOs and do not reference outside libraries (nothing special). View models on the other hand are classes that contain domain model objects as well as all the interface specific objects like SelectList. A ViewModel includes using System.Web.Mvc;. A repository pulls data out of a database and feeds them to us through domain model objects. What mechanic or device creates the view model objects, populating them from a database? Would it be a factory that has database access? Would you bleed the view specific classes like System.Web.Mvc in to the Repository? Something else? For example, if you have a drop down list of cities, you would reference a SelectList object in the root of your View Model object, right next to your DomainModel reference: public class CustomerForm { public CustomerAddress address {get;set;} public SelectList cities {get;set;} } The cities should come from a database and be in the form of a select list object. The hope is that you don't create a special Repository method to extract out just the distinct cities, then create a redundant second SelectList object only so you have the right data types.

    Read the article

  • Loading Liferay Properties from Spring IoC container (to get jdbc connection parameters)

    - by mox601
    I'm developing some portlets for Liferay Portal 5.2.3 with bundled tomcat 6.0.18 using the Spring IoC container. I need to map the User_ table used in the Liferay database to an entity with Hibernate, so I need to use two different dataSources to separate the liferay db from the db used by portlets. My jdbc.properties has to hold all connection parameters for both databases: no problem for the one used by portlets, but I am having issues determining which database Liferay uses to hold its data. My conclusion is that I should have something like this: liferayConnection.url=jdbc:hsqldb:${liferay.home}/data/hsql/lportal in order to get the database url dynamically loaded, according to Liferay properties found in portal-ext.properties. (Or, better, load the whole portal-ext.properties and read database properties from there). The problem is that the placeholder is not resolved: Caused by: org.springframework.beans.factory.BeanDefinitionStoreException: Invalid bean definition with name 'liferayDataSource' defined in class path resource [WEB-INF/applicationContext.xml]: Could not resolve placeholder 'liferay.home' To dodge this problem I tried to explicitly load portal-ext.properties with a Spring bean: <bean id="liferayPropertiesConfigurer" class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer" p:location="../../portal-ext.properties"/> but no luck: liferay.home is not resolved but there aren't other errors. How can I resolve the placeholder defined by Liferay? Thanks

    Read the article

  • SQL Compact error: Unable to load DLL 'sqlceme35.dll'. The specified module could not be found

    - by Ciaran Bruen
    Hi - I'm developing a Winforms application using Visual Studio 2008 C# that uses a SQL compact 3.5 database on the client. The client will most likely be 32 bit XP or Vista machines. I'm using a standard Windows Installer project that creates an msi file and setup.exe to install the app on a client machine. I'm new to SQL compact so I haven't had to distribute a client database like this before now. When I run the setup.exe (on a new Windows XP 32 bit machine with SP2 and IE 7) it installs fine but when I run the app I get the error below: Unable to load DLL 'sqlceme35.dll'. The specified module could not be found I spent a few hours searching this error already but all I can find are issues relating to installing on 64 bit Windows, none relating to the normal 32 bit that I'm using. The install app copies all the dependent files that it found into the specified install directory, including the System.Data.SqlServerCe.dll file (assembly version 3.5.1.0). The database file is in a directory called 'data' off the application directory, and the connection string for it is <add name="Tickets.ieOutlet.Properties.Settings.TicketsLocalConnectionString" connectionString="Data Source=|DataDirectory|\data\TicketsLocal.sdf" providerName="Microsoft.SqlServerCe.Client.3.5" /> Some questions I have:
    - Should the app be able to find the dll if it's in the same directory i.e. local to the app, or do I need to install it in the GAC? (If so can I use the Windows Installer to install a dll in the GAC?)
    - Is there anything else I need to distribute with the app in order to use a Sql Compact database?
    - There are other dlls also such as MS interop for exporting data to Excel on the client. Do these need to be installed in the GAC or will locating them in the application directory suffice?
    TIA, Ciaran.

    Read the article

  • Read large file into sqlite table in objective-C on iPhone

    - by James Testa
    I have a 2 MB file, not too large, that I'd like to put into an sqlite database so that I can search it. There are about 30K entries that are in CSV format, with six fields per line. My understanding is that sqlite on the iPhone can handle a database of this size. I have taken a few approaches but they have all been slow (over 30 s). I've tried:
    1) Using C code to read the file and parse the fields into arrays.
    2) Using the following Objective-C code to parse the file and put it directly into the sqlite database: NSString *file_text = [NSString stringWithContentsOfFile: filePath usedEncoding: NULL error: NULL]; NSArray *lineArray = [file_text componentsSeparatedByString:@"\n"]; for(int k = 0; k < [lineArray count]; k++){ NSArray *parts = [[lineArray objectAtIndex:k] componentsSeparatedByString: @","]; NSString *field0 = [parts objectAtIndex:0]; NSString *field2 = [parts objectAtIndex:2]; NSString *field3 = [parts objectAtIndex:3]; NSString *loadSQLi = [[NSString alloc] initWithFormat: @"INSERT INTO TABLE (TABLE, FIELD0, FIELD2, FIELD3) VALUES ('%@', '%@', '%@');",field0, field2, field3]; if (sqlite3_exec (db_table, [loadSQLi UTF8String], NULL, NULL, &errorMsg) != SQLITE_OK) { sqlite3_close(db_table); NSAssert1(0, @"Error loading table: %s", errorMsg); }
    Am I missing something? Does anyone know of a fast way to get the file into a database? Or is it possible to translate the file into a sqlite format that can be read directly into sqlite? Or should I turn the file into a plist and load it into a Dictionary? Unfortunately I need to search on two of the fields, and I think a Dictionary can only have one key? Jim
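    A common reason for load times like this is that each INSERT runs in its own implicit transaction, which forces a disk sync per row; wrapping the whole loop in a single transaction (and, ideally, reusing one prepared statement via sqlite3_prepare_v2/sqlite3_bind_text instead of formatting SQL strings) usually cuts the time dramatically. A minimal sketch of the SQL to issue around the loop, for example through sqlite3_exec:

    ```sql
    -- Issued once before the insert loop: all 30K rows then share a single
    -- transaction and a single sync instead of one per INSERT.
    BEGIN TRANSACTION;

    -- ...the individual INSERT statements run here...

    -- Issued once after the loop.
    COMMIT;
    ```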

    Read the article

  • Character Encoding: â??

    - by akaphenom
    I am trying to piece together the mysterious string of characters â?? I am seeing quite a bit of in our database - I am fairly sure this is a result of conversion between character encodings, but I am not completely positive. The users are able to enter text (or cut and paste) into an Ext-Js rich text editor. The data is posted to a servlet which persists it to the database, and when I view it in the database I see those strange characters... is there any way to decode these back to their original meaning, if I was able to discover the correct encoding - or is there a loss of bits or bytes that has occurred through the conversion process? Users are cutting and pasting from multiple versions of MS Word and PDF. Does the encoding follow where the user copied from? Thank you. The website is UTF-8. We are using MS SQL Server 2005; SELECT serverproperty('Collation') -- Server default collation. Latin1_General_CI_AS SELECT databasepropertyex('xxxx', 'Collation') -- Database default SQL_Latin1_General_CP1_CI_AS and the column: Column_name Type Computed Length Prec Scale Nullable TrimTrailingBlanks FixedLenNullInSource Collation text varchar no -1 yes no yes SQL_Latin1_General_CP1_CI_AS The non-Unicode equivalents of the nchar, nvarchar, and ntext data types in SQL Server 2000 are listed below. When Unicode data is inserted into one of these non-Unicode data type columns through a command string (otherwise known as a "language event"), SQL Server converts the data to the data type using the code page associated with the collation of the column. When a character cannot be represented on a code page, it is replaced by a question mark (?), indicating the data has been lost. Appearance of unexpected characters or question marks in your data indicates your data has been converted from Unicode to non-Unicode at some layer, and this conversion resulted in lost characters. So this may be the root cause of the problem... and not an easy one to solve on our end.
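    The â?? pattern is what a multi-byte UTF-8 sequence (a curly apostrophe, for instance) tends to look like after being forced through a single-byte code page: the first byte maps to â and the remaining bytes have no printable equivalent, so they become question marks. Once a character has actually been stored as a literal ? it cannot be decoded back, so the practical fix is to stop the conversion going forward: keep the column in a Unicode type and send Unicode-aware parameters (nvarchar parameters or N'...' literals) from the servlet. A hedged sketch, with a hypothetical table name since the question only shows the column definition:

    ```sql
    -- Hypothetical table name. Widening the column to nvarchar keeps pasted
    -- Word/PDF characters instead of squeezing them through the Latin1 code page.
    ALTER TABLE dbo.RichTextContent ALTER COLUMN [text] nvarchar(max) NULL;
    ```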

    Read the article

  • What is the correct connection string for clsql when accessing ms sqlserver using odbc?

    - by nunb
    I am accessing a database on another machine from an OS X server. After setting up freetds through macports and creating the freetds.conf file like so: dump file = /tmp/freetds.log # nunb our Microsoft server [winnt] host = 192.168.0.2 port = 1433 tds version = 8.0 I have the following test commands that work: Test freetds works: tsql -S winnt -U sa 1> use myDB; 2> select count (*) from "sysObjects"; 3> go ODBC is setup through /Applications/Utilities/ODBC\ Administrator.app, with dsn "gmb" using the freeTDS driver and a ServerName of "winnt" -- testing it yields: iodbctest "dsn=gmb;uid=sa;pwd=foo" SQL>select count (*) from "sysObjects"; = 792 Now I run the following code in lisp: (require 'asdf) (setf asdf:*central-registry* nil) (push #P"/Users/way/ff/clbuild/systems/" asdf:*central-registry*) (asdf:oos 'asdf:load-op 'cffi) (asdf:oos 'asdf:load-op 'clsql) (asdf:operate 'asdf:test-op 'cffi) (asdf:oos 'asdf:load-op 'clsql-odbc) (asdf:oos 'asdf:test-op 'clsql-odbc) (in-package :clsql-user) (connect '("gmb" "sa" "foo") :database-type :odbc) This drops me in the debugger with the error: debugger invoked on a SQL-DATABASE-ERROR in thread #<THREAD "initial thread" RUNNING {1194EA31}>: A database error occurred: NIL / IM002 [iODBC][Driver Manager]Data source name not found and no default driver specified. Driver could not be loaded Type HELP for debugger help, or (SB-EXT:QUIT) to exit from SBCL

    Read the article

  • ASP.NET and VB.NET OleDbConnection Problem

    - by Matt
    I'm working on an ASP.NET website where I am using an asp:repeater with paging done through a VB.NET code-behind file. I'm having trouble with the database connection though. As far as I can tell, the paging is working, but I can't get the data to be certain. The database is a Microsoft Access database. The function that should be accessing the database is: Dim pagedData As New PagedDataSource Sub Page_Load(ByVal obj As Object, ByVal e As EventArgs) doPaging() End Sub Function getTheData() As DataTable Dim DS As New DataSet() Dim strConnect As New OleDbConnection("Provider = Microsoft.Jet.OLEDB.4.0;Data Source=App_Data/ArtDatabase.mdb") Dim objOleDBAdapter As New OleDbDataAdapter("SELECT ArtID, FileLocation, Title, UserName, ArtDate FROM Art ORDER BY Art.ArtDate DESC", strConnect) objOleDBAdapter.Fill(DS, "Art") Return DS.Tables("Art").Copy End Function Sub doPaging() pagedData.DataSource = getTheData().DefaultView pagedData.AllowPaging = True pagedData.PageSize = 2 Try pagedData.CurrentPageIndex = Int32.Parse(Request.QueryString("Page")).ToString() Catch ex As Exception pagedData.CurrentPageIndex = 0 End Try btnPrev.Visible = (Not pagedData.IsFirstPage) btnNext.Visible = (Not pagedData.IsLastPage) pageNumber.Text = (pagedData.CurrentPageIndex + 1) & " of " & pagedData.PageCount ArtRepeater.DataSource = pagedData ArtRepeater.DataBind() End Sub The ASP.NET is: <asp:Repeater ID="ArtRepeater" runat="server"> <HeaderTemplate> <h2>Items in Selected Category:</h2> </HeaderTemplate> <ItemTemplate> <li> <asp:HyperLink runat="server" ID="HyperLink" NavigateUrl='<%# Eval("ArtID", "ArtPiece.aspx?ArtID={0}") %>'> <img src="<%# Eval("FileLocation") %>" alt="<%# DataBinder.Eval(Container.DataItem, "Title") %>t"/> <br /> <%# DataBinder.Eval(Container.DataItem, "Title") %> </asp:HyperLink> </li> </ItemTemplate> </asp:Repeater>

    Read the article

  • iPhone SQLite Password Field Encryption

    - by Louis Russell
    Good Afternoon Guys and Girls, Hopefully this will be a quick and easy question. I am building an App that requires the user to input their login details for an online service that it links to. Multiple login details can be added and saved as the user may have several accounts that they would like to switch between. These details will be stored in an SQLite database and will contain their passwords. Now the questions are: 1: Should these passwords be encrypted in the database? My instinct would say yes but then I do not know how secure the device and system is and if this is necessary. 2: If they should be encrypted what should I use? I think encrypting the whole database file sounds a bit over-kill so should I just encrypt the password before saving it to the database? If this is case what could I use to achieve this? I have found reference to a "crypt(3)" but am having trouble finding much about it or how to implement it. I eagerly await your replies!

    Read the article

  • Searching a list of tuples in python

    - by Niclas Nilsson
    I have a database (sqlite) of members of an organisation (fewer than 200 people). Now I'm trying to write a wx app that will search the database and return some contact information in a wx.grid. The app will have 2 TextCtrls, one for the first name and one for the last name. What I want to do here is make it possible to only write one or a few letters in the textctrls and that will start to return results. So, if I search "John Smith" I write "Jo" in the first TextCtrl and that will return every single John (or anyone else having a name starting with those letters). It will not have a "search" button, instead it will start searching whenever I press a key. One way to solve this would be to search the database with like " SELECT * FROM contactlistview WHERE forname LIKE 'Jo%' " But that seems like a bad idea (very database heavy to do that for every keystroke?). Instead I thought of using fetchall() on a query like this " SELECT * FROM contactlistview " and then, for every keystroke, search the list of tuples that the query has returned. And that is my problem: Searching a list is not that difficult but how can I search a list of tuples with wildcards?

    Read the article

  • SQL Server CLR stored procedures in data processing tasks - good or evil?

    - by Gart
    In short - is it a good design solution to implement most of the business logic in CLR stored procedures? I have read much about them recently but I can't figure out when they should be used, what the best practices are, and whether they are good enough or not. For example, my business application needs to parse a large fixed-length text file, extract some numbers from each line in the file, according to these numbers apply some complex business rules (involving regex matching, pattern matching against data from many tables in the database and such), and as a result of this calculation update records in the database. There is also a GUI for the user to select the file, view the results, etc. This application seems to be a good candidate to implement the classic 3-tier architecture: the Data Layer, the Logic Layer, and the GUI layer.
    - The Data Layer would access the database
    - The Logic Layer would run as a WCF service and implement the business rules, interacting with the Data Layer
    - The GUI Layer would be a means of communication between the Logic Layer and the User.
    Now, thinking of this design, I can see that most of the business rules may be implemented in a SQL CLR and stored in SQL Server. I might store all my raw data in the database, run the processing there, and get the results. I see some advantages and disadvantages of this solution:
    Pros:
    - The business logic runs close to the data, meaning less network traffic.
    - Process all data at once, possibly utilizing parallelism and an optimal execution plan.
    Cons:
    - Scattering of the business logic: some part is here, some part is there.
    - Questionable design solution, may encounter unknown problems.
    - Difficult to implement a progress indicator for the processing task.
    I would like to hear all your opinions about SQL CLR. Does anybody use it in production? Are there any problems with such a design? Is it a good thing?
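    For reference, a hedged sketch of what the SQL Server side of such a design looks like; the assembly, class, and procedure names below are hypothetical. CLR integration is off by default, the compiled .NET assembly is registered with CREATE ASSEMBLY, and each .NET method is then exposed as an ordinary stored procedure through an EXTERNAL NAME clause:

    ```sql
    -- Enable CLR integration once per instance (disabled by default).
    EXEC sp_configure 'clr enabled', 1;
    RECONFIGURE;
    GO

    -- Hypothetical assembly and class names, shown only to illustrate the shape.
    CREATE ASSEMBLY BusinessRules
    FROM 'C:\deploy\BusinessRules.dll'
    WITH PERMISSION_SET = SAFE;
    GO

    CREATE PROCEDURE dbo.ProcessImportBatch
        @BatchId int
    AS EXTERNAL NAME BusinessRules.[BusinessRules.Import].ProcessImportBatch;
    GO
    ```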

    Read the article

  • Data in two databases, eager spool resulting in query

    - by Valkyrie
    I have two databases in SQL2k5: one that holds a large amount of static data (SQL Database 1) (never updated but frequently inserted into) and one that holds relational data (SQL Database 2) related to the static data. They're separated mainly because of corporate guidelines and business requirements: assume for the following problem that combining them is not practical. There are places in SQLDB2 that PKs in SQLDB1 are referenced; triggers control the referential integrity, since cross-database relationships are troublesome in SQL Server. BUT, because of the large amount of data in SQLDB1, I'm getting eager spools on queries that join from the Id in SQLDB2 that references the data in SQLDB1. (With me so far? Maybe an example will help:) SELECT t.Id, t.Name, t2.Company FROM SQLDB1.table t INNER JOIN SQLDB2.table t2 ON t.Id = t2.FKId This query results in a eager spool that's 84% of the load of the query; the table in SQLDB1 has 35M rows, so it's completely choking this query. I can't create a view on the table in SQLDB1 and use that as my FK/index; it doesn't want me to create a constraint based on a view. Anyone have any idea how I can fix this huge bottleneck? (Short of putting the static data in the first db: believe me, I've argued that one until I'm blue in the face to no avail.) Thanks! valkyrie Edit: also can't create an indexed view because you can't put schemabinding on a view that references a table outside the database where the view resides. Dang it.
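    Not a guaranteed cure for the eager spool, but the cheapest thing to rule out first is a missing index on the referencing column used in the join; a hedged sketch with the placeholder names from the example query (the INCLUDE clause only matters because Company is the other column being selected):

    ```sql
    -- Placeholder object names from the example. A covering index on the
    -- foreign-key column gives the optimizer a narrower access path to
    -- consider before it resorts to spooling the join input.
    CREATE NONCLUSTERED INDEX IX_table_FKId
        ON SQLDB2.dbo.[table] (FKId)
        INCLUDE (Company);
    ```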

    Read the article

  • Linq to SQL case sensitivity causing problems

    - by Roger Lipscombe
    I've seen this question, but that's asking for case-insensitive comparisons when the database is case-sensitive. I'm having a problem with the exact opposite. I'm using SQL Server 2005, my database collation is set to Latin1_General_CI_AS. I've got a table, "User", like this: CREATE TABLE [dbo].[User] ( [Id] [int] IDENTITY(1,1) NOT NULL, [Name] [nvarchar](max) NOT NULL, CONSTRAINT [PK_Example] PRIMARY KEY CLUSTERED ( [Id] ASC ) ) And I'm using the following code to populate it: string[] names = new[] { "Bob", "bob", "BoB" }; using (MyDataContext dataContext = new AppCompatDataContext()) { foreach (var name in names) { string s = name; if (dataContext.Users.SingleOrDefault(u => u.Name == s) == null) dataContext.Users.InsertOnSubmit(new User { Name = name }); } dataContext.SubmitChanges(); } When I run this the first time, I end up with "Bob", "bob" and "BoB" in the table. When I run it again, I get an InvalidOperationException: "Sequence contains more than one element", because the query against the database returns all 3 rows, and... SELECT * FROM [User] WHERE Name = 'bob' ... is case-insensitive. That is: when I'm inserting rows, Linq to SQL appears to use C# case-sensitive comparisons. When I query later, Linq to SQL uses SQL Server case-insensitive comparisons. I'd like the initial insert to use case-insensitive comparisons, but when I change the code as follows... if (dataContext.Users.SingleOrDefault(u => u.Name.Equals(s, StringComparison.InvariantCultureIgnoreCase) ) == null) ... I get a NotSupportedException: "Method 'Boolean Equals(System.String, System.StringComparison)' has no supported translation to SQL." Question: how do I get the initial insert to be case-insensitive or, more precisely, match the collation of the column in the database? Update: This doesn't appear to be my problem. My problem appears to be that SingleOrDefault doesn't actually look at the pending inserts at all.

    Read the article

  • VSDBCMD deployment for additions to third party databases

    - by Sam
    We have some custom objects (stored procedures etc) in an SQL Server 2005 database belonging to an ERP system. The custom objects are in different schemas to the ERP objects. We're using Database Edition .dbproj projects and vsdbcmd deployment for all our custom application databases and would like to similarly manage our custom objects in the ERP database. It's not clear how this can be done without either:
    1. Importing all ERP objects (~4000 tables) into the .dbproj and manually keeping them in sync with ERP development. Visual Studio fell over the only time I tried importing these, so I've no idea whether it can actually handle a project of this size.
    2. Somehow excluding the ERP schemas (there are two) from the diff process to ensure they don't get dropped by vsdbcmd. I haven't found any documentation which suggests this is possible.
    I'm aware of the IgnoreDefaultSchema setting, but there are two schemas I need to ignore and I'm not comfortable with the 'default schema' approach - deployment by different users could be disastrous. Has anyone managed to successfully use .dbproj & vsdbcmd for custom additions to a third party database? If not, how do you manage SQL source control & deployment?

    Read the article

  • JVM/CLR Source-compatible Language Options

    - by Nathan Voxland
    I have an open source Java database migration tool (http://www.liquibase.org) which I am considering porting to .Net. The majority of the tool (at least from a complexity side) is around logic like "if you are adding a primary key and the database is Oracle use this SQL. If database is MySQL use this SQL. If the primary key is named and the database is Postgres use this SQL". I could fork the Java codebase and convert it (manually and/or automatically), but as updates and bug fixes to the above logic come in I do not want to have to apply them to both versions. What I would like to do is move all that logic into a form that can be compiled and used natively by both the Java and .Net versions. The code I am looking to convert does not contain any advanced library usage (JDBC, System.out, etc) that would vary significantly from Java to .Net, so I don't think that will be an issue (at worst it can be designed around). So what I am looking for is:
    - A language in which I can code common parts of my app and compile it into classes usable by the "standard" languages on the target platform
    - Does not add any runtime requirements to the system
    - Nothing so strange that it scares away potential contributors
    I know Python and Ruby both have implementations for the JVM and CLR. How well do they fit my requirements? Has anyone been successful (or unsuccessful) using this technique for cross-platform applications? Are there any gotchas I need to worry about?

    Read the article

  • CodeIgniter's Scaffolding and Helper Functions Not Working

    - by 01010011
    Hi, I'm following CodeIgniter's tutorial "Create a blog in 20 minutes" and I am having trouble getting the helper, anchor and Scaffolding functions to work. I can't seem to create links on my HTML page using the helper and anchor functions. I put $this->load->helper('url'); $this->load->helper('form'); in the constructor under parent::Controller(); and <p><?php anchor('blog/comments','Comments'); ?></p> within the foreach loop as specified in the tutorial. But I'm not getting the links to appear. Secondly, I keep getting a 404 Page Not Found error whenever I try to access CodeIgniter's Scaffolding page in my browser, like so: localhost/codeignitor/index.php/blog/scaffolding/mysecretword I can access localhost/codeignitor/index.php/blog just fine. I followed CodeIgniter's instructions in their "Create a blog in 20 minutes" by storing my database settings in the database.php file; and automatically connecting to the database by inserting "database" in the core array of the autoload.php; and I've added both parent::Controller(); and $this->load->scaffolding('myTableName') to blog's constructor. It still gives me this 404. Any assistance will be appreciated. Thanks in advance!

    Read the article

  • Sybase: how can I remove non-printable characters from CHAR or VARCHAR fields with SQL?

    - by Kenny Drobnack
    I'm working with a Sybase database that seems to have non-printable characters in some of the string fields and this is throwing off some of our processing code. At first glance, it seemed to only be newlines and carriage returns, but we also have an ASCII code 27 in there - an ESC character, some accented characters, and some other oddities in there. I have no direct access to change the database, so changing the bad data isn't an option, yet. For now I have to make do with just filtering it out. We're trying to export the table data from one database and load it into a database used by another application in a nightly batch process. Ideally, I'd like to have a function that I can pass a list of characters and just have Sybase return the data with those characters removed. I'd like to keep it something we could do in plain SQL if possible. Something like this to remove characters that are ASCII 0 - 31. select str_replace(FIELD1, (0-31), NULL) as FIELD1, str_replace(FIELD2, (0-31), NULL) as FIELD2 from TABLE So far, str_replace is the nearest I can find, but it only allows replacing one string with another. No support for character ranges and won't let me do the above. We're running on Sybase ASE 12.5 on Unix servers.
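    ASE's str_replace only takes a single search string, so there is no built-in range form; the usual workaround is simply to nest one call per character you care about, building each character with char(). A hedged sketch for a few of the usual suspects (CR, LF, tab, ESC), with a placeholder table name; passing NULL as the replacement deletes the character, exactly as in the question's own example:

    ```sql
    -- One nested str_replace per character to strip; extend the nesting for
    -- any other control characters that show up in the data.
    SELECT
        str_replace(str_replace(str_replace(str_replace(
            FIELD1, char(13), NULL), char(10), NULL), char(9), NULL), char(27), NULL) AS FIELD1
    FROM the_table
    ```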

    Read the article

  • Apply jump menu functionality on select box styled with jquery

    - by ktsixit
    Hi all, I am using this script to apply style on a select box. It works fine as a standalone version. But I also need to apply a jump menu functionality on this select box. I tried adding a function but it seems that there is some conflict between the two scripts. The jquery styling and the jump menu function can't work together at the same time. Can you help me fix it? This is a sample code to let you know what I'm dealing with here: <script type="text/javascript"> $(document).ready(function() { $('#jumpMenu').selectbox({debug: true}); function MM_jumpMenu(targ,selObj,restore){ //v3.0 eval(targ+".location='"+selObj.options[selObj.selectedIndex].value+"'"); if (restore) selObj.selectedIndex=0; } }); </script> <select name="jumpMenu" id="jumpMenu" onchange="MM_jumpMenu('parent',this,0)"> <option value="http://www.link1.com">link1</option> <option value="http://www.link2.com">link2</option> <option value="http://www.link3.com">link3</option> </select>

    Read the article
