Search Results

Search found 37714 results on 1509 pages for 'database documentation'.

  • Trouble with Berkeley DB JE Base API Secondary Databases and Sequences

    - by milosz
    I have a class Document which consists of an Id (int) and a Url (String). I would like to have a primary index on Id and a secondary index on Url. I would also like a sequence to auto-increment Id. So I create a SecondaryDatabase and then I create a Sequence. During initialisation of the Sequence I get an exception:

        Exception in thread "main" java.lang.IllegalArgumentException
            at com.sleepycat.util.UtfOps.getCharLength(UtfOps.java:137)
            at com.sleepycat.util.UtfOps.bytesToString(UtfOps.java:259)
            at com.sleepycat.bind.tuple.TupleInput.readString(TupleInput.java:152)
            at pl.edu.mimuw.zbd.berkeley.zadanie.rozwiazanie.MyDocumentBiding.entryToObject(MyDocumentBiding.java:12)
            at pl.edu.mimuw.zbd.berkeley.zadanie.rozwiazanie.MyDocumentBiding.entryToObject(MyDocumentBiding.java:1)
            at com.sleepycat.bind.tuple.TupleBinding.entryToObject(TupleBinding.java:76)
            at pl.edu.mimuw.zbd.berkeley.zadanie.rozwiazanie.UrlKeyCreator.createSecondaryKey(UrlKeyCreator.java:20)
            at com.sleepycat.je.SecondaryDatabase.updateSecondary(SecondaryDatabase.java:835)
            at com.sleepycat.je.SecondaryTrigger.databaseUpdated(SecondaryTrigger.java:42)
            at com.sleepycat.je.Database.notifyTriggers(Database.java:2004)
            at com.sleepycat.je.Cursor.putNotify(Cursor.java:1692)
            at com.sleepycat.je.Cursor.putInternal(Cursor.java:1616)
            at com.sleepycat.je.Cursor.putNoOverwrite(Cursor.java:663)
            at com.sleepycat.je.Sequence.<init>(Sequence.java:188)
            at com.sleepycat.je.Database.openSequence(Database.java:546)
            at pl.edu.mimuw.zbd.berkeley.zadanie.rozwiazanie.MyFullTextSearchEngine.init(MyFullTextSearchEngine.java:131)
            at pl.edu.mimuw.zbd.berkeley.zadanie.testy.MyFullTextSearchEngineTest.main(MyFullTextSearchEngineTest.java:18)

    It seems that during the initialisation of the sequence the secondary database is forced to update. When I debug the entryToObject method of MyDocumentBiding, the bytes it tries to convert to an object seem random. What am I doing wrong?
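
    A likely explanation, going by the bottom of the stack trace: Database.openSequence stores the sequence's bookkeeping as an ordinary record in the database it is opened on, so writing that record fires the secondary trigger, and UrlKeyCreator then asks MyDocumentBiding to decode bytes that were never a Document, hence the "random" bytes. The usual remedy is to keep sequences in their own database. A minimal sketch, with illustrative names:

        // A sketch: open a dedicated database for sequences so their
        // internal records never pass through the Document secondary index.
        DatabaseConfig seqDbConfig = new DatabaseConfig();
        seqDbConfig.setAllowCreate(true);
        Database sequenceDb = env.openDatabase(null, "sequences", seqDbConfig);

        SequenceConfig seqConfig = new SequenceConfig();
        seqConfig.setAllowCreate(true);
        Sequence docIdSequence = sequenceDb.openSequence(
                null, new DatabaseEntry("documentId".getBytes()), seqConfig);

        long nextId = docIdSequence.get(null, 1); // next Id for a new Document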

  • Updating multiple related tables in SQLite with C#

    - by PerryJ
    Just some background, sorry it's so long-winded. I'm using the System.Data.SQLite ADO.NET adapter to create a local SQLite database, and this will be the only process hitting the database, so I don't need to worry about concurrency. I'm building the database from various sources and don't want to build it all in memory using DataSets or DataAdapters or anything like that. I want to do this using SQL (DbCommands). I'm not very good with SQL and a complete noob in SQLite. I'm basically using SQLite as a local database / save-file structure. The database has a lot of related tables, and the data has nothing to do with people or regions or districts, but to use a simple analogy, imagine:

        Region table with auto-increment RegionID, RegionName column, and various optional columns.
        District table with auto-increment DistrictID, DistrictName, RegionID, and various optional columns.
        Person table with auto-increment PersonID, PersonName, DistrictID, and various optional columns.

    So I get some data representing RegionName, DistrictName, PersonName, and other Person-related data. The Region, District, and/or Person may or may not exist yet. Once again, not being the greatest with this, my thoughts would be something like:

        1. Check whether the Region exists; if so, get the RegionID, else create it and get the RegionID.
        2. Check whether the District exists; if so, get the DistrictID, else create it (with the RegionID from above) and get the DistrictID.
        3. Check whether the Person exists; if so, get the PersonID, else create it (with the DistrictID from above) and get the PersonID.
        4. Update the Person with the rest of the data.

    In MS SQL Server I would create a stored procedure to handle all this. The only way I can see to do this with SQLite is a lot of commands, so I'm sure I'm not getting this. I've spent hours looking around on various sites but just don't feel like I'm going down the right road. Any suggestions would be greatly appreciated. A sketch of one possible pattern follows.
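
    Since SQLite has no stored procedures, one workable pattern (a sketch, not the only way) is a small get-or-create helper on the C# side, called once per table inside a single transaction. GetOrCreateId is a hypothetical helper, and the table/column names follow the analogy above:

        // Assumes: using System.Data.SQLite; conn is an open SQLiteConnection.
        static long GetOrCreateId(SQLiteConnection conn, string table,
                                  string nameColumn, string name,
                                  string fkColumn = null, long? fkValue = null)
        {
            string where = (fkColumn == null)
                ? nameColumn + " = @name"
                : nameColumn + " = @name AND " + fkColumn + " = @fk";

            // 1. Look for an existing row.
            using (var cmd = new SQLiteCommand(
                "SELECT " + table + "ID FROM " + table + " WHERE " + where, conn))
            {
                cmd.Parameters.AddWithValue("@name", name);
                if (fkColumn != null) cmd.Parameters.AddWithValue("@fk", fkValue);
                object id = cmd.ExecuteScalar();
                if (id != null) return (long)id;
            }

            // 2. Not found: insert it and return the new auto-increment id.
            string cols = (fkColumn == null) ? nameColumn : nameColumn + ", " + fkColumn;
            string vals = (fkColumn == null) ? "@name" : "@name, @fk";
            using (var cmd = new SQLiteCommand(
                "INSERT INTO " + table + " (" + cols + ") VALUES (" + vals + ");" +
                " SELECT last_insert_rowid();", conn))
            {
                cmd.Parameters.AddWithValue("@name", name);
                if (fkColumn != null) cmd.Parameters.AddWithValue("@fk", fkValue);
                return (long)cmd.ExecuteScalar();
            }
        }

    The four steps then become three calls plus an UPDATE, ideally wrapped in one SQLiteTransaction for speed:

        long regionId   = GetOrCreateId(conn, "Region",   "RegionName",   regionName);
        long districtId = GetOrCreateId(conn, "District", "DistrictName", districtName, "RegionID",   regionId);
        long personId   = GetOrCreateId(conn, "Person",   "PersonName",   personName,   "DistrictID", districtId);
        // then: UPDATE Person SET ... WHERE PersonID = @personId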

  • SQLConnection.Open(); throwing exception

    - by flavour404
    Hi. I'm updating an old piece of software, but to maintain backward compatibility I need to connect to a .mdb (Access) database. I am using the following connection code but keep getting an exception. Why? I have validated the path, the database's existence, etc., and it is all correct.

        string Server = "localhost";
        string Database = drive + "\\btc2\\state\\states.mdb";
        string Username = "";
        string Password = "Lhotse";
        string ConnectionString = "Data Source = " + Server + ";" +
                                  "Initial Catalog = " + Database + ";" +
                                  "User Id = '';" +
                                  "Password = " + Password + ";";

        SqlConnection SQLConnection = new SqlConnection();
        try
        {
            SQLConnection.ConnectionString = ConnectionString;
            SQLConnection.Open();
        }
        catch (Exception Ex)
        {
            // Try to close the connection
            if (SQLConnection != null)
                SQLConnection.Dispose();
            // can't connect
            // Stop here
            return false;
        }
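
    A likely cause, offered as a hedge rather than a certainty: SqlConnection only speaks to SQL Server; it cannot open an Access .mdb file, whatever the connection string says. The usual route is System.Data.OleDb with the Jet provider. A minimal sketch, assuming the .mdb uses a database password rather than workgroup security:

        // Requires: using System.Data.OleDb;
        string connStr = "Provider=Microsoft.Jet.OLEDB.4.0;" +
                         "Data Source=" + drive + "\\btc2\\state\\states.mdb;" +
                         "Jet OLEDB:Database Password=Lhotse;";
        using (OleDbConnection conn = new OleDbConnection(connStr))
        {
            try
            {
                conn.Open();   // run OleDbCommands against the .mdb here
            }
            catch (OleDbException)
            {
                return false;  // can't connect
            }
        }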

  • Are SharePoint site templates really less performant than site definitions?

    - by Jim
    So, it seems in the SharePoint blogosphere that everybody just copies and pastes the same bullet points from other blogs. One bullet point I've seen is that SharePoint site templates are less performant than site definitions because site definitions are stored on the file system. Is that true? It seems odd that site templates would be less performant. It's my understanding that all site content lives in a database, whether you use a site template or a site definition. A site template is applied once to the database, and from then on the site should not care whether the content was created from a template or not. So, does anybody have an architectural reason why a site template would be less performant than a site definition?

    Edit: Links to the blogs that claim there is a performance difference:

        From MSDN: "Because it is slow to store templates in and retrieve them from the database, site templates can result in slower performance."
        From DevX: "However, user templates in SharePoint can lead to performance problems and may not be the best approach if you're trying to create a set of reusable templates for an entire organization."
        From IT Footprint: "Because it is slow to store templates in and retrieve them from the database, site templates can result in slower performance. Templates in the database are compiled and executed every time a page is rendered."
        From Branding SharePoint: "Custom site definitions hold the following advantages over custom templates: Data is stored directly on the Web servers, so performance is typically better."

    At a minimum, I think the above articles are incomplete, and I think several are misleading based on what I know of SharePoint's architecture. I read another blog post that argued against the performance differences, but I can't find the link.

  • XML Schema (XSD) to Rails ActiveRecord Mapping?

    - by Incomethax
    I'm looking for a way to convert an XML Schema definition (XSD) file into an ActiveRecord-modeled database. Does anyone know of a tool that happens to do this? So far the best way I've found is to first load the XSD into an RDBMS like Postgres or MySQL and then have Rails connect and do a rake db:schema:dump. This, however, only leaves me with a database without Rails models. What would be the best way to import/load this XSD-based database into Rails?
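
    I don't know of a tool that goes straight from XSD to ActiveRecord, so here is a sketch of the round trip described above, using Rails 2.x-era commands. The XSD-to-DDL step is assumed to be done by whatever means you already have, and the model name is illustrative:

        # 1. Load the DDL you derived from the XSD into the dev database:
        psql mydb_development < schema_from_xsd.sql

        # 2. Point config/database.yml at mydb_development, then capture
        #    the schema in Rails terms (writes db/schema.rb):
        rake db:schema:dump

        # 3. Generate a skeleton model per table, skipping migrations
        #    since the tables already exist:
        script/generate model Person --skip-migration

    Tables that don't follow Rails conventions (pluralized names, an integer id primary key) can still be mapped by adding set_table_name / set_primary_key to each generated model.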

  • Application Context in Rails

    - by Sean McMains
    Rails comes with a handy session hash into which we can cram stuff to our heart's content. I would, however, like something like ASP's application context, which, instead of sharing data only within a single session, shares it across all sessions in the same application. I'm writing a simple dashboard app, and would like to pull data every 5 minutes for the whole application, rather than every 5 minutes per session. I could, of course, store the cache-update times in a database, but so far I haven't needed to set up a database for this app, and would love to avoid that dependency if possible. So, is there any way to get (or simulate) this sort of thing? If there's no way to do it without a database, is there any kind of "fake" database engine that comes with Rails, runs in memory, but doesn't bother persisting data between restarts?
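
    For a five-minute dashboard pull, a process-local memo is enough if you can live with one copy per Ruby process (under multi-process servers, each worker refreshes its own). A minimal sketch with illustrative names (DashboardCache, expensive_fetch):

        # A process-wide, in-memory "application context"; no database needed.
        module DashboardCache
          REFRESH_INTERVAL = 5 * 60  # seconds

          def self.data
            if @data.nil? || Time.now - @fetched_at > REFRESH_INTERVAL
              @data = expensive_fetch   # your actual data pull goes here
              @fetched_at = Time.now
            end
            @data
          end

          def self.expensive_fetch
            { :updated => Time.now }    # illustrative placeholder
          end
        end

    As for an in-memory "fake" database: SQLite can run entirely in memory, so pointing the sqlite3 adapter at database: ":memory:" in database.yml gives you a real SQL store that vanishes on restart.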

  • Info on type family instances

    - by yairchu
    Intro: While checking out snoyman's "persistent" library I found myself wanting ghci's (or another tool's) assistance in figuring things out. ghci's :info doesn't seem to work as nicely with type families and data families as it does with "plain" types:

        > :info Maybe
        data Maybe a = Nothing | Just a         -- Defined in Data.Maybe
        ...
        > :info Persist.Key Potato              -- "Key Potato" defined in example below
        data family Persist.Key val             -- Defined in Database.Persist
        ...

    (no info on the structure/identity of the actual instance)

    One can always look for the instance in the source code, but sometimes it can be hard to find, and it may be hidden in Template-Haskell-generated code, etc.

    Code example:

        {-# LANGUAGE FlexibleInstances, GeneralizedNewtypeDeriving,
                     MultiParamTypeClasses, TypeFamilies, QuasiQuotes #-}

        import qualified Database.Persist as Persist
        import Database.Persist.Sqlite as PSqlite

        PSqlite.persistSqlite [$persist|
        Potato
            name String
            isTasty Bool
            luckyNumber Int
            UniqueId name
        |]

    What's going on in the code example above is that Template Haskell is generating code for us. All the extensions above except QuasiQuotes are required because the generated code uses them. I found out what Persist.Key Potato is by doing:

        -- test.hs:
        test = PSqlite.persistSqlite [$persist| ...

        -- ghci:
        > :l test.hs
        > import Language.Haskell.TH
        > import Data.List
        > runQ test >>= putStrLn . unlines . filter (isInfixOf "Key Potato") . lines . pprint

    which gave:

        newtype Database.Persist.Key Potato = PotatoId Int64
        type PotatoId = Database.Persist.Key Potato

    Question: Is there an easier way to get information on instances of type families and data families, using ghci or any other tool?

  • How should the readers fields work?

    - by David Marko
    The release notes of CouchDB 0.11 state that it supports readers fields. I guess they should work similarly to reader fields in Lotus Notes, but unfortunately I can't find any documentation on this topic. Can someone point me to documentation, or at least give a brief explanation? Thank you. David
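
    What I believe this refers to (an assumption on my part; verify against your version's documentation): CouchDB's per-database _security object, which carries admins and readers lists of names and roles, and users outside the readers list are denied reads. A sketch of the shape:

        PUT /mydb/_security
        {
          "admins":  { "names": ["alice"], "roles": ["boss"]  },
          "readers": { "names": ["bob"],   "roles": ["staff"] }
        }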

  • Any good SASS parser for PHP?

    - by Andrew Moore
    I'm currently using a modified CSS Cacheer as an alternative, but its syntax is somewhat vague and adoption is, well, abysmally low... documentation is hard to come by as well. I'm looking to switch to SASS, as it has a bigger user base than CSS Cacheer and better documentation. I am aware of phpHaml, but it doesn't support SASS yet. Any recommendation on a SASS parser for PHP? Preferably it should support SassScript.

  • Create a Session in Django

    - by Reznor
    So far the documentation for Django has been too technical for me. How do I create a session, store variables in it, and get variables from it? I'm new to the Django framework, which is why the documentation feels too technical. Sessions are my last step.
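
    For reference, the session API itself is small. A minimal sketch, assuming the defaults: 'django.contrib.sessions' in INSTALLED_APPS, SessionMiddleware enabled, and the session table created (via syncdb in this era); the session then behaves like a dict on each request:

        from django.http import HttpResponse

        def set_favorite(request):
            # Store a variable; Django saves the session and sets the
            # session cookie for you.
            request.session['favorite_color'] = 'blue'
            return HttpResponse("saved")

        def get_favorite(request):
            # Read it back on a later request, with a default.
            color = request.session.get('favorite_color', 'unknown')
            return HttpResponse("favorite color: %s" % color)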

  • CakePHP Test Fixtures Drop My Tables Permanently After Running A Test Case

    - by Frank
    I'm not sure what I've done wrong in my CakePHP unit test configuration. Every time I run a test case, the model tables associated with my fixtures are missing from my test database. After running an individual test case I have to re-import my database tables using phpMyAdmin. Here are the relevant files.

    This is the class I'm trying to test, comment.php. This table is dropped after the test:

        App::import('Sanitize');
        class Comment extends AppModel {
            public $name = 'Comment';
            public $actsAs = array('Tree');
            public $belongsTo = array('User' => array('fields' => array('id', 'username')));
            public $validate = array(
                'text' => array(
                    'rule' => array('between', 1, 4000),
                    'required' => 'true',
                    'allowEmpty' => 'false',
                    'message' => "You can't leave your comment text empty!")
            );

    database.php:

        class DATABASE_CONFIG {
            var $default = array(
                'driver' => 'mysql',
                'persistent' => false,
                'host' => 'project.db',
                'login' => 'projectman',
                'password' => 'projectpassword',
                'database' => 'projectdb',
                'prefix' => ''
            );
            var $test = array(
                'driver' => 'mysql',
                'persistent' => false,
                'host' => 'project.db',
                'login' => 'projectman',
                'password' => 'projectpassword',
                'database' => 'testprojectdb',
                'prefix' => ''
            );
        }

    My comment.test.php file. This is the table that keeps getting dropped:

        <?php
        App::import('Model', 'Comment');

        class CommentTestCase extends CakeTestCase {
            public $fixtures = array('app.comment', 'app.user');

            function start() {
                $this->Comment =& ClassRegistry::init('Comment');
                $this->Comment->useDbConfig = 'test_suite';
            }

    This is my comment_fixture.php class:

        <?php
        class CommentFixture extends CakeTestFixture {
            var $name = "Comment";
            var $import = 'Comment';
        }

    And just in case, here is a typical test method in the CommentTestCase class:

        function testMsgNotificationUserComment() {
            $user_id = '1';
            $submission_id = '1';
            $parent_id = $this->Comment->commentOnModel('Submission', $submission_id, '0', $user_id, "Says: A");
            $other_user_id = '2';
            $msg_id = $this->Comment->commentOnModel('Submission', $submission_id, $parent_id, $other_user_id, "Says: B");
            $expected = array(array('Comment' => array('id' => $msg_id, 'text' => "Says: B",
                'submission_id' => $submission_id, 'topic_id' => '0', 'ack' => '0')));
            $result = $this->Comment->getMessages($user_id);
            $this->assertEqual($result, $expected);
        }

    I've been dealing with this for a day now and I'm starting to be put off by CakePHP's unit testing. In addition to this issue, several times now I've had data inserted into my 'default' database configuration after running tests! What's going on with my configuration?!

  • Retrieving many huge sized EPS files and converting them to JPEG in ASP.NET application

    - by Ashish Gupta
    I have many (600) EPS files (300 KB - 1 MB each) in a database. In my ASP.NET application (using ASP.NET 4.0) I need to retrieve them one by one and call a web service that converts the content to JPEG and updates the database (the JPEGContent column). However, retrieving the content for all 600 takes too long even from SQL Management Studio itself (5 minutes for 10 EPS contents). So I have two issues:

    1) How to get the EPS content (unfortunately, selecting only a subset of them is not an option :-( ):

    Approach 1:

        foreach (var row in dataTable.Rows)
        {
            // get the Id and byte[] of the EPS
            // call the web method to convert the EPS content to JPEG, which also updates the database
        }

    Approach 2:

        foreach (var row in dataTable.Rows)
        {
            // get only the Id of the EPS
            // hit the database to get the content of that EPS
            // call the web method to convert the EPS content to JPEG, which also updates the database
        }

    Or any other approach?

    2) Converting EPS to JPEG via a web method for 600 contents. Of course, each call would be a long-running operation. Would the Task Parallel Library (TPL) be a better way to achieve this? Also, is doing the entire thing in a SQL CLR function a good idea?
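
    For issue 1, a sketch of a middle road between the two approaches: stream rows with a SqlDataReader and CommandBehavior.SequentialAccess, so only one EPS is in memory at a time. The table and column names (EpsFiles, Id, EpsContent) and ConvertToJpeg are assumptions standing in for the real schema and web-service call:

        // Requires: using System.Data; System.Data.SqlClient; System.IO;
        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand("SELECT Id, EpsContent FROM EpsFiles", conn))
        {
            conn.Open();
            using (var reader = cmd.ExecuteReader(CommandBehavior.SequentialAccess))
            {
                while (reader.Read())
                {
                    int id = reader.GetInt32(0);   // must read columns in order

                    // Stream the blob in chunks rather than materializing
                    // every row's content up front.
                    using (var ms = new MemoryStream())
                    {
                        byte[] buffer = new byte[81920];
                        long offset = 0, read;
                        while ((read = reader.GetBytes(1, offset, buffer, 0, buffer.Length)) > 0)
                        {
                            ms.Write(buffer, 0, (int)read);
                            offset += read;
                        }
                        ConvertToJpeg(id, ms.ToArray());  // your web-service call
                    }
                }
            }
        }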

  • failsafe jQuery code

    - by David
    Can anyone explain what the jQuery documentation means by this statement: "the argument to write failsafe jQuery code using the $ alias, without relying on the global alias", made in reference to the following:

        jQuery(function($) { });

    I have been using jQuery for a while now, so I understand what this code does to a certain extent, but the phrase the documentation uses about writing failsafe jQuery code puzzles me, and I am unsure whether it is important or not.
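
    My reading of that sentence: jQuery passes itself as the first argument to the ready callback, so naming the parameter $ gives the callback a local $ that keeps working even when the global $ belongs to another library, for example after noConflict():

        // After this, the global $ no longer refers to jQuery:
        jQuery.noConflict();

        jQuery(function($) {
            // Inside the callback, $ is a local alias for jQuery,
            // regardless of what the global $ points to.
            $('#menu').hide();
        });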

  • Problem: Sorting for GridView/ObjectDataSource changes depending on page

    - by user148298
    I have a GridView tied to an ObjectDataSource using paging. The paging works fine, except that the sort order changes depending on which page of the results is being viewed. This causes items to reappear on subsequent pages, among other issues. I traced the problem to my DAL, which reads a page at a time and then sorts it. Obviously the sorting is going to change as the result set size changes. Is there an improvement to this algorithm? I would like to use a DataReader if possible:

        [System.ComponentModel.DataObjectMethod(System.ComponentModel.DataObjectMethodType.Select)]
        public static WordsCollection LoadForCriteria(string sqlCriteria, int maximumRows, int startRowIndex, string sortExpression)
        {
            // DEFAULT SORT EXPRESSION
            if (string.IsNullOrEmpty(sortExpression))
                sortExpression = "OrderBy";

            // CREATE THE DYNAMIC SQL TO LOAD OBJECT
            StringBuilder selectQuery = new StringBuilder();
            selectQuery.Append("SELECT");
            if (maximumRows > 0)
                selectQuery.Append(" TOP " + (startRowIndex + maximumRows).ToString());
            selectQuery.Append(" " + Words.GetColumnNames(string.Empty));
            selectQuery.Append(" FROM sw_Words");
            string whereClause = string.IsNullOrEmpty(sqlCriteria) ? string.Empty : " WHERE " + sqlCriteria;
            selectQuery.Append(whereClause);
            selectQuery.Append(" ORDER BY " + sortExpression);
            Database database = Token.Instance.Database;
            DbCommand selectCommand = database.GetSqlStringCommand(selectQuery.ToString());

            // EXECUTE THE COMMAND
            WordsCollection results = new WordsCollection();
            int thisIndex = 0;
            int rowCount = 0;
            using (IDataReader dr = database.ExecuteReader(selectCommand))
            {
                while (dr.Read() && ((maximumRows < 1) || (rowCount < maximumRows)))
                {
                    if (thisIndex >= startRowIndex)
                    {
                        Words varWords = new Words();
                        Words.LoadDataReader(varWords, dr);
                        results.Add(varWords);
                        rowCount++;
                    }
                    thisIndex++;
                }
                dr.Close();
            }
            return results;
        }
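
    One common fix, sketched here for SQL Server 2005+: let the database do the paging with ROW_NUMBER(), so the ORDER BY is applied to the full result set before the page is cut and page boundaries stay stable. The sort column below stands in for sortExpression (here the method's default, OrderBy):

        -- Stable server-side paging; @startRowIndex and @maximumRows
        -- mirror the method's parameters.
        SELECT *
        FROM (
            SELECT sw_Words.*,
                   ROW_NUMBER() OVER (ORDER BY OrderBy) AS RowNum
            FROM sw_Words
            -- WHERE <sqlCriteria>
        ) AS paged
        WHERE RowNum > @startRowIndex
          AND RowNum <= @startRowIndex + @maximumRows
        ORDER BY RowNum;

    The DAL then reads exactly maximumRows rows instead of pulling TOP (startRowIndex + maximumRows) and skipping rows client-side.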

  • How to use SQLAlchemy to dump an SQL file from query expressions to bulk-insert into a DBMS?

    - by Mahmoud Abdelkader
    Please bear with me as I explain the problem and how I tried to solve it; my question on how to improve it is at the end. I have a 100,000-line CSV file from an offline batch job, and I needed to insert it into the database as its proper models. Ordinarily, if this were a fairly straightforward load, it could be trivially done by just munging the CSV file to fit a schema, but I had to do some external processing that requires querying, and it's just much more convenient to use SQLAlchemy to generate the data I want. The data I want here is 3 models that represent 3 pre-existing tables in the database, and each subsequent model depends on the previous one. For example:

        Model C --> Foreign Key --> Model B --> Foreign Key --> Model A

    So the models must be inserted in the order A, B, and C. I came up with a producer/consumer approach:

        - instantiate a multiprocessing.Process which contains a thread pool of 50 persister threads that have a thread-local connection to the database
        - read a line from the file using the csv DictReader
        - enqueue the dictionary to the process, where each thread creates the appropriate models by querying the right values, and each thread persists the models in the appropriate order

    This was faster than a non-threaded read/persist, but it is way slower than bulk-loading a file into the database. The job finished persisting after about 45 minutes. For fun, I decided to write it as SQL statements; that took 5 minutes to run. Writing the SQL statements took me a couple of hours, though. So my question is: could I have used a faster method to insert rows using SQLAlchemy? As I understand it, SQLAlchemy is not designed for bulk-insert operations, so this is less than ideal. This leads to my question: is there a way to generate the SQL statements using SQLAlchemy, throw them in a file, and then just bulk-load them into the database? I know about str(model_object), but it does not show the interpolated values. I would appreciate any guidance on how to do this faster. Thanks!
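
    A hedged sketch of SQLAlchemy's fast path: keep the ORM for the querying/processing phase, but hand the final write to the Table's insert() with a list of dicts, which goes out as a single executemany per model (insert A's rows, then B's, then C's). Names below are illustrative:

        from sqlalchemy import create_engine, MetaData, Table

        engine = create_engine("postgresql://user:pw@localhost/mydb")
        meta = MetaData(bind=engine)
        table_a = Table("model_a", meta, autoload=True)  # reflect existing table

        # One dict per CSV-derived row, built during the processing phase:
        rows_a = [{"name": "foo"}, {"name": "bar"}]

        # A single executemany round trip, not row-at-a-time ORM flushes:
        engine.execute(table_a.insert(), rows_a)

    For literally dumping statements to a file, this era of SQLAlchemy also has a "mock" engine strategy, create_engine(..., strategy='mock', executor=fn), which hands each compiled statement to your function instead of executing it, though rendering bound values into the text is still on you.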

  • MDX performance vs. T-SQL

    - by SubPortal
    I have a database containing tables with more than 600 million records, and a set of stored procedures that perform complex search operations on it. The performance of the stored procedures is poor, even with suitable indexes on the tables. The design of the database is a normal relational design. I want to change the database design to be multidimensional and use MDX queries instead of the traditional T-SQL queries, but the question is: is an MDX query better than a traditional T-SQL query with regard to performance? And if yes, to what extent will it improve query performance? Thanks for any help.

  • openerp client customization

    - by iamgopal
    The OpenERP client seems nice and works well; I would like to hack it and use it as a front end to my OpenERP solution. But the documentation regarding client-side design and customization on the OpenERP site is poor. Is there any good reference or documentation available for digging further into OpenERP client-side coding? Or, alternatively, is there a similar client solution available that can be plugged into any back-end system (i.e., a rich internet client)?

  • openquery issue in SQL Server

    - by George2
    Hello everyone. I am using SQL Server 2008 (call it the source database server in this discussion), and in SSMS I have created a linked server to another SQL Server 2008 database (the destination database server). When I issue the statement

        select * from [linked server name].[database name].[dbo].[table name]

    an error is returned:

        Linked server "ZS" The OLE DB access interface "SQLNCLI10" returned "NON-CLUSTERED and NOT INTEGRATED "Index" ix_foo_basic_info_nf ", which is incorrect bookmark ordinal 0.

    When I issue the statement

        select * from openquery([linked server name], 'select * from [table name]')

    there are no errors. Any ideas what is wrong? Thanks in advance, George

  • Best Installation Software?

    - by Chris
    I am interested in knowing what the best software would be to build an installation package that performs the following:

        (1) Installs the client application.
        (2) Detects all SQL Server instances on the network, allowing the user to select the specific database to upgrade (which would then be upgraded using an embedded SQL script).
        (3) Installs a website on a server/location specified by the user, and configures IIS 6.0 and/or 7.0 based on settings that I specify.
        (4) Creates a simple setup.exe, allows the user to choose installation components (listed above, i.e. client app, SQL Server database, and/or website), and then downloads the selected components from a remote server.

    I have tried NSIS, and was able to create an installation package that downloads a compressed (gzip) component from a remote server, decompresses the file, installs the components, and then removes the gzip file. So that worked beautifully. The part where I am stuck is being able to perform the database upgrade and the website install. Any suggestions would be great. Thanks. Chris

  • Peer review Maatkit's mk-parallel-dump and mk-parallel-restore usage?

    - by Brent
    Hi. I'm trying to make use of Maatkit as a means of dumping a database and then restoring it to another database.

    For dumps:

        mk-parallel-dump --user abc --password xyz --databases $db --base-dir /tmp/dump

    For restore:

        mk-parallel-restore --create-databases --user abc --password xyz --database devdb /tmp/dump

    My question is: is my logic and understanding correct, and would it be OK to do it like this? Kind regards, Brent

  • How do I connect to MySQL from PHP?

    - by roberto
    Hi guys. I'm working through examples from a book on PHP/MySQL development, in a Linux/Apache environment. I've set up a database and a user. I attempt to connect with this line of code:

        $db_server = mysql_connect($db_hostname, $db_username, $db_password);

    I get this error:

        Warning: mysql_connect() [function.mysql-connect]: Access denied for user 'www-data'@'localhost' (using password: YES) in /var/www/hosts/dj/connect.php on line 3
        unable to connect to database: Access denied for user 'www-data'@'localhost' (using password: YES)

    I can only guess what is happening here: I think www-data is a username for Apache. Upon the database connection, the credentials being passed to MySQL are not those of my database user, but rather Apache's own credentials. Is that what is happening here? How do I pass in the credentials I've defined for my user?
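
    That guess matches the symptom, with one refinement (hedged): when the username argument to mysql_connect() is empty or undefined, PHP falls back to the user owning the server process, which under Apache is www-data. So the fix is usually just making sure the variables are defined before the call. A minimal sketch, with illustrative credentials:

        <?php
        // If $db_username were undefined or empty here, MySQL would see the
        // web server's own user (www-data), exactly as in the error above.
        $db_hostname = 'localhost';
        $db_username = 'myuser';       // the MySQL user you created
        $db_password = 'mypassword';

        $db_server = mysql_connect($db_hostname, $db_username, $db_password);
        if (!$db_server) {
            die('Unable to connect to database: ' . mysql_error());
        }
        ?>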

  • What does this Eclipse icon mean in a Merge Results view?

    - by user68759
    I just did an svn merge from a branch to the trunk in my Eclipse IDE, and in the Merge Results view there is the following icon (the inline image is not reproduced here). I am dying to know what it means. I have searched the entire Eclipse documentation and some relevant Stack Overflow questions, but couldn't find anything. The CollabNet documentation about the Merge Results view explains what a Merge Results view is, but doesn't mention anything about the meanings of its icons. Does anyone know? Thanks!

  • Autocomplete and Dynamic Parameter Passing

    - by abcParsing
    The code below works fine using jQuery UI 1.8 and jQuery 1.4.2:

        $("#sid_entry_box").autocomplete({
            source: "autocomplete_sid.php?database=" + database_name,
            minLength: 4,
            delay: 1000,
            enable: true,
            cacheLength: 1
        });

    The database name is passed as a GET parameter of the PHP call. In this application, I have two databases, selected by a radio button. Since jQuery loads and assigns this function when the document is loaded, the database name is whatever was checked at that moment. What I really need to pass to the PHP call is the following:

        database = $("input[name=rf_database_option]:checked").val();

    Is there an easy-to-understand way to pass a dynamic DOM value?
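
    One way to do it (a sketch): jQuery UI accepts a function as the source option, and that function runs on every lookup, so the checked radio button is read at request time rather than at document-ready:

        // Reads the checked radio button on every autocomplete request.
        $("#sid_entry_box").autocomplete({
            source: function(request, response) {
                var db = $("input[name=rf_database_option]:checked").val();
                $.getJSON("autocomplete_sid.php",
                          { database: db, term: request.term },
                          response);
            },
            minLength: 4,
            delay: 1000
        });

    (enable and cacheLength are dropped above; they come from the older standalone autocomplete plugin and are not jQuery UI 1.8 options.)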

  • Can't connect SQL 2008 Express to Access project

    - by Gerhard
    I just installed SQL Server 2008 Express and want to create an Access project (ADP). When I get to the Microsoft SQL Server Database Wizard in Access, after clicking Next to create the database, I get this message:

        The new database wizard does not work with the version of Microsoft SQL Server to which your Access project is connected. See the Microsoft Update Web site for the latest information and downloads.

    I can't find any solution to the problem so far. Any ideas why this happens and how to solve it?
