Search Results

Search found 33242 results on 1330 pages for 'database optimization'.

Page 377/1330 | < Previous Page | 373 374 375 376 377 378 379 380 381 382 383 384  | Next Page >

  • What data is actually stored in a B-tree database in CouchDB?

    - by Andrey Vlasovskikh
    I'm wondering what is actually stored in a CouchDB database B-tree. CouchDB: The Definitive Guide says that the database B-tree is used in an append-only fashion and that a database is stored in a single B-tree (besides the per-view B-trees). So I guess the data items appended to the database file are revisions of documents, not the whole documents:

                 +-------+
             ... |  ###  | ...
                 +-------+
                /    |    \
        +------+ +------+ +------+     +------+
        | doc1 | | doc2 | | doc1 | ... | doc1 |
        | rev1 | | rev1 | | rev2 |     | rev7 |
        +------+ +------+ +------+     +------+

    Is that true? If it is, how is the current revision of a document determined from such a B-tree? Doesn't it mean that CouchDB needs a separate "view" database for indexing the current revisions of documents to preserve O(log n) access? And wouldn't building such an index lead to race conditions? (As far as I know, CouchDB uses no write locks.)

    Read the article

  • SQL query duration is longer for smaller dataset?

    - by entens
    I received reports that my report-generating application was not working. After my initial investigation, I found that the SQL transaction was timing out. I'm mystified as to why the query for a smaller selection of items takes so much longer to return results.

    Quick query (averages 4 seconds to return):

        SELECT * FROM Payroll
        WHERE LINEDATE >= '04-17-2010' AND LINEDATE <= '04-24-2010'
        ORDER BY 'EMPLYEE_NUM' ASC, 'OP_CODE' ASC, 'LINEDATE' ASC

    Slow query (averages 1 minute 20 seconds to return):

        SELECT * FROM Payroll
        WHERE LINEDATE >= '04-18-2010' AND LINEDATE <= '04-24-2010'
        ORDER BY 'EMPLYEE_NUM' ASC, 'OP_CODE' ASC, 'LINEDATE' ASC

    I could simply increase the timeout on the SqlCommand, but that wouldn't change the fact that the query is taking longer than it should. Why would requesting a subset of the items take longer than the query that returns more data? How can I optimize this query?
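
    A side note (mine, not from the original post): the single-quoted names in the ORDER BY are string literals, not column references - depending on the engine this either sorts by constants (a no-op) or raises an error - so it is worth ruling that out with plain identifiers before tuning anything else:

        SELECT * FROM Payroll
        WHERE LINEDATE >= '04-18-2010' AND LINEDATE <= '04-24-2010'
        ORDER BY EMPLYEE_NUM ASC, OP_CODE ASC, LINEDATE ASC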

    Read the article

  • What is the correct way to increment a field making up part of a composite key

    - by Tr1stan
    I have a bunch of tables whose primary key is made up of the foreign keys of other tables (a composite key). As a very cut-down version, the attributes might look like this:

        A[aPK, SomeFields]  1:M  B[bPK, aFK, SomeFields]  1:M  C[cPK, bFK, aFK, SomeFields]

    As data this could look like:

        A[aPK, SomeFields]:
            1, Foo
            2, Bar
        B[bPK, aFK, SomeFields]:
            1, 1, FooData1
            2, 1, FooData2
            1, 2, BarData1
            2, 2, BarData2
        C[cPK, bFK, aFK, SomeFields]:
            1, 1, 1, FooData1More
            2, 1, 1, FooData1More
            1, 2, 1, FooData2More
            2, 2, 1, FooData2More
            1, 1, 2, BarData1More
            2, 1, 2, BarData1More
            1, 2, 2, BarData2More
            2, 2, 2, BarData2More

    I've got this running in an MSSQL DBMS and I'm looking for the best way to increment the leftmost column in each table when a new tuple is added to it. I can't use the auto-increment Identity Specification option, as that has no idea it is part of a composite key. I also don't want to use an aggregate such as MAX(field) + 1, as that will have adverse effects with multiple users inputting data, rolling back, etc. There might, however, be a nice trigger-based option here, but I'm not sure. This must be a common issue, so I'm hoping that someone has a lovely solution. As an aside, which may or may not affect the answer, I'm using Entity Framework 1.0 as my ORM within a C# MVC application.
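
    A minimal trigger-based sketch for table B (my own, with names taken from the post): it still reads the current maximum, but the UPDLOCK/HOLDLOCK hints serialize concurrent inserts for the same parent, so two sessions cannot compute the same next value, and a rollback simply releases the locks.

        CREATE TRIGGER trg_B_NextKey ON B
        INSTEAD OF INSERT
        AS
        BEGIN
            -- Number the incoming rows per parent, starting after the
            -- current per-parent maximum (0 when the parent has no rows yet).
            INSERT INTO B (bPK, aFK, SomeFields)
            SELECT ISNULL(mx.maxPK, 0)
                     + ROW_NUMBER() OVER (PARTITION BY i.aFK ORDER BY (SELECT NULL)),
                   i.aFK,
                   i.SomeFields
            FROM inserted AS i
            OUTER APPLY (SELECT MAX(b.bPK) AS maxPK
                         FROM B AS b WITH (UPDLOCK, HOLDLOCK)
                         WHERE b.aFK = i.aFK) AS mx;
        END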

    Read the article

  • SQLServer using too much memory

    - by Israel Pereira Valverde
    I have installed SQL Server 2008 R2 Express on my desktop machine (Windows 7). I have only one local server running (./SQLEXPRESS), but the sqlserver process is taking all the RAM it can get. On a machine with 3 GB of RAM things start to get slow, so I limited the maximum amount of RAM in the server, and now SQL Server constantly gives error messages saying that memory is not enough. It's using 1 GB of RAM with only one local server holding 2 completely empty databases; how is 1 GB of RAM not enough? When the process starts it uses a really acceptable amount of memory (around 80 MB), but it keeps increasing until it reaches the defined maximum and starts to complain about not having enough memory available. At that point I have to restart the server to use it again. I have read about a hotfix for one of the errors I got from SQL Server:

        There is insufficient system memory in resource pool 'internal' to run this query

    but it's already installed on my SQL Server. Why is it using so much memory?
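
    For reference, capping the buffer pool is done with sp_configure; a common sketch (the 512 MB figure is just an example, not a recommendation from the post):

        EXEC sp_configure 'show advanced options', 1;
        RECONFIGURE;
        EXEC sp_configure 'max server memory (MB)', 512;
        RECONFIGURE;

    SQL Server caching as much data as it is allowed to is by design; it normally releases memory only under external memory pressure or when the cap is lowered.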

    Read the article

  • Mysql random rows

    - by n00b
    Please read the whole question... 90% of you don't seem to do that, and some of you only read the title, obviously... and if you don't know the solution, don't answer - I won't have to downvote you. -.-''

    I'm entertaining the idea of getting random rows directly from MySQL. What I found was:

        SELECT * FROM tablename WHERE somefield = 'something' ORDER BY RAND() LIMIT 5

    but even I can see how slow that would be. Is the only way to do this something like:

        SELECT * FROM tablename WHERE somefield = 'something' LIMIT RAND(aincrementvalue-5), 1

    executed 5 times? Or is there a way that I, with my little knowledge of databases, can't come up with? (No, I don't want random indexes. I hate the idea of them...)

    @commenters - please first look, then think, then look again, think again and then post. I won't point fingers, but I dislike stupid comments. And why do I think random indexes are a nasty hack? They don't give you random results: they give you x results from a random index in a predefined order. It's like a gapless id, only in the wrong order. If you fetch by 1 row to get true randomness, you fall back to my method, but with an additional junk field. Finally, the field exists only to serve as a helper to something that can be done without it with almost the same performance (but better quality of randomness), so it is a nasty hack ;) I solved it, look at my answer... if you think it's incorrect, please tell me :)
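
    For comparison, a common workaround (a sketch assuming an `id` primary key, not the poster's own answer): order only the keys by RAND() in a derived table and join the full rows back, so the filesort shuffles narrow id values instead of whole rows.

        SELECT t.*
        FROM tablename AS t
        JOIN (SELECT id
              FROM tablename
              WHERE somefield = 'something'
              ORDER BY RAND()
              LIMIT 5) AS r ON r.id = t.id;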

    Read the article

  • minimal cover for functional dependencies

    - by user2975836
    I have the following problem:

        AB -> CD
        H -> B
        G -> DA
        CD -> EF
        A -> HJ
        J -> G

    I understand the first step (break down the right-hand sides) and get the following result:

        AB -> C
        AB -> D
        H -> B
        G -> D
        G -> A
        CD -> E
        CD -> F
        A -> H
        A -> J
        J -> G

    I understand that A -> H and H -> B, therefore I can remove the B from AB -> C and AB -> D to get:

        A -> C
        A -> D
        H -> B
        G -> D
        G -> A
        CD -> E
        CD -> F
        A -> H
        A -> J
        J -> G

    The step that follows (reduce the left-hand sides) is what I can't compute. Any help will be greatly appreciated.
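
    A worked sketch of the remaining checks (my own computation, so verify it): left-reduction asks, for each FD with a composite left-hand side, whether a single attribute of it already determines the right-hand side. Here only CD -> E and CD -> F qualify:

        C+ = {C},  D+ = {D}
        => neither C nor D alone determines E or F, so CD -> E and CD -> F keep their left-hand sides.

    The step after that is dropping redundant FDs. For example, A -> D turns out to be redundant:

        A+ computed without A -> D, via A -> C, A -> H, H -> B, A -> J, J -> G, G -> D, CD -> EF:
        A+ = {A, C, H, B, J, G, D, E, F}   (D is still reachable)

    Repeating this test for each remaining FD yields the minimal cover.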

    Read the article

  • NoSQL for filesystem storage organization and replication?

    - by wheaties
    We've been discussing the design of a data warehouse strategy within our group for meeting testing, reproducibility, and data syncing requirements. One of the suggested ideas is to adopt a NoSQL approach using an existing tool rather than try to re-implement a whole lot of the same on a file system. I don't know if a NoSQL approach is even the best approach to what we're trying to accomplish, but perhaps if I describe what we need/want, you all can help:

    - Most of our files are large, 50+ GB in size, held in a proprietary, third-party format. We need to be able to access each file by a name/date/source/time/artifact combination. Essentially a key-value-pair style look-up.
    - When we query for a file, we don't want to have to load all of it into memory. The files are really too large and would swamp our server. We want to be able to somehow get a reference to the file and then use a proprietary, third-party API to ingest portions of it.
    - We want to easily add, remove, and export files from storage.
    - We'd like to set up automatic file replication between two servers (we can write a script for this), i.e. sync the contents of one server with another. We don't need a distributed system where it only appears as if we have one server; we'd like complete replication.
    - We also have other, smaller files that have a tree-type relationship with the big files. One file's content will point to the next, and so on. It's not a "spoked wheel"; it's a full-blown tree.
    - We'd prefer a Python, C or C++ API to work with a system like this, but most of us are experienced with a variety of languages. We don't mind, as long as it works, gets the job done, and saves us time.

    What do you think? Is there something out there like this?

    Read the article

  • mysql and indexes with more than one column

    - by clarkk
    How do indexes with more than one column work? The original set of indexes has a separate index on block_id, but is it necessary when block_id is already the leading column of the two-column unique index?

    With an index on more than one column (a, b, c):

        you can search for a, b and c
        you can search for a and b
        you can search for a
        you cannot search for a and c (only the a prefix of the index is usable)

    Does this apply to unique indexes too?

    table:
        id, block_id, account_id, name

    indexes, original:
        PRIMARY KEY (`id`),
        UNIQUE KEY `block_id` (`block_id`,`account_id`),
        KEY `block_id` (`block_id`),
        KEY `account_id` (`account_id`)

    indexes, alternative:
        PRIMARY KEY (`id`),
        UNIQUE KEY `block_id` (`block_id`,`account_id`),
        KEY `account_id` (`account_id`)
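
    For what it's worth, the leftmost-prefix rule applies to unique indexes as well; uniqueness changes the constraint, not how the B-tree is searched. A quick sketch to verify on a copy of the table (table and index names here are hypothetical; EXPLAIN should pick the composite key for block_id lookups once the single-column key is gone):

        ALTER TABLE `mytable` DROP INDEX `block_id_single`;
        EXPLAIN SELECT * FROM `mytable` WHERE block_id = 42;
        EXPLAIN SELECT * FROM `mytable` WHERE block_id = 42 AND account_id = 7;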

    Read the article

  • Correct SQL Script for Formula

    - by Madan Madan
    Can anyone help me write a SQL script for the following formula?

        If DEP = 1
            If DROP > 1
                PLV = 334.86 * exp(0.3541 * ACTIVE_DAYS) + 0.25 * DROP + 20 * DEP
            Else If DROP < 0
                PLV = DROP + 70 * ACTIVE_DAYS
            Else
                PLV = 0.25 * DROP + 70 * ACTIVE_DAYS

    The SQL script which I have is the following:

        SELECT IF(dep=1,
                  IF(dep=1,
                     (334.86 * EXP(0.3541 * act_days)) + (0.25 * 'drop') + (20 * dep),
                     IF('drop' < 0,
                        'drop' + (70 * act_days),
                        (0.25 * 'drop') + (70 * act_days))),
                  '0') AS PLV

    But the above query is not right, as something is missing where the formula says Else PLV = 0.26 * DROP. Thanks,
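
    A sketch of one way to express this with CASE instead of nested IF()s (my reading of the formula, assuming the garbled first test is DROP > 1; note also that in MySQL the reserved word DROP needs backticks, not single quotes - 'drop' in the query above is a string literal, which may be the real problem):

        SELECT CASE
                 WHEN dep = 1 AND `drop` > 1
                   THEN 334.86 * EXP(0.3541 * act_days) + 0.25 * `drop` + 20 * dep
                 WHEN dep = 1 AND `drop` < 0
                   THEN `drop` + 70 * act_days
                 WHEN dep = 1
                   THEN 0.25 * `drop` + 70 * act_days
                 ELSE 0
               END AS PLV
        FROM payroll_data;  -- table name is hypothetical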

    Read the article

  • php connecting to mysql server(localhost) very slow

    - by Ahmad
    Actually it's a little complicated. Summary: the connection to the DB is very slow. The page takes around 10 seconds to render, but the last statement on the page is an echo, and I can see its output while the page is loading in Firefox (IE is the same). In Google Chrome the output becomes visible only when the loading finishes. Loading time is approximately the same across browsers.

    On debugging I found out that it's the DB connectivity that is creating the problem. The DB was on another machine. To debug further, I deployed the DB on my local machine, so now the DB connection is to 127.0.0.1, but the connection still takes a long time. This means that the issue is with Apache/PHP and not with MySQL. But then I deployed my code on another machine which connects to the DB remotely, and everything seems fine.

    The application uses a couple of mod_rewrite rules, but I removed all the .htaccess files and the slow connectivity issue remains. I installed another Apache on my machine and used default settings; the connection was still very slow. I added the following statements to measure the execution time:

        $stime = microtime();
        $stime = explode(" ", $stime);
        $stime = $stime[1] + $stime[0];

        // my code -- it involves the connection to the DB

        $mtime = microtime();
        $mtime = explode(" ", $mtime);
        $mtime = $mtime[1] + $mtime[0];

        $totaltime = ($mtime - $stime);
        echo $totaltime;

    The output is 0.0631899833679, but the Firebug Net panel shows a total loading time of 10-11 seconds; same with Google Chrome. I tried turning off the Windows firewall; connectivity is still slow, and I just can't quite find the reason. I've tried multiple DB servers, multiple Apaches... nothing seems to be working. Any idea what might be the problem?

    Read the article

  • top-k selection/merge

    - by tcurdt
    I have n sorted lists. These lists are quite long (300000+ tuples). Selecting the top 10 of an individual list is of course trivial - they are right at the head of the list. Where it gets more interesting is when I want the top 10 across all the sorted lists. The question is whether there is an algorithm to calculate the combined top 10 in the correct order while cutting off the long tail of the lists. The goal is to reduce the required space. And if there is: how does one find the limit where it is safe to cut? Note: the actual counts are not important, only the order is.
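
    On the cut-off question, one observation (mine, not from the original post): a row can only appear in the combined top k if it is within the top k of its own list, so each list can safely be truncated at k before the merge. A sketch in SQL Server style, with table and column names assumed, one branch per list:

        SELECT TOP 10 *
        FROM (
            SELECT * FROM (SELECT TOP 10 * FROM list1 ORDER BY sortkey) AS a
            UNION ALL
            SELECT * FROM (SELECT TOP 10 * FROM list2 ORDER BY sortkey) AS b
            UNION ALL
            SELECT * FROM (SELECT TOP 10 * FROM list3 ORDER BY sortkey) AS c
        ) AS merged
        ORDER BY sortkey;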

    Read the article

  • Check value at insert

    - by ThreeFingerMark
    Hello, I have these three tables:

        Table: Item     Columns: ItemID, Title, Content, NoChange (Date)
        Table: Tag      Columns: TagID, Title
        Table: ItemTag  Columns: ItemID, TagID

    In the Item table there is a field NoChange; if this field is set, no ItemTag row with this ItemID may be inserted. How can I check this on insert? For updates I have this statement:

        UPDATE ItemTag
        SET TagID = ?
        WHERE ItemID = ? AND TagID = ?
          AND EXISTS (SELECT ItemID FROM Item WHERE ItemID = ? AND NoChange IS NULL);

    Thank you.
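
    One portable way to apply the same guard on insert (a sketch, mirroring the EXISTS test above) is to insert from a SELECT that only yields a row when the item still allows changes:

        INSERT INTO ItemTag (ItemID, TagID)
        SELECT i.ItemID, ?
        FROM Item AS i
        WHERE i.ItemID = ? AND i.NoChange IS NULL;

    If zero rows were inserted, the item was locked (or did not exist); checking the affected-row count distinguishes success from rejection.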

    Read the article

  • large databases in sqlite - file size considerations?

    - by Gj
    I'm using an SQLite db, which is very convenient and seems to meet all of my needs at this point. Currently my db size is <50 MB, but I now need to add a new table which will store large text blobs, which will cause the db to reach up to 5 GB within the next year. Would SQLite be able to deal with a 5 GB db size? Any caveats to that, compared with, say, MySQL?
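
    For a rough sense of the ceiling (my note, not from the original post): SQLite's maximum database size is the product of its page size and maximum page count, and the defaults allow databases far larger than 5 GB. Both are inspectable per database:

        PRAGMA page_size;       -- bytes per page
        PRAGMA max_page_count;  -- upper limit on the number of pages

    The practical caveats at that scale tend to be single-writer locking and filesystem limits rather than SQLite's own size limit.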

    Read the article

  • select for update with ruby oci8

    - by ash34
    How do I do a SELECT FOR UPDATE and then UPDATE the row using ruby-oci8? I have two fields, counter1 and counter2, in a table which has only 1 record. I want to select the values from this table and then increment them, locking the row using SELECT FOR UPDATE. Thanks.
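
    The SQL side of this pattern is short (a sketch; the table name is made up, and the ruby-oci8 calls would simply execute these statements on one connection):

        SELECT counter1, counter2 FROM counters FOR UPDATE;  -- locks the row
        UPDATE counters SET counter1 = counter1 + 1,
                            counter2 = counter2 + 1;
        COMMIT;                                              -- releases the lock

    The key point is that the SELECT ... FOR UPDATE and the UPDATE must run in the same transaction on the same connection, with autocommit off, or the lock is released before the update happens.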

    Read the article

  • Back out plan for a Web App

    - by nobody
    We need a back-out plan for a web app whose first maintenance release is going to production soon. The issue we are facing is that even if we back out the new EAR and deploy the old one, the data keyed in using the new release would not be supported by the old (current) business rules, since the business rules have changed enormously. Can you suggest how we should tackle this issue?

    Read the article

  • How to build a SQL statement when any combination of user input to the table is possible?

    - by Greg McNulty
    Example: the user fills in everything but the product name. I need to search on what is supplied, so in this case everything but productName. This example could be any combination of input. Is there a way to do this? Thanks.

        $name  = $_POST['n'];
        $cat   = $_POST['c'];
        $price = $_POST['p'];

        if (!($name)) {
            $name = /* some character to select all? */;
        }

        $sql = "SELECT * FROM products
                WHERE productCategory='$cat' AND productName='$name' AND productPrice='$price'";

    EDIT: The solution does not have to protect from attacks. I'm specifically looking at the dynamic part of it.
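
    One common pattern for "any combination" filters (a sketch, ignoring injection concerns as the post allows) is to make each condition collapse to true when its input is empty, so a single statement covers every case:

        SELECT *
        FROM products
        WHERE (productCategory = '$cat'   OR '$cat'   = '')
          AND (productName     = '$name'  OR '$name'  = '')
          AND (productPrice    = '$price' OR '$price' = '');

    The alternative is to build the WHERE clause dynamically, appending only the conditions whose inputs are non-empty and joining them with AND.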

    Read the article

  • Can somebody suggest a good source for IMS?

    - by Raja Reddy
    I would like to learn to work with IMS; can somebody suggest a good source? I'm not sure if it matters, but I have quite good exposure to and experience with INSYNC DB2 and QMF. So anything that can depict and explain the advantages and disadvantages of IMS compared with what I know would be really helpful. Thanks for your help beforehand.

    Read the article

  • Need help with a DB query on SQL Server 2005

    - by Avinash
    We're seeing strange behavior when running two versions of a query on SQL Server 2005.

    Version A:

        SELECT otherattributes.*
        FROM listcontacts
        JOIN otherattributes ON listcontacts.contactId = otherattributes.contactId
        WHERE listcontacts.listid = 1234
        ORDER BY name ASC

    Version B:

        DECLARE @Id AS INT;
        SET @Id = 1234;
        SELECT otherattributes.*
        FROM listcontacts
        JOIN otherattributes ON listcontacts.contactId = otherattributes.contactId
        WHERE listcontacts.listid = @Id
        ORDER BY name ASC

    Both queries return 1000 rows; version A takes on average 15 s, version B on average 4 s. Could anyone help us understand the difference in execution times of these two versions? If we invoke this query via named parameters using NHibernate, we see the following query via SQL Server Profiler:

        EXEC sp_executesql N'SELECT otherattributes.* FROM listcontacts JOIN otherattributes ON listcontacts.contactId = otherattributes.contactId WHERE listcontacts.listid = @id ORDER BY name ASC', N'@id INT', @id=1234;

    ...and this tends to perform as badly as version A. Thanks in advance,
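
    A plausible explanation (mine, not from the original thread): with a local variable the optimizer cannot see the value at compile time and falls back to an average-density estimate, while the literal and the sp_executesql parameter are "sniffed" and the plan is built for @id = 1234 specifically; whichever estimate matches the data better wins. A quick experiment is to force a fresh plan per execution:

        EXEC sp_executesql N'SELECT otherattributes.*
        FROM listcontacts
        JOIN otherattributes ON listcontacts.contactId = otherattributes.contactId
        WHERE listcontacts.listid = @id
        ORDER BY name ASC
        OPTION (RECOMPILE)', N'@id INT', @id = 1234;

    If the recompiled version consistently matches version B's timing, the difference is plan choice rather than the query text itself.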

    Read the article
