Search Results

Search found 33242 results on 1330 pages for 'database optimization'.

Page 377/1330 | < Previous Page | 373 374 375 376 377 378 379 380 381 382 383 384  | Next Page >

  • Oracle (Old?) Joins - A tool/script for conversion?

    - by Grasper
    I have been porting Oracle SELECTs, and I keep running across queries like these:

        SELECT e.last_name, d.department_name
        FROM employees e, departments d
        WHERE e.department_id(+) = d.department_id;

    ...and:

        SELECT last_name, d.department_id
        FROM employees e, departments d
        WHERE e.department_id = d.department_id(+);

    Are there any guides/tutorials for converting all of the variants of the (+) syntax? What is that syntax even called (so I can scour Google)? Even better: is there a tool/script that will do this conversion for me (preferably free)? An optimizer of some sort? I have around 500 of these queries to port. When was this syntax phased out? Any info is appreciated.
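
    For reference, (+) is Oracle's old proprietary outer-join notation; the marker sits on the side of the predicate that is allowed to be missing. A hedged sketch of the ANSI-join equivalents of the two queries above:

        -- e.department_id(+) = d.department_id
        -- employees is the optional side, so departments drives the join
        SELECT e.last_name, d.department_name
        FROM departments d
        LEFT OUTER JOIN employees e
          ON e.department_id = d.department_id;

        -- e.department_id = d.department_id(+)
        -- here departments is the optional side
        SELECT last_name, d.department_id
        FROM employees e
        LEFT OUTER JOIN departments d
          ON e.department_id = d.department_id;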

  • Web Application Scanner

    - by rajesh
    I want to develop a web application scanner for sites that collect or exchange sensitive or personal data. This system would give the user a detailed automated report on:
    • How secure the user's website is
    • How easily it can be hacked
    • Where exactly the problems are, and
    • What the remedies are
    Any suggestions?

  • Mysql random rows

    - by n00b
    Please read the whole question before answering... I'm entertaining the idea of getting random rows directly from MySQL. What I found was:

        SELECT * FROM tablename WHERE somefield = 'something'
        ORDER BY RAND() LIMIT 5

    but even I can see how slow that would be. Is the only way to do this something like:

        SELECT * FROM tablename WHERE somefield = 'something'
        LIMIT RAND(autoincrementvalue - 5), 1

    run 5 times? Or is there a way that I, with my little knowledge of databases, can't come up with? (No, I don't want a random-index column; I hate the idea of it.) To the commenters: please look first, then think, then post. Here is why I think a random-index column is a nasty hack: it doesn't give you random results; it gives you x results starting from a random index, in a predefined order. It's like a gapless ID, only in the wrong order. If you fetch one row at a time to get true randomness, you fall back to my method, but with an additional junk field. In the end the field exists only as a helper for something that can be done without it at almost the same performance, and with better randomness, so it is a nasty hack ;) I solved it -- look at my answer, and if you think it's incorrect please tell me :)
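
    For contrast, a common pattern that avoids sorting the whole result set is a random offset computed in the application (a sketch, not the poster's solution; the 123 below stands in for a random integer the application picks in [0, count - 1]):

        -- step 1: count the candidates
        SELECT COUNT(*) FROM tablename WHERE somefield = 'something';

        -- step 2: fetch one row at a random offset (MySQL's LIMIT takes
        -- literal integers, so the offset is interpolated by the application)
        SELECT * FROM tablename WHERE somefield = 'something' LIMIT 123, 1;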

  • NoSQL for filesystem storage organization and replication?

    - by wheaties
    We've been discussing the design of a data warehouse strategy within our group for meeting testing, reproducibility, and data syncing requirements. One of the suggested ideas is to adopt a NoSQL approach using an existing tool rather than re-implement a whole lot of the same on a file system. I don't know if a NoSQL approach is even the best approach to what we're trying to accomplish, but perhaps if I describe what we need/want, you all can help:
    • Most of our files are large, 50+ GB in size, held in a proprietary, third-party format. We need to be able to access each file by a name/date/source/time/artifact combination -- essentially a key-value-pair-style lookup.
    • When we query for a file, we don't want to have to load all of it into memory. The files are really too large and would swamp our server. We want to be able to somehow get a reference to the file and then use a proprietary, third-party API to ingest portions of it.
    • We want to easily add, remove, and export files from storage.
    • We'd like to set up automatic file replication between two servers (we can write a script for this) -- that is, sync the contents of one server with another. We don't need a distributed system where it only appears as if we have one server; we'd like complete replication.
    • We also have other, smaller files that have a tree-type relationship with the big files. One file's content will point to the next, and so on. It's not a "spoked wheel", it's a full-blown tree.
    • We'd prefer a Python, C, or C++ API to work with a system like this, but most of us are experienced with a variety of languages. We don't mind, as long as it works, gets the job done, and saves us time.
    What do you think? Is there something out there like this?

  • SQL query duration is longer for smaller dataset?

    - by entens
    I received reports that my report-generating application was not working. After my initial investigation, I found that the SQL transaction was timing out. I'm mystified as to why the query for a smaller selection of items takes so much longer to return results. Quick query (averages 4 seconds to return):

        SELECT * FROM Payroll
        WHERE LINEDATE >= '04-17-2010' AND LINEDATE <= '04-24-2010'
        ORDER BY 'EMPLYEE_NUM' ASC, 'OP_CODE' ASC, 'LINEDATE' ASC

    Long query (averages 1 minute 20 seconds to return):

        SELECT * FROM Payroll
        WHERE LINEDATE >= '04-18-2010' AND LINEDATE <= '04-24-2010'
        ORDER BY 'EMPLYEE_NUM' ASC, 'OP_CODE' ASC, 'LINEDATE' ASC

    I could simply increase the timeout on the SqlCommand, but that doesn't change the fact that the query is taking longer than it should. Why would requesting a subset of the items take longer than the query that returns more data? How can I optimize this query?
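
    Two things worth checking, offered as hunches rather than a diagnosis: first, single-quoted names in an ORDER BY are string literals in most SQL dialects, not column references, so the sort may not be doing what is intended (bare or bracketed identifiers are safer); second, a gap like this between two similar date ranges usually comes down to the execution plan, which comparing the actual plans of both queries would confirm. A minimal sketch of an index supporting the date-range filter, assuming SQL Server and the column names as written:

        -- supports the LINEDATE range filter; with SELECT * the engine
        -- still has to look up the remaining columns per matching row
        CREATE INDEX IX_Payroll_LineDate ON Payroll (LINEDATE);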

  • What data is actually stored in a B-tree database in CouchDB?

    - by Andrey Vlasovskikh
    I'm wondering what is actually stored in a CouchDB database B-tree. CouchDB: The Definitive Guide says that the database B-tree is used in an append-only fashion and that a database is stored in a single B-tree (besides the per-view B-trees). So I guess the data items appended to the database file are revisions of documents, not whole documents:

        +--------###--------###-------+ ... ---+
        |         |          |                 |
        +------+ +------+ +------+        +------+
        | doc1 | | doc2 | | doc1 |  ...   | doc1 |
        | rev1 | | rev1 | | rev2 |        | rev7 |
        +------+ +------+ +------+        +------+

    Is that true? If it is, then how is the current revision of a document determined from such a B-tree? Doesn't it mean that CouchDB needs a separate "view" B-tree indexing the current revisions of documents to preserve O(log n) access? Wouldn't building such an index lead to race conditions? (As far as I know, CouchDB uses no write locks.)

  • Back out plan for a Web App

    - by nobody
    We need a back-out plan for a web app whose first maintenance release is going to production soon. The issue we are facing: even if we back out the new EAR and deploy the old one, the data that was keyed in using the new release would not be supported by the old (current) business rules, since the business rules have changed enormously. Can you suggest how we should tackle this issue?

  • select for update with ruby oci8

    - by ash34
    How do I do a SELECT FOR UPDATE and then UPDATE the row using ruby-oci8? I have two fields, counter1 and counter2, in a table which has only one record. I want to select the values from this table and then increment them, locking the row with SELECT FOR UPDATE. Thanks.
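
    The SQL side of this pattern looks roughly like the sketch below ("counters" is an assumed table name, since the question doesn't give one, and each statement would be issued through the driver within one transaction):

        -- lock the single row; the lock is held until COMMIT or ROLLBACK
        SELECT counter1, counter2 FROM counters FOR UPDATE;

        -- increment while the row is locked
        UPDATE counters
        SET counter1 = counter1 + 1,
            counter2 = counter2 + 1;

        COMMIT;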

  • ResultSet and aggregation

    - by kachanov
    OK, I admit my situation is special. There is a data system that supports SQL-92 and a JDBC interface. However, SQL requests are pretty expensive, and in my application I need to retrieve the same data multiple times and aggregate it ("group by") on different fields to show different dimensions of the same data. For example, on one screen I have three grids that show the same set of records, but aggregated by city (1st grid), by population (2nd grid), and by number of babies (3rd grid). This amounts to 3 SQL queries, which is very slow -- unless any of you can suggest an idea, or a library from Apache Commons or Google Code, so that I can select all records into a ResultSet once and derive three arrays of data, grouped by different fields, from that single ResultSet. Am I missing some obvious and unexpected solution to this problem?
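
    If the cost is per-request rather than per-row, one SQL-92-compatible way to collapse the three round trips into one, sketched below with an invented table named stats and invented column names, is a UNION ALL with a discriminator column; the client then splits the single ResultSet on that column:

        SELECT 'by_city' AS dim, CAST(city AS CHAR(20)) AS grp, COUNT(*) AS cnt
        FROM stats GROUP BY city
        UNION ALL
        SELECT 'by_population', CAST(population AS CHAR(20)), COUNT(*)
        FROM stats GROUP BY population
        UNION ALL
        SELECT 'by_babies', CAST(babies AS CHAR(20)), COUNT(*)
        FROM stats GROUP BY babies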

  • Python: mysqldb install error

    - by Grenko
    So I've been pulling my hair out trying to install the MySQLdb package. When I run the build I get a long transcript of errors; here's just part of it (I would post it all, but it's a huge list):

        [rv@med240-183 MySQL-python-1.2.3c1]$ sudo python setup.py build
        [sudo] password for rv:
        running build
        running build_py
        copying MySQLdb/release.py -> build/lib.linux-i686-2.6/MySQLdb
        running build_ext
        building '_mysql' extension
        gcc -pthread -fno-strict-aliasing -DNDEBUG -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m32 -march=i586 -mtune=generic -fasynchronous-unwind-tables -D_GNU_SOURCE -fPIC -fPIC -Dversion_info=(1,2,3,'gamma',1) -D__version__=1.2.3c1 -I/usr/include/mysql -I/usr/include/python2.6 -c _mysql.c -o build/temp.linux-i686-2.6/_mysql.o -g -pipe -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m32 -fasynchronous-unwind-tables -D_GNU_SOURCE -D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE -fno-strict-aliasing -fwrapv -fPIC -DUNIV_LINUX
        _mysql.c:36:23: error: my_config.h: No such file or directory
        _mysql.c:38:19: error: mysql.h: No such file or directory

    Any ideas?

  • top-k selection/merge

    - by tcurdt
    I have n sorted lists. These lists are quite long (300,000+ tuples). Selecting the top 10 of an individual list is of course trivial -- those are right at the head of the list. Where it gets more interesting is when I want the top 10 across all of the sorted lists. The question is whether there is an algorithm to calculate the combined top 10, in the correct order, while cutting off the long tail of the lists. The goal is to reduce the required space. And if there is: how does one find the limit at which it is safe to cut? Note: the actual counts are not important, only the order is.

  • Search sort by parameter match count in the query? PostgreSQL

    - by Ben Dauphinee
    I am working on a search query in PostgreSQL, and one of the things I do is sort my query results by the number of parameters matched. I have no clue how this can be done. Does anyone have a suggestion or solution?

    Table:

        brand     color   type    engine
        Ford      Blue    4-door  V8
        Maserati  Blue    2-door  V12
        Saturn    Green   4-door  V8
        GM        Yellow  1-door  V4

    Current query:

        SELECT brand FROM table
        WHERE color = 'Blue' OR type = '4-door' OR engine = 'V8'

    The result should be:

        Ford      (3 matches)
        Saturn    (2 matches)
        Maserati  (1 match)
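
    One way to get that ordering, sketched with a hypothetical table name cars (since "table" itself is a reserved word): award one point per matched condition with CASE expressions, then sort on the total.

        SELECT brand,
               (CASE WHEN color  = 'Blue'   THEN 1 ELSE 0 END
              + CASE WHEN type   = '4-door' THEN 1 ELSE 0 END
              + CASE WHEN engine = 'V8'     THEN 1 ELSE 0 END) AS matches
        FROM cars
        WHERE color = 'Blue' OR type = '4-door' OR engine = 'V8'
        ORDER BY matches DESC;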

  • Check value at insert

    - by ThreeFingerMark
    Hello, I have these three tables:

        Table: Item      Columns: ItemID, Title, Content, NoChange (Date)
        Table: Tag       Columns: TagID, Title
        Table: ItemTag   Columns: ItemID, TagID

    In the Item table there is a field NoChange; if this field is set, no ItemTag row with that ItemID may be inserted. How can I check this in the insert? For updates I have this statement:

        UPDATE ItemTag SET TagID = ?
        WHERE ItemID = ? AND TagID = ?
        AND EXISTS (SELECT ItemID FROM Item WHERE ItemID = ? AND NoChange IS NULL);

    Thank you.
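
    A minimal sketch of the analogous guard for inserts, reusing the NoChange convention from the UPDATE above: route the insert through a SELECT that only produces a row while the parent Item is still changeable.

        INSERT INTO ItemTag (ItemID, TagID)
        SELECT ItemID, ?
        FROM Item
        WHERE ItemID = ? AND NoChange IS NULL;

    If the statement reports zero affected rows, the application knows the item was locked (or does not exist).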

  • Team working on Drupal - Tips

    - by fighella
    I am working on a Drupal site with a few friends. Obviously we can version-control the code... but what do we do to keep each other's databases in check? I have managed to get all the theming into files (Contemplate etc.), but ideally my Views settings and menu settings would be in line also. (Not worried about content either way, as we're just building the framework.) Any suggestions?

  • MySQL Table structure of thumb UP & DOWN for comments system ?

    - by Axel
    Hello, I already created a table for comments, but I want to add thumb-up/thumb-down voting for comments, like Digg and YouTube. I use PHP and MySQL, and I'm wondering what the best table scheme is to implement this, so that comments with many likes end up on top. This is my current comments table:

        comments(id, user, article, comment, stamp)

    Note: only registered users will be able to vote, so there is no need to restrict votes by IP. Thanks
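
    A common shape for this, sketched with invented names and the assumption of integer user and comment IDs: one row per user per comment, a vote stored as +1 or -1, and a primary key that enforces one vote per user. The 123 in the query stands in for an article ID.

        CREATE TABLE comment_votes (
            comment_id INT NOT NULL,
            user_id    INT NOT NULL,
            vote       TINYINT NOT NULL,  -- +1 = thumb up, -1 = thumb down
            PRIMARY KEY (comment_id, user_id)
        );

        -- comments for one article, best score first
        SELECT c.id, c.comment, COALESCE(SUM(v.vote), 0) AS score
        FROM comments c
        LEFT JOIN comment_votes v ON v.comment_id = c.id
        WHERE c.article = 123
        GROUP BY c.id, c.comment
        ORDER BY score DESC;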

  • php connecting to mysql server(localhost) very slow

    - by Ahmad
    Actually it's a little complicated. Summary: the connection to the DB is very slow. The page takes around 10 seconds to render, but the last statement on the page is an echo, and I can see its output while the page is still loading in Firefox (IE is the same). In Google Chrome the output becomes visible only when loading finishes. Loading time is approximately the same across browsers.

    On debugging, I found out that it's the DB connectivity that is creating the problem. The DB was on another machine, so to debug further I deployed the DB on my local machine. Now the DB connection is to 127.0.0.1, but it still takes a long time. This suggests the issue is with Apache/PHP and not with MySQL. But then I deployed my code on another machine, which connects to the DB remotely, and everything seems fine. The application uses a couple of mod_rewrite rules, but I removed all the .htaccess files and the slow connectivity issue remains. I installed another Apache on my machine with default settings; the connection was still very slow.

    I added the following statements to measure the execution time:

        $stime = microtime();
        $stime = explode(" ", $stime);
        $stime = $stime[1] + $stime[0];

        // my code -- it involves a connection to the DB

        $mtime = microtime();
        $mtime = explode(" ", $mtime);
        $mtime = $mtime[1] + $mtime[0];
        $totaltime = ($mtime - $stime);
        echo $totaltime;

    The output is 0.0631899833679, but Firebug's Net panel shows a total loading time of 10-11 seconds; it's the same with Google Chrome. I tried turning off the Windows firewall; connectivity is still slow, and I just can't find the reason. I've tried multiple DB servers and multiple Apaches; nothing seems to work. Any idea what might be the problem?

  • How to build a SQL statement when any combination of user input to the table is possible?

    - by Greg McNulty
    Example: the user fills in everything but the product name. I need to search on what is supplied, so in this case everything but productName. This example could be for any combination of inputs. Is there a way to do this? Thanks.

        $name = $_POST['n'];
        $cat = $_POST['c'];
        $price = $_POST['p'];

        if( !($name) ) {
            $name = /* some character to select all? */;
        }

        $sql = "SELECT * FROM products
                WHERE productCategory='$cat' AND productName='$name' AND productPrice='$price'";

    EDIT: The solution does not have to protect against attacks; I'm specifically looking at the dynamic part of it.
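
    One SQL-side pattern for optional filters, sketched with placeholder parameters (each pair of ? is bound to the same posted value; an empty string stands for "not supplied", as in the question): let a predicate pass automatically when its input is empty.

        SELECT * FROM products
        WHERE ('' = ? OR productCategory = ?)
          AND ('' = ? OR productName = ?)
          AND ('' = ? OR productPrice = ?);

    With this shape the SQL text stays fixed for every input combination; only the bound values change.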

  • Can somebody suggest good source for IMS?

    - by Raja Reddy
    I would like to learn to work with IMS; can somebody suggest a good source? I'm not sure if it matters, but I have quite good exposure to and experience with INSYNC DB2 and QMF. So anything that can depict and explain the advantages and disadvantages of IMS would be really helpful. Thanks in advance.

  • mysql and indexes with more than one column

    - by clarkk
    How do indexes with more than one column work? The original schema has a separate index on block_id, but is that necessary when block_id is already the first column of the two-column unique index? For an index on more than one column (a, b, c):

    • you can search for a, b, and c
    • you can search for a and b
    • you can search for a
    • you can not search for a and c (beyond the a prefix)

    Does this apply to unique indexes too?

    table

        id
        block_id
        account_id
        name

    indexes, original

        PRIMARY KEY (`id`),
        UNIQUE KEY `block_id` (`block_id`,`account_id`),
        KEY `block_id` (`block_id`),
        KEY `account_id` (`account_id`)

    indexes, alternative

        PRIMARY KEY (`id`),
        UNIQUE KEY `block_id` (`block_id`,`account_id`),
        KEY `account_id` (`account_id`)
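
    For what it's worth, a unique index follows the same leftmost-prefix rule for lookups as a plain index. A sketch one could verify with EXPLAIN (the table mirrors the question's columns; index names are invented, and exact plans vary by MySQL version):

        CREATE TABLE t (
            id INT AUTO_INCREMENT PRIMARY KEY,
            block_id INT NOT NULL,
            account_id INT NOT NULL,
            name VARCHAR(64),
            UNIQUE KEY uq_block_account (block_id, account_id),
            KEY idx_account (account_id)
        );

        -- served by the leftmost prefix of the unique index
        EXPLAIN SELECT * FROM t WHERE block_id = 1;
        -- served by the full unique index
        EXPLAIN SELECT * FROM t WHERE block_id = 1 AND account_id = 2;
        -- account_id is not a leftmost prefix of the unique index,
        -- so this falls back to idx_account
        EXPLAIN SELECT * FROM t WHERE account_id = 2;

    If that holds, the separate single-column index on block_id is redundant for lookups.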
