Search Results

Search found 4685 results on 188 pages for 'queries'.

Page 124 of 188

  • Getting user data from Active Directory using PL/SQL

    - by David Neale
    I had a discussion today about an Oracle procedure I wrote some time ago. I wanted to get 7,500 user email addresses from Active Directory using PL/SQL. AD returns a maximum of 1,000 rows and the LDAP provider used by Oracle does not support paging. My solution, therefore, was to filter on the last two characters of the sAMAccountName (*00, *01, *02, etc.). This results in 126 queries (100 for account names ending in digits, 26 for those ending in a letter; this was sufficient for my AD setup). The person I was speaking to (it was a job interview, by the way) said he could have done it a better way, but he would not tell me what that method was. Could anybody hazard a guess at what it might have been?
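
    A minimal PL/SQL sketch of the filtering approach described above; the get_ad_emails helper (which would wrap the DBMS_LDAP search for one filter) is hypothetical and shown only to illustrate how the 126 filters could be generated:

        DECLARE
          l_filter VARCHAR2(100);
        BEGIN
          -- 100 filters for account names ending in two digits: *00 .. *99
          FOR i IN 0 .. 99 LOOP
            l_filter := '(sAMAccountName=*' || LPAD(i, 2, '0') || ')';
            get_ad_emails(l_filter);  -- hypothetical helper wrapping the DBMS_LDAP calls
          END LOOP;
          -- 26 filters for account names ending in a letter: *a .. *z
          FOR i IN 0 .. 25 LOOP
            l_filter := '(sAMAccountName=*' || CHR(ASCII('a') + i) || ')';
            get_ad_emails(l_filter);
          END LOOP;
        END;
        /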

    Read the article

  • "lock request time out period exceeded" Error When Trying to See DB Hierarchies

    - by Lloyd Banks
    I have a database that I can still run basic queries against, albeit much more slowly than normal. When I try to expand the hierarchy trees for tables, views, or procedures in SSMS Object Explorer, I get the "lock request time out period exceeded" error. Report Server reports that run against objects in this database no longer complete, and jobs associated with procedures stored in this database also do not run. I tried using sp_who2 to find and kill all connections to the database, but that has not solved the problem. What is going on here, and how can I resolve it?
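
    As a hedged sketch (assuming SQL Server 2005 or later, where these DMVs exist), the session holding the locks that block Object Explorer can usually be found with something like:

        -- Requests currently blocked, and who is blocking them
        SELECT r.session_id, r.blocking_session_id, r.wait_type, r.wait_time, t.text
        FROM sys.dm_exec_requests AS r
        CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
        WHERE r.blocking_session_id <> 0;

        -- Open transactions that may still be holding locks
        SELECT session_id, transaction_id
        FROM sys.dm_tran_session_transactions;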

    Read the article

  • Adding a MongoDB collection to Netbeans

    - by Saif Bechan
    In NetBeans I have an option to add my MySQL databases to the IDE. This way I can easily browse them and run small queries. Now I am working on a MongoDB project, and I want to know whether it is possible to use the same functionality. I see that the MongoDB website lists a number of drivers, and I see that you can add drivers in NetBeans, but I do not know whether these are the same thing or whether they can be used this way. I have tried Google, but no luck. Anyone have an idea?

    Read the article

  • Has anyone used an object database with a large amount of data?

    - by Jon Kruger
    Object databases like MongoDB and db4o are getting a lot of publicity lately. Everyone who plays with them seems to love them, but I'm guessing they are dealing with about 640K of data in their sample apps. Has anyone tried to use an object database with a large amount of data (say, 50 GB or more)? Are you still able to execute complex queries against it (like from a search screen)? How does it compare to your usual relational database of choice? I'm just curious. I want to take the object database plunge, but I need to know whether it will work on something more than a sample app.

    Read the article

  • Aggregate functions in ANSI SQL

    - by morpheous
    I want to use multiple aggregate functions in a query. All the examples I have seen of aggregate functions, however, are trivial. Typically they are of the form:

        SELECT field1, agg_func1, agg_func2
        GROUP BY SOME_COLUMNS
        HAVING agg_func1 OP SOME_SCALAR

    where OP is a boolean operator (e.g. <, =, etc.) and SOME_SCALAR is a scalar (i.e. a constant number). What I want to know is whether it is possible to write (in ANSI SQL) queries like:

        SELECT field1, agg_func1, agg_func2, agg_func3
        GROUP BY SOME_COLUMNS
        HAVING (agg_func1 OP1 agg_func2) OP2 (agg_func2 OP3 agg_func3)

    where the OP[N] are boolean operators or ANSI SQL clause operators like BETWEEN, LIKE, IN, etc. Also, assuming this is possible (I have not seen any documentation saying otherwise), are there any efficiency/performance considerations (i.e. penalties) when the HAVING clause is a boolean expression combining the outputs of several aggregate functions, instead of the usual comparison of a single aggregate with a constant (e.g. MIN(salary) > 100) that appears in the most banal examples involving aggregate functions?
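
    For concreteness, here is a hedged example of the kind of query being asked about; the employees table and its columns are hypothetical, but a HAVING clause that compares aggregates to each other is standard ANSI SQL:

        SELECT department, COUNT(*), MIN(salary), MAX(salary), AVG(salary)
        FROM employees
        GROUP BY department
        HAVING MAX(salary) > 2 * MIN(salary)
           AND COUNT(*) BETWEEN 5 AND 500;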

    Read the article

  • Overriding unique indexed values

    - by Yeti
    This is what I'm doing right now (name is UNIQUE):

        SELECT * FROM fruits WHERE name = 'apple';

    I check whether the query returned any result. If yes, I do nothing. If no, the new value has to be inserted:

        INSERT INTO fruits (name) VALUES ('apple');

    Instead of the above, is it OK to insert the value into the table without checking whether it already exists? If the name already exists, an error will be thrown; if it doesn't, a new record will be inserted. Right now I have to insert 500 records in a for loop, which results in 1,000 queries. Would it be OK to skip the "already-exists" check?
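
    As a hedged sketch (assuming MySQL, where this pattern is most common; PostgreSQL uses ON CONFLICT DO NOTHING instead), duplicates on a UNIQUE column can be swallowed at insert time so the existence check is not needed:

        -- Silently skip rows whose name already exists
        INSERT IGNORE INTO fruits (name) VALUES ('apple');

        -- Or turn the duplicate into a no-op update
        INSERT INTO fruits (name) VALUES ('apple')
        ON DUPLICATE KEY UPDATE name = name;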

    Read the article

  • Help needed with an AdventureWorks SQL query

    - by vaibhav
    I was just playing with the AdventureWorks database in SQL Server and got stuck on a query. I want to select all titles from HumanResources.Employee that are either male or female but not both; i.e. if the title Accountant has both male and female employees, I want to leave that title out. I need only those titles where the Gender is exclusively male or exclusively female. This is what I have so far:

        SELECT DISTINCT(Title) FROM HumanResources.Employee WHERE Gender = 'M'
        SELECT DISTINCT(Title) FROM HumanResources.Employee WHERE Gender = 'F'

    A join between these two queries would probably work, but if you have any other solution, please let me know. It is not homework. :) Thanks in advance.
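
    One hedged way to express "titles with exactly one gender" (assuming the Title and Gender columns the question references) is a single grouped query rather than a join of the two SELECTs above:

        SELECT Title
        FROM HumanResources.Employee
        GROUP BY Title
        HAVING COUNT(DISTINCT Gender) = 1;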

    Read the article

  • Exploring search options for PHP

    - by Joshua
    I have an InnoDB table using numerous foreign keys, but we just want to look up some basic info out of it. I've done some research but am still lost.

    1) How can I tell if my host has Sphinx installed already? I don't see it as an option for the table storage engine (i.e. InnoDB, MyISAM).
    2) Is Zend_Search_Lucene responsive enough for AJAX functionality over millions of records?
    3) Should I mirror my InnoDB table with a MyISAM copy, make every InnoDB transaction end with a write to the MyISAM table, and then use 1:1 lookups? How would I do this automagically? This should make the MyISAM copy effectively ACID-compliant and free(er) from corruption, no?
    4) PostgreSQL fulltext queries don't even look like SQL to me, and I don't have time to learn a new SQL syntax; I need beginner-friendly options.
    5) ????????????????????

    This is a high-volume site on a decently-equipped VPS. Thanks very much for any ideas.

    Read the article

  • Custom keys for Google App Engine models (Python)

    - by Cameron
    First off, I'm relatively new to Google App Engine, so I'm probably doing something silly. Say I've got a model Foo:

        class Foo(db.Model):
            name = db.StringProperty()

    I want to use name as a unique key for every Foo object. How is this done? When I want to get a specific Foo object, I currently query the datastore for all Foo objects with the target unique name, but queries are slow (plus it's a pain to ensure that name is unique when each new Foo is created). There's got to be a better way to do this! Thanks.

    Read the article

  • Is this a valid benefit of using embedded SQL over stored procedures?

    - by George
    Here's an argument for SPs that I haven't heard. Flamers, be gentle with the downvote. Since there is overhead associated with each trip to the database server, I would suggest that a POSSIBLE reason for placing your SQL in SPs rather than embedded code is that you are better insulated against change without taking a performance hit. For example, let's say you need to perform Query A, which returns a scalar integer. Later the requirements change, and you decide that if the result of that scalar is X then, and only then, you need to perform another query. If you performed the first query in an SP, you could easily check its result and conditionally execute the second query in the same SP. How would you do this efficiently in embedded SQL without performing a separate round trip or an unnecessary query? Here's an example:

        -- This SP may return one or two result sets.
        SELECT @CustCount = COUNT(*) FROM CUSTOMER
        IF @CustCount > 10
            SELECT * FROM PRODUCT

    Can this be done in embedded SQL, and if so, what is the best way to do it?
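
    A hedged sketch of the stored-procedure version being described, emphasizing that the check and the conditional second query happen in one round trip; the procedure name is hypothetical, and the tables and the threshold of 10 come from the question's own example:

        CREATE PROCEDURE dbo.GetProductsIfManyCustomers
        AS
        BEGIN
            DECLARE @CustCount INT;

            -- First query: a scalar result
            SELECT @CustCount = COUNT(*) FROM CUSTOMER;

            -- Second query runs only when the condition holds,
            -- all within a single round trip to the server.
            IF @CustCount > 10
                SELECT * FROM PRODUCT;
        END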

    Read the article

  • How can I loop over a query a specific number of times, even when that number is greater than the number of rows returned?

    - by JS
    I need to loop over a query exactly 12 times to complete the rows in a form, but the query rarely returns 12 rows. The cfloop endRow attribute doesn't force the loop to keep going if the result has fewer than 12 rows; if it did, it would be ideal to use something like cfloop query="myQuery" endRow="12". The two options I have now are to skip the loop and write out all 12 rows by hand, which results in a lot of duplicate code (there are 20 columns), or to do a query of queries for each row, which seems like a lot of wasted processing. Thanks for any ideas.

    Read the article

  • MySQL query for initial filling of order column

    - by Sejanus
    Sorry for the vague question title. I've got a table containing a huge list of, say, products belonging to different categories. A foreign key column indicates which category each product belongs to; e.g. in the "bananas" row, category might be 3, which indicates "fruits". I have now added an additional column, "order", which holds the display order within that particular category, and I need to do the initial ordering. Since the list is big, I don't want to change every row by hand. Is it possible to do this with one or two queries? I don't care what the initial order is as long as it starts at 1 and counts up. I can't simply do SET order = id, because id counts up from 1 regardless of product category, while order must restart from 1 for every category.
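
    A hedged sketch of one way to do this in a single statement, assuming MySQL 8.0+ (for window functions); products, category_id and display_order are hypothetical stand-ins for the real table and column names:

        UPDATE products AS p
        JOIN (
            SELECT id,
                   ROW_NUMBER() OVER (PARTITION BY category_id ORDER BY id) AS rn
            FROM products
        ) AS numbered ON numbered.id = p.id
        SET p.display_order = numbered.rn;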

    Read the article

  • Multi-join query returns too many results and improperly matched rows

    - by Woot4Moo
    I have the following minimal schema in Oracle: http://sqlfiddle.com/#!4/c1ed0/14 The queries I have run yield too many results, and this query yields incorrectly matched rows:

        select cat.*, status.*, source.*
        from cats cat, status status, source source
        Left OUTER JOIN source source2 on source2.sourceid = 1
        Right OUTER JOIN status status2 on status2.isStray = 0
        order by cat.name

    What I am expecting is a result set like the following, but I cannot seem to come up with the correct SQL:

        NAME     AGE  LENGTH  STATUSID  CATSOURCE  ISSTRAY  SOURCEID  CATID
        Adam     1    25      null      null       null     1         2
        Bill     5    1       null      null       null     null      null
        Charles  7    5       null      null       null     null      null
        Steve    12   15      1         1          1        1         1

    In plain English, I am looking to return all known cats plus their associated cat source and cat status, while retaining null values. The only information I will have is the source I am curious about. I also only want the cats whose status is either STRAY or UNKNOWN (null).
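
    A hedged sketch of the shape such a query usually takes; the join columns (catid on source, statusid on cats) are guesses based on the expected output rather than the actual fiddle schema:

        SELECT cat.name, cat.age, cat.length,
               status.statusid, source.catsource, status.isstray,
               source.sourceid, source.catid
        FROM cats cat
        LEFT OUTER JOIN source
          ON source.catid = cat.catid
         AND source.sourceid = 1
        LEFT OUTER JOIN status
          ON status.statusid = cat.statusid
        WHERE status.isstray = 1 OR status.isstray IS NULL
        ORDER BY cat.name;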

    Read the article

  • ORM on iPhone: simpler than Core Data

    - by Alexander Babaev
    The question is rather simple. I know there is SQLite, and there is also Core Data, but I need something in between: more object-oriented than the SQLite API and simpler than Core Data. The main points are: I need access to stored entities only by id (no queries required); I need to store items of a single type, which means I could use just one table if I chose SQLite; and I want automatic object-relational conversion, or object storage if the storage is not relational. I could use object archiving, but then I have to implement things myself (NSArchiver), whereas I want to write some kind of class and get persistence automatically, as can be done with Hibernate/ActiveRecord/Core Data/etc. Thanks.

    Read the article

  • Using Rails and RSpec, how do you test that the database is not touched by a method?

    - by Will Tomlins
    So I'm writing a test for a method which, for performance reasons, should achieve what it needs to achieve without using SQL queries. I'm thinking all I need to know is what to stub:

        describe SomeModel do
          describe 'a_getter_method' do
            it 'should not touch the database' do
              thing = SomeModel.create
              something_inside_rails.should_not_receive(:a_method_querying_the_database)
              thing.a_getter_method
            end
          end
        end

    EDIT: to provide a more specific example:

        class Publication < ActiveRecord::Base
        end

        class Book < Publication
        end

        class Magazine < Publication
        end

        class Student < ActiveRecord::Base
          has_many :publications

          def publications_of_type(type)
            # This is the method I am trying to test.
            # The test should show that when I do the following, the database is queried.
            self.publications.find_all_by_type(type)
          end
        end

        describe Student do
          describe "publications_of_type" do
            it 'should not touch the database' do
              Student.create()
              student = Student.first(:include => :publications)
              # The publications association is already loaded, so no need to touch the DB.
              lambda { student.publications_of_type(:magazine) }.should_not touch_the_database
            end
          end
        end

    So the test should fail in this example, because the Rails find_all_by dynamic finder relies on SQL.

    Read the article

  • How to make the Django test framework read from the live database?

    - by lfborjas
    I realize there's a similar question here, but this one has a different approach. I have a Django app that runs queries over data indexed with djapian; I'd like to write unit tests for this app's search component, and obviously I'd need the Django settings module and all database connections active, so the test runner that Django provides seems ideal. However, the Django testing framework creates a dummy database, and I'd hate to dump all my data to a fixture and then index it (the tests would take forever!). My data isn't at risk because the tests would only read from the database, so how could this be achieved? I'm new to this whole unit-testing thing, so the solution of writing a new test runner that I read in that similar question doesn't enlighten me a bit, at least not without some details.

    Read the article

  • Fulltext search for InnoDB? Or a good search solution for a PHP app

    - by Joshua
    I have a table I want to run a fulltext search on, but it is currently InnoDB and uses a lot of foreign keys for other kinds of queries. Should I make a 1:1 "meta-data" table in MyISAM just for fulltext? Also, I've read some things saying that fulltext corrupts MySQL tables fairly randomly. I don't know; the articles are a couple of years old, so maybe they've fixed that in 5+? If not, what's a good solution for searching? Zend_Lucene seems cool but slow, even with caching, for the client's large tables, autocomplete functionality, et al.
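
    A hedged sketch of the 1:1 MyISAM shadow-table idea mentioned above; the table and column names are hypothetical, and note that InnoDB itself gained FULLTEXT support in MySQL 5.6, which may make this unnecessary on newer servers:

        CREATE TABLE article_search (
            article_id INT NOT NULL PRIMARY KEY,
            title      VARCHAR(255),
            body       TEXT,
            FULLTEXT KEY ft_title_body (title, body)
        ) ENGINE=MyISAM;

        -- Keep it in sync on write (application code or triggers), then search with:
        SELECT article_id
        FROM article_search
        WHERE MATCH(title, body) AGAINST ('search terms' IN NATURAL LANGUAGE MODE);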

    Read the article

  • MySQL Cluster data nodes - slow SELECTs

    - by Boyan Georgiev
    Hi all. First off, I'm new to MySQL Cluster. This is my pain: I've managed to set up a MySQL Cluster with two data nodes, two SQL nodes, and one management server. Everything works pretty well, except for the following: my data nodes are spread across an intranet link, which introduces latency into communication between them. Apparently, due to MySQL Cluster's internal partitioning scheme, when my PHP application pulls data from the cluster via SELECT queries, parts of the data are pulled from both data nodes, which makes the page render REALLY slowly. If I take one data node offline, the data can only be pulled from the single remaining data node, and the final result (the HTML output) appears on screen in a very timely fashion. So my question is this: can the data nodes/cluster be told to pull data only from partitions stored on a particular data node?

    Read the article

  • Making OR/M loosely coupled and abstracted away from other layers.

    - by Genuine
    Hi all. In an n-tier architecture, the best place to put object-relational mapping (OR/M) code is in the data access layer; for example, database queries and updates can be delegated to a tool like NHibernate. Yet I'd like to keep all references to NHibernate within the data access layer and abstract those dependencies away from the layers above and below it. That way I can swap in another OR/M tool (e.g. Entity Framework) or approach (e.g. plain vanilla stored procedure calls, mock objects) without causing compile-time errors or a major overhaul of the entire application. Testability is an added bonus. Could someone please suggest a wrapper (i.e. an interface or base class) or an approach that would keep the OR/M loosely coupled and contained in one layer, or point me to resources that would help? Thanks.

    Read the article

  • Parsing HTML with XPath and PHP

    - by Peter
    Is there a way (using XPath and PHP) to do the following (WITHOUT external XSLT files)?

    - Remove all tables and their contents
    - Remove everything after the first h1 tag
    - Keep only paragraphs (INCLUDING their inner HTML: links, lists, etc.)

    I received an XSLT answer here, but I'm looking for XPath queries that don't require external files. Currently, I've got the HTML in question loaded into a SimpleXMLElement via:

        $doc = @DOMDocument::loadHTML($xml);
        $data = simplexml_import_dom($doc);

    Now I need help with:

        $data = $data->xpath('??????');

    I've been working on this one for several days to no avail. I really appreciate the help. Edit: I don't particularly care what's inside the paragraphs, as I can use strip_tags to eliminate what I don't want. All I need to do is isolate the paragraphs from the rest of the source. I suppose a more specific, accurate requirement would be this: return only paragraphs (and their HTML contents) that aren't contained in tables, and only those before the first h1 tag.

    Read the article

  • Rails: getting logic to run at end of request, regardless of filter chain aborts?

    - by JSW
    Is there a reliable mechanism discussed in the Rails documentation for calling a function at the end of the request, regardless of filter chain aborts? After filters won't do, because they don't get called if any prior filter redirected or rendered. For context, I'm trying to put some structured profiling/reporting information into the app log at the end of every request. This information is collected throughout the request lifetime via instance variables wrapped in custom controller accessors, and dumped at the end as a JSON blob for use by a post-processing script. My end goal is to generate reports about my application's logical query distribution (things that depend on controller logic, not just request URIs and parameters), performance profile (time spent in specific DB queries or blocked on web services), failure rates (including invalid incoming requests that get rejected by before_filter validation rules), and a slew of other things that cannot really be parsed from the basic information in the application and Apache logs. At a higher level, is there a different "Rails way" that solves my app profiling goal?

    Read the article

  • CakePHP: Using two tables for a single model

    - by mwaterous
    I'm just picking up development in CakePHP, so forgive me if this seems obvious; it did to me when I first read about has, belongsTo, hasMany, etc. The problem is that I would like to associate two tables with a single model, and I was wondering whether there is a way to configure this so that when CakePHP runs its queries it automatically performs a join on the two tables. I don't want to create a separate model for the second table, as it is merely a meta-information table: the master table contains the primary information required, while the meta table is populated with secondary information that is not required and therefore may or may not be set for every row of the master table.
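
    For reference, the join the poster wants CakePHP to generate would look roughly like the following; masters, master_meta, and the key names are hypothetical stand-ins for the real tables, and the LEFT JOIN reflects that the meta row may be missing:

        SELECT m.*, meta.*
        FROM masters AS m
        LEFT JOIN master_meta AS meta
               ON meta.master_id = m.id;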

    Read the article

  • Compiled query using list of class objects in C#

    - by Sukan
    Hello, can somebody help me with creating compiled queries where the input is a list of class objects? I have seen examples where a Func<DataContext, somematchobject, IQueryable<T>> is created and compiled, but can I do something like Func<List<T>, matchObject, T> and compile it? Basically, I want an object (T) meeting certain conditions (as in matchObject) to be returned from a list of objects (List<T>). Will CompiledQuery.Compile help me with this? Please help me, experts!

    Read the article

  • MySQL: search using FULLTEXT or LIKE? Which is better?

    - by user156814
    I'm working on a search feature for my application; I want to search all articles in the database. As of now I'm using LIKE in my queries, but I want to add a "Related Articles" feature, sort of like what SO has in the sidebar (which I see as a problem if I use LIKE). What's better for MySQL searching, FULLTEXT or LIKE, or anything else I might not know about? Also, I'm using the Kohana framework, so if anybody knows an easy way to do fulltext matching using the query builder, I'd appreciate that. Thanks.
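
    As a hedged illustration of the FULLTEXT option (the articles table is hypothetical, and a FULLTEXT index on (title, body) is assumed; such indexes require MyISAM on older MySQL versions or InnoDB on 5.6+), a relevance-ranked query for a "related articles" list might look like:

        SELECT id, title,
               MATCH(title, body) AGAINST ('original article keywords') AS relevance
        FROM articles
        WHERE MATCH(title, body) AGAINST ('original article keywords')
        ORDER BY relevance DESC
        LIMIT 10;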

    Read the article

  • Fixing "Lock wait timeout exceeded; try restarting transaction" for a 'stuck" Mysql table?

    - by Tom
    From a script I sent a query like this to my local database thousands of times:

        update some_table set some_column = some_value

    I forgot to add the WHERE part, so the same column was set to the same value for all the rows in the table, and this was done thousands of times; the column was indexed, so the corresponding index was probably updated many times too. I noticed something was wrong because it took too long, so I killed the script. I have even rebooted my computer since then, but something is stuck in the table: simple queries take a very long time to run, and when I try dropping the relevant index it fails with this message:

        Lock wait timeout exceeded; try restarting transaction

    It's an InnoDB table, so the stuck transaction is probably implicit. How can I fix this table and remove the stuck transaction from it?
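
    A hedged sketch of how the offending transaction can usually be located and killed (assuming MySQL 5.5+ or the InnoDB plugin, where the information_schema.INNODB_TRX view exists):

        -- List transactions InnoDB still considers open
        SELECT trx_id, trx_mysql_thread_id, trx_started, trx_state
        FROM information_schema.INNODB_TRX;

        -- Kill the connection holding the stale transaction (use the thread id from above)
        KILL 12345;

        -- SHOW ENGINE INNODB STATUS also reports lock waits and long-running transactions
        SHOW ENGINE INNODB STATUS;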

    Read the article
