Search Results

Search found 7116 results on 285 pages for 'nested queries'.

Page 197 of 285

  • pipelined function

    - by user289429
    Can someone provide an example of how to use a parallel table function in Oracle PL/SQL? We need to run a massive query for each of 15 years and combine the results. Effectively we want:

        SELECT * FROM TABLE(TableFunction(CURSOR(SELECT * FROM year_table)))

    The innermost SELECT returns all the years, and the table function takes each year, runs the massive query and returns a collection. The problem we have is that all the years are fed to a single invocation of the table function; we would rather have the table function called in parallel, once per year. We tried all sorts of partitioning by hash and by range and it didn't help. Also, can we drop the keyword PIPELINED from the function declaration, since we are not performing any transformation and just need the aggregate of the result set?
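
    A minimal sketch of the usual shape for this, assuming hypothetical names (year_pkg, year_cur_t, result_tab_t, year_col): the function is declared PARALLEL_ENABLE with a PARTITION ... BY clause over a strongly typed REF CURSOR parameter (HASH and RANGE partitioning require the strong type), and the driving table inside the CURSOR expression needs a parallel degree, otherwise every row is still fed to a single invocation.

        -- Hypothetical package with a strongly typed ref cursor (needed for HASH/RANGE partitioning)
        CREATE OR REPLACE PACKAGE year_pkg AS
          TYPE year_cur_t IS REF CURSOR RETURN year_table%ROWTYPE;
        END year_pkg;
        /

        CREATE OR REPLACE FUNCTION table_function(p_years year_pkg.year_cur_t)
          RETURN result_tab_t PIPELINED                            -- result_tab_t: a SQL-level collection type
          PARALLEL_ENABLE (PARTITION p_years BY HASH (year_col))   -- spread input rows across parallel slaves
        IS
          v_row year_table%ROWTYPE;
        BEGIN
          LOOP
            FETCH p_years INTO v_row;
            EXIT WHEN p_years%NOTFOUND;
            -- run the per-year "massive query" here and PIPE ROW(...) each result row
          END LOOP;
          RETURN;
        END;
        /

        -- The driving table must itself be queried in parallel for the partitioning to take effect
        SELECT *
          FROM TABLE(table_function(CURSOR(SELECT /*+ PARALLEL(y, 8) */ * FROM year_table y)));

    On the second question: dropping PIPELINED means the whole collection is built in memory before any row is returned, so even without a transformation it is usually kept for large result sets.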

    Read the article

  • PHP - Frameworks, ORM, Encapsulation.

    - by Ian
    Programming languages/environments aside, are there many developers who use a PHP framework with its ORM and still abide by encapsulation for the DAL/BLL? I'm managing a team of a few developers and am finding that most of the frameworks require me to do daily code inspections because my developers are using the built-in ORM. Right now I've been using a tool to generate the classes and CRUD myself, with an area for them to write additional queries/functions. What's been happening, though, is that they are creating vulnerabilities by not doing proper checks on data permissions, or by allowing key fields to be manipulated in the form. Any suggestions, other than getting a new team and a new language? (I've seen Python/Ruby frameworks have the same issues.)

    Read the article

  • HQL updates and domain objects

    - by CaptainAwesomePants
    I have what may be a pretty elementary Hibernate question. Do HQL (and/or Criteria) update queries cause updates to live domain objects? And do they automatically flush now-invalid domain objects from the first-level cache? Example:

        Player playerReference1 = session.get(Player.class, 1);
        session.createQuery("update players set gold = 100").executeUpdate();
        // Question #1 -- does playerReference1.getGold() now return 100?

        Player playerReference2 = session.get(Player.class, 1);
        // Question #2 -- does playerReference2.getGold() return 100, or is it the same exact object?

    Should I make a practice of evicting all objects that are affected by an HQL update if there's a chance some code will need them later?

    Read the article

  • How to find duplicate values in SQL Server

    - by hgulyan
    Hi, I'm using SQL Server 2008. I have a table Customers:

        customer_number  int
        field1           varchar
        field2           varchar
        field3           varchar
        field4           varchar

    ...and a lot more columns that don't matter for my queries. Column customer_number is the primary key. I'm trying to find duplicate values and the differences between them. Please help me find all rows that have:
    1) the same field1, field2, field3 and field4;
    2) only 3 of those columns equal and one different (excluding rows from list 1);
    3) only 2 of those columns equal and two different (excluding rows from lists 1 and 2).
    In the end I'll have 3 result sets, each with an additional groupId that is the same for a group of similar rows (for example, in the 3-columns-equal case, rows sharing the same 3 column values form a separate group). Thank you.
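
    A minimal sketch for case 1 (all four fields equal), assuming the column names above; DENSE_RANK supplies the groupId, and cases 2 and 3 can be built the same way by self-joining on the 3- and 2-column combinations while excluding the rows already returned here:

        -- Rows whose four fields match at least one other row, grouped by the shared values
        SELECT c.customer_number,
               c.field1, c.field2, c.field3, c.field4,
               DENSE_RANK() OVER (ORDER BY c.field1, c.field2, c.field3, c.field4) AS groupId
        FROM   Customers AS c
        JOIN  (SELECT field1, field2, field3, field4
               FROM   Customers
               GROUP BY field1, field2, field3, field4
               HAVING COUNT(*) > 1) AS d
          ON   d.field1 = c.field1 AND d.field2 = c.field2
          AND  d.field3 = c.field3 AND d.field4 = c.field4;   -- note: equality joins skip NULL field values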

    Read the article

  • Is it OK to allow users to query an OLTP SQL Server database with excel?

    - by user169867
    I have a SQL Server 2005 database used by several applications. Some users wish to query the database directly from Excel. I can understand this, because it is a useful tool for ad hoc queries, and it gets the data into a format that's easily transmitted and manipulated by other users. My question is: does Excel (say 2003/2007) do its querying in a way that won't cause concurrency issues? Or does a separate data warehouse database need to be created to handle this scenario? Thanks for any advice.
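
    One common mitigation on SQL Server 2005, sketched here with hypothetical object names (MyOltpDb, dbo.Orders, dbo.SalesForExcel, excel_readers) rather than as a full answer: let ad hoc readers see the last committed row versions instead of blocking behind writers, and expose only a restricted read-only view to the Excel users.

        -- Row-versioned read committed: readers no longer block writers
        -- (typically needs exclusive access to the database to switch on)
        ALTER DATABASE MyOltpDb SET READ_COMMITTED_SNAPSHOT ON;
        GO

        -- A narrow, read-only surface for ad hoc Excel queries
        CREATE VIEW dbo.SalesForExcel AS
            SELECT OrderId, OrderDate, CustomerName, TotalAmount
            FROM   dbo.Orders;
        GO
        GRANT SELECT ON dbo.SalesForExcel TO excel_readers;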

    Read the article

  • Duplicate an AppEngine Query object to create variations of a filter without affecting the base query

    - by Steve Mayne
    In my AppEngine project I need to use a certain filter as a base and then apply various extra filters to the end, retrieving the different result sets separately, e.g.:

        base_query = MyModel.all().filter('mainfilter', 123)

    Then I need to use the results of various sub-queries separately:

        subquery1 = base_query.filter('subfilter1', 'xyz')
        # Do something with subquery1 results here

        subquery2 = base_query.filter('subfilter2', 'abc')
        # Do something with subquery2 results here

    Unfortunately filter() affects the state of the base_query Query instance rather than just returning a modified copy. Is there any way to duplicate the Query object and use it as a base? Is there perhaps a standard Python way of copying an object that could be used? The extra filters are actually applied from the results of different forms dynamically within a wizard, and each branch uses the 'running total' of the query to decide whether to ask further questions. Obviously I could pass around a rudimentary stack of filter criteria, but I'd rather use the Query itself if possible, as it adds simplicity and elegance to the solution.

    Read the article

  • HBase as web app backend

    - by NathanD
    Can anyone advise whether it is a good idea to have HBase as the primary data source for a web-based application? My primary concern is HBase's response time to queries. Is it possible to have sub-second responses?

    Edit: more details about the app itself.
    - Amount of data: ~500GB of text data, expected to reach 1TB soon
    - Number of concurrent users of the app: up to 50
    - The app will present reports about data stored in HBase, such as how many times keyword "X" occurred in the last 24h
    - For ~80% of requests from the app I will know the exact key; 20% will be scans (I'm looking into HBase schema design topics to make those run fast)

    Read the article

  • ResultSet and aggregation

    - by kachanov
    OK, I admit my situation is special. There is a data system that supports SQL-92 and a JDBC interface. However, the SQL requests are pretty expensive, and in my application I need to retrieve the same data multiple times and aggregate it ("group by") on different fields to show different dimensions of the same data. For example, on one screen I have three grids that show the same set of records but aggregated by city (1st grid), by population (2nd grid), and by number of babies (3rd grid). That amounts to 3 SQL queries (which is very slow), unless any of you can suggest an idea, or a library from Apache Commons or Google Code, that lets me select all records into a ResultSet once and compute the three "group by" aggregations from that single ResultSet. Am I missing some obvious and unexpected solution to this problem?

    Read the article

  • Replace repeating character with array elements in PHP

    - by Will Croft
    I hope this is blindingly obvious: I'm looking for the fastest way to replace a repeating placeholder in a string with the elements of a given array, e.g. for SQL queries and parameter replacement.

        $query = "SELECT * FROM a WHERE b = ? AND c = ?";
        $params = array('bee', 'see');

    Here I would like to replace the instances of ? with the corresponding ordered array elements, like so:

        SELECT * FROM a WHERE b = 'bee' AND c = 'see'

    I see that this might be done using preg_replace_callback, but is this the fastest way, or am I missing something obvious?

    Read the article

  • Question about mysql indexes on low to medium cardinality columns

    - by Kevin J
    I have a general question about the way database indexing works, particularly in MySQL. Let's say I have a table with a million rows and a column "ClientID" whose values are distributed relatively equally among 30 distinct values. This column therefore has very low cardinality (30) relative to the primary key (1 million). Now, I understand that you generally shouldn't create indexes on low-cardinality fields. However, in this case queries are only ever done with one of the 30 ClientIDs. So wouldn't creating an index on ClientID be helpful, as the search space is automatically reduced to 1/30th of what it normally would be? Or is my understanding of how the index works flawed? Thanks
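
    A short sketch of how this might be checked in practice, assuming a hypothetical table orders with a created_at column; EXPLAIN shows whether the optimizer actually picks the index, and a composite index that also covers the columns you filter or sort on often serves the per-client queries better than ClientID alone:

        -- Single-column index: still narrows the scan to roughly 1/30th of the rows
        CREATE INDEX idx_client ON orders (ClientID);

        -- Composite index: lets the common per-client query be satisfied in index order
        CREATE INDEX idx_client_created ON orders (ClientID, created_at);

        EXPLAIN SELECT * FROM orders WHERE ClientID = 17 ORDER BY created_at DESC LIMIT 50;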

    Read the article

  • What is the difference between Where and Join in LINQ?

    - by Freshblood
    Hello. What is the difference between these 2 queries? Are they completely equivalent?

        from order in myDB.OrdersSet
        from person in myDB.PersonSet
        from product in myDB.ProductSet
        where order.Persons_Id == person.Id && order.Products_Id == product.Id
        select new { order.Id, person.Name, person.SurName, product.Model, UrunAdi = product.Name };

    and

        from order in myDB.OrdersSet
        join person in myDB.PersonSet on order.Persons_Id equals person.Id
        join product in myDB.ProductSet on order.Products_Id equals product.Id
        select new { order.Id, person.Name, person.SurName, product.Model, UrunAdi = product.Name };
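
    For comparison, a rough sketch of the SQL the two forms commonly translate to (table names Orders, Persons and Products are assumed from the entity set names; the exact SQL depends on the LINQ provider). The first form is a cross join of the three sets filtered in the WHERE clause, the second uses explicit joins, and with these equality predicates both are normally optimized to the same inner-join plan:

        SELECT o.Id, p.Name, p.SurName, pr.Model, pr.Name AS UrunAdi
        FROM   Orders   AS o
        JOIN   Persons  AS p  ON o.Persons_Id  = p.Id
        JOIN   Products AS pr ON o.Products_Id = pr.Id;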

    Read the article

  • Complicated conditional SQL query

    - by DevAno1
    I'm not even sure if it's possible, but I need it for my Access database. So I have the following DB structure. Now I need to perform a query that takes the category_id from my product and does the magic:
    - let's say the product belongs to a console (its category_id is in table Console): then from console_types take type_id where category_id == category_id;
    - but if the product belongs to a console game (its category_id is in table console_game): then from console_game take game_cat_id where category_id == category_id.
    I'm not sure if MySQL is capable of such a thing. If not, I'm really f&%ranked up. Maybe there is a way to split this into 2 or 3 separate queries?
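
    A sketch of one way to express this as a single statement, assuming a products table with a product_id column and the console_types / console_game tables described above; each branch only yields rows when the product's category_id exists in its table, and UNION ALL stitches the two cases together (it also splits naturally into two separate queries):

        SELECT p.product_id, ct.type_id AS resolved_id
        FROM   products AS p
        INNER JOIN console_types AS ct ON ct.category_id = p.category_id

        UNION ALL

        SELECT p.product_id, cg.game_cat_id AS resolved_id
        FROM   products AS p
        INNER JOIN console_game AS cg ON cg.category_id = p.category_id;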

    Read the article

  • Database Design Question

    - by deniz
    Hi, I am designing a database for a project. I have a table that has 10 columns, most of which are used whenever the table is accessed, and I need to add 3 more columns:
    - View Count
    - Thumbs Up (count)
    - Thumbs Down (count)
    These will be used in 90% of the queries against the table. So my question is whether it is better to split the table up and create a new table holding these 3 columns plus a foreign key, or just make it 13 columns and use no joins. Since these columns will be used frequently, I guess adding the 3 columns is better, but if I later need 10 more columns that will be used 90% of the time, should I add them as well, or create a new table and use joins? I am not sure when to break the table up if the columns are used very frequently. Do you have any suggestions? Thanks in advance.
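
    For reference, a sketch of what the split would look like, using hypothetical names (items, item_stats); the frequently updated counters live in a 1:1 side table keyed by the same id, and reading both together costs a single primary-key join:

        CREATE TABLE items (
            id          INT PRIMARY KEY,
            title       VARCHAR(255)
            -- ...the other ten frequently used columns...
        );

        CREATE TABLE item_stats (
            item_id     INT PRIMARY KEY REFERENCES items(id),   -- 1:1 with items
            view_count  INT NOT NULL DEFAULT 0,
            thumbs_up   INT NOT NULL DEFAULT 0,
            thumbs_down INT NOT NULL DEFAULT 0
        );

        SELECT i.title, s.view_count, s.thumbs_up, s.thumbs_down
        FROM   items AS i
        JOIN   item_stats AS s ON s.item_id = i.id;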

    Read the article

  • In MongoDB, how can I replicate this simple query using map/reduce in ruby?

    - by Matthew Rathbone
    Hi. Using the regular MongoDB library in Ruby, I have the following query to find the average filesize across a set of 5001 documents:

        avg = 0
        total = collection.count()
        Rails.logger.info "#{total} asset creation stats in the system"
        collection.find().each { |row| avg += (row["filesize"] * (1 / total.to_f)) if row["filesize"] }

    It's pretty simple, so I'm trying to do the same using map/reduce as a learning exercise. This is what I came up with:

        map = 'function() { emit("filesizes", {size: this.filesize, num: 1}); }'
        reduce = 'function(k, vals) {
          var result = {size: 0, num: 0};
          for (var x in vals) {
            var new_total = result.num + vals[x].num;
            result.num = new_total;
            result.size = result.size + (vals[x].size * (vals[x].num / new_total));
          }
          return result;
        }'
        @results = collection.map_reduce(map, reduce)

    However, the two queries come back with two different results! What am I doing wrong?

    Read the article

  • VendInvoiceJour.InvoiceAccount <- VendTable.AccountNum relation

    - by vukis
    Hi. I have the following situation: I need to join VendInvoiceJour.InvoiceAccount to VendTable.AccountNum and take VendTable.VendGroup. In all cases (queries, or even views) Dynamics AX joins the tables as VendInvoiceJour.OrderAccount = VendTable.AccountNum, not VendInvoiceJour.InvoiceAccount = VendTable.AccountNum. I'm trying to use this kind of query:

        qBdSVendJour = element.query().dataSourceTable(tablenum(VendInvoiceJour));
        qBdSVendTbl  = qBdSVendJour.addDataSource(tablenum(VendTable));
        qBdSVendTbl.relations(true);
        qBdSVendTbl.joinMode(JoinMode::InnerJoin);
        qBdSVendTbl.fetchMode(QueryFetchMode::One2One);
        qBdSVendTbl.addLink(fieldNum(VendInvoiceJour, InvoiceAccount), fieldNum(VendTable, AccountNum));
        // (Dynamics AX automatically corrects InvoiceAccount to OrderAccount in reports
        //  if this link is tried in MorphX)

    Read the article

  • PHP e-commerce site talking to internal database for stock / ordering?

    - by CitrusTree
    Hi. I'm working on an e-commerce site (either bespoke in PHP, or using Drupal/Ubercart), and I'd like to investigate having the site interact with an internal (FileMaker) database we use to manage stock and orders. Currently we manually transfer orders from the web site to our own database, and the site does not check or record changes in stock. My plan to allow the two to interact is as follows:
    - Make the internal database available externally on a machine with a fixed IP
    - Allow external access from the site only
    - Connect to the internal database using ODBC (or similar)
    - Use simple queries to check stock / record stock changes / record order details
    Am I missing something here, as this sounds quite straightforward? Is there another solution I should be taking a look at? Thanks in advance for any help or comments.

    Read the article

  • Find overdrawn accounts in SQL

    - by mazzzzz
    Hey guys, I have a program that allows me to run queries against a large database. I have two tables that are important right now: Deposits and Withdraws. Each contains a history for every user. I need to take each table, add up every deposit and every withdrawal (per user), then subtract the withdrawals from the deposits. I then need to return every user whose result is negative (i.e. they withdrew more than they deposited). Is this possible in one query? Example:

        Deposit table:
        |ID|UserName|Amount|
        |1 | Use1   |100.00|
        |2 | Use1   | 50.00|
        |3 | Use2   | 25.00|
        |4 | Use1   |  5.00|

        Withdraw table:
        |ID|UserName|Amount|
        |2 | Use2   |  5.00|
        |1 | Use1   |100.00|
        |4 | Use1   |  5.00|
        |3 | Use2   | 25.00|

    So the result would output:

        |OverWithdrawers|
        |     Use2      |

    Is this possible (I sure don't know how to do it)? Thanks for any help, Max
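
    A minimal sketch of one way to do it in a single query, assuming table names Deposits and Withdraws with the columns shown above; each side is summed per user first, then the totals are compared (the LEFT JOIN plus COALESCE also catches users who only ever withdrew):

        SELECT w.UserName AS OverWithdrawers
        FROM  (SELECT UserName, SUM(Amount) AS total_out
               FROM   Withdraws
               GROUP BY UserName) AS w
        LEFT JOIN
              (SELECT UserName, SUM(Amount) AS total_in
               FROM   Deposits
               GROUP BY UserName) AS d
          ON   d.UserName = w.UserName
        WHERE COALESCE(d.total_in, 0) - w.total_out < 0;

    With the sample data this returns only Use2 (deposits 25.00, withdrawals 30.00).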

    Read the article

  • Android: making a custom ListView independent of adapters ?

    - by wei
    I am adding a local database as a cache for a remote web service in my Android application, to answer queries. I previously used ArrayAdapters for list views to display results from the web service. Now, with a database cache, the result could be either a Cursor (from the database) or a List (from the web), which means the adapter could be a CursorAdapter or an ArrayAdapter. Creating two adapters for one query doesn't seem like a good idea, so I am wondering what would be the best way to refactor my current code to add this database feature? Thanks,

    Read the article

  • Regex for finding valid sphinx fields

    - by mlissner
    I'm trying to validate that the fields given to Sphinx are valid, but I'm having difficulty. Imagine that the valid fields are cat, mouse, dog, puppy. Valid searches would then be:

        @cat search terms
        @(cat) search terms
        @(cat, dog) search term
        @cat searchterm1 @dog searchterm2
        @(cat, dog) searchterm1 @mouse searchterm2

    So, I want to use a regular expression to find terms such as cat, dog and mouse in the above examples, and check them against a list of valid terms. Thus, a query such as:

        @(goat)

    would produce an error because goat is not a valid term. I've gotten to the point where I can find simple queries such as @cat with this regex:

        (?:@)([^( ]*)

    But I can't figure out how to find the rest. I'm using Python and Django, for what that's worth.

    Read the article

  • Zend without Database

    - by dbemerlin
    Hi, I've googled for an hour now, but maybe my Google-fu is just too weak: I couldn't find a solution. I want to create an application that queries a service via JSON requests (all data and backend/business logic are stored in the service). With plain PHP it's simple enough, since I just make a curl request, json_decode the result and get what I need. This already works quite well. A request might look like this:

        Call http://service-host/userlist with body:
        {"logintoken": "123456-1234-5678-901234"}

        Result:
        {
          "status": "Ok",
          "userlist": [
            {"name": "foo", "id": 1},
            {"name": "bar", "id": 2}
          ]
        }

    Now we want to move this into the Zend Framework, since it's a hobby project and we want to learn about Zend. The problem is that all the information I could find uses a database. Is there even a way to create a Zend project that does not use a database? And how can I write a model that represents the actions instead of objects and object relations?

    Read the article

  • What to read as a good intro and quickstart to aspect-oriented programming and metaprogramming?

    - by Ivan
    As I've found myself repeating myself a lot, writing very similar queries and classes for different entities (despite doing strong object and relational normalisation), I've come to the idea that I could and should automate most of this: write an engine which compiles the simple declarative models I specify into all of the code, limiting my job to describing the task and finally customising the result as needed. As far as I know this is about metaprogramming and aspect-oriented programming. How do I get acquainted with the modern tools available quickly, so that I don't reinvent the wheel by developing my own?

    Read the article

  • C# mysqlreader on same connection error

    - by dominiquel
    Hi, I must find a way to do this in C#, if possible... I have to loop over my folder list (a MySQL table), and for each folder I instantiate I must run another query, but when I do this it says "There is already an open DataReader associated with this Connection", and I am already inside a MySqlDataReader loop. Note that I have oversimplified the code just to show you; the point is that I must run queries inside a MySqlDataReader loop, and it looks to be impossible as they are on the same connection?

        MySqlConnection cnx = new MySqlConnection(connexionString);
        cnx.Open();

        MySqlCommand command = new MySqlCommand("SELECT * FROM folder WHERE folder_id = " + id, cnx);
        MySqlDataReader reader = command.ExecuteReader();

        while (reader.Read())
        {
            this.folderList[this.folderList.Length] = new CFolder(reader.GetInt32("folder_id"), cnx);
        }

        reader.Close();
        cnx.Close();

    Read the article

  • How to flush data in php and disconnect user but keep the script alive

    - by Rodrigo
    This is a tricky question. While developing a PHP+AJAX application I ran into some long queries; nothing wrong with them, but they could be done in the background. I know there's a way to just send a reply to the user while handing the real processing off to another process with exec(), but it doesn't feel right to me: it might open up exploits, and it's not practical to make it compatible with virtual servers and cross-platform. PHP offers the ob_* functions, which help with flushing the output buffer, but the user stays connected while the script keeps running. I'm wondering if there's an alternative to exec() for keeping a script running after sending the data to the user and closing the connection/thread with Apache, or a less "dirty" way to hand the remaining processing to another script.

    Read the article

  • Retrieving data from MySQL in one SQL statement

    - by james.ingham
    Hi all, I'm getting my data from MySQL like so:

        $result = $dbConnector->Query("SELECT * FROM branches, businesses WHERE branches.BusinessId = businesses.Id ORDER BY businesses.Name");
        $resultNum = $dbConnector->GetNumRows($result);
        if ($resultNum > 0) {
            for ($i = 0; $i < $resultNum; $i++) {
                $row = $dbConnector->FetchArray($result);
                // $row['businesses.Name'];
                // $row['branches.Name'];
                echo $row['Name'];
            }
        }

    Does anyone know how to print the Name field from businesses and the Name field from branches? My only other alternatives are to rename the fields or to call MySQL with two separate queries. Thanks in advance
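
    A sketch of the usual fix, assuming the schema implied above: alias the two Name columns in the SELECT list so each ends up under its own key in the fetched row (with SELECT *, the second Name silently overwrites the first in an associative fetch):

        SELECT branches.*, businesses.*,
               businesses.Name AS BusinessName,
               branches.Name   AS BranchName
        FROM   branches
        JOIN   businesses ON branches.BusinessId = businesses.Id
        ORDER BY businesses.Name;

    $row['BusinessName'] and $row['BranchName'] are then available alongside the other columns.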

    Read the article

  • TopLink prefixes the table with TL_ during an update operation

    - by Dewfy
    I have a very simple named query in JPA (TopLink):

        UPDATE Server s SET s.isECM = 0

    I don't care about the cache or the validity of already-loaded entities. However, the database connection is made from a restricted account (only INSERT/UPDATE/DELETE). It turns out that for this query TopLink executes (and fails, since TL_Server does not exist) some very strange SQL:

        INSERT INTO TL_Server (elementId, IsECM)
          SELECT t0.ElementId, ? FROM Element t0, Server t1
          WHERE ((t1.elementId = t0.ElementId) AND (t0.elementType = ?))
            bind => [0, Server]

    What is this? How does a simple UPDATE turn into an INSERT? Why does TopLink query TL_Server?

    Read the article
