Search Results

Search found 18566 results on 743 pages for 'query hints'.


  • zfdatagrid : how to set the relative order of columns?

    - by user522350
    To those of you who are familiar with zfdatagrid for the Zend Framework: I get a recordset from a JOIN query, say from tables s and t, and now I want to set the order in which the columns appear in the displayed grid. For example, the 5th column of table t should appear at the leftmost side, then the 3rd column of table s, then the 2nd column of table t, then the 4th column of table s. How do I do this? Whatever I tried, it always shows the columns of the left table of the JOIN first, then the columns of the right table of the JOIN. I only know how to tell it which columns to show, but not their order. Thanks!

    Read the article

  • Sql: simultaneous aggregate from two tables

    - by Ash
    I have two tables: a Files table, which includes the file type, and a File Properties table, which references the file table via a foreign key. Sample Files table:

        | id | name  | type |
        ----------------------
        | 1  | file1 | zip  |
        | 2  | file2 | zip  |
        | 3  | file3 | zip  |
        | 4  | file4 | jpg  |

    And the Properties table:

        | file_id | property |
        -----------------------
        | 1       | x        |
        | 2       | x        |

    I want to make a query which shows the count of each file type, and how many files of that type have a property. So in the example, the result would be:

        | type | filecount | prop count |
        ---------------------------------
        | zip  | 3         | 2          |
        | jpg  | 1         | 0          |

    I could accomplish this by:

        select f.type,
               (select count(id) from files where type = f.type),
               count(fp.id)
        from files as f, file_properties as fp
        where f.id = fp.file_id
        group by f.type;

    But this seems very suboptimal and is very slow. Any better way to do this?
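    One way to get both counts in a single pass, offered only as a sketch built from the table and column names above (the DISTINCT counts guard against a file having more than one property row), is a LEFT JOIN with grouped counts:

        SELECT f.type,
               COUNT(DISTINCT f.id)       AS filecount,
               COUNT(DISTINCT fp.file_id) AS propcount
        FROM files AS f
        LEFT JOIN file_properties AS fp ON fp.file_id = f.id
        GROUP BY f.type;

    The LEFT JOIN also keeps file types that have no properties at all (propcount 0), which the inner join in the original FROM clause would silently drop.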

    Read the article

  • SQL statement HAVING MAX(some+thing)=some+thing

    - by Andreas
    I'm having trouble with Microsoft Access 2003; it's complaining about this statement:

        select cardnr
        from change
        where year(date) < 2009
        group by cardnr
        having max(time+date) = (time+date) and cardto='VIP'

    What I want to do is, for every distinct cardnr in the table change, find the row with the latest (time+date) that is before year 2009, and then select only the rows with cardto='VIP'. An online validator says it's OK, but Access says it's not OK. This is the message I get: "you tried to execute a query that does not include the specified expression 'max(time+date)=time+date and cardto='VIP' and cardnr=' as part of an aggregate function." Could someone please explain what I'm doing wrong and the right way to do it? Thanks
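    For what it's worth, a common Access-friendly rewrite is to move the per-cardnr maximum into a correlated subquery instead of the HAVING clause. This is only a sketch built from the column names in the question (date, time and change are reserved-ish words in Access, hence the square brackets):

        SELECT c.cardnr
        FROM [change] AS c
        WHERE c.cardto = 'VIP'
          AND YEAR(c.[date]) < 2009
          AND c.[time] + c.[date] = (SELECT MAX(c2.[time] + c2.[date])
                                     FROM [change] AS c2
                                     WHERE c2.cardnr = c.cardnr
                                       AND YEAR(c2.[date]) < 2009);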

    Read the article

  • LDAP "Insufficient Access"

    - by mon4goos
    I am trying to create an LDAP filter string. In each LDAP entry there is an attribute called "status" that has many values, some of which are of the regex form "[ab][0-9][1-9]", for example "a20" or "b81". All other values of the "status" attribute are just alphabetical characters. I only want to let through entries that have a "status" value of the first form. When I construct an LDAP filter such as (status=a*) I get an "Insufficient Access" error. However, if I change the query to (status=a1*) it works fine. Is there any reason for this? If this behavior is unavoidable, can anyone think of a way to get only the entries I want?

    Read the article

  • How can I pull data from a SQL Database that spans an academic year?

    - by Eric Reynolds
    Basically, I want to pull data from August to May for a given set of dates. Using the BETWEEN operator works as long as I do not cross the year boundary (i.e. BETWEEN 8 AND 12 works; BETWEEN 8 AND 5 does not). Is there any way to pull this data? Here is the SQL query I wrote:

        SELECT count(*), MONTH(DateTime)
        FROM Downloads
        WHERE YEAR(DateTime) BETWEEN 2009 AND 2010
          AND MONTH(DateTime) BETWEEN 8 AND 5
        GROUP BY MONTH(DateTime)
        ORDER BY MONTH(DateTime)

    Any help is appreciated. Thanks, Eric R.
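    One approach, sketched here under the assumption that the academic year runs from August 2009 through May 2010, is to filter on an explicit date range (which also lets an index on DateTime be used) and group by year as well as month, so the two calendar years don't collapse into the same buckets:

        SELECT COUNT(*), YEAR(DateTime), MONTH(DateTime)
        FROM Downloads
        WHERE DateTime >= '2009-08-01' AND DateTime < '2010-06-01'
        GROUP BY YEAR(DateTime), MONTH(DateTime)
        ORDER BY YEAR(DateTime), MONTH(DateTime)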

    Read the article

  • How to exclude results with get_object_or_404?

    - by googletorp
    In Django you can use exclude to create SQL similar to "not equal". An example could be:

        Model.objects.exclude(status='deleted')

    Now this works great, and exclude is very flexible. Since I'm a bit lazy, I would like to get that functionality when using get_object_or_404, but I haven't found a way to do this, since you cannot use exclude with get_object_or_404. What I want is to do something like this:

        model = get_object_or_404(pk=id, status__exclude='deleted')

    But unfortunately this doesn't work, as there isn't an exclude query filter or similar. The best I've come up with so far is doing something like this:

        object = get_object_or_404(pk=id)
        if object.status == 'deleted':
            return HttpResponseNotFound('text')

    Doing something like that really defeats the point of using get_object_or_404, since it is no longer a handy one-liner. Alternatively I could do:

        object = get_object_or_404(pk=id, status__in=['list', 'of', 'items'])

    But that wouldn't be very maintainable, as I would need to keep the list up to date. Am I missing some trick or feature in Django to use get_object_or_404 to get the desired result?

    Read the article

  • Overriding unique indexed values

    - by Yeti
    This is what I'm doing right now (name is UNIQUE):

        SELECT * FROM fruits WHERE name='apple';

    Check if the query returned any result. If yes, don't do anything. If no, a new value has to be inserted:

        INSERT INTO fruits (name) VALUES ('apple');

    Instead of the above, is it OK to insert the value into the table without checking if it already exists? If the name already exists in the table, an error will be thrown, and if it doesn't, a new record will be inserted. Right now I have to insert 500 records in a for loop, which results in 1000 queries. Will it be OK to skip the "already-exists" check?
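    Skipping the check is usually fine as long as duplicates are handled deliberately rather than by letting the statement fail. Two hedged sketches, depending on the database in use (the question doesn't say which):

        -- MySQL: silently skip rows that violate the unique index
        INSERT IGNORE INTO fruits (name) VALUES ('apple');

        -- More portable alternative: only insert when the name is not there yet
        -- (MySQL needs a FROM DUAL on the SELECT)
        INSERT INTO fruits (name)
        SELECT 'apple'
        WHERE NOT EXISTS (SELECT 1 FROM fruits WHERE name = 'apple');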

    Read the article

  • Aggregate functions in ANSI SQL

    - by morpheous
    I want to use multiple aggregate functions in a query. All the examples I have seen of aggregate functions, however, are trivial. Typically, they are of the form:

        SELECT field1, agg_func1, agg_func2
        FROM SOME_TABLE
        GROUP BY SOME_COLUMNS
        HAVING agg_func1 OP SOME_SCALAR

    Where:
    OP is a boolean operator (e.g. <, = etc.)
    SOME_SCALAR is a scalar (i.e. a constant number)

    What I want to know is whether it is possible to write (in ANSI SQL) queries like:

        SELECT field1, agg_func1, agg_func2, agg_func3
        FROM SOME_TABLE
        GROUP BY SOME_COLUMNS
        HAVING (agg_func1 OP1 agg_func2) OP2 (agg_func2 OP3 agg_func3)

    Where OP[N] are boolean operators or ANSI SQL clause operators like BETWEEN, LIKE, IN etc.

    Also, assuming this is possible (I have not seen any documentation saying otherwise), are there any efficiency/performance considerations (i.e. penalties) when the HAVING clause consists of a boolean expression combining the output of the aggregate functions, instead of the normal comparison of an aggregate with a constant number (e.g. min(salary) > 100), which is used in the most banal examples involving aggregate functions?
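    As far as standard SQL goes, HAVING accepts any boolean condition over aggregates and grouping columns, including aggregate-to-aggregate comparisons; the main cost is simply that every aggregate referenced has to be computed per group. A minimal sketch with made-up table and column names (employees, department, salary):

        SELECT department, MIN(salary), AVG(salary), MAX(salary)
        FROM employees
        GROUP BY department
        HAVING MIN(salary) > 0.5 * AVG(salary)
           AND AVG(salary) BETWEEN MIN(salary) AND MAX(salary);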

    Read the article

  • How to add condition on multiple-join table

    - by Jean-Philippe
    Hi, I have these two tables:

        client:          id (int) #PK, name (varchar)
        client_category: id (int) #PK, client_id (int), category (int)

    Let's say I have this data:

        client:          {(1, "JP"), (2, "Simon")}
        client_category: {(1, 1, 1), (2, 1, 2), (3, 1, 3), (4, 2, 2)}

    tl;dr: client #1 has categories 1, 2 and 3, and client #2 has only category 2. I am trying to build a query that would allow me to search on multiple categories. For example, I would like to find every client that has at least categories 1 and 2 (which would return client #1). How can I achieve that? Thanks!
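    This is the classic "relational division" pattern; a sketch using only the tables above, filtering to the wanted categories and requiring that all of them are present for a client:

        SELECT c.id, c.name
        FROM client AS c
        JOIN client_category AS cc ON cc.client_id = c.id
        WHERE cc.category IN (1, 2)
        GROUP BY c.id, c.name
        HAVING COUNT(DISTINCT cc.category) = 2;

    The 2 in the HAVING clause is simply the number of categories being searched for.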

    Read the article

  • NHibernate - define where condition

    - by t.kehl
    Hi. In my application the user can define search conditions. He can choose a column, set an operator (equals, like, greater than, less than or equal, etc.) and enter the value. After the user clicks a button, the application should run a search on the database with that condition. I use NHibernate and am wondering what the most efficient way to do this with NHibernate is. Should I create a criteria query, e.g. for (Column=Name, Operator=Like, Value=%John%):

        var a = session.CreateCriteria<Customer>();
        a.Add(Restrictions.Like("Name", "%John%"));
        return a.List<Customer>();

    Or should I do this with HQL:

        var q = session.CreateQuery("from Customer where " + where);
        return q.List<Customer>();

    Or is there a better solution? Thanks for your help. Best regards, Thomas

    Read the article

  • Resumable upload from Java client to Grails web application?

    - by dersteps
    After almost two workdays of Googling and trying several different possibilities I found throughout the web, I'm asking this question here, hoping that I might finally get an answer.

    First of all, here's what I want to do: I'm developing a client and a server application with the purpose of exchanging a lot of large files between multiple clients on a single server. The client is developed in pure Java (JDK 1.6), while the web application is done in Grails (2.0.0). As the purpose of the client is to allow users to exchange a lot of large files (usually about 2GB each), I have to implement it in a way so that the uploads are resumable, i.e. the users are able to stop and resume uploads at any time.

    Here's what I did so far: I actually managed to do what I wanted to do and stream large files to the server while still being able to pause and resume uploads using raw sockets. I would send a regular request to the server (using Apache's HttpClient library) to get the server to send me a port that was free for me to use, then open a ServerSocket on the server and connect to that particular socket from the client.

    Here's the problem with that. Actually, there are at least two problems with that:

    1) I open those ports myself, so I have to manage open and used ports myself. This is quite error-prone.
    2) I actually circumvent Grails' ability to manage a huge amount of (concurrent) connections.

    Finally, here's what I'm supposed to do now and the problem: as the problems I mentioned above are unacceptable, I am now supposed to use Java's URLConnection/HttpURLConnection classes, while still sticking to Grails. Connecting to the server and sending simple requests is no problem at all; everything worked fine. The problems started when I tried to use the streams (the connection's OutputStream in the client and the request's InputStream on the server). Opening the client's OutputStream and writing data to it is as easy as it gets. But reading from the request's InputStream seems impossible to me, as that stream always appears to be empty.

    Example code. Here's the server side (Groovy controller):

        def test() {
            InputStream inStream = request.inputStream
            if(inStream != null) {
                int read = 0;
                byte[] buffer = new byte[4096];
                long total = 0;
                println "Start reading"
                while((read = inStream.read(buffer)) != -1) {
                    println "Read " + read + " bytes from input stream buffer" // <-- this is NEVER called
                }
                println "Reading finished"
                println "Read a total of " + total + " bytes" // <-- 'total' will always be 0 (zero)
            } else {
                println "Input Stream is null" // <-- This is NEVER called
            }
        }

    This is what I did on the client side (Java class):

        public void connect() {
            final URL url = new URL("myserveraddress");
            final byte[] message = "someMessage".getBytes(); // Any byte[] - will be a file one day
            HttpURLConnection connection = url.openConnection();
            connection.setRequestMethod("GET"); // other methods - same result

            // Write message
            DataOutputStream out = new DataOutputStream(connection.getOutputStream());
            out.writeBytes(message);
            out.flush();
            out.close();

            // Actually connect
            connection.connect(); // is this placed correctly?

            // Get response
            BufferedReader in = new BufferedReader(new InputStreamReader(connection.getInputStream()));
            String line = null;
            while((line = in.readLine()) != null) {
                System.out.println(line); // Prints the whole server response as expected
            }
            in.close();
        }

    As I mentioned, the problem is that request.inputStream always yields an empty InputStream, so I am never able to read anything from it (of course). But as that is exactly what I'm trying to do (so I can stream the file to be uploaded to the server, read from the InputStream and save it to a file), this is rather disappointing. I tried different HTTP methods, different data payloads, and also rearranged the code over and over again, but did not seem to be able to solve the problem.

    What I hope to find: I hope to find a solution to my problem, of course. Anything is highly appreciated: hints, code snippets, library suggestions and so on. Maybe I'm even having it all wrong and need to go in a totally different direction. So, how can I implement resumable file uploads for rather large (binary) files from a Java client to a Grails web application without manually opening ports on the server side?

    Read the article

  • Linq to NHibernate - How to include parent object and only certain child objects

    - by vakman
    Given a simplified model like the following: public class Enquiry { public virtual DateTime Created { get; set; } public virtual Sender Sender { get; set; } } public class Sender { public virtual IList<Enquiry> Enquiries { get; set; } } How can you construct a Linq to Nhibernate query such that it gives you back a list of senders and their enquiries where the enquiries meet some criteria. I have tried something like this: return session.Linq<Enquiry>() .Where(enquiry => enquiry.Created < DateTime.Now) .Select(enquiry => enquiry.Sender) In this case I get an InvalidCastException saying you can't cast type Sender to type Enquiry. Any pointers on how I can do this without using HQL?

    Read the article

  • MS SQL Server 2000 tables

    - by klork
    We currently have an MS SQL Server 2000 database with one table containing data for multiple users. The data is keyed by memberid, which is an integer field, and the table has a clustered index on memberid. The table is now about 200 million rows, and indexing and maintenance are becoming issues. We are debating splitting the table into a one-table-per-user model. This would imply that we would end up with a very large number of tables, potentially up to 2,147,483,647, considering just positive values. My questions:

    1) Does anyone have any experience with an MS SQL Server (2000/2005) installation with millions of tables?
    2) What are the implications of this architecture with regards to maintenance and access using Query Analyzer, Enterprise Manager etc.?
    3) What are the implications of having such a large number of indexes in a database instance?

    All comments are appreciated. Thanks

    Read the article

  • Storing SQL queries in Table in sql server

    - by Rohit
    We have multiple jobs in our system. These jobs are listed in a grid. We have 3 different user types (usertypeid 1, 2, 3). The listing is different for each user, and he can filter the listing by selecting a view from a dropdown. ViewName in the table below is the view which needs to be displayed. To achieve this functionality, a fellow developer has created the following table structure and stored SQL fragments in the SQLExpression column. In my opinion the query should not be stored in the database. What are the pros and cons of this approach, and what are the available alternatives?

        JobListingViewID | ViewName   | SQLExpression              | UserTypeID
        -----------------|------------|----------------------------|-----------
        3                | All Jobs   | 1 = 1                      | 3
        4                | Error Jobs | JobStatusID IN ( 2 )       | 1
        5                | Error Jobs | JobStatusID IN ( 2 )       | 2
        6                | Error Jobs | JobStatusID IN ( 2 )       | 3
        7                | Speech     | JobStatusID IN ( 1, 3, 8 ) | 1
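    For context on the trade-off: consuming rows like these implies dynamic SQL on the reading side, roughly as in the sketch below (the Jobs table name, the JobListingView table name and the @ViewID parameter are assumed here, not taken from the question). That gluing step is where the usual objections come from: injection risk, no compile-time checking of the fragments, and weaker plan reuse.

        DECLARE @sql nvarchar(max);

        SELECT @sql = N'SELECT * FROM Jobs WHERE ' + SQLExpression
        FROM JobListingView
        WHERE JobListingViewID = @ViewID;

        EXEC sp_executesql @sql;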

    Read the article

  • Use LINQ to SQL results inside SQL Server stored procedure

    - by ifwdev
    Note: I'm not trying to call a SQL Server stored proc using a L2SQL DataContext. I use LINQPad for some fairly complex "reporting" that takes L2SQL output, saves it to an array, and processes it further. For example, it's usually much easier to do multiple levels of grouping with LINQ to Objects than to try to optimize a T-SQL query to run in a reasonable amount of time. What would be the easiest way to take the end result of one of these "applications" and use it in a SQL Server 2008 stored proc? The idea is to use the data for a Reporting Services report, rather than copying and pasting into Excel (manual labor). The reports need to be accessible on the report server (not using the Report Server control in an application). I could output CSV and read that somehow via a command-line exec, but that seems like a hack. Thanks for your help.

    Read the article

  • Zip Code to City/State and vice-versa in a database?

    - by Simucal
    I'm new to SQL and relational databases and I have what I would imagine is a common problem. I'm making a website and when each user submits a post they have to provide a location in either a zip code or a City/State. What is the best practice for handling this? Do I simply create a Zip Code and City and State table and query against them or are there ready made solutions for handling this? I'm using SQL Server 2005 if it makes a difference. I need to be able to retrieve a zip code given a city/state or I need to be able to spit out the city state given a zip code.
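    A single lookup table is the usual approach; both commercial and free zip-code data sets ship in roughly this shape. A sketch with an assumed table name and columns (ZipCodes(ZipCode, City, State)) covering both directions, keeping in mind the mapping is many-to-many:

        -- zip -> city/state
        SELECT City, State FROM ZipCodes WHERE ZipCode = '90210';

        -- city/state -> zip(s)
        SELECT ZipCode FROM ZipCodes WHERE City = 'Beverly Hills' AND State = 'CA';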

    Read the article

  • Removing object from NSMutableArray

    - by Ben Packard
    Just a small query... I stumbled across the following shortcut for setting up a for loop (shortcut compared to the textbook examples I have been using):

        for (Item *i in items){ ... }

    As opposed to the longer format:

        for (NSInteger i = 0; i < [items count]; i++){ ... } // think that's right

    If I'm using the shorter version, is there a way to remove the item currently being iterated over (i.e. 'i')? Or do I need to use the longer format?

    Read the article

  • SQL Server 2005 stored procedure error

    - by user1670625
    I have created a stored procedure of insert command for employee details in SQL Server 2005 in which one of the parameters is an image for which I have used varbinary as the datatype in the table.. But when I am adding that parameter in the stored procedure I am getting the following error- Implicit conversion from data type varchar to varbinary is not allowed. Use the CONVERT function to run this query. Stored procedure: ( @Employee_ID nvarchar(10)='', @Password nvarchar(10)='', @Security_Question nvarchar(50)='', @Answer nvarchar(50)='', @First_Name nvarchar(20)='', @Middle_Name nvarchar(20)='', @Last_Name nvarchar(20)='', @Employee_Type nvarchar(15)='', @Department nvarchar(15)='', @Photo varbinary(50)='' ) insert into Registration ( Employee_ID, Password, Security_Question, Answer, First_Name, Middle_Name, Last_Name, Employee_Type, Department, Photo ) values ( @Employee_ID, @Password, @Security_Question, @Answer, @First_Name, @Middle_Name, @Last_Name, @Employee_Type, @Department, @Photo ) Table structure: Column Name Data Type Allow Nulls Employee_ID nvarchar(10) Unchecked Password nvarchar(10) Checked Security_Question nvarchar(50) Checked Answer nvarchar(50) Checked First_Name nvarchar(20) Checked Middle_Name nvarchar(20) Checked Last_Name nvarchar(20) Checked Employee_Type nvarchar(15) Checked Department nvarchar(15) Checked Photo varbinary(50) Checked I am not getting what to do..can anyone give me some suggestion or solution? Thanks in advance.

    Read the article

  • Is it possible to create an efficient UDF alternative to Excel's CUBEVALUE function?

    - by bright
    We'd like to create a simpler alternative to Excel's CUBEVALUE function for retrieving data from an OLAP server. The details aren't critical, but briefly, our function will "know" the source connection and accept a very simple ticker-like parameter and a date, in place of CUBEVALUE's MDX-style parameters. This is for internal use within our firm, just FYI. However, Excel has optimized CUBEVALUE so that calls to the OLAP server are batched. Question: Is there a way to code the new function so that it can similarly batch calls rather than issue a separate query for each cell?

    Read the article

  • Sharing a database connection with included classes in a Sinatra application

    - by imightbeinatree
    I'm converting part of a Rails application into its own Sinatra application. It has some beefy work to do, and rather than have a million helpers in app.rb, I've separated some of it out into classes. Without access to Rails I'm rewriting several finder methods and need access to the database inside my class. What's the best way to share a database connection between your application and a class? Or would you recommend pushing all database work into its own class and only establishing the connection there? Here is what I have in app.rb:

        require 'lib/myclass'

        configure :production do
          MysqlDB = Sequel.connect('mysql://user:password@host:port/db_name')
        end

    I want to access it in lib/myclass.rb:

        class Myclass
          def self.find_by_domain_and_stub(domain, stub)
            # want to do a query here
          end
        end

    I've tried several things but nothing that seems to work well enough to even include as an example.

    Read the article

  • From my friends, know who is already using the app

    - by Toni Michel Caubet
    I got this working to get all friends of the 'me' user, like this:

        FB.api('/me/friends?fields=id,name,updated_time&date_format=U&<?=$access_token?>', {limit:3}, function(response){
            console.log('Friend name: ' + response.data[0].name);
        });

    But I need to know whether each friend is already in the app or not. How can I alter the query to get an extra field in the object, e.g. 'is_in_app' true/false?

        FB.api('/me/friends?fields=id,name,updated_time&date_format=U&<?=$access_token?>', {limit:3}, function(response){
            var text = 'is not in app';
            if(response.data[0].is_in_app == true) text = 'is in app!!';
            console.log('Friend name: ' + response.data[0].name + ' ' + text);
        });

    How can I achieve this?

    Read the article

  • how to insert based on the date

    - by Gaolai Peng
    I have a table table1 (account, last_contact_date, insert_date); account and last_contact_date are the primary key. The insert_date is set to the time the record is added, by calling getdate(). I also have a temporary table #temp(account, last_contact_date) which I use to update table1. Here is sample data:

        table1
        account  last_contact_date  insert_date
        1        2012-09-01         2012-09-28
        2        2012-09-01         2012-09-28
        3        2012-09-01         2012-09-28

        #temp
        account  last_contact_date
        1        2012-09-27
        2        2012-09-27
        3        2012-08-01

    The result depends on the inserting date. If the date is 2012-09-28, the result will be:

        table1
        account  last_contact_date  insert_date
        1        2012-09-27         2012-09-28
        2        2012-09-27         2012-09-28
        3        2012-09-01         2012-09-28

    If the date is 2012-09-29, the result will be:

        table1
        account  last_contact_date  insert_date
        1        2012-09-01         2012-09-28
        2        2012-09-01         2012-09-28
        3        2012-09-01         2012-09-28
        1        2012-09-27         2012-09-29
        2        2012-09-27         2012-09-29

    Basically the rule is: (1) if the inserting date is the same day, I will pick the latest last_contact_date; otherwise, (2) if the incoming last_contact_date is later than the current last_contact_date, I will insert a new row. How do I write a query for this insert?
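    One way to express those two rules as separate statements, sketched under the assumption of SQL Server 2008 or later (for the cast to date) and using getdate() as "the inserting date":

        -- Rule (1): a row inserted today keeps only the latest last_contact_date
        UPDATE t1
        SET    t1.last_contact_date = t.last_contact_date
        FROM   table1 AS t1
        JOIN   #temp  AS t ON t.account = t1.account
        WHERE  CAST(t1.insert_date AS date) = CAST(GETDATE() AS date)
          AND  t.last_contact_date > t1.last_contact_date;

        -- Rule (2): otherwise add a new row when the incoming date is later
        INSERT INTO table1 (account, last_contact_date, insert_date)
        SELECT t.account, t.last_contact_date, GETDATE()
        FROM   #temp AS t
        WHERE  NOT EXISTS (SELECT 1 FROM table1 AS t1
                           WHERE  t1.account = t.account
                             AND  CAST(t1.insert_date AS date) = CAST(GETDATE() AS date))
          AND  t.last_contact_date > (SELECT MAX(t1.last_contact_date)
                                      FROM   table1 AS t1
                                      WHERE  t1.account = t.account);

    A MERGE statement could fold the two into one, but the two-step version keeps the rules visible.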

    Read the article

  • Sending variables in URLs in PHP with echo

    - by alexpelan
    Hi all, I can't really find good guidelines through Google searches on the proper way to escape variables in URLs. Basically I am printing out a bunch of results from a MySQL query in a table, and I want one of the entries in each row to be a link to that result's page. I think this is easy, that I'm just missing an apostrophe or backslash somewhere, but I can't figure it out. Here's the line that's causing the error:

        echo "<a href = \"movies.php/?movie_id='$row['movie_id']'\"> Who Owns It? </a> ";

    and this is the error I'm getting:

        Parse error: syntax error, unexpected T_ENCAPSED_AND_WHITESPACE, expecting T_STRING or T_VARIABLE or T_NUM_STRING

    If you could elaborate in your answers about general guidelines for working with echo and variables in URLs, that would be great. Thanks.

    Read the article

  • Do I need to include all fields in my entity framework model

    - by Jim B
    Quick question for everyone: do I need to include all the database table fields in my EF model? For example, I've created a sub-model that only deals with tblPayment and associated tables. Now I need to write a LINQ query to get some information about items. I would typically get this by joining tblPayment to tblInvoice to tblInvoiceItem and finally to tblOrderItem. I'm wondering, when I add in those other tables, do I need to include all the fields for tblInvoice and tblInvoiceItem? Ideally, I'd just like to keep the fields I need to join on, as that would limit the possibility of my sub-model breaking if other fields on those tables are modified or deleted. Can I do this?

    Read the article

  • SQLServer using too much memory

    - by Israel Pereira Valverde
    I have installed SQL Server 2008 R2 Express on my desktop machine (running Windows 7). I have only one local server running (./SQLEXPRESS), but the sqlserver process takes all the RAM it possibly can. On a machine with 3GB of RAM things start to get slow, so I limited the maximum amount of RAM in the server, and now SQL Server constantly gives error messages saying there is not enough memory. It's using 1GB of RAM with only one local server and 2 completely empty databases; how is 1GB of RAM not enough? When the process starts it uses a really acceptable amount of memory (around 80MB), but it keeps increasing until it reaches the defined maximum and starts complaining about not having enough memory available. At that point I have to restart the server to use it again. I have read about a hotfix to solve one of the errors I got from SQL Server:

        There is insufficient system memory in resource pool 'internal' to run this query

    But it's already installed on my SQL Server. Why is it using so much memory?
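    For reference, SQL Server grabbing memory until the OS pushes back is by design (the buffer pool caches as much as it is allowed to), and the usual way to cap it is sp_configure; the 512 MB figure below is only an example value:

        EXEC sp_configure 'show advanced options', 1;
        RECONFIGURE;
        EXEC sp_configure 'max server memory (MB)', 512;
        RECONFIGURE;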

    Read the article
