Search Results

Search found 21942 results on 878 pages for 'named query'.

Page 108/878 | < Previous Page | 104 105 106 107 108 109 110 111 112 113 114 115  | Next Page >

  • how do I paste text to a line by line text filter like awk, without having stdin echo to the screen?

    - by Barton Chittenden
    I have text in an email on a Windows box that looks something like this: 100 some random text 101 some more random text 102 lots of random text, all different 103 lots of random text, all the same I want to extract the numbers, i.e. the first word on each line. I've got a terminal running bash open on my Linux box... If these were in a text file, I would do this: awk '{print $1}' mytextfile.txt I would like to paste these in and get my numbers out, without creating a temp file. My naive first attempt looked like this: $ awk '{print $1}' 100 some random text 100 101 some more random text 101 102 lots of random text, all different 103 lots of random text, all the same 102 103 The buffering of stdin and stdout makes a hash of this. I wouldn't mind if all of stdin printed first, followed by all of stdout; this is what would happen if I were to paste into 'sort', for example, but awk and sed are a different story. A little more thought gave me this: open two terminals, create a fifo, read from the fifo in one terminal and write to it in the other. This does in fact work, but I'm lazy: I don't want to open a second terminal. Is there a way in the shell to hide the text echoed to the screen when I'm passing it into a pipe, so that I paste this: 100 some random text 101 some more random text 102 lots of random text, all different 103 lots of random text, all the same but see this? $ awk '{print $1}' 100 101 102 103
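
    One way to get that effect without a second terminal or a fifo is to switch off the tty's echo while the filter reads, so only the filter's output appears. Below is a rough sketch of that idea in Python (using the standard termios module) rather than awk; it assumes a Unix terminal and that the pasted input is ended with Ctrl-D. It illustrates the echo trick, it is not a tested drop-in tool.

        # sketch: read pasted lines with terminal echo disabled, print only the first field
        import sys
        import termios

        fd = sys.stdin.fileno()
        old_attrs = termios.tcgetattr(fd)
        new_attrs = termios.tcgetattr(fd)
        new_attrs[3] &= ~termios.ECHO            # lflag: stop the tty echoing the paste
        termios.tcsetattr(fd, termios.TCSADRAIN, new_attrs)
        try:
            for line in sys.stdin:               # paste here, then press Ctrl-D
                fields = line.split()
                if fields:
                    print(fields[0])
        finally:
            termios.tcsetattr(fd, termios.TCSADRAIN, old_attrs)  # restore echo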

    Read the article

  • NamedPipeClientStream StreamReader problem in C++

    - by Chris Porter
    When reading from a named-pipe server using the .NET NamedPipeClientStream class from C++/CLI, I only get data on the first read; after that it's just an empty string every time. In C# it works every time. pipeClient = gcnew NamedPipeClientStream(".", "Server_OUT", PipeDirection::In); try { pipeClient->Connect(); } catch(TimeoutException^ e) { // swallow } StreamReader^ sr = gcnew StreamReader(pipeClient); String^ temp; while (temp = sr->ReadLine()) { Console::WriteLine("Received from server: {0}", temp); } sr->Close();

    Read the article

  • named_scope or find_by_sql?

    - by keruilin
    I have three models: User, Award and Trophy. The associations are: User has many awards; Trophy has many awards; Award belongs to user; Award belongs to trophy; User has many trophies through awards. Therefore, user_id is a FK in awards, and trophy_id is a FK in awards. In the Trophy model, which is an STI model, there's a trophy_type column. I want to return a list of users who have been awarded a specific trophy (trophy_type = 'GoldTrophy'). Users can be awarded the same trophy more than once. (I don't want distinct results.) Can I use a named_scope? How about chaining them? Or do I need to use find_by_sql? Either way, how would I code it?

    Read the article

  • Testing performance of queries in MySQL

    - by Unreason
    I am trying to set up a script that would test the performance of queries on a development MySQL server. Here are more details: I have root access; I am the only user accessing the server; I am mostly interested in InnoDB performance; the queries I am optimizing are mostly search queries (SELECT ... LIKE '%xy%'). What I want to do is create a reliable testing environment for measuring the speed of a single query, free from dependencies on other variables. Until now I have been using SQL_NO_CACHE, but sometimes the results of such tests still show caching behaviour - taking much longer to execute on the first run and less time on subsequent runs. If someone can explain this behaviour in full detail I might stick to using SQL_NO_CACHE; I do believe that it might be due to the file system cache and/or caching of the indexes used to execute the query, as this post explains. It is not clear to me when the buffer pool and key buffer get invalidated or how they might interfere with testing. So, short of restarting the MySQL server, how would you recommend setting up an environment that is reliable for determining whether one query performs better than another?
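
    For what it's worth, a small timing harness along these lines makes the warm-up effect visible and keeps it out of the comparison: run the query several times, discard the first (cold-cache) run, and compare the median of the remaining runs. This is only a sketch in Python; the mysql-connector-python driver, the credentials and the query text are all stand-ins, not part of the original setup.

        # sketch: time a query several times, discarding the cold-cache warm-up run
        import time
        import mysql.connector

        QUERY = "SELECT SQL_NO_CACHE * FROM products WHERE name LIKE '%xy%'"   # hypothetical query
        RUNS = 5

        conn = mysql.connector.connect(user="root", password="secret", database="dev")  # made-up credentials
        cur = conn.cursor()

        timings = []
        for i in range(RUNS + 1):
            start = time.perf_counter()
            cur.execute(QUERY)
            cur.fetchall()                       # drain the full result set
            if i > 0:                            # run 0 only warms the buffer pool / OS file cache
                timings.append(time.perf_counter() - start)

        print("median warm run: %.4f s" % sorted(timings)[len(timings) // 2])
        cur.close()
        conn.close()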

    Read the article

  • Advanced LINQ query using into

    - by dilbert789
    I have this query that someone else wrote; it's over my head, so I'm looking for some direction. What is happening currently is that it picks up numbers where there is a goal and no history entered, or history and no goal; this screws up the calculations, as both a goal and a history for the same item are required on each. The three tables involved are KPIType, Goal and KPIHistory. What I need: all rows from KPIType; all goals where there is a matching KPIHistory row (Goal.KPItypeID == KPIHistory.KPItypeID) into results1; all KPIHistory rows where there is a matching Goal row (Goal.KPItypeID == KPIHistory.KPItypeID) into results2. Current query: var query = from t in dcs.KPIType.Where(k => k.ID <= 23) join g in dcs.Goal.Where(g => g.Dealership.ID == dealershipID && g.YearMonth >= beginDate && g.YearMonth <= endDate ) on t.ID equals g.KPITypeID into results1 join h in dcs.KPIHistory.Where(h => h.Dealership.ID == dealershipID && h.ForDate >= beginDate && h.ForDate <= endDate ) on t.ID equals h.KPIType.ID into results2 orderby t.DisplayOrder select new { t, Goal = results1, KPIHistory = results2 }; query.ToList().ForEach(q => { results.Add(q.t); }); Thanks - I'm happy to answer questions if more info is needed.

    Read the article

  • Rails Active Record Mysql find query HAVING clause

    - by meetraghu28
    Is there a way to use the HAVING clause in some other way, without using GROUP BY? I am using Rails, and the following is a sample scenario of the problem I am facing. In Rails you can use the Model.find(:all, :select, :conditions, :group) function to get data. In this query I can specify a HAVING clause in the :group param. But what if I don't have a GROUP BY clause but still want a HAVING-style condition on the result set? Example: take the query select sum(x) as a,b,c from y where "some_conditions" group by b,c; This query has a sum() aggregation on one of the fields. Now, if there is nothing to aggregate, my result should be an empty set, but MySQL returns a NULL row instead. That problem can be solved by using select sum(x) as a,b from y where "some_conditions" group by b having a IS NOT NULL; but what happens when I don't have a GROUP BY clause, i.e. a query like select sum(x) as a,b from y where "some_conditions"; How do I specify that sum(x) should not be NULL? Any solution that returns an empty set in this case instead of a NULL row will help, and the solution should be doable in Rails. We can get this condition working with subqueries, something like select * from ((select sum(x) as b FROM y where "some_condition") as subq) where subq.b is not null; but is there a better way to do this through SQL/Rails?
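
    As a small illustration of the NULL-row behaviour and the subquery workaround described above, here is a sketch using Python's built-in sqlite3 module (which shows the same single-NULL-row result for an aggregate over an empty set); the table and values are made up purely for the demo.

        # sketch: an aggregate with no GROUP BY always yields one row, even over an empty set
        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE y (x INTEGER, b TEXT)")

        # no rows match, yet the bare SUM still produces a single NULL row
        print(conn.execute("SELECT SUM(x) AS a FROM y WHERE b = 'nope'").fetchall())
        # -> [(None,)]

        # wrapping it in a subquery and filtering on IS NOT NULL gives an empty set instead
        print(conn.execute(
            "SELECT * FROM (SELECT SUM(x) AS a FROM y WHERE b = 'nope') WHERE a IS NOT NULL"
        ).fetchall())
        # -> []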

    Read the article

  • How to transform this find_by_sql into a named_scope?

    - by keruilin
    How can I turn the following into a named_scope? def self.hero_badge_awardees return User.find_by_sql("select users.*, awards.*, badges.badge_type from users, awards, badges where awards.user_id = users.id and badges.id = awards.badge_id and badges.badge_type = 'HeroBadge'") end

    Read the article

  • Parse numbers in single textbox for query

    - by Joshua Slocum
    I've built a webform in Visual Web Developer Express 2008 to help me with my work. I use the webform to run query requests that are emailed to me. The inputs are in this format: 12312 12312 12312 12312 12312 12312 12312 12312 I enter the first number in a textbox and the second number in another textbox, then click a button that runs a query and returns the results in a gridview (single row). string strConn, strSQL; strConn = AppConfig.Connection; strSQL = "select fields from table where FirstNum=:FirstNum and SecondNum=:SecondNum"; using (OracleConnection cn = new OracleConnection(strConn)) { OracleCommand cmd = new OracleCommand(strSQL, cn); cmd.Parameters.AddWithValue(":FirstNum", txtFirstNum.Text); cmd.Parameters.AddWithValue(":SecondNum", txtSecondNum.Text); cn.Open(); using (OracleDataReader rdr = cmd.ExecuteReader()) { dgResults.DataSource = rdr; dgResults.DataBind(); } cn.Close(); } I had an idea to help me speed up my work. I'd like to be able to paste both numbers into a single textbox (like this: 12312 12312) and have the code parse out the numbers for the query. Even better would be to paste all of them into a multiline textbox like this 12312 12312 12312 12312 12312 12312 12312 12312 and have them all parsed, the query run for each line, and the results all output to one gridview. I'm just not sure how to approach this. Any suggestions would be appreciated. Thank you.
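
    The splitting step itself is straightforward; here is a language-agnostic sketch of the idea in Python (the site above is C#/ASP.NET, so this only shows the shape of the loop: one line, two numbers, one parameterised query per pair, with made-up sample values and names).

        # sketch: split a pasted multi-line blob into (first, second) pairs, one query per pair
        raw = """12312 12312
        12313 12314
        12315 12316"""                            # stands in for the multiline textbox contents

        pairs = []
        for line in raw.splitlines():
            parts = line.split()
            if len(parts) >= 2:                   # skip blank or malformed lines
                pairs.append((parts[0], parts[1]))

        for first_num, second_num in pairs:
            # bind first_num / second_num to :FirstNum / :SecondNum and run the query here,
            # appending each single-row result to one list that feeds the gridview
            print(first_num, second_num)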

    Read the article

  • Stored Procedure: Variable passed from PHP alters second half of query string

    - by Stephanie
    Hello everyone. Basically, we have an ASP website that I'm converting to PHP. We're still using the MSSQL server for the DB -- it's not moving. In the ASP site there's an include file with a giant SQL query that gets executed. This include sits on a lot of pages, and this is a simplified version of what happens. Pages A, B and C all use this include file to return a listing. In ASP, page A passes variable A to the include file, page B passes variable B, page C passes variable C, and so on. The include file builds the SQL query like this: sql = "SELECT * from table_one LEFT OUTER JOIN table_two ON table_one.id = table_two.id" and then adds (remember, this is ASP), based on the variable passed in from the parent page: Select Case sType Case "A" sql = sql & "WHERE LOWER(column_a) <> 'no' AND LTRIM(ISNULL(column_b),'') <> '' ORDER BY column_a" Case "B" sql = sql & "WHERE LOWER(column_c) <> 'no' ORDER BY lastname, firstname" Case "C" sql = sql & "WHERE LOWER(column_f) <> 'no' OR LOWER(column_g) <> 'no' ORDER BY column_g" As you can see, every string that's appended as the second half of the SQL query is different from the previous one; it's not just one variable that can be substituted out, which is what has me stumped. How do I translate this case/switch into the stored procedure, based on the varchar input that I pass to the stored procedure via PHP? This stored procedure will actually handle a query listing for about 20 pages, so it's a hefty one, and this is my first major complicated one. I'm getting there, though! I'm also just more used to MySQL -- not that they're that different. :P Thank you very much for your help in advance. Stephanie

    Read the article

  • Insert query is not executing; help me track down the problem

    - by Parth
    I tried the query below, but it didn't execute and gave this error: 1064 - You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near ')' at line 1 INSERT INTO `jos_menu` SET params = 'orderby= show_noauth= show_title= link_titles= show_intro= show_section= link_section= show_category= link_category= show_author= show_create_date= show_modify_date= show_item_navigation= show_readmore= show_vote= show_icons= show_pdf_icon= show_print_icon= show_email_icon= show_hits= feed_summary= page_title= show_page_title=1 pageclass_sfx= menu_image=-1 secure=0 ', checked_out_time = '0000-00-00 00:00:00', ordering = '13', componentid = '20', published = '1', id = '152', menutype = 'accmenu', name = 'IPL', alias = 'ipl', link = 'index.php?option=com_content&view=archive', type = 'component') Then I used mysql_real_escape_string() on the query containing the variable, which gives me this query: INSERT INTO `jos_menu` SET params = \'orderby=\nshow_noauth=\nshow_title=\nlink_titles=\nshow_intro=\nshow_section=\nlink_section=\nshow_category=\nlink_category=\nshow_author=\nshow_create_date=\nshow_modify_date=\nshow_item_navigation=\nshow_readmore=\nshow_vote=\nshow_icons=\nshow_pdf_icon=\nshow_print_icon=\nshow_email_icon=\nshow_hits=\nfeed_summary=\npage_title=\nshow_page_title=1\npageclass_sfx=\nmenu_image=-1\nsecure=0\n\n\', checked_out_time = \'0000-00-00 00:00:00\', ordering = \'13\', componentid = \'20\', published = \'1\', id = \'152\', menutype = \'accmenu\', name = \'IPL\', alias = \'ipl\', link = \'index.php?option=com_content&view=archive\', type = \'component\') And on executing that query I get this error: 1064 - You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '\'orderby=\nshow_noauth=\nshow_title=\nlink_titles=\nshow_intro=\nshow_section=\' at line 1 Can someone guide me in tracking down the problem? Thanks in advance.
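
    Two things stand out in the statement as pasted: the trailing ')' has no matching '(' anywhere in the statement, which by itself matches the "near ')'" error, and mysql_real_escape_string() is being applied to the whole query string rather than to the individual values, so the quotes the statement needs get escaped too. Purely to illustrate the values-only alternative, here is a sketch using bound parameters; it is written in Python with mysql-connector-python, made-up credentials and a trimmed column list (the original code is PHP, where mysqli or PDO prepared statements play the same role).

        # sketch: pass the values as bound parameters instead of escaping a pre-built statement
        import mysql.connector

        conn = mysql.connector.connect(user="root", password="secret", database="joomla")  # made-up credentials
        cur = conn.cursor()

        params_blob = "orderby=\nshow_noauth=\nshow_title=\npage_title=\nshow_page_title=1\nsecure=0\n"  # trimmed

        cur.execute(
            "INSERT INTO jos_menu SET params = %s, checked_out_time = %s, ordering = %s, "
            "componentid = %s, published = %s, menutype = %s, name = %s, alias = %s, "
            "link = %s, type = %s",
            (params_blob, "0000-00-00 00:00:00", 13, 20, 1, "accmenu", "IPL", "ipl",
             "index.php?option=com_content&view=archive", "component"),
        )
        conn.commit()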

    Read the article

  • How to filter results by multiple fields?

    - by hadees
    I am working on a survey application in Ruby on Rails, and on the results page I want to let users filter the answers by a bunch of demographic questions I asked at the start of the survey. For example, I asked users what their gender and career were, so I was thinking of having dropdowns for gender and career. Both dropdowns would default to 'all', but if a user selected female and marketer then my results page would show only answers from female marketers. I think the right way of doing this is to use named_scopes, where I have a named_scope for every one of my demographic questions (in this example, gender and career), each taking in a sanitized value from the dropdown to use in the condition. But I'm unsure how to dynamically build the named_scope chain, since I have about 5 demographic questions and presumably some of them are going to be set to 'all'.

    Read the article

  • Escaping query strings with wget --mirror

    - by Jeremy Banks
    I'm using wget --mirror --html-extension --convert-links to mirror a site, but I end up with lots of filenames in the format post.php?id=#.html. When I try to view these in a browser it fails, because the browser ignores the query string when loading the file. Is there any way to replace the ? character in the filenames with something else? The answer of --restrict-file-names=windows worked correctly. In conjunction with the flags --convert-links and --adjust-extension/-E (formerly named --html-extension, which also works but is deprecated) it produces a mirror that behaves as expected. wget --mirror --adjust-extension --convert-links --restrict-file-names=windows http://www.example

    Read the article

  • Getting table schema from a query

    - by Appu
    As per MSDN, SqlDataReader.GetSchemaTable returns column metadata for the query executed. I am wondering whether there is a similar method that will give table metadata for a given query -- I mean which tables are involved and what aliases they have. In my application, I get the query and I need to append the WHERE clause programmatically. Using GetSchemaTable(), I can get the column metadata and the table each column belongs to, but even though the table has an alias, it still returns the real table name. Is there a way to get the alias name for that table? The following code shows how to get the column metadata. const string connectionString = "your_connection_string"; string sql = "select c.id as s,c.firstname from contact as c"; using(SqlConnection connection = new SqlConnection(connectionString)) using(SqlCommand command = new SqlCommand(sql, connection)) { connection.Open(); SqlDataReader reader = command.ExecuteReader(CommandBehavior.KeyInfo); DataTable schema = reader.GetSchemaTable(); foreach (DataRow row in schema.Rows) { foreach (DataColumn column in schema.Columns) { Console.WriteLine(column.ColumnName + " = " + row[column]); } Console.WriteLine("----------------------------------------"); } Console.Read(); } This gives me the column details correctly, but when I look at BaseTableName for the column id, it gives 'contact' rather than the alias 'c'. Is there any way to get the table schema and aliases from a query like the above? Any help would be great!

    Read the article

  • Find out which row caused the error

    - by Felipe Fiali
    I have a big fat query that's written dynamically to integrate some data. Basically what it does is query some tables, join some others, massage some data, and then insert it into a final table. The problem is that there's too much data, and we can't really trust the sources, because there could be erroneous or inconsistent data. For example, I spent almost an hour looking for an error while developing against a customer's database, because somewhere in the middle of my big fat query there was an error converting a varchar to datetime. It turned out that they had some sales dated '2009-02-29', an out-of-range date. And yes, I know -- why was that stored as varchar? Well, the source database has 3 columns for dates: 'Month', 'Day' and 'Year'. I have no idea why it's like that, but still, it is. But how would I deal with that, if the source is not trustworthy? I can't just swallow the exceptions -- I really need the error to come up to another level with the original message -- but I want to provide some more info, so that the user can at least try to fix it before calling us. So I thought about showing the user the row number, or some ID that would at least give him some idea of which record he'd have to correct. That's also a hard job, because there will be times when the integration runs over as many as 80,000 records. And in an 80,000-record integration, a lone generic error message -- 'The conversion of a varchar data type to a datetime data type resulted in an out-of-range datetime value' -- means nothing at all. So any idea would be appreciated. Oh, I'm using SQL Server 2005 with Service Pack 3.

    Read the article

  • QLocalSocket and QLocalServer in browser plugins

    - by kambamsu
    Hi, I have a simple question: does the IPC mechanism in Qt work when it's used inside a browser plugin? The reason I ask is that I can easily get QLocalSocket and QLocalServer communication to work in a Qt application, but when I write a similar piece of code in a browser plugin DLL, the server does not accept a new connection at all. This is what I do in the server: server = new QLocalServer(this); if( !server->listen("myServer")) { writeFile("Listen failed"); } connect(server, SIGNAL(newConnection()), this, SLOT(handleConn()),Qt::QueuedConnection); and this is what I do in the client: client = new QLocalSocket(this); client->abort(); QObject::connect(client,SIGNAL(connected()),this,SLOT(connClient()),Qt::QueuedConnection); client->connectToServer("myServer"); After I call connectToServer, my client emits the connected() signal and the connClient() slot is called, but on the server side no signal is emitted; it doesn't seem to be receiving any connection at all. Any help would be appreciated. Thanks

    Read the article

  • Error using to_char // to_timestamp

    - by pepersview
    Hello, I have a database in PostgreSQL and I'm developing a PHP application that uses it. The problem is that when I execute the following query I get a nice result in phpPgAdmin, but in my PHP application I get an error. The query: SELECT t.t_name, t.t_firstname FROM teachers AS t WHERE t.id_teacher IN (SELECT id_teacher FROM teacher_course AS tcourse JOIN course_timetable AS coursetime ON tcourse.course = coursetime.course AND to_char(to_timestamp('2010-4-12', 'YYYY-MM-DD'),'FMD') = (coursetime.day +1)) AND t.id_teacher NOT IN (SELECT id_teacher FROM teachers_fill WHERE date = '2010-4-12') ORDER BY t.t_name ASC And this is the error in PHP: operator does not exist: text = integer (to_timestamp('', 'YYYY-MM-DD'),'FMD') = (courset... ^ HINT: No operator matches the given name and argument type(s). You might need to add explicit type casts. The goal is to use the original query in PHP with: $date = "2010"."-".$selected_month."-".$selected_day; SELECT t.t_name, t.t_firstname FROM teachers AS t WHERE t.id_teacher IN (SELECT id_teacher FROM teacher_course AS tcourse JOIN course_timetable AS coursetime ON tcourse.course = coursetime.course AND to_char(to_timestamp('$date', 'YYYY-MM-DD'),'FMD') = (coursetime.day +1)) AND t.id_teacher NOT IN (SELECT id_teacher FROM teachers_fill WHERE date = '$date') ORDER BY t.t_name ASC

    Read the article

  • SQL query -- need some suggestions

    - by benjamin button
    I have a table with a list of cycle codes, CYCLE_DEFINITION. Each cycle_code has 12 monthly entries in another table (PM1_CYCLE_STATE), and each month has a cycle_start_date and a cycle_close_date. I check against a particular date (let's say sysdate) to find the current month of every cycle; additionally, I also get the next three months of that particular cycle. The query I have written is below: SELECT cycd,cm,sd,ed,ld FROM (SELECT pcs.cycle_code CYCD,LTRIM(pcs.cycle_month,'0')+0 CM, pcs.cycle_start_date SD,pcs.cycle_close_date ED,ld.logical_date LD FROM pm1_cycle_state pcs,logical_date ld WHERE ld.logical_date BETWEEN pcs.cycle_start_date AND pcs.cycle_close_date and ld.logical_date_type='B') UNION SELECT cycd,cm,sd,ed,ld FROM (SELECT pcs.cycle_code CYCD,DECODE(LTRIM(pcs.cycle_month,'0')+1,13,1,14,2,15,3,LTRIM(pcs.cycle_month,'0')+1) CM ,pcs.cycle_start_date SD,pcs.cycle_close_date ED,ld.logical_date LD FROM pm1_cycle_state pcs,logical_date ld WHERE ld.logical_date BETWEEN pcs.cycle_start_date AND pcs.cycle_close_date and ld.logical_date_type='B') UNION SELECT cycd,cm,sd,ed,ld FROM (SELECT pcs.cycle_code CYCD,DECODE(LTRIM(pcs.cycle_month,'0')+2,13,1,14,2,15,3,LTRIM(pcs.cycle_month,'0')+2) CM ,pcs.cycle_start_date SD,pcs.cycle_close_date ED,ld.logical_date LD FROM pm1_cycle_state pcs,logical_date ld WHERE ld.logical_date BETWEEN pcs.cycle_start_date AND pcs.cycle_close_date and ld.logical_date_type='B') UNION SELECT cycd,cm,sd,ed,ld FROM (SELECT pcs.cycle_code CYCD,DECODE(LTRIM(pcs.cycle_month,'0')+3,13,1,14,2,15,3,LTRIM(pcs.cycle_month,'0')+3) CM ,pcs.cycle_start_date SD,pcs.cycle_close_date ED,ld.logical_date LD FROM pm1_cycle_state pcs,logical_date ld WHERE ld.logical_date BETWEEN pcs.cycle_start_date AND pcs.cycle_close_date and ld.logical_date_type='B') This query is running perfectly fine and returns all the cycle_codes with exactly 4 rows each: the current month and the following three months. Now the requirement is: if any of those months is missing, how can I show it? For example, the output of the above query is cycd cm 102 1 102 10 102 11 102 12 103 1 103 10 103 11 103 12 104 1 104 10 104 11 104 12 Now let's say the row with cycd=104 and cm=11 is not present in the table; the above query will then not return the row '104 11'. I want to display only those missing rows. How can I do it?

    Read the article

  • SQL Server - stored procedure suddenly becomes slow

    - by Barguast
    I have written a stored procedure that, yesterday, typically completed in under a second. Today, it takes about 18 seconds. I ran into the problem yesterday as well, and it seemed to be solved by DROPing and re-CREATEing the stored procedure. Today, that trick doesn't appear to be working. :( Interestingly, if I copy the body of the stored procedure and execute it as a straightforward query it completes quickly. It seems to be the fact that it's a stored procedure that's slowing it down...! Does anyone know what the problem might be? I've searched for answers, but they often recommend running it through Query Analyser, and I don't have it - I'm using SQL Server 2008 Express for now. The stored procedure is as follows: ALTER PROCEDURE [dbo].[spGetPOIs] @lat1 float, @lon1 float, @lat2 float, @lon2 float, @minLOD tinyint, @maxLOD tinyint, @exact bit AS BEGIN -- Create the query rectangle as a polygon DECLARE @bounds geography; SET @bounds = dbo.fnGetRectangleGeographyFromLatLons(@lat1, @lon1, @lat2, @lon2); -- Perform the selection if (@exact = 0) BEGIN SELECT [ID], [Name], [Type], [Data], [MinLOD], [MaxLOD], [Location].[Lat] AS [Latitude], [Location].[Long] AS [Longitude], [SourceID] FROM [POIs] WHERE NOT ((@maxLOD < [MinLOD]) OR (@minLOD > [MaxLOD])) AND (@bounds.Filter([Location]) = 1) END ELSE BEGIN SELECT [ID], [Name], [Type], [Data], [MinLOD], [MaxLOD], [Location].[Lat] AS [Latitude], [Location].[Long] AS [Longitude], [SourceID] FROM [POIs] WHERE NOT ((@maxLOD < [MinLOD]) OR (@minLOD > [MaxLOD])) AND (@bounds.STIntersects([Location]) = 1) END END The POIs table has an index on MinLOD, MaxLOD, and a spatial index on Location.

    Read the article

  • SQL - Multiple join conditions using OR?

    - by Brandi
    I have a query that uses multiple joins. The goal is to say, "Out of table A, give me all the customer numbers for which you can match table A's EmailAddress with either email_to or email_from of table B. Ignore nulls, internal emails, etc." It seems like it would be better to use an OR condition in the join than multiple joins, since it is the same table. When I try to use AND/OR it does not give the behaviour I expect: AND finishes in a reasonable time but yields no results (I know that there are matches, so it must be some flaw in my logic), and OR never finishes (I have to kill it). Here is example code to illustrate the question: --my original query SELECT DISTINCT a.CustomerNo FROM A a WITH (NOLOCK) LEFT JOIN B e WITH (NOLOCK) ON a.EmailAddress = e.email_from RIGHT JOIN B f WITH (NOLOCK) ON a.EmailAddress = f.email_to WHERE a.EmailAddress NOT LIKE '%@mydomain.___' AND a.EmailAddress IS NOT NULL AND (e.email_from IS NOT NULL OR f.email_to IS NOT NULL) Here is what I tried (I am attempting logical equivalence): SELECT DISTINCT a.CustomerNo FROM A a WITH (NOLOCK) LEFT JOIN B e WITH (NOLOCK) ON a.EmailAddress = e.email_from OR a.EmailAddress = e.email_to WHERE a.EmailAddress NOT LIKE '%@mydomain.___' AND a.EmailAddress IS NOT NULL AND (e.email_from IS NOT NULL OR e.email_to IS NOT NULL) So my question is two-fold: why does the AND version of the query finish in a few seconds while the OR version runs for minutes and never completes? And what am I missing to make a logically equivalent statement that has only one join?

    Read the article

  • PostgreSQL: help with PHP loop...

    - by KnockKnockWhosThere
    I keep getting a "Notice: Undefined index: did" error with this query, and I don't understand why... I'm much more used to MySQL, so maybe the syntax is wrong? This is the PHP query code: function get_demos() { global $session; $demo = array(); $result = pg_query("SELECT DISTINCT(did,vid,iid,value) FROM dv"); if(pg_num_rows($result) > 0) { while($r = pg_fetch_array($result)) { switch($r['did']) { case 1: $demo['a'][$r['vid']] = $r['value']; break; case 2: $demo['b'][$r['vid']] = $r['value']; break; case 3: $demo['c'][$r['vid']] = $r['value']; break; } } } else { $session->session_setMessage(2); } return $demo; } When I run that query at the pg prompt, I get results: "(1,1,1,"A")" "(1,2,2,"B")" "(1,3,3,"C")" "(1,4,4,"D")" "(1,5,5,"E")" "(1,6,6,"F")" "(1,7,7,"G")" "(1,8,8,"H")" "(1,9,9,"I")" "(1,10,A,"J")" "(1,11,B,"K")" "(1,12,C,"L")" "(1,13,D,"M")" "(2,14,1,"A")" "(2,15,2,"B")" "(2,16,0,"C")" "(3,17,1,"A")" "(3,18,2,"B")" "(3,19,3,"C")" "(3,20,4,"D")" "(3,21,5,"E")" "(3,22,6,"F")" "(3,23,7,"G")"
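
    One detail that the pasted results give away: PostgreSQL parses DISTINCT(did,vid,iid,value) as DISTINCT applied to a single ROW-typed column, which is why every row comes back as one composite value like "(1,1,1,"A")" -- in that shape the result set has no 'did' column for pg_fetch_array to index. A minimal sketch of the plain column-list form is below; it is shown in Python with psycopg2 and a placeholder DSN only to keep the example self-contained -- the SQL change is the point.

        # sketch: list the columns after DISTINCT so they stay separate and addressable by name
        import psycopg2
        import psycopg2.extras

        conn = psycopg2.connect("dbname=mydb user=me")                 # placeholder DSN
        cur = conn.cursor(cursor_factory=psycopg2.extras.DictCursor)

        cur.execute("SELECT DISTINCT did, vid, iid, value FROM dv")    # no parentheses after DISTINCT
        for row in cur:
            print(row["did"], row["vid"], row["iid"], row["value"])

        cur.close()
        conn.close()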

    Read the article

  • rails named_scope issue with eager loading

    - by Craig
    Two models (Rails 2.3.8): User, with username & disabled properties (User has_one :profile); Profile, with full_name & hidden properties. I am trying to create a named_scope that eliminates the disabled=1 and hidden=1 user-profiles. Moreover, while the User model is usually used in conjunction with the Profile model, I would like the flexibility to be able to specify this using the :include => :profile syntax. I have the following User named_scope: named_scope :visible, { :joins => "INNER JOIN profiles ON users.id=profiles.user_id", :conditions => ["users.disabled = ? AND profiles.hidden = ?", false, false] } This works as expected when just referencing the User model: >> User.visible.map(&:username).flatten => ["user a", "user b", "user c", "user d"] However, when I attempt to include the Profile model: User.visible(:include=> :profiles).profile.map(&:full_name).flatten I get an error that reads: NoMethodError: undefined method `profile' for #<User:0x1030bc828> Am I able to cross model-collection boundaries in this manner?

    Read the article

  • Django: Paginator + raw SQL query

    - by Silver Light
    Hello! I'm using the Django Paginator everywhere on my website and even wrote a special template tag to make it more convenient. But now I've reached a point where I need to make a complex custom raw SQL query that, without a LIMIT, will return about 100K records. How can I use the Django Paginator with a custom query? Simplified example of my problem: My model: class PersonManager(models.Manager): def complicated_list(self): from django.db import connection #Real query is much more complex cursor = connection.cursor() cursor.execute("""SELECT * FROM `myapp_person`"""); result_list = [] for row in cursor.fetchall(): result_list.append(row[0]); return result_list class Person(models.Model): name = models.CharField(max_length=255); surname = models.CharField(max_length=255); age = models.IntegerField(); objects = PersonManager(); The way I use pagination with the Django ORM: all_objects = Person.objects.all(); paginator = Paginator(all_objects, 10); try: page = int(request.GET.get('page', '1')) except ValueError: page = 1 try: persons = paginator.page(page) except (EmptyPage, InvalidPage): persons = paginator.page(paginator.num_pages) This way, Django gets smart and adds a LIMIT to the query when executing it. But when I use the custom manager: all_objects = Person.objects.complicated_list(); all the data is selected, and only then is the Python list sliced, which is VERY slow. How can I make my custom manager behave similarly to the built-in one?
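
    One pattern that fits this situation (a rough sketch, not the project's actual code) is to hand Paginator an object that only knows two things: its length and how to slice itself, with the slice translated into LIMIT/OFFSET on the raw SQL. Paginator only ever asks for count()/len() and object_list[bottom:top], so that is all the wrapper has to provide; the SQL strings below are placeholders for the real complex query and its COUNT(*) counterpart.

        # sketch: a lazy object_list for Paginator that turns slices into LIMIT/OFFSET
        from django.db import connection

        class RawQueryList(object):
            def __init__(self, base_sql, count_sql):
                self.base_sql = base_sql      # e.g. "SELECT id FROM myapp_person ORDER BY id"
                self.count_sql = count_sql    # e.g. "SELECT COUNT(*) FROM myapp_person"

            def __len__(self):
                cursor = connection.cursor()
                cursor.execute(self.count_sql)
                return cursor.fetchone()[0]

            def __getitem__(self, key):
                if not isinstance(key, slice):
                    raise TypeError("Paginator is only expected to ask for slices")
                cursor = connection.cursor()
                cursor.execute(self.base_sql + " LIMIT %s OFFSET %s",
                               [key.stop - key.start, key.start])
                return [row[0] for row in cursor.fetchall()]

        # usage: paginator = Paginator(RawQueryList(base_sql, count_sql), 10)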

    Read the article

  • How to store data in MySQL to get the fastest performance?

    - by Oden
    Hey, I'm wondering which of the following two designs would give me the fastest performance for a user messaging module on my site. The first one I thought about is a multi-table setup, which has a connection table and a main table. The connection table holds the link between accounts and the messages table. In this case, a query to get some data about the author and the messages he has sent would look like the following: SELECT m.*, a.username FROM messages AS m LEFT JOIN connection_table ON (message_id = m.id) LEFT JOIN accounts AS a ON (account_id = a.id) WHERE m.id = '32341' Inserting into it is a little bit more "complicated". My other idea, which I think is the better solution to this problem, is to store the data I would otherwise put in a connection table in the same table where I store the mail data. It sounds like I would get lots of duplicated entries, but no, because I have a text-type field that holds user ids like this: *24*32*249* If I want to query them, I use MySQL's LIKE. Deleting is another problem, but for this I have one more field where I store who has deleted the post; sadly, I don't know how to join on this. So what would you recommend? Are there other ways?

    Read the article

  • MySQL "OR MATCH" hangs (very slow) on multiple tables

    - by Kerry
    After learning how to do MySQL full-text search, the recommended solution for multiple tables was OR MATCH and then the extra database call; you can see that in my query below. When I do this, it just gets stuck in a "busy" state, and I can't access the MySQL database. SELECT a.`product_id`, a.`name`, a.`slug`, a.`description`, b.`list_price`, b.`price`, c.`image`, c.`swatch`, e.`name` AS industry, MATCH( a.`name`, a.`sku`, a.`description` ) AGAINST ( '%s' IN BOOLEAN MODE ) AS relevance FROM `products` AS a LEFT JOIN `website_products` AS b ON (a.`product_id` = b.`product_id`) LEFT JOIN ( SELECT `product_id`, `image`, `swatch` FROM `product_images` WHERE `sequence` = 0) AS c ON (a.`product_id` = c.`product_id`) LEFT JOIN `brands` AS d ON (a.`brand_id` = d.`brand_id`) INNER JOIN `industries` AS e ON (a.`industry_id` = e.`industry_id`) WHERE b.`website_id` = %d AND b.`status` = %d AND b.`active` = %d AND MATCH( a.`name`, a.`sku`, a.`description` ) AGAINST ( '%s' IN BOOLEAN MODE ) OR MATCH ( d.`name` ) AGAINST ( '%s' IN BOOLEAN MODE ) GROUP BY a.`product_id` ORDER BY relevance DESC LIMIT 0, 9 Any help would be greatly appreciated. EDIT: All the tables involved are MyISAM, utf8_general_ci. Here's the EXPLAIN SELECT output: id select_type table type possible_keys key key_len ref rows Extra 1 PRIMARY a ALL NULL NULL NULL NULL 16076 Using temporary; Using filesort 1 PRIMARY b ref product_id product_id 4 database.a.product_id 2 1 PRIMARY e eq_ref PRIMARY PRIMARY 4 database.a.industry_id 1 1 PRIMARY <derived2> ALL NULL NULL NULL NULL 23261 1 PRIMARY d eq_ref PRIMARY PRIMARY 4 database.a.brand_id 1 Using where 2 DERIVED product_images ALL NULL NULL NULL NULL 25933 Using where I don't know how to make that look neater -- sorry about that. UPDATE: the query returns after 196 seconds (correctly, I think). The query without the multiple tables takes about 0.56 seconds (which I know is really slow; we plan on switching to Solr or Sphinx soon), but 196 seconds?? If we could add something to the relevance score when the term matches the brand name (d.name), that would also work.

    Read the article
