Search Results

Search found 23613 results on 945 pages for 'query parameters'.

  • CouchDB Map/Reduce raises exception in reduce function?

    - by fuzzy lollipop
    My view generates keys in this format: ["job_id:1234567890", 1271430291000], where the first element is a unique key and the second is a timestamp in milliseconds. I run my view with:

        elapsed_time?startkey=["123"]&endkey=["123",{}]&group=true&group_level=1

    Here is my reduce function; the intention is to reduce the output to the earliest and latest timestamps and return the difference between each of them and now:

        function(keys, values, rereduce) {
            var now = new Date().valueOf();
            var first = Number.MIN_VALUE;
            var last = Number.MAX_VALUE;
            if (rereduce) {
                first = Math.max(first, values[0].first);
                last = Math.min(last, values[0].last);
            } else {
                first = keys[0][0][1];
                last = keys[keys.length - 1][0][1];
            }
            return {first: now - first, last: now - last};
        }

    When processing a query, it constantly raises the following exception:

        function raised exception (new TypeError("keys has no properties", "", 1))

    I am making sure not to reference keys inside my rereduce branch. Why does this function constantly raise this exception?

  • Problem with ranking of search results in SharePoint 2007 if using the CONTAINS predicate

    - by mythicdawn
    While writing a front-end for the SharePoint Search web service for work, I did some quick testing with the MOSS Search Tool to make sure things were working right under the hood. What I found was that queries composed only of CONTAINS predicates (FREETEXT ones were fine) would have a rank of 1000 for any results that were returned. According to the documentation (http://msdn.microsoft.com/en-us/library/ms544086.aspx): "If the query returns a document because a non–full-text predicate evaluates to TRUE for that document, the rank value is calculated as 1000." Given that the behaviour I am seeing seems to contradict the documentation, is it the case that all queries that use only the CONTAINS predicate will produce ranking like this?

  • How to make a thread try to reconnect to the Database x times using JDBCTemplate

    - by gillJ
    Hi, I have a single thread trying to connect to a database using JdbcTemplate as follows:

        JdbcTemplate jdbcTemplate = new JdbcTemplate(dataSource);
        try {
            jdbcTemplate.execute(new CallableStatementCreator() {
                @Override
                public CallableStatement createCallableStatement(Connection con) throws SQLException {
                    return con.prepareCall(query);
                }
            }, new CallableStatementCallback() {
                @Override
                public Object doInCallableStatement(CallableStatement cs) throws SQLException {
                    cs.setString(1, subscriberID);
                    cs.execute();
                    return null;
                }
            });
        } catch (DataAccessException dae) {
            throw new CougarFrameworkException(
                "Problem removing subscriber from events queue: " + subscriberID, dae);
        }

    I want to make sure that if the above code throws DataAccessException or SQLException, the thread waits a few seconds and tries to reconnect, say 5 more times, and then gives up. How can I achieve this? Also, if during execution the database goes down and comes up again, how can I ensure that my program recovers and continues running instead of throwing an exception and exiting?
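
    A minimal sketch of the usual retry pattern, written in Python rather than Java (the loop structure carries over directly to a Java method wrapping the execute call); the run_statement callable, attempt count and delay are assumptions for illustration:

        import time

        def execute_with_retry(run_statement, attempts=5, delay_seconds=3):
            """Run a database call, retrying a fixed number of times before giving up."""
            last_error = None
            for attempt in range(attempts):
                try:
                    return run_statement()  # the actual statement execution
                except Exception as error:  # DataAccessException/SQLException in the Java version
                    last_error = error
                    time.sleep(delay_seconds)  # wait a few seconds before retrying
            raise last_error  # every attempt failed: give up and surface the last error

    Recovery from a database restart falls out of the same loop: as long as a fresh connection is acquired inside run_statement (e.g. from the DataSource pool on each attempt), a restart shows up as a few failed attempts followed by a success.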

  • Convert the code from PHP to Ruby

    - by theband
    I am in the process of learning Ruby. Can anyone convert this PHP code into Ruby? It will help me understand how to run a query and return the fetched results:

        public function getFtime() {
            $records = array();
            $sql = "SELECT * FROM `finishedtime`";
            $result = mysql_query($sql);
            if (!$result) { throw new Exception(mysql_error()); }
            if (mysql_num_rows($result) == 0) { return $records; }
            while ($row = mysql_fetch_assoc($result)) { $records[] = $row; }
            return $records;
        }

  • Compare range of ip addresses with start and end ip address in MySQL

    - by Maarten
    I have a MySQL table where I store IP ranges. It is set up so that I have the start address stored as a long, plus the end address (and an id and some other data). Users add ranges by inputting a start and end IP address, and I would like to check that the new range is not already (partially) in the database. I know I can do a BETWEEN query, but that doesn't seem to work across 2 different columns, and I also cannot figure out how to pass a range to compare against. Doing it in a loop in PHP is a possibility, but with a range of e.g. 132.0.0.0-199.0.0.0 that would be a huge number of queries.
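
    The standard trick here is the interval-overlap predicate: a new range [new_start, new_end] collides with a stored range [start_ip, end_ip] exactly when start_ip <= new_end AND end_ip >= new_start, so a single parameterized query checks the whole table and no PHP loop is needed. A minimal sketch in Python; the table and column names (ranges, start_ip, end_ip) are assumptions:

        def range_conflicts(cursor, new_start, new_end):
            """Return True if [new_start, new_end] overlaps any stored range.

            Two ranges overlap exactly when each one starts no later than the
            other ends, so one query replaces the per-address loop.
            """
            cursor.execute(
                "SELECT COUNT(*) FROM ranges WHERE start_ip <= %s AND end_ip >= %s",
                (new_end, new_start),
            )
            (count,) = cursor.fetchone()
            return count > 0

    The %s placeholders follow the MySQLdb/mysqlclient parameter style; the same predicate works verbatim as a plain SQL WHERE clause.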

  • CREATE USER in MS Access 2010

    - by Anakela
    I have been searching for several hours for how to create a user, using SQL, for a database I am building in Access. I found several sources on Microsoft's website that say I can use the CREATE USER command to do this. However, whenever I attempt to run the query, an error pops up saying "Syntax error in CREATE TABLE statement". What am I doing wrong? Thank you in advance for your help! If you're interested, the format I am attempting to use is as follows:

        CREATE USER username, password, pid

  • more ruby way of gsub from array

    - by aharon
    My goal is to let there be x so that x("? world. what ? you say...", ['hello', 'do']) returns "hello world. what do you say...". I have something that works, but it seems far from the "Ruby way":

        def x(str, arr, rep='?')
          i = 0
          str.gsub(rep) { i += 1; arr[i-1] }
        end

    Is there a more idiomatic way of doing this? (Let me note that speed is the most important factor, of course.)
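
    One idiomatic alternative is to feed the replacement callback from an iterator instead of hand-managing an index. A sketch of the same idea in Python (re.sub accepts a callable, so each match pulls the next element); the Ruby analogue would be passing gsub a block that draws from an enumerator over arr:

        import re

        def x(template, replacements, placeholder="?"):
            """Substitute each placeholder with the next replacement, in order."""
            values = iter(replacements)
            return re.sub(re.escape(placeholder), lambda _m: next(values), template)

        # x("? world. what ? you say...", ["hello", "do"])
        # => "hello world. what do you say..."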

  • Single Large vs. Multiple Small MySQL tables for storing Options

    - by Prasad
    Hi there, I'm aware of several questions on this forum relating to this, but I'm not talking about splitting tables for the same entity (like user, for example). Suppose I have a huge options table that stores list options like Gender, Marital Status, and many more domain-specific groups with the same structure, which I plan to capture in an OPTIONS table. Another simple option is to have the field set as ENUM, but there are disadvantages to that as well: http://www.brandonsavage.net/why-you-should-replace-enum-with-something-else/

    OPTIONS table:

        option_id   <will be referred to instead of the name>
        name
        value
        group

    Query:

        select .. from options where group = '15'

    Since this table is expected to be multi-tenant, the number of rows could grow drastically. I believe splitting the tables instead of filtering by the group would be easier to write and faster to execute; or perhaps partitioning by the group or tenant? Please suggest. Thanks

  • MySQL ALTER TABLE on very large table - is it safe to run it?

    - by Timothy Mifsud
    I have a MySQL database with one particular MyISAM table of over 4 million rows. I update this table about once a week with about 2000 new rows. After updating, I perform the following statement:

        ALTER TABLE x ORDER BY PK DESC

    i.e. I order the table in question by the primary key field in descending order. This has not given me any problems on my development machine (Windows with 3GB memory), and I have run it successfully 3 times on the production Linux server (with 512MB RAM), getting the resulting sorted table in about 6 minutes each time. The last time I tried it, however, I had to stop the query after about 30 minutes and rebuild the database from a backup. I have started to wonder whether a 512MB server can cope with that statement on such a large table, as I have read that a temporary table is created to perform the ALTER TABLE command. And, if it can be safely run, what should be the expected time for the alteration of the table? Thanks in advance, Tim

  • Track mass email campaigns

    - by daeliur
    Litmus released an email analytics service last month (May 2010); see here: http://litmusapp.com/email-analytics They boast very cool "read rate" tracking: they can track normal reads, skims, and glanced/deleted. How can they track skims and glanced/deleted? This to me seems impossible :) They also track forwards and prints. Prints are easy (they include a CSS @media print query with a background image). But forwards? I think this might be a combination of subsequent opens and different IPs/referring URLs. However, that would mean that if I open my mail and re-read it from another computer, it counts as a forward. Any ideas on this one? To summarize: Litmus Email Analytics says they can track email reads, skims, glanced/deleted, prints and forwards. How do they do it (skims, glanced/deleted and forwards)?

  • Repeatedly querying xml using python

    - by Jack
    I have some XML documents I need to run queries on. I've created some Python scripts (using ElementTree) to do this, since I'm vaguely familiar with using it. The way it works is I run the scripts several times with different arguments, depending on what I want to find out. These files can be relatively large (10MB+) and so it takes rather a long time to parse them. On my system, just running:

        tree = ElementTree.parse(document)

    takes around 30 seconds, with a subsequent findall query only adding around a second to that. Seeing as the way I'm doing this requires me to repeatedly parse the file, I was wondering if there was some sort of caching mechanism I can use so that the ElementTree.parse computation can be reduced on subsequent queries. I realise the smart thing to do here may be to try and batch as many queries as possible together in the python script, but I was hoping there might be another way. Thanks.
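
    Within a single process this is easy to get for free: memoize the parse so that batching several queries into one invocation pays the 30-second cost only once. A minimal sketch (caching across separate script runs would instead mean serializing the extracted results, e.g. with pickle or json, since the parse itself is the expensive step):

        import functools
        import xml.etree.ElementTree as ElementTree

        @functools.lru_cache(maxsize=None)
        def load_tree(path):
            """Parse each document at most once per process; later calls reuse the tree."""
            return ElementTree.parse(path)

        def run_queries(path, xpaths):
            # Several queries against the same document share one parse.
            tree = load_tree(path)
            return {xpath: tree.findall(xpath) for xpath in xpaths}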

  • LINQ-to-XML to DataGridView: Cannot edit fields -- How to fix?

    - by Pretzel
    I am currently doing LINQ-to-XML and populating a DataGridView with my query just fine. The trouble I am running into is that once loaded into the DataGridView, the values appear to be un-editable (ReadOnly). Here's my code:

        var barcodes = (from src in xmldoc.Descendants("Container")
                        where src.Descendants().Count() > 0
                        select new {
                            Id = (string)src.Element("Id"),
                            Barcode = (string)src.Element("Barcode"),
                            Quantity = float.Parse((string)src.Element("Quantity").Attribute("value"))
                        }).Distinct();
        dataGridView1.DataSource = barcodes.ToList();

    I read somewhere that the "DataGridView will be in ReadOnly mode when you use Anonymous types", but I couldn't find an explanation why or exactly what to do about it. Any ideas?

  • Low cost way to host a large table yet keep the performance scalable?

    - by Leo Liang
    I have a growing table storing time series data, 500M entries now, and 200K new records every day. The total size is around 15GB for now. My clients query the table mostly via a PHP script, and the size of the result set is around 10K records (not very large):

        select * from T where timestamp > X and timestamp < Y and additionFilters

    I want this operation to be cheap. Currently my table is hosted in Postgres 7, on a single box with 16GB of memory, and I would love to see some good suggestions for hosting this at low cost while also allowing me to scale up for performance if needed. The table serves:

        1. Query:  90%
        2. Insert: 9.9%
        3. Update: 0.1%  <-- very rare

  • SQL Server 2008 full-text search doesn't find word in words?

    - by Martijn
    In the database I have a field with a .mht file. I want to use FTS to search in this document. I got this working, but I'm not satisfied with the result. For example (sorry, it's in Dutch, but I think you get my point) I will use 2 words: zieken and ziekenhuis. As you can see, the word 'zieken' is contained in the word 'ziekenhuis'. When I search on 'ziekenhuis' I get about 20 results. When I search on 'zieken' I get 7 results. How is this possible? I mean, why doesn't the FTS return at least the results which I get from 'ziekenhuis'? Here's the query I use:

        SELECT DISTINCT d.DocID 'Id', d.Titel,
               (SELECT afbeeldinglokatie FROM tbl_Afbeelding WHERE soort = 'beleid') as Pic,
               'belDoc' as DocType
        FROM docs d
        JOIN kpl_Document_Lokatie dl ON d.DocID = dl.DocID
        JOIN HandboekLokaties hb ON dl.LokatieID = hb.LokatieID
        WHERE hb.InstellingID = @instellingId
          AND (FREETEXT(d.Doel, @searchstring)
               OR FREETEXT(d.Toepassingsgebied, @searchstring)
               OR FREETEXT(d.HtmlDocument, @searchstring)
               OR FREETEXT(d.extraTabblad, @searchstring))
          AND d.StatusID NOT IN (1, 5)

  • Subquery works in 9i but not in 11g

    - by Zsuetam
    Statement below is working on Oracle 9i but not on Oracle 11g:

        SELECT *
        FROM (
            SELECT 0 scrnfail_rate, '9' zz, 7 hh FROM DUAL
            UNION ALL
            SELECT 0 scrnfail_rate, '9' zz, 7 hh FROM DUAL
        )
        WHERE zz IS NOT NULL
          AND TO_CHAR(hh) NOT IN (
              SELECT DECODE(scrnfail_rate,
                            0, -1,
                            ROUND(LEVEL * 1 / (scrnfail_rate / 100))
                              - ROUND(1 / (2 * (scrnfail_rate / 100)))) AS nno
              FROM DUAL
              WHERE NVL(scrnfail_rate, 0) > 0
              CONNECT BY LEVEL <= ROUND(9 * scrnfail_rate / 100)
          )

    It looks like Oracle 11g is ignoring the DECODE, or even the WHERE clause, in the subquery. This query should return two rows, as it does on Oracle 9i, but it results in ORA-01476: divisor is equal to zero on Oracle 11g EE 11.2.0.1.0 - 64bit. Can anyone help? Thanks!

  • can oracle types be updated like tables?

    - by Omnipresent
    I am converting GTTs to Oracle types, as explained in an excellent answer by APC. However, some GTTs are being updated based on a select query from another table. For example:

        UPDATE my_gtt_1 c
        SET (street, city, STATE, zip) =
            (SELECT src.unit_address, src.unit_city, src.unit_state, src.unit_zip_code
             FROM (SELECT mbr.ROWID row_id,
                          unit_address,
                          RTRIM(a.unit_city) unit_city,
                          RTRIM(a.unit_state) unit_state,
                          RTRIM(a.unit_zip_code) unit_zip_code
                   FROM table_1 b, table_2 a, my_gtt_1 mbr
                   WHERE type = 'ABC'
                     AND id = b.ssn_head
                     AND a.h_id = b.h_id
                     AND row_id >= v_start_row
                     AND row_id <= v_end_row) src
             WHERE c.ROWID = src.row_id)
        WHERE state IS NULL OR state = ' ';

    If my_gtt_1 were not a global temporary table but an Oracle collection type, would it still be possible to do updates this complex? Or in these cases are we better off using the global temporary table?

  • Using C# Type as generic

    - by I Clark
    I'm trying to create a generic list from a specific Type that is retrieved from elsewhere:

        Type listType; // Passed in to function, could be anything
        var list = _service.GetAll<listType>();

    However, I get a build error of:

        The type or namespace name 'listType' could not be found (are you missing a using directive or an assembly reference?)

    Is this even possible, or am I setting foot into C# 4 dynamic territory? As background: I want to automatically load all lists with data from the repository. The code below gets passed a form model whose properties are iterated for any IEnumerable<T> (where T inherits from DomainEntity). I want to fill each list with objects of the type the list is made of, from the repository:

        public void LoadLists(object model)
        {
            foreach (var property in model.GetType()
                .GetProperties(BindingFlags.Public | BindingFlags.Instance | BindingFlags.SetProperty))
            {
                if (IsEnumerableOfNssEntities(property.PropertyType))
                {
                    var listType = property.PropertyType.GetGenericArguments()[0];
                    var list = _repository.Query<listType>().ToList();
                    property.SetValue(model, list, null);
                }
            }
        }

  • Using RDL files in Web ReportViewer

    - by user54064
    I want to use an RDL file with the query information stored in it; I don't want to have to convert it to an RDLC file. I have an ASP.NET app in which I want to show the report. I thought I would use a ReportViewer on my page and have it use the RDL file. However, I get an error, and my research suggests that I have to convert the file to an RDLC file. I don't want to strip out the data contained in the report. How can I show the report to the user by running the RDL report?

  • Doing a join across two databases with different collations on SQL Server and getting an error.

    - by Andrew G. Johnson
    I know, I know, with what I wrote in the question I shouldn't be surprised. But my situation is slowly working on an inherited POS system, and my predecessor apparently wasn't aware of JOINs, so when I looked into one of the internal pages that takes 60 seconds to load, I saw that it's a fairly quick "rewrite these 8 queries as one query with JOINs" situation. The problem is that besides not knowing about JOINs, he also seems to have had a fetish for multiple databases, and, surprise surprise, they use different collations. In fact we use only "normal" Latin characters that English-speaking people would consider the entire alphabet, and this whole thing will be out of use in a few months, so a bandaid is all I need. Long story short, I need some kind of method to cast to a single collation so I can compare two fields from two databases. The exact error is:

        Cannot resolve the collation conflict between "SQL_Latin1_General_CP850_CI_AI" and
        "SQL_Latin1_General_CP1_CI_AS" in the equal to operation.

  • IN statement performance in PostgreSQL (and in general)

    - by Vasil
    I know this has probably been asked before, but I can't find it with SO's search. Let's say I've got TABLE1 and TABLE2: how should I expect the performance of a query such as this to behave:

        SELECT * FROM TABLE1 WHERE id IN SUBQUERY_ON_TABLE2;

    as the number of rows in TABLE1 and TABLE2 grows, given that id is a primary key on TABLE1? Yes, I know using IN is such a n00b mistake, but TABLE2 has a generic relation (django generic relation) to multiple other tables, so I can't think of another way to filter the data. At what (approximate) amount of rows in TABLE1 and TABLE2 should I expect to notice performance issues because of this? Will performance degrade linearly, exponentially, etc. depending on the number of rows?

  • PL/SQL bulk collect into associative array with sparse key

    - by Dan
    I want to execute a SQL query inside PL/SQL and populate the results into an associative array, where one of the columns in the SQL becomes the key in the associative array. For example, say I have a table Person with columns:

        PERSON_ID    INTEGER PRIMARY KEY
        PERSON_NAME  VARCHAR2(50)

    ...and values like:

        PERSON_ID | PERSON_NAME
        ----------+------------
                6 | Alice
               15 | Bob
             1234 | Carol

    I want to bulk collect this table into a TABLE OF VARCHAR2(50) INDEX BY INTEGER such that the key 6 in this associative array has the value Alice, and so on. Can this be done in PL/SQL? If so, how?

  • performance of parameterized queries for different db's

    - by tuinstoel
    A lot of people know that it is important to use parameterized queries to prevent SQL injection attacks. Parameterized queries are also much faster in SQLite and Oracle when doing online transaction processing, because the query optimizer doesn't have to reparse every SQL statement before executing it. I've seen SQLite become 3 times faster when you use parameterized queries; Oracle can become 10 times faster in some extreme cases with a lot of concurrency. How about other DBs like MySQL, MS SQL, DB2 and PostgreSQL? Is there a similar difference in performance between parameterized queries and literal queries?
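
    A rough way to see the effect yourself is a micro-benchmark like the sketch below. Python's sqlite3 module keeps a per-connection cache of compiled statements keyed on the SQL text, so the constant parameterized text reuses the compiled statement while the literal version is re-parsed on every call; absolute numbers and ratios will of course vary by engine and workload:

        import sqlite3
        import time

        def time_inserts(rows=10000):
            connection = sqlite3.connect(":memory:")
            connection.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, val TEXT)")

            start = time.perf_counter()
            for i in range(rows):  # SQL text differs each time: parsed from scratch
                connection.execute("INSERT INTO t VALUES (%d, 'x')" % i)
            literal = time.perf_counter() - start

            connection.execute("DELETE FROM t")

            start = time.perf_counter()
            for i in range(rows):  # SQL text is constant: compiled statement is reused
                connection.execute("INSERT INTO t VALUES (?, ?)", (i, "x"))
            parameterized = time.perf_counter() - start

            return literal, parameterized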

  • How do you verify the correct data is in a data mart?

    - by blockcipher
    I'm working on a data warehouse and I'm trying to figure out how best to verify that data from our data cleansing (normalized) database makes it into our data marts correctly. I've done some searches, but the results so far talk more about ensuring things like constraints are in place and that you need to do data validation during the ETL process (e.g. dates are valid, etc.). The dimensions were pretty easy, as I could either leverage the primary key or write a very simple and verifiable query to get the data. The fact tables are more complex. Any thoughts? We're trying to make it very easy for a subject matter expert to run a couple of queries, see some data from both the data cleansing database and the data marts, and visually compare the two to ensure they are correct.
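
    For fact tables, a common reconciliation approach is to run the same aggregates on both sides (row counts, and SUMs of each measure grouped by a dimension key) and diff the results. A minimal sketch, assuming two DB-API cursors and caller-supplied SQL pairs:

        def compare_aggregates(source_cursor, mart_cursor, checks):
            """Run paired aggregate queries on source and mart; report mismatches.

            `checks` maps a label to (source_sql, mart_sql), two queries that
            should return identical result sets, e.g. row counts per day or
            SUM(measure) per dimension key.
            """
            mismatches = []
            for label, (source_sql, mart_sql) in checks.items():
                source_cursor.execute(source_sql)
                mart_cursor.execute(mart_sql)
                if source_cursor.fetchall() != mart_cursor.fetchall():
                    mismatches.append(label)
            return mismatches

    The subject matter expert then only has to look at the labels that come back, instead of eyeballing both databases side by side.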

  • GAE transaction exceptions

    - by bach
    Hi, in this example, is the exception thrown if ANY of the Table elements are being changed by another client, or only if the element that we changed has been changed by another client? And just to verify: the exception is thrown from the commit(), isn't it?

        PersistenceManager pm = PMF.get().getPersistenceManager();
        try {
            pm.currentTransaction().begin();
            List<Row> Table = (List<Row>) pm.newQuery(query).execute();
            Table.get(0).setReserved(true); // <----- we change only this element
            pm.currentTransaction().commit();
        } catch (JDOCanRetryException ex) {
            pm.currentTransaction().rollback(); // <----- if Table.get(1) was changed by another client, do we get to this point???
        }

  • How to create a view of table that contains a timestamp column?

    - by Matt Faus
    This question is an extension of a previous one I have asked. I have a table (2014_05_31_transformed.Video); I have put up the JSON returned by the BigQuery API describing its schema in this gist. I am trying to create a view against this table with an API call that looks like this:

        {
            'view': {
                'query': u'SELECT deleted_mod_time FROM [2014_05_31_transformed.Video]'
            },
            'tableReference': {
                'datasetId': 'latest_transformed',
                'tableId': u'Video',
                'projectId': 'redacted'
            }
        }

    But the BigQuery API is returning this error:

        HttpError: https://www.googleapis.com/bigquery/v2/projects/124072386181/datasets/latest_transformed/tables?alt=json
        returned "Invalid field name "deleted_mod_time.usec". Fields must contain only letters, numbers, and
        underscores, start with a letter or underscore, and be at most 128 characters long."

    The schema the BigQuery API returns does not make any distinction between a TIMESTAMP data type and a regular nullable INTEGER data type, so I can't think of a way to correct this problem programmatically. Is there anything I can do, or is this a bug in BigQuery's view implementation?
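
    Reading the error at face value, the view's output column is ending up with the dotted name deleted_mod_time.usec, which is not a legal flat field name. One workaround to try, purely an assumption inferred from the error message rather than documented behaviour, is to alias the sub-field to a legal flat name in the view query:

        view_body = {
            'view': {
                # Alias the dotted sub-field so the view's output column name
                # contains no dot. The `usec` component is inferred from the
                # error message above, not from documented behaviour.
                'query': ('SELECT deleted_mod_time.usec AS deleted_mod_time_usec '
                          'FROM [2014_05_31_transformed.Video]'),
            },
            'tableReference': {
                'datasetId': 'latest_transformed',
                'tableId': 'Video',
                'projectId': 'redacted',
            },
        }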
