Search Results

Search found 17240 results on 690 pages for 'query'.

Page 542/690

  • Assigning an MVC Controller property from an ASP.NET page

    - by JasonMHirst
    I don't know if I'm understanding MVC correctly, so forgive me if my question makes no sense, but I'm trying to understand the following: I have some code on a controller that returns JSON data. The JSON data is populated based on a choice from a dropdown box on an ASP.NET page. I thought (incorrectly) that Session variables would be shared between the ASP.NET project and the MVC project. What I'd like to do therefore (if this is possible) is to call a Sub on the MVC side that sets a variable before the JSON query is run. I have the following:

        Sub SetCountryID(ByVal CountryID As Integer)
            Me.pCountrySelectedID = CountryID
        End Sub

    which I can call with:

        Response.Write("http://localhost:7970/Home/SetCountryID/?CountryID=44")

    But this results in a blank page - again, obviously totally incorrect! Am I going about MVC the wrong way, or do I still have a hell of a lot more learning to do? Is this even possible to do?

    Read the article

  • MySQL/SQL: Update with correlated subquery from the updated table itself

    - by Roee Adler
    I have a generic question that I will try to explain using an example. Say I have a table with the fields "id", "name", "category", "appearances" and "ratio". The idea is that I have several items, each related to a single category, and each "appears" several times. The ratio field should hold the percentage of each item's appearances out of the total number of appearances of items in its category. In pseudo-code, what I need is the following:

    1. For each category, find the total sum of appearances for the items related to it. For example, it can be done with:

        select sum("appearances") from table group by category

    2. For each item, set the ratio value as the item's appearances divided by the sum found for its category above.

    Now I'm trying to achieve this with a single update query, but can't seem to do it. What I thought I should do is:

        update Table T
        set T.ratio = T.appearances /
            ( select sum(S.appearances)
              from Table S
              where S.id = T.id )

    But MySQL does not accept the alias T in the update column, and I did not find other ways of achieving this. Any ideas?
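
    One way to sidestep both problems is a multi-table UPDATE against a derived table that pre-aggregates per category: MySQL forbids selecting from the table being updated in a subquery, but a materialized derived table joined to the update target is allowed. A minimal sketch, assuming the table is named items (a hypothetical name, not from the question):

        UPDATE items t
        JOIN ( SELECT category, SUM(appearances) AS total
               FROM items
               GROUP BY category ) s
          ON s.category = t.category
        SET t.ratio = t.appearances / s.total;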

    Read the article

  • Local Data Cache - How do I refresh the local db when I add fields to remote db?

    - by Chu
    I'm using a Local Data Cache in an ASP.NET 3.5 environment. I made a change in my main database by adding a new field. I double-click the .SYNC file in my project to start the Local Data Cache wizard again. The wizard starts, and I click OK in the hope that it will re-query my database and add the new field to the local database file. Instead, I get an error saying "Synchronizing the database failed with the message: Unable to enumerate changes at the DbServerSyncProvider..." The only way I know to get things working again is to delete the .SYNC file along with the local database and start from scratch. There's got to be an easier way... anyone know it?

    Read the article

  • Entity Framework: how to specify parameter type in generated SQL (SQL Server 2005), NVarchar vs Varchar

    - by Gratzy
    In Entity Framework I have an entity 'Client' that was generated from a database. There is a property called 'Account'. It is defined in the storage model as:

        <Property Name="Account" Type="char" Nullable="false" MaxLength="6" />

    and in the conceptual model as:

        <Property Name="Account" Type="String" Nullable="false" />

    When select statements are generated using a variable for Account, i.e. where m.Account == myAccount..., Entity Framework generates a parameterized query with a parameter of type NVarchar(6). The problem is that the column in the table has a data type of char(6). When this is executed there is a large performance hit because of the data type difference. Account is an index on the table, and instead of using the index I believe an index scan is done. Anyone know how to force EF not to use Unicode for the parameter and use Varchar(6) instead?
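
    For reference, the mismatch looks like this on the SQL Server side: NVARCHAR has higher type precedence than CHAR, so the column is implicitly converted for every row and the index cannot be seeked. A hedged sketch (the parameter names are illustrative, not what EF actually emits):

        -- NVARCHAR parameter: implicit conversion of the CHAR(6) column, index scan
        SELECT * FROM Client WHERE Account = @p0;  -- @p0 NVARCHAR(6)

        -- VARCHAR parameter: direct comparison, index seek
        SELECT * FROM Client WHERE Account = @p1;  -- @p1 VARCHAR(6)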

    Read the article

  • Problem with ranking of search results in SharePoint 2007 if using the CONTAINS predicate

    - by mythicdawn
    While writing a front end for the SharePoint Search web service for work, I did some quick testing with the MOSS Search Tool to make sure things were working right under the hood. What I found was that queries composed only of CONTAINS predicates (FREETEXT ones were fine) would have a rank of 1000 for any results that were returned. According to the documentation (http://msdn.microsoft.com/en-us/library/ms544086.aspx): "If the query returns a document because a non–full-text predicate evaluates to TRUE for that document, the rank value is calculated as 1000." Given that the behaviour I am seeing seems to contradict the documentation, is it the case that all queries that use only the CONTAINS predicate will produce rankings like this?

    Read the article

  • How to merge data from two separate Access 2007 databases

    - by DiegoMaK
    Hi, I have two identical databases with the same structure: database A (a.accdb) on computer A and database B (b.accdb) on computer B. The data in the two databases is different, so in database A I have, for example, IDs 1, 2, 3 and in database B IDs 4, 5, 6. I need to merge the data from these databases into a single database (A or B, it doesn't matter) so the final database contains IDs 1, 2, 3, 4, 5, 6. I'm looking for an easy way to do this, because I have many tables and doing it with union queries is very tedious. I looked, for example, for a backup option for data only, without the schema, as in PostgreSQL and many other RDBMSs, but I don't see such an option in Access 2007. PS: just one table could have duplicate values (I guess the PK won't allow a duplicate value to be copied, and all other values will be copied fine). If I'm wrong please correct me. Thanks for your help.
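
    Access SQL can read straight from the other .accdb with the IN clause, which keeps each per-table append query short. A sketch, assuming it is run inside a.accdb against a table named mytable (a placeholder name; repeat per table and adjust the path):

        INSERT INTO mytable
        SELECT * FROM mytable IN 'C:\path\to\b.accdb';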

    Read the article

  • CouchDB Map/Reduce raises exception in reduce function?

    - by fuzzy lollipop
    My view generates keys in this format: ["job_id:1234567890", 1271430291000], where the first key element is a unique key and the second is a timestamp in milliseconds. I run my view with:

        elapsed_time?startkey=["123"]&endkey=["123",{}]&group=true&group_level=1

    Here is my reduce function; the intention is to reduce the output to the earliest and latest timestamps and return the difference between each of them and now:

        function(keys, values, rereduce) {
          var now = new Date().valueOf();
          var first = Number.MIN_VALUE;
          var last = Number.MAX_VALUE;
          if (rereduce) {
            first = Math.max(first, values[0].first);
            last = Math.min(last, values[0].last);
          } else {
            first = keys[0][0][1];
            last = keys[keys.length - 1][0][1];
          }
          return {first: now - first, last: now - last};
        }

    When processing a query it constantly raises the following exception:

        function raised exception (new TypeError("keys has no properties", "", 1))

    I am making sure not to reference keys inside my rereduce block. Why does this function constantly raise this exception?

    Read the article

  • Querying Two Tables At Once

    - by John
    Hello, I am trying to do what I believe is called a join query. First, in a MySQL table called "login", I want to look up the "loginid" in the record where "username" equals $profile (this will be just one row in the table). Then, I want to take that "loginid" and look up all rows in a different MySQL table called "submission" that have that "loginid". This could be more than one row. How do I do this? The code below doesn't seem to work. Thanks in advance, John

        $profile = mysql_real_escape_string($_GET['profile']);
        $sqlStr = "SELECT l.username, l.loginid, s.loginid, s.submissionid, s.title,
                          s.url, s.datesubmitted, s.displayurl
                   FROM submission AS s, login AS l
                   WHERE l.username = '$profile', s.loginid = l.loginid
                   ORDER BY s.datesubmitted DESC";
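
    For what it's worth, the comma between the two WHERE conditions is not valid SQL; the conditions need AND, or equivalently an explicit JOIN. A sketch of the corrected statement:

        SELECT l.username, l.loginid, s.submissionid, s.title, s.url,
               s.datesubmitted, s.displayurl
        FROM submission AS s
        JOIN login AS l ON s.loginid = l.loginid
        WHERE l.username = '$profile'
        ORDER BY s.datesubmitted DESC;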

    Read the article

  • Creating an Extremely Large Index in Oracle

    - by Rudiger
    Can someone look at the linked reference and explain to me the precise statements to run? Oracle DBA's Guide: Creating a Large Index. Here's what I came up with...

        CREATE TEMPORARY TABLESPACE ts_tmp
          TEMPFILE 'E:\temp01.dbf' SIZE 10000M REUSE
          AUTOEXTEND ON
          EXTENT MANAGEMENT LOCAL;

        ALTER USER me TEMPORARY TABLESPACE ts_tmp;

        CREATE UNIQUE INDEX big_table_idx ON big_table ( record_id );

        DROP TABLESPACE ts_tmp;

    Edit 1: After this index was created, I ran an explain plan for a simple query and get this error:

        ORA-00959: tablespace 'TS_TMP' does not exist

    It seems like it's not temporary at all... :(
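
    The ORA-00959 follows from the script itself: the user's temporary tablespace still points at ts_tmp after it is dropped. A sketch of the fix, assuming the original temporary tablespace was named temp (an assumption; check the user's previous setting): switch the user back before dropping the scratch tablespace.

        ALTER USER me TEMPORARY TABLESPACE temp;
        DROP TABLESPACE ts_tmp INCLUDING CONTENTS AND DATAFILES;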

    Read the article

  • How to instantly terminate an unsupervised script on demand?

    - by errr
    I have a GUI which resembles an interpreter. It allows the user to write a script in Jython (an implementation of Python in Java) and run it whenever he wants. Apart from that, I also wish to allow the user to instantly terminate the run whenever he wants. Thing is, I don't really know how to do it. The script is being run on a different Thread, but I don't know of any safe way to stop/interrupt/terminate a thread in the middle of its run, let alone without knowing what is being run by the thread/script (it could be a simple task or maybe some sort of a heavy SQL query against a DB, and a DB is something which requires careful resource handling). How can I instantly terminate such a run on demand?

    Read the article

  • Union on ValuesQuerySet in Django

    - by Wuxab
    I've been searching for a way to take the union of querysets in Django. From what I read you can use query1 | query2 to take the union... This doesn't seem to work when using values(), though. I'd skip using values() until after taking the union, but I need to use annotate() to take the sum of a field and filter on it, and since there's no other way to do "group by", I have to use values(). The other suggestion I read was to use Q objects, but I can't think of a way that would work. Do I pretty much need to just use straight SQL, or is there a Django way of doing this? What I want is:

        q1 = mymodel.objects.filter(date__lt='2010-06-11').values('field1','field2').annotate(volsum=Sum('volume')).exclude(volsum=0)
        q2 = mymodel.objects.values('field1','field2').annotate(volsum=Sum('volume')).exclude(volsum=0)
        query = q1 | q2

    But this doesn't work, and as far as I know I need the values() part, because there's no other way for Sum to know how to act, since it's a 15-column table.

    Read the article

  • PHP/MySQL database user connection handling

    - by aviv
    What is the best way to handle MySQL database user connections in PHP? I have a web server running a PHP application on MySQL. I have created a database user for the application, dbuser1, with limited access: it can only query, insert into and update tables - no ALTER TABLE. Now the question is: should I use the same dbuser1 throughout my scripts, so that if 100 people are currently using my system, and hence 100 scripts are running in parallel, they all connect to the database as dbuser1? Or should I create a few users and assign each script a different user, or load-balance between the dbusers?
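
    As an aside, a single shared account is the common pattern here (connections, not users, are the scarce resource), and an account scoped like the one described is a two-statement setup. A sketch, assuming the schema is named mydb (a placeholder):

        CREATE USER 'dbuser1'@'localhost' IDENTIFIED BY 'secret';
        GRANT SELECT, INSERT, UPDATE ON mydb.* TO 'dbuser1'@'localhost';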

    Read the article

  • Compare range of IP addresses with start and end IP address in MySQL

    - by Maarten
    I have a MySQL table where I store IP ranges. It is set up so that the start address is stored as a long, as is the end address (plus an id and some other data). Now users add ranges by inputting a start and end IP address, and I would like to check that the new range is not already (partially) in the database. I know I can do a BETWEEN query, but that doesn't seem to work with two different columns, and I also cannot figure out how to pass a range to compare against. Doing it in a loop in PHP is a possibility, but with a range of e.g. 132.0.0.0-199.0.0.0 that would be quite a large number of queries...
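
    The standard trick is that two ranges overlap exactly when each starts before the other ends, so one query can test the candidate range against every stored row. A sketch, assuming a table named ip_ranges with columns start_ip and end_ip (hypothetical names), and :new_start/:new_end as PDO-style placeholders for the user's input already converted to longs:

        SELECT COUNT(*) FROM ip_ranges
        WHERE start_ip <= :new_end
          AND end_ip   >= :new_start;
        -- any count > 0 means the new range overlaps an existing one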

    Read the article

  • SQL Server Import table keeping default values

    - by Chrissi
    I am importing a table from one database to another in SQL Server 2008 by right-clicking the target database and choosing Tasks > Import Data... When I import the table I get the column names, types and data fine, but I lose the primary key, the identity specification and all the default values that were set in the source table. So now I have to set all the default values for each column again manually. Is there any way to bring the default values across with the import, or even to recover them afterwards with a query? I am VERY new to this and flailing in the dark, so forgive me if this is a really stupid question...
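
    The Import Data wizard only copies data, so the constraints have to be re-created from the source. The default definitions can at least be read back from the source database's catalog views; a sketch:

        SELECT t.name  AS table_name,
               c.name  AS column_name,
               dc.name AS constraint_name,
               dc.definition
        FROM sys.default_constraints dc
        JOIN sys.columns c ON c.object_id = dc.parent_object_id
                          AND c.column_id = dc.parent_column_id
        JOIN sys.tables t  ON t.object_id = dc.parent_object_id;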

    Read the article

  • More Ruby-like way of gsub from an array

    - by aharon
    My goal is to let there be x such that x("? world. what ? you say...", ['hello', 'do']) returns "hello world. what do you say...". I have something that works, but it seems far from the "Ruby way":

        def x(str, arr, rep='?')
          i = 0
          str.gsub(rep) { i += 1; arr[i-1] }
        end

    Is there a more idiomatic way of doing this? (Let me note that speed is the most important factor, of course.)

    Read the article

  • What does ER_WARN_FIELD_RESOLVED mean?

    - by VolkerK
    When SHOW WARNINGS after an EXPLAIN EXTENDED shows

        Note 1276 - Field or reference 'test.foo.bar' of SELECT #2 was resolved in SELECT #1

    what exactly does that mean, and what impact does it have? In my case it prevents MySQL from using what seems to be a perfectly good index. But this is not about fixing that specific query (it is an irrelevant test). I found http://dev.mysql.com/doc/refman/5.0/en/error-messages-server.html, but

        Error: 1276 SQLSTATE: HY000 (ER_WARN_FIELD_RESOLVED)
        Message: Field or reference '%s%s%s%s%s' of SELECT #%d was resolved in SELECT #%d

    isn't much of an explanation.

    Read the article

  • Convert the code from PHP to Ruby

    - by theband
        public function getFtime() {
            $records = array();
            $sql = "SELECT * FROM `finishedtime`";
            $result = mysql_query($sql);
            if (!$result) { throw new Exception(mysql_error()); }
            if (mysql_num_rows($result) == 0) { return $records; }
            while ($row = mysql_fetch_assoc($result)) { $records[] = $row; }
            return $records;
        }

    I am in the process of learning Ruby; can anyone convert this code into Ruby? That will help me understand how to run a query and return the fetched results.

    Read the article

  • Why would a TableAdapter populate a DataSet with "1/1/2000" for an entire timestamp column?

    - by Rob
    I have a TableAdapter filling a DataSet, and for some reason every select query populates my timestamp column with the value 1/1/2000 for every selected row. I first verified that original values are intact on the DB side; for the most part, they are, although it seems a few rows lost their original timestamp because of update queries performed programmatically before the issue was discovered. The DataColumn type is DateType, while the database (Postgres) column type is timestamp. Up until recently, this was all playing very nicely. I noticed the issue in a bound DataGridView control, and verified that this is not related to data binding by utilizing the 'Preview Data' option in the VS DataSet Editor. Usually when I notice unexpected values popping up in my application it's related to a mis-configured property, type conflict, or another silly mistake I've made. So after checking properties and types, and even recreating the TableAdapter from scratch, to say I'm a little baffled is an understatement. Does anyone have any ideas of what I could do to fix the issue and/or diagnose the cause?

    Read the article

  • CREATE USER in MS Access 2010

    - by Anakela
    I have been searching for several hours for how to create a user using SQL for a database I am building in Access. I found several sources on Microsoft's website that say I can use the CREATE USER command to do this. However, whenever I attempt to run the query, an error saying "Syntax error in CREATE TABLE statement" pops up. What am I doing wrong? Thank you in advance for your help! If you're interested, the code format I am attempting to use is as follows:

        CREATE USER username, password, pid.
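
    For comparison, the documented Access DDL separates the three tokens with spaces, not commas, and as I understand it user-level security DDL only runs against the older MDB format through a connection in ANSI-92 query mode (the ACCDB format dropped user-level security). A hedged sketch of the documented form, with placeholder values:

        CREATE USER johndoe secret 1234;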

    Read the article

  • Track mass email campaigns

    - by daeliur
    Litmus released an email analytics service last month (May 2010). See here: http://litmusapp.com/email-analytics They boast a very cool "read rate" tracking: they can track normal reads, skims, and glanced/deleted. How can they track skims and glanced/deleted? This to me seems impossible :) They also track forwards and prints. Prints are easy (they include a CSS @media print query with a background image). But forwards? I think this might be a combination of subsequent opens and different IPs/referring URLs. However, that would mean that if I open my mail and re-read it from another computer, it counts as a forward. Any ideas on this one? To summarize: Litmus Email Analytics says they can track email reads, skims, glanced/deleted, prints and forwards. How do they do it (skims, glanced/deleted and forwards)?

    Read the article

  • Subquery works in 9i but not in 11g

    - by Zsuetam
    The statement below works on Oracle 9i but not on Oracle 11g:

        SELECT *
        FROM ( SELECT 0 scrnfail_rate, '9' zz, 7 hh FROM DUAL
               UNION ALL
               SELECT 0 scrnfail_rate, '9' zz, 7 hh FROM DUAL )
        WHERE zz IS NOT NULL
          AND TO_CHAR (hh) NOT IN
              ( SELECT DECODE ( scrnfail_rate, 0, -1,
                                ROUND (LEVEL * 1 / (scrnfail_rate / 100))
                                - ROUND (1 / (2 * (scrnfail_rate / 100))) ) AS nno
                FROM DUAL
                WHERE NVL (scrnfail_rate, 0) > 0
                CONNECT BY LEVEL <= ROUND (9 * scrnfail_rate / 100) )

    It looks like Oracle 11g is ignoring the WHERE clause, or even the DECODE, in the subquery. This query should return two rows, as it does on Oracle 9i, but on Oracle 11g EE 11.2.0.1.0 - 64bit it results in ORA-01476: divisor is equal to zero. Can anyone help? Thanks!
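
    This has the shape of an evaluation-order problem: neither DECODE nor the WHERE clause guarantees the division is never evaluated once the optimizer rewrites the subquery. A defensive rewrite makes the divisor itself safe with NULLIF, which turns a zero rate into NULL instead of a division by zero; a sketch of the guarded expression:

        ROUND (LEVEL * 1 / NULLIF (scrnfail_rate / 100, 0))
        - ROUND (1 / NULLIF (2 * (scrnfail_rate / 100), 0))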

    Read the article

  • How to make a thread try to reconnect to the Database x times using JDBCTemplate

    - by gillJ
    Hi, I have a single thread trying to connect to a database using JdbcTemplate as follows:

        JdbcTemplate jdbcTemplate = new JdbcTemplate(dataSource);
        try {
            jdbcTemplate.execute(new CallableStatementCreator() {
                @Override
                public CallableStatement createCallableStatement(Connection con) throws SQLException {
                    return con.prepareCall(query);
                }
            }, new CallableStatementCallback() {
                @Override
                public Object doInCallableStatement(CallableStatement cs) throws SQLException {
                    cs.setString(1, subscriberID);
                    cs.execute();
                    return null;
                }
            });
        } catch (DataAccessException dae) {
            throw new CougarFrameworkException(
                "Problem removing subscriber from events queue: " + subscriberID, dae);
        }

    I want to make sure that if the above code throws DataAccessException or SQLException, the thread waits a few seconds, tries to re-connect, say, five more times, and then gives up. How can I achieve this? Also, if the database goes down during execution and comes back up again, how can I ensure that my program recovers and continues running instead of throwing an exception and exiting?

    Read the article

  • MySQL ALTER TABLE on very large table - is it safe to run it?

    - by Timothy Mifsud
    I have a MySQL database with one particular MyISAM table of over 4 million rows. I update this table about once a week, adding about 2000 new rows. After updating, I then perform the following statement:

        ALTER TABLE x ORDER BY PK DESC

    i.e. I re-order the table by the primary key field, descending. This has not given me any problems on my development machine (Windows with 3GB memory), but even though I had run it successfully three times on the production Linux server (with 512MB RAM, producing the sorted table in about 6 minutes each time), the last time I tried it I had to stop the query after about 30 minutes and rebuild the database from a backup. I have started to wonder whether a 512MB server can cope with that statement on such a large table, as I have read that a temporary table is created to perform the ALTER TABLE command. And if it can be safely run, what should the expected time for the alteration be? Thanks in advance, Tim
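
    Since ALTER TABLE ... ORDER BY rewrites the whole table into a copy (hence the temporary-space and memory pressure), a cheaper pattern may be to skip the weekly rebuild entirely and let the index deliver the order at read time; a sketch:

        -- the PK index already serves this order without rewriting the table
        SELECT * FROM x ORDER BY PK DESC;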

    Read the article

  • Low cost way to host a large table yet keep the performance scalable?

    - by Leo Liang
    I have a growing table storing time-series data: 500M entries now, with 200K new records every day. The total size is around 15GB for now. My clients query the table mostly via a PHP script, and the size of the result set is around 10K records (not very large):

        select * from T where timestamp > X and timestamp < Y and additionFilters

    I want this operation to be cheap. Currently the table is hosted in Postgres 7 on a single box with 16GB of memory, and I would love some good suggestions for hosting this at low cost while still allowing me to scale up for performance if needed. The workload on the table is:

        1. Query: 90%
        2. Insert: 9.9%
        3. Update: 0.1% <-- very rare
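
    For a read-heavy pattern like this, the first cheap win is making sure the time filter is index-served; a sketch, assuming the table and column names from the query above:

        CREATE INDEX t_timestamp_idx ON T (timestamp);
        -- range predicates on timestamp can then use an index scan
        -- instead of reading all 500M rows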

    Read the article

  • Can Oracle types be updated like tables?

    - by Omnipresent
    I am converting GTTs to Oracle types, as explained in an excellent answer by APC. However, some GTTs are updated based on a select query against another table. For example:

        UPDATE my_gtt_1 c
        SET (street, city, STATE, zip) =
            ( SELECT src.unit_address, src.unit_city, src.unit_state, src.unit_zip_code
              FROM ( SELECT mbr.ROWID row_id,
                            unit_address,
                            RTRIM(a.unit_city) unit_city,
                            RTRIM(a.unit_state) unit_state,
                            RTRIM(a.unit_zip_code) unit_zip_code
                     FROM table_1 b, table_2 a, my_gtt_1 mbr
                     WHERE type = 'ABC'
                       AND id = b.ssn_head
                       AND a.h_id = b.h_id
                       AND row_id >= v_start_row
                       AND row_id <= v_end_row ) src
              WHERE c.ROWID = src.row_id )
        WHERE state IS NULL OR state = ' ';

    If my_gtt_1 were not a global temporary table but an Oracle collection type, would it still be possible to do updates this complex? Or in these cases are we better off using the global temporary table?

    Read the article
