Search Results

Search found 17583 results on 704 pages for 'query analyzer'.


  • How to export Oracle statistics

    - by A_M
    Hi, I am writing some new SQL queries and want to check the query plans that the Oracle query optimiser would come up with in production. My development database doesn't have anything like the data volumes of the production database. How can I export database statistics from a production database and re-import them into a development database? I don't have access to the production database, so I can't simply generate explain plans on production without going through a third party hosting organisation. This is painful. So I want a local database which is in some way representative of production on which I can try out different things. Also, this is for a legacy application. I'd like to "improve" the schema by adding appropriate indexes, constraints, etc. I need to do this in my development database first, before rolling out to test and production. If I add an index and re-generate statistics in development, then the statistics will be generated around the development data volumes, which makes it difficult to assess the impact of my changes on production. Does anyone have any tips on how to deal with this? Or is it just a case of fixing unexpected behaviour once we've discovered it on production? I do have a staging database with production volumes, but again I have to go through a third party to run queries against this, which is painful. So I'm looking for ways to cut out the middle man as much as possible. All this is using Oracle 9i. Thanks.
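
    One commonly described approach (a sketch added here, not from the original question; APP_OWNER and PROD_STATS are placeholder names) is to export the production statistics into a statistics table with DBMS_STATS, move that table to development, and import it there, so the optimiser sees production-like statistics without the production data volumes:

        -- On production (would need to be run by the hosting organisation)
        EXEC DBMS_STATS.CREATE_STAT_TABLE(ownname => 'APP_OWNER', stattab => 'PROD_STATS');
        EXEC DBMS_STATS.EXPORT_SCHEMA_STATS(ownname => 'APP_OWNER', stattab => 'PROD_STATS');

        -- Move the PROD_STATS table to development (e.g. with exp/imp), then on development:
        EXEC DBMS_STATS.IMPORT_SCHEMA_STATS(ownname => 'APP_OWNER', stattab => 'PROD_STATS');

    Explain plans generated in development should then look much closer to production's, although differences in data distribution can still matter.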

    Read the article

  • Is it right that Strophe.addHandler reads only first node from response?

    - by markcial
    I'm starting to learn the Strophe library, and when I use addHandler to parse the response it seems to read only the first node of the XML response. So when I receive XML like this:

        <body xmlns='http://jabber.org/protocol/httpbind'>
            <presence xmlns='jabber:client' from='test2@localhost' to='test2@localhost' type='avaliable' id='5593:sendIQ'>
                <status/>
            </presence>
            <presence xmlns='jabber:client' from='test@localhost' to='test2@localhost' xml:lang='en'>
                <status />
            </presence>
            <iq xmlns='jabber:client' from='test2@localhost' to='test2@localhost' type='result'>
                <query xmlns='jabber:iq:roster'>
                    <item subscription='both' name='test' jid='test@localhost'>
                        <group>test group</group>
                    </item>
                </query>
            </iq>
        </body>

    with the handler testHandler used like this:

        connection.addHandler(testHandler, null, "presence");
        function testHandler(stanza) {
            console.log(stanza);
        }

    it only logs:

        <presence xmlns='jabber:client' from='test2@localhost' to='test2@localhost' type='avaliable' id='5593:sendIQ'>
            <status/>
        </presence>

    What am I missing? Is this the expected behaviour? Should I add more handlers to get the other stanzas? Thanks in advance.

    Read the article

  • LINQ to SQL - How to efficiently do either an AND or an OR search for multiple criteria

    - by Dan Diplo
    I have an ASP.NET MVC site (which uses Linq To Sql for the ORM) and a situation where a client wants a search facility against a bespoke database whereby they can choose to either do an 'AND' search (all criteria match) or an 'OR' search (any criteria match). The query is quite complex and long, and I want to know if there is a simple way I can make it do both without having to create and maintain two different versions of the query. For instance, the current 'AND' search looks something like this (but this is a much simplified version):

        private IQueryable<SampleListDto> GetSampleSearchQuery(SamplesCriteria criteria)
        {
            var results = from r in Table
                          where (r.Id == criteria.SampleId)
                             && (r.Status.SampleStatusId == criteria.SampleStatusId)
                             && (r.Job.JobNumber.StartsWith(criteria.JobNumber))
                             && (r.Description.Contains(criteria.Description))
                          select r;
        }

    I could copy this and replace the && with || operators to get the 'OR' version, but I feel there must be a better way of achieving this. Does anybody have any suggestions how this can be achieved in an efficient and flexible way that is easy to maintain? Thanks.

    Read the article

  • How can I free all allocated memory at once?

    - by Tommy
    Here is what I am working with:

        char* qdat[][NUMTBLCOLS];
        char** tdat[];
        char* ptr_web_data;

        // Loop thru each table row of the query result set
        for(row_index = 0; row_index < number_rows; row_index++)
        {
            // Loop thru each column of the query result set and extract the data
            for(col_index = 0; col_index < number_cols; col_index++)
            {
                ptr_web_data = (char*) malloc((strlen(Data) + 1) * sizeof(char));
                memcpy (ptr_web_data, column_text, strlen(column_text) + 1);
                qdat[row_index][web_data_index] = ptr_web_data;
            }
        }
        tdat[row_index] = qdat[col_index];

    After the data is used, the memory allocated is released one at a time using free().

        for(row_index = 0; row_index < number_rows; row_index++)
        {
            // Loop thru all columns used
            for(col_index = 0; col_index < SARWEBTBLCOLS; col_index++)
            {
                // Free memory block pointed to by results set array
                free(tdat[row_index][col_index]);
            }
        }

    Is there a way to release all the allocated memory at once, for this array? Thank You.

    Read the article

  • Selecting data in clustered index order without ORDER BY

    - by kcrumley
    I know there is no guarantee without an ORDER BY clause, but are there any techniques to tune SQL Server tables so they're more likely to return rows in clustered index order, without having to specify ORDER BY every single time I want to run a super-quick ad hoc query? For example, would rebuilding my clustered index or updating statistics help? I'm aware that I can't count on a query like: select * from AuditLog where UserId = 992 to return records in the order of the clustered index, so I would never build code into an application based on this assumption. But for simple ad hoc queries, on almost all of my tables, the data consistently comes out in clustered index order, and I've gotten used to being able to expect the most recent results to be at the bottom. Out of all the many tables we use, I've only noticed two ever giving me results in an unpredicted order. This is really just an annoyance, but it would be nice to be able to minimize it. In case this is relevant because of page boundary issues or something like that, I should mention that one of the tables that has inconsistent ordering, the AuditLog table, is the longest table we have that has a clustered index on an identity column. Also, this database has recently been moved from SQL 2005 to SQL 2008, and we've seen no noticeable change in this behavior.
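
    For ad hoc use, one point often made (an added note, not from the original question) is that an explicit ORDER BY on the clustering key usually costs very little, because a plan that already reads the clustered index in order needs no extra sort step. A sketch, where AuditLogId is an assumed name for the identity column the question says the clustered index is on:

        -- AuditLogId is a placeholder for the table's identity / clustered-key column
        SELECT *
        FROM AuditLog
        WHERE UserId = 992
        ORDER BY AuditLogId;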

    Read the article

  • How do I include a frameset under CGI.pm?

    - by neversaint
    I want to have a cgi-script that does two things: take the input from a form, and generate results based on the input values in a frame. I also want the frame to exist only after the result is generated/printed. Below is the simplified code of what I want to do, but somehow it doesn't work. What's the right way to do it?

        #!/usr/local/bin/perl
        use CGI ':standard';
        print header;
        print start_html('A Simple Example'),
            h1('A Simple Example'),
            start_form,
            "What's your name? ", textfield('name'),
            p,
            "What's the combination?", p,
            checkbox_group(-name=>'words',
                           -values=>['eenie','meenie','minie','moe'],
                           -defaults=>['eenie','minie']),
            p,
            "What's your favorite color? ",
            popup_menu(-name=>'color',
                       -values=>['red','green','blue','chartreuse']),
            p,
            submit,
            end_form,
            hr;

        if (param()) {
            # begin create the frame
            print <<EOF;
        <html><head><title>$TITLE</title></head>
        <frameset rows="10,90">
        <frame src="$script_name/query" name="query">
        <frame src="$script_name/response" name="response">
        </frameset>
        EOF
            # Finish creating frame
            print "Your name is: ",em(param('name')), p,
                  "The keywords are: ",em(join(", ",param('words'))), p,
                  "Your favorite color is: ",em(param('color')), hr;
        }
        print end_html;

    Read the article

  • What is happening in this T-SQL code? (Concatenting the results of a SELECT statement)

    - by Ben McCormack
    I'm just starting to learn T-SQL and could use some help in understanding what's going on in a particular block of code. I modified some code in an answer I received in a previous question, and here is the code in question:

        DECLARE @column_list AS varchar(max)

        SELECT @column_list = COALESCE(@column_list, ',') +
               'SUM(Case When Sku2=' + CONVERT(varchar, Sku2) + ' Then Quantity Else 0 End) As [' +
               CONVERT(varchar, Sku2) + ' - ' + Convert(varchar,Description) + '],'
        FROM OrderDetailDeliveryReview
        Inner Join InvMast on SKU2 = SKU and LocationTypeID=4
        GROUP BY Sku2, Description
        ORDER BY Sku2

        Set @column_list = Left(@column_list,Len(@column_list)-1)

        Select @column_list

    One row is returned:

        ,SUM(Case When Sku2=157 Then Quantity Else 0 End) As [157 -..., SUM(Case ...

    The T-SQL code does exactly what I want, which is to make a single result based on the results of a query, which will then be used in another query. However, I can't figure out how the SELECT @column_list = ... statement is putting multiple values into a single string of characters by being inside a SELECT statement. Without the assignment to @column_list, the SELECT statement would simply return multiple rows. How is it that, by having the variable within the SELECT statement, the results get "flattened" down into one value? How should I read this T-SQL to properly understand what's going on?
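
    A minimal illustration of the mechanism (an added sketch, not from the original post): when a SELECT assigns to a variable, SQL Server performs the assignment once per qualifying row, so each row's expression appends to whatever the variable held after the previous row, and only the final value survives. Using the same table:

        DECLARE @skus varchar(max)

        SELECT @skus = COALESCE(@skus + ', ', '') + CONVERT(varchar, Sku2)
        FROM OrderDetailDeliveryReview
        GROUP BY Sku2

        SELECT @skus   -- a single row such as '157, 158, 200, ...' (values are illustrative)

    Worth noting: this row-by-row concatenation into a variable is widely used but not formally guaranteed behaviour, especially in combination with ORDER BY.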

    Read the article

  • How do I get this sql to linq? Multiple groups

    - by Dwight T
    For a db person, LINQ can be frustrating. I need to convert the following SQL into LINQ:

        SELECT COUNT(o.objectiveid), COUNT(distinct r.ReviewId), l.Abbreviation
        FROM Objective o
        JOIN Review r on r.ReviewId = o.ReviewId and r.ReviewPeriodId = 3 and r.IsDeleted = 0
        JOIN Position p on p.PositionId = r.EmployeePositionId and p.DivisionId = 2
        JOIN Location l on l.LocationId = p.LocationId
        GROUP BY l.Abbreviation

    The nested group-by example might be the way I have to go, but I'm not sure. Doing one group by, I have used the following code:

        var query = from rev in db.Reviews
                        .Where(r => r.IsDeleted == false && r.ReviewPeriodId == reviewPeriodId)
                    from obj in db.Objectives
                        .Where(o => o.ReviewId == rev.ReviewId && o.IsDeleted == false)
                    from pos in db.Positions
                        .Where(p => rev.EmployeePositionId == p.PositionId && p.IsDeleted == false && p.DivisionId == divisionId)
                    from loc in db.Locations
                        .Where(l => pos.LocationId == l.LocationId)
                    group loc by loc.Abbreviation into locgroup
                    select new ReportResults
                    {
                        KeyId = 0,
                        Description = locgroup.Key,
                        Count = locgroup.Count()
                    };

        return query.ToList();

    What is the correct way? Thanks

    Read the article

  • Codeigniter php activerecord orm limit and offset

    - by user2167174
    I am a bit stuck with this problem I have in phpactiverecord. What I am trying to do is pagination, so I need to limit and offset the query results. I am accessing all of the user's posts like so: $user->post. How can I query this to limit and offset the results? Thanks in advance. Code:

        public function office()
        {
            if (!$this->session->userdata('username')) {
                redirect(base_url());
            }
            $data = array();
            $data['posts'] = [];
            $user = User::find('first', array('id' => $this->session->userdata('id')));
            if ($user != null) {
                if ($user->post != null) {
                    foreach ($user->post as $post) {
                        $posts = array($post->name, $post->description, $post->date, '<a href="'.base_url().'Posts/edit/'.$post->id.'">Edit</a> <br /><a href="'.base_url().'Posts/delete/'.$post->id.'">Delete</a>');
                        array_push($data['posts'], $posts);
                    }
                    $this->table->set_heading('Name', 'Description', 'Date', '<a href="'.base_url().'Posts/create">+Add</a>');
                    $tmpl = array('table_open' => '<table class="table table-stripped table-bordered user-posts">');
                    $this->table->set_template($tmpl);
                    $data['table'] = $this->table->generate($data['posts']);
                }
                $this->load->view('template/header.php');
                $this->load->view('Users/office.php', $data);
                $this->load->view('template/footer.php');
            } else {
                redirect(base_url());
            }
        }

    Read the article

  • SSIS Transaction with Sql Transaction

    - by Mike
    I started with a package to make sure transactions are working correctly. The package-level transaction is set to Required. I have two Execute SQL Tasks: one deletes rows from a table and one does 1/0, to throw the error. Both tasks are set to the Supported transaction level and the Serializable IsolationLevel. That works. Now when I replace my two SQL tasks with two separate procedure calls, the first one, ChargeInterest, runs successfully but the second one, PaymentProcess, always fails, saying:

        [Execute SQL Task] Error: Executing the query "Exec [proc_xx_NotesReceivable_PaymentProcess] ..." failed with the following error: "Uncommittable transaction is detected at the end of the batch. The transaction is rolled back.". Possible failure reasons: Problems with the query, "ResultSet" property not set correctly, parameters not set correctly, or connection not established correctly.

    PaymentProcess is the second stored procedure. Both procedures have their own BEGIN, COMMIT and ROLLBACK inside the SP. I believe that the transactions are being successfully handled in ChargeInterest, because I can run the following without issues or the dreaded "you started with 0 and now have 1" transaction count error:

        EXEC [proc_XX_NotesReceivable_ChargeInterest] 'NR', 'M', 186, 300
        EXEC [proc_XX_NotesReceivable_PaymentProcess] 'NR', 186, 300
        --OR
        GO
        BEGIN TRAN
        EXEC [proc_XX_NotesReceivable_ChargeInterest] 'NR', 'M', 186, 300
        EXEC [proc_XX_NotesReceivable_PaymentProcess] 'NR', 186, 300
        ROLLBACK TRAN

    Now I have noticed that DTC does get kicked off in both instances. Why, I am not sure, because it is using the same connection. In the live example I can see the transaction get started, but it disappears if I put a breakpoint on the PreExecute event of the second stored procedure. What is the correct way to mingle SP transactions with SSIS transactions?
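
    One pattern sometimes suggested for stored procedures that may run inside an enlisted (DTC) transaction is to guard the commit/rollback with XACT_STATE(), so the procedure never tries to commit a transaction that has already become uncommittable. This is only an illustrative sketch for SQL Server 2005 or later, not code from the original question:

        BEGIN TRY
            BEGIN TRAN;
            -- ... the procedure's work ...
            COMMIT TRAN;
        END TRY
        BEGIN CATCH
            -- XACT_STATE() = -1 means the transaction is uncommittable and can only be rolled back
            IF XACT_STATE() <> 0
                ROLLBACK TRAN;
            -- log or re-raise the error as appropriate
        END CATCH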

    Read the article

  • Ordering by a max or a min from another table

    - by Paul Tomblin
    I have a table that consists of a unique id, and a few other attributes. It holds "schedules". Then I have another table that holds a list of all the times each schedule has or will "fire". This isn't the exact schema, but it's close:

        create table schedule (
            id varchar(40) primary key,
            attr1 int,
            attr2 varchar(20)
        );

        create table schedule_times (
            id varchar(40) foreign key schedule(id),
            fire_date date
        );

    I want to query the schedule table, getting the attributes and the next and previous fire_dates, in Java, sometimes ordering on one of the attributes, but sometimes ordering on either previous fire_date or the next fire_date. Ordering by the attributes is easy, I just stick an "order by" into the string while I'm building my prepared statement. I'm not even sure how to go about selecting the last fire_date and the next one in a single query - I know that I can find the next fire_date for a given id by doing a

        SELECT min(fire_date) FROM schedule_times WHERE id = ? AND fire_date > sysdate;

    and the similar thing for previous fire_date using max() and fire_date < sysdate. I'm just drawing a blank on how to incorporate that into a single select from the schedule so I can get both next and previous fire_date in one shot, and also how to order by either of those attributes.
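
    One way to get both dates in a single statement (an added sketch against the simplified schema above, using conditional aggregates and the same sysdate convention as the question):

        SELECT s.id, s.attr1, s.attr2,
               MAX(CASE WHEN st.fire_date <  sysdate THEN st.fire_date END) AS prev_fire_date,
               MIN(CASE WHEN st.fire_date >= sysdate THEN st.fire_date END) AS next_fire_date
        FROM schedule s
        LEFT JOIN schedule_times st ON st.id = s.id
        GROUP BY s.id, s.attr1, s.attr2
        ORDER BY next_fire_date;   -- or prev_fire_date, or any of the attributes

    The CASE expressions return NULL for rows on the wrong side of sysdate, and MAX/MIN ignore NULLs, so each schedule row ends up with its previous and next firing time in one pass.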

    Read the article

  • Show users a list of unique items on Java Google App Engine

    - by James
    I've been going round in circles with what must be a very simple challenge, but I want to do it the most efficient way from the start. So, I've watched Brett Slatkin's Google IO videos (2008 & 2009) about building scalable apps, including http://www.youtube.com/watch?v=AgaL6NGpkB8, and read the docs, but as a n00b I'm still not sure. I'm trying to build an app on GAEJ similar to the original 'hotornot' where a user is presented with an item which they rate. Once they rate it, they are presented with another one which they haven't seen before. My question is this: is it more efficient to do a query up front to grab x items (say 100) and put them in a list (stored in memcache?), or is it better to simply make a query for a new item after each rating? To keep track of the items a user has seen, I'm planning to keep those items' keys in a list property of the user's entity. Does that sound sensible? I've really got myself confused about this so any help would be much appreciated.

    Read the article

  • Raising events and object persistence in Django

    - by Mridang Agarwalla
    Hi, I have a tricky Django problem which didn't occur to me when I was developing it. My Django application allows a user to sign up and store his login credentials for another site. The Django application basically allows the user to search this other site (by scraping content off it) and returns the result to the user. For each query, it does a couple of queries of the other site. This seemed to work fine but sometimes, the other site slaps me with a CAPTCHA. I've written the code to get the CAPTCHA image and I need to return this to the user so he can type it in, but I don't know how. My search request (the query, the username and the password) in my Django application gets passed to a view which in turn calls the backend that does the scraping/search. When a CAPTCHA is detected, I'd like to raise a client side event or something along those lines, display the CAPTCHA to the user, and wait for the user's input so that I can resume my search. I would somehow need to persist my backend object between calls. I've tried pickling it but it doesn't work because I get the "Can't pickle 'lock' object" error. I don't know how to implement this, though. Any help/ideas? Thanks a ton.

    Read the article

  • Group / User based security. Table / SQL question

    - by Brett
    Hi, I'm setting up a group / user based security system. I have 4 tables as follows: user, groups, group_user_mappings, and acl, where acl is the mapping between an item_id and either a group or a user. The way I've done the acl table, I have 3 columns of note (there is actually a 4th as an auto-id, but that is irrelevant):

        col 1: item_id  (item to access)
        col 2: user_id  (user that is allowed to access)
        col 3: group_id (group that is allowed to access)

    So for example:

        item1, peter,
        item2, , group1
        item3, jane,

    so the acl will give access either to a user or to a group. Any one line in the ACL table will have either an item - user mapping or an item - group mapping. If I want to have a query that returns all objects a user has access to, I think I need a SQL query with a UNION, because I need 2 separate queries that join like:

        item - acl - group - user
        AND
        item - acl - user

    This I guess will work OK. Is this how it's normally done? Am I doing this the right way? It seems a little messy. I was thinking I could get around it by creating a single user group for each person, so I only ever deal with groups in my SQL, but this seems a little messy as well.
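
    A sketch of the UNION query described above (added for illustration; the exact table and column names, such as a user_id column on group_user_mappings, are assumptions based on the description):

        -- items granted to the user directly
        SELECT a.item_id
        FROM acl a
        WHERE a.user_id = :user_id

        UNION

        -- items granted through any group the user belongs to
        SELECT a.item_id
        FROM acl a
        JOIN group_user_mappings m ON m.group_id = a.group_id
        WHERE m.user_id = :user_id;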

    Read the article

  • I am not able to update form data to MySQL using PHP and jQuery

    - by Jimson Jose
    My problem is that I am unable to update the values entered in the form. I have attached all the files. I'm using a MySQL database to fetch data. What happens is that I'm able to add and delete records from the form using jQuery and PHP scripts against the MySQL database, but I am not able to update data which was retrieved from the database. The file structure is as follows: index.php is a file with jQuery functions; it displays a form for adding new data to MySQL using the save.php file, and the list of all records is viewed without refreshing the page (calling load-list.php to view all records from index.php works fine, and save.php saves data from the form). Delete is a function called from index.php to delete a record from the MySQL database (the function calling delete.php works fine). Update is a function called from index.php to update data using update-form.php, by retrieving a specific record from the MySQL table (works fine). The problem lies in updating data from update-form.php to update.php (in which the update query is written for MySQL). I have tried many ways - at last I figured out that data is not being transferred from update-form.php to update.php; there is a small problem in the jQuery AJAX function where it is not transferring data to the update.php page. Something is missing in calling the update.php page; it is not entering that page. I am a newbie in programming. I collected this script from many forums and made this one, so I was limited in solving this problem. I came to know that this is a good platform for me and many others where we get help to create new things. Please find the link below to download all files, which is of 35kb (virus free assurance): download mysmallform files in ZIPped format, including mysql query

    Read the article

  • Hibernate noob fetch join problem

    - by Bruce
    Hi all, I have two classes, Test2 and Test3. Test2 has an attribute test3 that is an instance of Test3. In other words, I have a unidirectional OneToOne association, with test2 having a reference to test3. When I select Test2 from the db, I can see that a separate select is being made to get the details of the associated test3 class. This is the famous 1+N selects problem. To fix this to use a single select, I am trying to use the fetch=join annotation, which I understand to be @Fetch(FetchMode.JOIN). However, with fetch set to join, I still see separate selects. Here are the relevant portions of my setup.

        hibernate.cfg.xml:

            <property name="max_fetch_depth">2</property>

        Test2:

            public class Test2 {
                @OneToOne (cascade=CascadeType.ALL, fetch=FetchType.EAGER)
                @JoinColumn (name="test3_id")
                @Fetch(FetchMode.JOIN)
                public Test3 getTest3() {
                    return test3;
                }

    NB I set the FetchType to EAGER out of desperation, even though it defaults to EAGER anyway for OneToOne mappings, but it made no difference. Thanks for any help!

    Edit: I've pretty much given up on trying to use FetchMode.JOIN - can anyone confirm that they have got it to work, i.e. produce a left outer join? In the docs I see that "Usually, the mapping document is not used to customize fetching. Instead, we keep the default behavior, and override it for a particular transaction, using left join fetch in HQL". If I do a left join fetch instead:

        query = session.createQuery("from Test2 t2 left join fetch t2.test3");

    then I do indeed get the results I want - i.e. a left outer join in the query.

    Read the article

  • need help fixing unique key in rails. rails is adding id causing duplicate key

    - by railsnew
    I need some help in fixing the below issue. I had transaction blocks in my Rails code like below:

        @sqlcontact = "INSERT INTO contacts (id,\"cid\", \"hphone\", mphone, provider, cemail, email, sms , mail, phone) VALUES ('"+@id1+"','" + @id1 + "', '"+ params[:hphone] + "', '"+params[:mphone]+ "', '" + params[:provider] + "', '" + params[:cemail]+ "', '" + @varemail+ "', '"+@varsms+ "', '"+ @varmail+"', '"+@varphone+"')"

    My app was deployed to Heroku, so I was advised by them to remove the transaction blocks. So I changed the above to:

        @cont = Contact.new(:id => @id1, :cid => @id1, :hphone => params[:hphone], :mphone => params[:mphone],
                            :provider => params[:provider], :cemail => params[:cemail], :email => @varemail,
                            :sms => @varsms, :mail => @varmail, :phone => @varphone)
        @cont.save

    My app also already had data stored. Now the problem is that when I try to save a record I keep getting the error:

        duplicate key value violates unique constraint "contacts_pkey"

    The error also shows the SQL query trying to insert data; however, in that SQL query I do not see the id value. As you can see from my code, I am passing the id. Then why is Rails not accepting it? Does it always include its own sequential id? Can I not overwrite the default Rails magic? And if it does that, does it not look at data that is already in the DB? I am really stuck here. What should I do? Should I just go back to my transaction block?
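
    One thing worth checking (an assumption, not something stated in the post): Heroku's database is PostgreSQL, and inserting rows with explicit ids does not advance the table's id sequence, so later inserts that rely on the sequence can collide with rows that are already there. The sequence can be resynchronised with something like the following, assuming the default sequence name contacts_id_seq:

        SELECT setval('contacts_id_seq', (SELECT MAX(id) FROM contacts));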

    Read the article

  • FreeTDS runs out of memory from DBD::Sybase

    - by skiphoppy
    When I add client charset = UTF-8 to my freetds.conf file, my DBD::Sybase program emits: Out of memory! and terminates. This happens when I call execute() on an SQL query statement that returns any ntext fields. I can return numeric data, datetimes, and nvarchars just fine, but whenever one of the output fields is ntext, I get this error. All these queries work perfectly fine without the UTF-8 setting, but I do need to handle some characters that throw warnings under the default character set. (See related question.) The error message is not formatted the same way other DBD::Sybase error messages seem to be formatted. I do get a message that a rollback() is being issued, though. (My false AutoCommit flag is being honored.) I think I read somewhere that FreeTDS uses the iconv program to convert between character sets; is it possible that this message is being emitted from iconv? If I execute the same query with the same freetds.conf settings in tsql (FreeTDS's command-line SQL shell), I don't get the error. I'm connecting to SQL Server. What do I need to do to get these queries to return successfully?

    Read the article

  • Alternatives to the Entity Framework for Serving/Consuming an OData Interface

    - by Egahn
    I'm researching how to set up an OData interface to our database. I would like to be able to pull/query data from our DB into Excel, as a start. Eventually I would like to have Excel run queries and pull data over HTTP from a remote client, including authentication, etc. I've set up a working (rickety) prototype so far, using the ADO.NET Entity Data Model wizard in Visual Studio, and VSTO to create a test Excel worksheet with a button to pull from that ADO.NET interface. This works OK so far, and I can query the DB using Linq through the entities/objects that are created by the ADO.NET EDM wizard. However, I have started to run into some problems with this approach. I've been finding the Entity Framework difficult to work with (and in fact, also difficult to research solutions to, as there's a lot of chaff out there regarding it and older versions of it). An example of this is my being unable to figure out how to set the SQL command timeout (as opposed to the HTTP request timeout) on the DataServiceContext object that the wizard generates for my schema, but that's not the point of my question. The real question I have is, if I want to use OData as my interface standard, am I stuck with the Entity Framework? Are there any other solutions out there (preferably open source) which can set up, serve and consume an OData interface, and are easier to work with and less bloated than the Entity Framework? I have seen mention of NHibernate as an alternative, but most of the comparison threads I've seen are a few years old. Are there any other alternatives out there now? Thanks very much!

    Read the article

  • [XPATH] Retrieve specific preceding sibling nodes attributes

    - by Matthieu BROUILLARD
    Is there an XPath way of recovering directly one specific attribute of the preceding sibling nodes of an XML node, using an XPath query? In the following example, I would like to retrieve the values of the alt attribute of each img node that precedes the div element marked with id='marker'.

        <content>
            <img alt="1" src="file.gif" />
            <img alt="2" src="file.gif" />
            <img alt="3" src="file.gif" />
            <img alt="4" src="file.gif" />
            <div id='marker'></div>
        </content>

    For this example, I want to retrieve the values 1 2 3 4. I use the following XPath query, //div[@id='marker']/preceding-sibling::img, in order to retrieve the node list I want:

        <img alt="1" src="file.gif"/>
        <img alt="2" src="file.gif"/>
        <img alt="3" src="file.gif"/>
        <img alt="4" src="file.gif"/>

    As it is a node list I can then iterate over the nodes to retrieve the attribute value I am looking for, but is there an XPath way of doing it? I would have expected to be able to write something like //div[@id='marker']/preceding-sibling::img@alt or //div[@id='marker']/preceding-sibling@alt::img, but I don't even know if it is possible once you have used an XPath axis like preceding-sibling.

    Read the article

  • SQL Server Collation / ADO.NET DataTable.Locale with different languages

    - by Turro
    Hi all, we have a WinForms app which stores data in SQL Server (2000, we are working on porting it to 2008) through ADO.NET (1.1, working on porting to 4.0). Everything works fine if I read data previously written in a Western-European locale (e.g. "test", "test ù"), but now we have to be able to mix Western and non-Western alphabets as well (e.g. "test - ???" - these are just random arabic chars). On the SQL Server side, the database has been set with the Latin1_General collation, and the field is an nvarchar(80). If I run a SQL SELECT statement (e.g. "SELECT * FROM MyTable WHERE field = 'test - ???'"; don't mind the "*" or the actual names) from Query Analyzer, I get no results; the same happens if I pass the SQL statement to an ADO.NET DataAdapter to fill a DataTable. My guess is that it has something to do with collation, but I don't know how to correct this: do I have to change the collation (SQL Server) to a different one? Or do I have to set the locale on the DataAdapter/DataTable (ADO.NET)? Thanks in advance to anyone who will help
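
    One detail worth checking (an added note, not part of the original question): in T-SQL a string literal without the N prefix is interpreted in the non-Unicode code page implied by the collation, so Arabic characters in a plain 'quoted' literal can be turned into '?' before the comparison against the nvarchar column ever happens. Prefixing the literal keeps it as Unicode ('???' below mirrors the question's own placeholder text):

        -- plain literal: converted to the varchar code page first, so Arabic characters may become '?'
        SELECT * FROM MyTable WHERE field = 'test - ???'

        -- N prefix makes the literal nvarchar, so it can match the Unicode data
        SELECT * FROM MyTable WHERE field = N'test - ???'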

    Read the article

  • Multiple inequality conditions (range queries) in NoSQL

    - by pableu
    Hi, I have an application where I'd like to use a NoSQL database, but I still want to do range queries over two different properties, for example select all entries between times T1 and T2 where the noiselevel is smaller than X. On the other hand, I would like to use a NoSQL/Key-Value store because my data is very sparse and diverse, and I do not want to create new tables for every new datatype that I might come across. I know that you cannot use multiple inequality filters for the Google Datastore (source). I also know that this feature is coming (according to this). I know that this is also not possible in CouchDB (source). I think I also more or less understand why this is the case. Now, this makes me wonder.. Is that the case with all NoSQL databases? Can other NoSQL systems make range queries over two different properties? How about, for example, Mongo DB? I've looked in the Documentation, but the only thing I've found was the following snippet in their docu: Note that any of the operators on this page can be combined in the same query document. For example, to find all document where j is not equal to 3 and k is greater than 10, you'd query like so: db.things.find({j: {$ne: 3}, k: {$gt: 10} }); So they use greater-than and not-equal on two different properties. They don't say anything about two inequalities ;-) Any input and enlightenment is welcome :-)

    Read the article

  • How to give alternating table rows different background colors using PHP

    - by Sam
    I have a table of data that is generated dynamically based on the contents stored in a mysql database. This is how my code looks:

        <table border="1">
          <tr>
            <th>Name</th>
            <th>Description</th>
            <th>URL</th>
          </tr>
          <?php
          $query = mysql_query("SELECT * FROM categories");
          while ($row = mysql_fetch_assoc($query)) {
              $catName = $row['name'];
              $catDes = $row['description'];
              $catUrl = $row['url'];
              echo "<tr class=''>";
              echo "<td>$catName</td>";
              echo "<td>$catDes</td>";
              echo "<td>$catUrl</td>";
              echo "</tr>";
          }
          ?>
        </table>

    Now if the table was static, then I would just assign each alternating table row one of 2 styles in repeated order:

        .whiteBackground { background-color: #fff; }
        .grayBackground { background-color: #ccc; }

    and that would be the end of that. However since the table rows are dynamically generated, how can I achieve this?

    Read the article

  • Using LINQ on observable with GroupBy and Sum aggregate

    - by Mark Oates
    I have the following block of code which works fine:

        var boughtItemsToday = (from DBControl.MoneySpent bought in BoughtItemDB.BoughtItems
                                select bought);
        BoughtItems = new ObservableCollection<DBControl.MoneySpent>(boughtItemsToday);

    It returns data from my MoneySpent table which includes ItemCategory, ItemAmount, ItemDateTime. I want to change it to group by ItemCategory and ItemAmount so I can see where I am spending most of my money, so I created a GroupBy query, and ended up with this:

        var finalQuery = boughtItemsToday.AsQueryable().GroupBy(category => category.ItemCategory);
        BoughtItems = new ObservableCollection<DBControl.MoneySpent>(finalQuery);

    Which gives me 2 errors:

        Error 1: The best overloaded method match for 'System.Collections.ObjectModel.ObservableCollection.ObservableCollection(System.Collections.Generic.List)' has some invalid arguments
        Error 2: Argument 1: cannot convert from 'System.Linq.IQueryable' to 'System.Collections.Generic.List'

    And this is where I'm stuck! How can I use the GroupBy and Sum aggregate function to get a list of my categories and the associated spend in 1 LINQ query?! Any help/suggestions gratefully received. Mark

    Read the article

  • LinqToSQL not updating database

    - by codegarten
    Hi. I created a database and dbml in Visual Studio 2010 using its wizards. Everything was working fine until I checked the table's data (also in Visual Studio Server Explorer) and none of my updates were there.

        using (var context = new CenasDataContext())
        {
            context.Log = Console.Out;
            context.Cenas.InsertOnSubmit(new Cena() { id = 1 });
            context.SubmitChanges();
        }

    This is the code I am using to update my database. At this point my database has one table with one field (PK) named ID.

        INSERT INTO [dbo].Cenas VALUES (@p0)
        -- @p0: Input Int (Size = -1; Prec = 0; Scale = 0) [1]
        -- Context: SqlProvider(Sql2008) Model: AttributedMetaModel Build: 4.0.30319.1

    This is the log from the execution (the context log printed to the console). The problem I'm having is that these updates are not persistent in the database. I mean that when I query my database (Visual Studio Server Explorer - new query) I see the table is empty, every time. I am using a SQL Server database file (.mdf).

    Read the article
