Search Results

Search found 25175 results on 1007 pages for 'dispatch table'.

  • libxml2 on iPhone

    - by mellkord
    I'm trying to parse an HTML file with libxml2. Usually this works fine, but not in this case:

    ```html
    <p>
      <b>Titles</b> (Some Text)
      <table>
        <tr>
          <td valign="top">…Something1...</td>
          <td align="right" valign="top">…Something2...</td>
        </tr>
      </table>
    </p>
    ```

    I run this query to get the first <td>:

    ```
    //p[b='Titles']/table/tr/td[0]
    ```

    but nothing is returned, because libxml thinks the <table> tag is not a child of the <p> tag but a sibling following it. And finally the question: WHY?
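
    A hedged aside, not part of the original post: HTML forbids <table> inside <p>, so libxml2's HTML parser closes the <p> early and the table becomes its following sibling, which is exactly what the asker observes. Note also that XPath indices start at 1, so td[0] never matches. A minimal sketch using lxml (Python's libxml2 binding):

    ```python
    # Minimal sketch, assuming lxml is installed: show where libxml2's HTML
    # parser actually puts the <table> when it repairs the markup.
    from lxml import etree

    html = b"<p><b>Titles</b> (Some Text)<table><tr><td>Something1</td><td>Something2</td></tr></table></p>"

    doc = etree.HTML(html)
    print(etree.tostring(doc, pretty_print=True).decode())
    # The dump shows <table> as a sibling *after* the closed <p>. A query that
    # matches the repaired tree (XPath is 1-indexed) would be:
    print(doc.xpath("//p[b='Titles']/following-sibling::table[1]//td[1]/text()"))
    ```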

  • How to create refresh statements for TableAdapter objects in Visual Studio?

    - by Mark Wilkins
    I am working on developing an ADO.NET data provider and an associated DDEX provider. I am unable to convince the Visual Studio TableAdapter Configuration Wizard to generate SQL statements to refresh the data table after inserts and updates. It generates the insert and delete statements, but will not produce the select statements to do the refresh.

    The functionality referred to can be accessed by dropping a table from the Server Explorer (inside Visual Studio) onto a DataSet (e.g., DataSet1.xsd). This creates a TableAdapter object and configures SELECT, UPDATE, DELETE, and INSERT statements. If you right-click the TableAdapter object, the context menu has a "Configure" option that starts the TableAdapter Configuration Wizard. The first dialog of that wizard has an Advanced Options button, which leads to an option titled "Refresh the data table". When used with SQL Server tables, that option causes a statement of the form "SELECT field1, field2, ..." to be appended to the TableAdapter's InsertCommand and UpdateCommand.

    Do you have any idea what type of property or interface might need to be exposed from the DDEX provider (or maybe the ADO.NET data provider) in order to make Visual Studio add those refresh statements to the update/insert commands? The MSDN documentation for the Advanced SQL Generation Options dialog box has a note stating, "Refreshing the data table is only supported on databases that support batching of SQL statements." This seems to imply that a .NET data provider might need to expose some property indicating that such behavior is supported, but I cannot find it. Any ideas?

  • JavaScript parentNode and innerHTML do not work properly in Firefox

    - by Uda
    Hello, I have a problem when using JavaScript in Firefox. Here is the HTML:

    ```html
    <HTML>
    <BODY>
    <TABLE align="center">
      <TR><td><input type=button name=submit onclick="javascript:submitIt()" value="submit" align="center"></td></TR>
      <TD>
        <TABLE>
          <DIV STYLE="display:none position:relative;">
            <FORM ID="formA" NAME="formA" METHOD="post" ACTION="b.html" TARGET="_blank">
              <INPUT TYPE="hidden" NAME=aInput VALUE="1294457296">
            </FORM>
          </DIV>
        </TABLE>
      </TD>
    </TABLE>
    </BODY></HTML>
    ```

    and here's the JavaScript:

    ```javascript
    function submitIt() {
      oForm = document.getElementById("formA");
      strRequest = "<HTML><BODY>" + oForm.parentNode.innerHTML + "</BODY></HTML>";
      newDoc = window.open("", "", "");
      newDoc.document.write(strRequest);
    }
    ```

    The problem is that when you click the submit button on this page, you get a new page with this source:

    ```html
    <HTML><BODY><form id="formA" name="formA" method="post" action="b.html" target="_blank"></form>
    <input name="aInput" value="1294457296" type="hidden">
    </BODY></HTML>
    ```

    but it's supposed to be:

    ```html
    <HTML><BODY><form id="formA" name="formA" method="post" action="b.html" target="_blank">
    <input name="aInput" value="1294457296" type="hidden"></form>
    </BODY></HTML>
    ```

    Could anyone please help? It will be really appreciated. Thanks!

  • Dependency Injection with Massive ORM: dynamic trouble

    - by Sergi Papaseit
    I've started working on an MVC 3 project that needs data from an enormous existing database. My first idea was to go ahead and use EF 4.1 and create a bunch of POCOs to represent the tables I need, but I'm starting to think the mapping will get overly complicated, as I only need some of the columns in some of the tables (thanks to Steven for the clarification in the comments). So I thought I'd give Massive ORM a try.

    I normally use a Unit of Work implementation so I can keep everything nicely decoupled and can use Dependency Injection. This is part of what I have for Massive:

    ```csharp
    public interface ISession
    {
        DynamicModel CreateTable<T>() where T : DynamicModel, new();
        dynamic Single<T>(string where, params object[] args) where T : DynamicModel, new();
        dynamic Single<T>(object key, string columns = "*") where T : DynamicModel, new();
        // Some more methods supported by Massive here
    }
    ```

    And here's my implementation of the above interface:

    ```csharp
    public class MassiveSession : ISession
    {
        public DynamicModel CreateTable<T>() where T : DynamicModel, new()
        {
            return new T();
        }

        public dynamic Single<T>(string where, params object[] args) where T : DynamicModel, new()
        {
            var table = CreateTable<T>();
            return table.Single(where, args);
        }

        public dynamic Single<T>(object key, string columns = "*") where T : DynamicModel, new()
        {
            var table = CreateTable<T>();
            return table.Single(key, columns);
        }
    }
    ```

    The problem comes with the First(), Last() and FindBy() methods. Massive is based around a dynamic object called DynamicModel and doesn't define any of the above methods; it handles them through a TryInvokeMember() implementation overridden from DynamicObject instead:

    ```csharp
    public override bool TryInvokeMember(InvokeMemberBinder binder, object[] args, out object result)
    {
    }
    ```

    I'm at a loss on how to "interface" those methods in my ISession. How could my ISession provide support for First(), Last() and FindBy()? Put another way, how can I use all of Massive's capabilities and still be able to decouple my classes from data access?
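
    A hedged aside, not from the post: the crux is that DynamicObject resolves First()/Last()/FindBy() at run time inside TryInvokeMember, so no compile-time interface can list them. One option is a single generic pass-through method on the facade that forwards a method name to the dynamic object. A rough Python analogue of that shape (all names hypothetical):

    ```python
    # Hypothetical sketch: a fixed facade over a dynamically-dispatched backend,
    # loosely analogous to wrapping DynamicObject.TryInvokeMember.
    class DynamicTable:
        def __getattr__(self, name):          # resolves First/Last/FindBy at run time
            def handler(*args, **kwargs):
                return f"{name} called with {args or kwargs}"
            return handler

    class Session:
        def __init__(self, table):
            self._table = table

        def invoke(self, method, *args, **kwargs):
            # one stable entry point; the dynamic lookup stays hidden inside
            return getattr(self._table, method)(*args, **kwargs)

    s = Session(DynamicTable())
    print(s.invoke("FindBy", name="bob"))     # FindBy called with {'name': 'bob'}
    ```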

  • Creating an SQL variable character column > 255 characters supporting multiple databases

    - by Piers
    I have an application that stores data through an ODBC data source of the user's choosing. So far it has worked well on a range of database systems (e.g. JET, Oracle, SQL Server), as the SQL syntax is fairly simple. Now I am running into a problem where I need to store more than 255 characters in my strings. Previously I created the table using column type VARCHAR(255). Now if I try to create a table using, e.g., VARCHAR(512), it falls over on Access databases. I know that I can use the MEMO type for Access, but this is non-standard SQL and will thus likely fail on other database systems (e.g. Oracle).

    Is there any widely supported SQL standard for creating text columns wider than 255 characters, or do I need to find another solution? The alternatives seem to me to be:

    1) Profile the database system and customise the SQL CREATE TABLE command based on the database system (see the sketch after this entry). I don't like this as it defeats the purpose of using ODBC.

    2) Add extra columns of 255 chars as required (e.g. LONGSTRING1, LONGSTRING2, ...) and concatenate after reading. I don't like this because it means the number of columns can vary between tables and it complicates read/write.

    Are there any other viable alternatives to these two options? Or is it possible to have an SQL-compliant CREATE TABLE command, supported by the majority of database vendors, that supports strings longer than 255 chars?
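
    Not from the post: SQL:1999 does define CLOB for long character data, but ODBC driver support for it is uneven, so in practice option 1 is common. A hedged sketch of option 1 as a small dispatch table; the type names come from each vendor's documentation, so verify them against the versions you actually target:

    ```python
    # Hedged sketch: pick a long-text column type per backend. The mapping and
    # helper below are illustrative, not from the original post.
    LONG_TEXT_TYPE = {
        "access":    "MEMO",           # Jet/Access type for text > 255 chars
        "oracle":    "CLOB",           # or VARCHAR2(4000) if 4000 chars suffice
        "sqlserver": "VARCHAR(MAX)",   # SQL Server 2005+; TEXT on older versions
        "standard":  "CLOB",           # SQL:1999 character large object
    }

    def create_table_sql(backend: str, table: str = "notes") -> str:
        col_type = LONG_TEXT_TYPE.get(backend, LONG_TEXT_TYPE["standard"])
        return f"CREATE TABLE {table} (id INTEGER NOT NULL, body {col_type})"

    print(create_table_sql("oracle"))
    # CREATE TABLE notes (id INTEGER NOT NULL, body CLOB)
    ```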

  • How to work with threading and ConcurrentQueue<T>

    - by dboarman
    I am trying to figure out the best way of working with a queue. I have a process that returns a DataTable. Each DataTable, in turn, is merged with the previous DataTable. There is one problem: too many records to hold until the final BulkCopy (OutOfMemory). So, I have determined that I should process each incoming DataTable immediately. I'm thinking about ConcurrentQueue<T>... but I don't see how the WriteQueuedData() method would know to dequeue a table and write it to the database. For instance:

    ```csharp
    public class TableTransporter
    {
        private ConcurrentQueue<DataTable> tableQueue = new ConcurrentQueue<DataTable>();

        public TableTransporter()
        {
            tableQueue.OnItemQueued += new EventHandler(WriteQueuedData);   // no events available
        }

        public void ExtractData()
        {
            DataTable table;
            // perform data extraction
            tableQueue.Enqueue(table);
        }

        private void WriteQueuedData(object sender, EventArgs e)
        {
            BulkCopy(e.Table);
        }
    }
    ```

    My first question is, aside from the fact that I don't actually have any events to subscribe to, if I call ExtractData() asynchronously, will this be all that I need? Second, is there something I'm missing about the way ConcurrentQueue<T> functions, such that it needs some form of trigger to work asynchronously with the queued objects?
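
    A hedged note, not from the post: ConcurrentQueue<T> exposes no enqueue events; the usual shape is a dedicated consumer that blocks on the queue (in .NET 4, BlockingCollection<T>, which wraps a ConcurrentQueue<T> by default, provides exactly that via GetConsumingEnumerable()). A rough Python analogue of the producer/consumer pattern:

    ```python
    # Rough sketch of a blocking consumer; stand-in names, not the asker's API.
    import queue
    import threading

    table_queue = queue.Queue()            # analogue of the ConcurrentQueue

    def write_queued_data():
        while True:
            table = table_queue.get()      # blocks; no enqueue event needed
            if table is None:              # sentinel: extraction finished
                break
            print("bulk copy of", table)   # stand-in for BulkCopy(table)

    worker = threading.Thread(target=write_queued_data)
    worker.start()

    for i in range(3):                     # stand-in for ExtractData() calls
        table_queue.put(f"DataTable #{i}")
    table_queue.put(None)                  # tell the consumer to stop
    worker.join()
    ```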

  • Please help optimize a long-running query (LEFT OUTER JOIN with 2 subqueries)

    - by 46and2
    Hi all. The query I need help with is:

    ```sql
    SELECT d.bn, d.4700, d.4500, ..., p.`Activity Description`
    FROM (
        SELECT temp.bn, temp.4700, temp.4500, ....
        FROM `tdata` temp
        GROUP BY temp.bn
        HAVING (COUNT(temp.bn) = 1)
    ) d
    LEFT OUTER JOIN (
        SELECT temp2.bn, max(temp2.FPE) AS max_fpe, temp2.`Activity Description`
        FROM `pdata` temp2
        GROUP BY temp2.bn
    ) p ON p.bn = d.bn;
    ```

    The ... represents other fields that aren't really important to solving this problem. The issue is with the second subquery: it is not using the index I have created, and I am not sure why; it seems to be because of the way TEXT fields are handled. The first subquery uses the index I have created and runs quite snappily; however, an EXPLAIN on the second shows 'Using temporary; Using filesort'. Please see the indexes I have created in the table create statements below. Can anyone help me optimize this?

    By way of quick explanation, the first subquery is meant to select only records that have unique bn's; the second, while it looks a bit wacky (with the MAX function there, which is not being used in the result set), is making sure that only one record from the right part of the join is included in the result set. My table create statements are:

    ```sql
    CREATE TABLE `tdata` (
      `BN` varchar(15) DEFAULT NULL,
      `4000` varchar(3) DEFAULT NULL,
      `5800` varchar(3) DEFAULT NULL,
      ....
      KEY `BN` (`BN`),
      KEY `idx_t3010` (`BN`,`4700`,`4500`,`4510`,`4520`,`4530`,`4570`,`4950`,`5000`,`5010`,`5020`,`5050`,`5060`,`5070`,`5100`)
    ) ENGINE=MyISAM DEFAULT CHARSET=utf8

    CREATE TABLE `pdata` (
      `BN` varchar(15) DEFAULT NULL,
      `FPE` datetime DEFAULT NULL,
      `Activity Description` text,
      ....
      KEY `BN` (`BN`),
      KEY `idx_programs_2009` (`BN`,`FPE`,`Activity Description`(100))
    ) ENGINE=MyISAM DEFAULT CHARSET=utf8
    ```

    Thanks!
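
    Not from the post: a prefix index over a TEXT column cannot serve as a covering index, and grouping while selecting the TEXT column forces a temporary table, which fits the 'Using temporary; Using filesort' plan. A classic rewrite for "the row carrying the max FPE per bn" is to aggregate first and join back. A hedged sketch on SQLite with simplified names (descr standing in for `Activity Description`); whether MySQL then uses idx_programs_2009 still depends on its optimizer:

    ```python
    # Hedged sketch of the groupwise-max rewrite, demonstrated on SQLite for
    # brevity; `descr` is an illustrative stand-in for `Activity Description`.
    import sqlite3

    con = sqlite3.connect(":memory:")
    con.executescript("""
        CREATE TABLE pdata (BN TEXT, FPE TEXT, descr TEXT);
        INSERT INTO pdata VALUES ('a', '2009-01-01', 'old'),
                                 ('a', '2010-01-01', 'new');
    """)
    rows = con.execute("""
        SELECT p.BN, p.FPE, p.descr
        FROM pdata p
        JOIN (SELECT BN, MAX(FPE) AS max_fpe FROM pdata GROUP BY BN) latest
          ON latest.BN = p.BN AND latest.max_fpe = p.FPE
    """).fetchall()
    print(rows)   # [('a', '2010-01-01', 'new')]: one row per bn, from max FPE
    ```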

  • PHP templating challenge (optimizing front-end templates)

    - by Matt
    Hey all, I'm trying to do some templating optimizations and I'm wondering if it is possible to do something like this:

    ```php
    function table_with_lowercase($data) {
        $out = '<table>';
        for ($i = 0; $i < 3; $i++) {
            $out .= '<tr><td>';
            $out .= strtolower($data);
            $out .= '</td></tr>';
        }
        $out .= "</table>";
        return $out;
    }
    ```

    NOTE: You do not know what $data is when you run this function. Results in:

    ```html
    <table>
      <tr><td><?php echo strtolower($data) ?></td></tr>
      <tr><td><?php echo strtolower($data) ?></td></tr>
      <tr><td><?php echo strtolower($data) ?></td></tr>
    </table>
    ```

    General case: anything that can be evaluated (compiled) will be. Any time there is an unknown variable, the variable and the functions enclosing it will be output in a string format. Here's one more example:

    ```php
    function capitalize($str) {
        return ucwords(strtolower($str));
    }
    ```

    If $str is "HI ALL" then the output is:

    Hi All

    If $str is unknown then the output is:

    <?php echo ucwords(strtolower($str)); ?>

    In this case it would be easier to just call the function (i.e. <?php echo capitalize($str) ?>), but the example before would allow you to precompile your PHP to make it more efficient.
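
    Not from the post: what is being described is partial evaluation; run what you can now, and emit code for whatever depends on unknowns. A hedged Python sketch of the idea, with every name hypothetical:

    ```python
    # Hedged sketch of partial evaluation: known values are computed at
    # "compile" time; unknown ones emit the PHP to run later.
    class Unknown:
        def __init__(self, expr: str):
            self.expr = expr

    def lower(value):
        if isinstance(value, Unknown):                    # cannot evaluate yet:
            return Unknown(f"strtolower({value.expr})")   # defer as code
        return value.lower()                              # evaluate now

    def render(value) -> str:
        if isinstance(value, Unknown):
            return f"<?php echo {value.expr}; ?>"
        return str(value)

    print(render(lower("HI ALL")))           # hi all
    print(render(lower(Unknown("$data"))))   # <?php echo strtolower($data); ?>
    ```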

  • How to insert records in master/detail relationship

    - by croceldon
    I have two tables:

    OutputPackages (master):  |PackageID|
    OutputItems (detail):     |ItemID|PackageID|

    OutputItems has an index called 'idxPackage' set on the PackageID column. ItemID is set to auto-increment. Here's the code I'm using to insert masters/details into these tables:

    ```delphi
    // fill packages table
    for i := 1 to 10 do
    begin
      Package := TfPackage(dlgSummary.fcPackageForms.Forms[i]);
      if Package.PackageLoaded then
      begin
        with tblOutputPackages do
        begin
          Insert;
          FieldByName('PackageID').AsInteger := Package.ourNum;
          FieldByName('Description').AsString := Package.Title;
          FieldByName('Total').AsCurrency := Package.Total;
          Post;
        end;

        // fill items table
        for ii := 1 to 10 do
        begin
          Item := TfPackagedItemEdit(Package.fc.Forms[ii]);
          if Item.Activated then
          begin
            with tblOutputItems do
            begin
              Append;
              FieldByName('PackageID').AsInteger := Package.ourNum;
              FieldByName('Description').AsString := Item.Description;
              FieldByName('Comment').AsString := Item.Comment;
              FieldByName('Price').AsCurrency := Item.Price;
              Post;   // this causes the primary key exception
            end;
          end;
        end;
      end;
    end;
    ```

    This works fine as long as I don't mess with the MasterSource/MasterFields properties in the IDE. But once I set them and run this code, I get an error that says I've got a duplicate primary key 'ItemID'. I'm not sure what's going on; this is my first foray into master/detail, so something may be set up wrong. I'm using ComponentAce's Absolute Database for this project. How can I get this to insert properly?

    Update: I removed the primary key constraint in my db, and I see that for some reason the autoincrement feature of the OutputItems table isn't working as I expected. Here's how the OutputItems table looks after running the above code:

    ItemID | PackageID
    1      | 1
    1      | 1
    2      | 2
    2      | 2

    I still don't see why all the ItemID values aren't unique... Any ideas?

  • Need an SQL statement to combine tables while keeping a unique ID for every entry

    - by Registered User KC
    Hi All, I need SQL code to solve a table combination problem, described below.

    Table old data:

    name | version | status | lastupdate | ID
    A    | 0.1     | on     | 6/8/2010   | 1
    B    | 0.1     | on     | 6/8/2010   | 2
    C    | 0.1     | on     | 6/8/2010   | 3
    D    | 0.1     | on     | 6/8/2010   | 4
    E    | 0.1     | on     | 6/8/2010   | 5
    F    | 0.1     | on     | 6/8/2010   | 6
    G    | 0.1     | on     | 6/8/2010   | 7

    Table new data:

    name | version | status | lastupdate
    A    | 0.1     | on     | 6/18/2010
    # B entry deleted
    C    | 0.3     | on     | 6/18/2010    # version updated
    C1   | 0.1     | on     | 6/18/2010
    D    | 0.1     | on     | 6/18/2010
    E    | 0.1     | off    | 6/18/2010    # status updated
    F    | 0.1     | on     | 6/18/2010
    G    | 0.1     | on     | 6/18/2010
    H    | 0.1     | on     | 6/18/2010    # newly added
    H1   | 0.1     | on     | 6/18/2010    # newly added

    The differences between the new data and the old data:

    - B entry deleted
    - C entry version updated
    - E entry status updated
    - C1/H/H1 entries newly added

    What I want is to always keep the ID-name mapping relationship from the old data table, no matter how the data changes later; in other words, each name always has a unique ID number bound to it. If an entry has an update, update the data; if an entry is newly added, insert it into the table and assign a new unique ID. However, I can only use SQL with simple select or update statements, so this may be too hard for me to write. I hope someone with expertise can give direction. No details are needed on the differences between SQL variants; a standard SQL code sample is enough. Thanks in advance! Rgs KC
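
    Not from the post, which asks for plain SELECT/UPDATE statements, but most engines now offer an atomic upsert that performs exactly this "update if the name exists, otherwise insert and assign a fresh ID" step: standard SQL has MERGE, MySQL has INSERT ... ON DUPLICATE KEY UPDATE, and SQLite/PostgreSQL have INSERT ... ON CONFLICT. A hedged sketch on SQLite (3.24 or newer):

    ```python
    # Hedged sketch: name is UNIQUE, so the upsert keys on it; existing rows
    # keep their ID, new names get a fresh autoincrement ID.
    import sqlite3

    con = sqlite3.connect(":memory:")
    con.execute("""CREATE TABLE data (
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        name TEXT UNIQUE, version TEXT, status TEXT, lastupdate TEXT)""")

    def upsert(name, version, status, lastupdate):
        con.execute("""
            INSERT INTO data (name, version, status, lastupdate)
            VALUES (?, ?, ?, ?)
            ON CONFLICT(name) DO UPDATE SET
                version = excluded.version,
                status = excluded.status,
                lastupdate = excluded.lastupdate""",
            (name, version, status, lastupdate))

    upsert("C", "0.1", "on", "6/8/2010")    # first sight: gets id 1
    upsert("C", "0.3", "on", "6/18/2010")   # version update: id 1 is preserved
    print(con.execute("SELECT * FROM data").fetchall())
    # [(1, 'C', '0.3', 'on', '6/18/2010')]
    ```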

  • Hibernate bi-directional many-to-many mapping advice!

    - by Rob
    Hi all, I wondered if anyone might be able to help me out. I am trying to work out what to google for (or any other ideas!!).

    Basically I have a bidirectional many-to-many mapping between a user entity and a club entity (via a join table called userClubs). I now want to include a column in userClubs that represents the role, so that when I call user.getClubs() I can also work out what level of access they have. Is there a clever way to do this using Hibernate, or do I need to rethink the database structure? Thank you for any help (or just for reading this far!!).

    The user.hbm.xml looks a bit like:

    ```xml
    <set name="clubs" table="userClubs" cascade="save-update">
        <key column="user_ID"/>
        <many-to-many column="activity_ID" class="com.ActivityGB.client.domain.Activity"/>
    </set>
    ```

    and the activity.hbm.xml part:

    ```xml
    <set name="members" inverse="true" table="userClubs" cascade="save-update">
        <key column="activity_ID"/>
        <many-to-many column="user_ID" class="com.ActivityGB.client.domain.User"/>
    </set>
    ```

    The current userClubs table contains the fields:

    id | user_ID | activity_ID

    I would like to include in there:

    id | user_ID | activity_ID | role

    and be able to access the role on both sides...

  • Trying to insert a row using a stored procedure with a parameter bound to an expression

    - by Arvind Singh
    Environment: ASP.NET 3.5 (C# and VB), MS SQL Server 2005 Express

    Tables:

    Table: tableUser
    - ID (primary key)
    - username

    Table: userSchedule
    - ID (primary key)
    - thecreator (foreign key = tableUser.ID)
    - other fields

    I have created a procedure that accepts a parameter username, gets the userid, and inserts a row into userSchedule.

    Problem: Using the stored procedure with a DataList control to only fetch data from the database, passing the current username with the statement below, works fine:

    ```csharp
    protected void SqlDataSourceGetUserID_Selecting(object sender, SqlDataSourceSelectingEventArgs e)
    {
        e.Command.Parameters["@CurrentUserName"].Value = Context.User.Identity.Name;
    }
    ```

    But while inserting using a DetailsView, it shows the error "Procedure or function OASNewSchedule has too many arguments specified." I did use:

    ```csharp
    protected void SqlDataSourceCreateNewSchedule_Selecting(object sender, SqlDataSourceSelectingEventArgs e)
    {
        e.Command.Parameters["@CreatedBy"].Value = Context.User.Identity.Name;
    }
    ```

    DetailsView properties: autogen fields: off, default mode: insert. It shows all the fields, including some that may not be expected by the procedure, like ID (the primary key, not required by the procedure) and the CreatedBy (user id) field. So I tried removing the two fields from the DetailsView, and it shows the error:

    "Cannot insert the value NULL into column 'CreatedBy', table 'D:\OAS\OAS\APP_DATA\ASPNETDB.MDF.dbo.OASTest'; column does not allow nulls. INSERT fails. The statement has been terminated."

    For some reason the parameter's value is not being set. Can anybody help me understand this?

  • What is the best database structure for this scenario?

    - by Ricketts
    I have a database that holds real estate MLS (Multiple Listing Service) data. Currently, I have a single table that holds all the listing attributes (price, address, sqft, etc.). There are several different property types (residential, commercial, rental, income, land, etc.), and each property type shares a majority of the attributes, but there are a few that are unique to that property type.

    My question is that the shared attributes are in excess of 250 fields, and this seems like too many fields to have in a single table. My thought is I could break them out into an EAV (Entity-Attribute-Value) format, but I've read many bad things about that, and it would make running queries a real pain, as any of the 250 fields could be searched on. If I were to go that route, I'd literally have to pull all the data out of the EAV table, grouped by listing id, merge it on the application side, then run my query against the in-memory object collection. This also does not seem very efficient.

    I am looking for some ideas or recommendations on which way to proceed. Perhaps the 250+ field table is the only way to proceed. Just as a note, I'm using SQL Server 2012, .NET 4.5 w/ Entity Framework 5, C#, and data is passed to the ASP.NET web application via a WCF service. Thanks in advance.
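
    A hedged aside, not from the post: between one very wide table and EAV there is a middle ground, class-table inheritance. Keep the shared fields in one searchable table and give each property type a narrow 1:1 extension table for its unique fields. Illustrative names, SQLite for brevity:

    ```python
    # Hedged sketch of class-table inheritance; the schema names are invented.
    import sqlite3

    con = sqlite3.connect(":memory:")
    con.executescript("""
        CREATE TABLE listing (            -- the shared attributes live here
            id INTEGER PRIMARY KEY, price REAL, address TEXT, sqft INTEGER);
        CREATE TABLE listing_residential (    -- residential-only attributes
            listing_id INTEGER PRIMARY KEY REFERENCES listing(id),
            bedrooms INTEGER);
    """)
    con.execute("INSERT INTO listing VALUES (1, 250000, '1 Main St', 1800)")
    con.execute("INSERT INTO listing_residential VALUES (1, 3)")
    print(con.execute("""
        SELECT l.address, r.bedrooms
        FROM listing l JOIN listing_residential r ON r.listing_id = l.id
    """).fetchall())   # [('1 Main St', 3)]
    ```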

  • Why are virtual methods considered early bound?

    - by AspOnMyNet
    One definition of binding is that it is the act of replacing function names with memory addresses.

    a) Thus I assume early binding means function calls are replaced with memory addresses during the compilation process, while with late binding this replacement happens during runtime?

    b) Why are virtual methods also considered early bound (thus the target method is found at compile time, and code is created that will call this method)? As far as I know, with virtual methods the call to the actual method is resolved only during runtime and not at compile time?!

    Thanks.

    EDIT:

    1)

    ```csharp
    A a = new A();
    a.M();
    ```

    As far as I know, it is not known at compile time where on the heap (thus at which memory address) instance a will be created during runtime. Now, with early binding, function calls are replaced with memory addresses during the compilation process. But how can the compiler replace a function call with a memory address if it doesn't know where on the heap object a will be created during runtime (here I'm assuming the address of method a.M will also be at the same memory location as a)?

    2) "v-table calls are neither early nor late bound. Instead there's an offset into a table of function pointers. The offset is fixed at compile time, but which table the function pointer is chosen from depends on the runtime type of the object (the object contains a hidden pointer to its v-table), so the final function address is found at runtime."

    But assuming an object of type T is created via reflection (thus the app doesn't even know of the existence of type T), how can an entry point for that type of object exist at compile time?
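
    A hedged illustration of point 2, using plain dictionaries as stand-ins for v-tables (the shape of the mechanism, not real compiler machinery): the slot consulted at the call site is fixed when the call is compiled, while the table itself travels with the object. That is also why reflection-loaded types work; they simply bring their own table along.

    ```python
    # Dispatch-table sketch: one table per concrete type, a hidden pointer per
    # object, and a call site that only hard-codes the slot name.
    def speak_dog():
        return "woof"

    def speak_cat():
        return "meow"

    VTABLES = {
        "Dog": {"speak": speak_dog},   # one dispatch table per type
        "Cat": {"speak": speak_cat},
    }

    class Obj:
        def __init__(self, type_name):
            self.vtable = VTABLES[type_name]   # the hidden v-table pointer

    def call_speak(obj):
        # "Early" part: the slot 'speak' is fixed here, when the call site is
        # compiled. "Late" part: which table obj.vtable points to is known only
        # at run time, even for types loaded via reflection.
        return obj.vtable["speak"]()

    print(call_speak(Obj("Dog")), call_speak(Obj("Cat")))   # woof meow
    ```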

  • LINQ is returning too many results when joined

    - by KallDrexx
    In my schema I have two database tables: relationships and relationship_memberships. I am attempting to retrieve all the entries from the relationships table that have a specific member in them, thus having to join with the relationship_memberships table. I have the following method in my business object:

    ```csharp
    public IList<DBMappings.relationships> GetRelationshipsByObjectId(int objId)
    {
        var results = from r in _context.Repository<DBMappings.relationships>()
                      join m in _context.Repository<DBMappings.relationship_memberships>()
                          on r.rel_id equals m.rel_id
                      where m.obj_id == objId
                      select r;
        return results.ToList<DBMappings.relationships>();
    }
    ```

    _context is my generic repository, using code based on the code outlined here. The problem is I have 3 records in the relationships table and 3 records in the memberships table, each membership tied to a different relationship. 2 membership records have an obj_id value of 2 and the other is 3. I am trying to retrieve a list of all relationships related to object #2. When this LINQ runs, _context.Repository<DBMappings.relationships>() returns the correct 3 records and _context.Repository<DBMappings.relationship_memberships>() returns 3 records. However, when results.ToList() executes, the resulting list has 2 issues:

    1) The resulting list contains 6 records, all of type DBMappings.relationships. Upon further inspection, there are 2 for each real relationship record, each an exact copy of the other.

    2) All relationships are returned, even if m.obj_id == 3, even though the objId variable is correctly passed in as 2.

    Can anyone see what's going on? I've spent 2 days looking at this code and I am unable to understand what is wrong. I have joins in other LINQ queries that seem to be working great, and my unit tests show that they are still working, so I must be doing something wrong with this. It seems like I need an extra pair of eyes on this one :)

  • Android ExpandableListView From Database

    - by SterAllures
    I want to create a two-level ExpandableListView from a local database. For the group level I want the names from the database Category table:

    category_id | name
    ------------------
    1           | NameOfCategory

    For the children level I want the names from the List table:

    list_id | name     | foreign_category_id
    ----------------------------------------
    1       | Listname | 1

    I've got a method in DatabaseDAO to get all the values from the table:

    ```java
    public List<Category> getAllCategories() {
        database = dbHelper.getReadableDatabase();
        List<Category> categories = new ArrayList<Category>();
        Cursor cursor = database.query(SQLiteHelper.TABLE_CATEGORY, null, null, null, null, null, null);
        cursor.moveToFirst();
        while (!cursor.isAfterLast()) {
            Category category = cursorToCategory(cursor);
            categories.add(category);
            cursor.moveToNext();
        }
        cursor.close();
        return categories;
    }
    ```

    Adding the names to the group level is easy; I do it the following way:

    ```java
    ArrayList<String> groups;
    for (Category c : databaseDao.getAllCategories()) {
        groups.add(c.getName());
    }
    ```

    Now I want to add the children to the ExpandableListView in the following array: ArrayList<ArrayList<ArrayList<String>>> children;. How do I get the children under the correct group name? I think it has something to do with groups.indexOf(), but in the List table I only have a foreign category_id and not the actual name.
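
    Not from the post: the usual trick is to build each group's child list in the same order as the groups, keyed on foreign_category_id rather than on the name, so no indexOf() lookup is needed. A hedged Python sketch of just the grouping step (the 'Other'/'Another' rows are invented for illustration):

    ```python
    # Hedged sketch: align children with groups by iterating categories once.
    categories = [(1, "NameOfCategory"), (2, "Other")]   # (category_id, name)
    lists = [(1, "Listname", 1), (2, "Another", 2)]      # (list_id, name, fk)

    groups = [name for _, name in categories]
    children = []
    for cat_id, _ in categories:                # same order as `groups`
        children.append([name for _, name, fk in lists if fk == cat_id])

    print(groups)     # ['NameOfCategory', 'Other']
    print(children)   # [['Listname'], ['Another']]
    ```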

  • Hotkey to toggle checkboxes does opposite

    - by Joel Harris
    In this jsFiddle, I have a table with checkboxes in the first column. The master checkbox in the table header functions as expected, toggling the state of all the checkboxes in the table when it is clicked. I have set up a hotkey for "shift+x" to toggle the master checkbox. The desired behavior when using the hotkey is:

    1. The master checkbox is toggled
    2. The child checkboxes each have their checked state set to match the master

    But what is actually happening is the opposite:

    1. The child checkboxes each have their checked state set to match the master
    2. The master checkbox is toggled

    Here is the relevant code:

    ```javascript
    $(".master-select").click(function () {
        $(this).closest("table").find("tbody .row-select").prop('checked', this.checked);
    });

    function tickAllCheckboxes() {
        var master = $(".master-select").click();
    }

    // using jquery.hotkeys.js to assign the hotkey
    $(document).bind('keydown', 'shift+x', tickAllCheckboxes);
    ```

    This results in the child checkboxes having the opposite checked state of the master checkbox. Why is that happening? A fix would be nice, but I'm really after an explanation so I can understand what is happening.

  • array or list into Oracle using cfprocparam

    - by Travis
    I have a list of values I want to insert into a table via a stored procedure. I figured I would pass an array to Oracle and loop through the array, but I don't see how to pass an array into Oracle. I'd pass a list, but I don't see how to work with the list to turn it into an array using PL/SQL (I'm fairly new to PL/SQL). Am I approaching this the wrong way? Using Oracle 9i and CF8. TIA!

    EDIT: Perhaps I'm thinking about this the wrong way? I'm sure I'm not doing anything new here... I figured I'd convert the list to an associative array, then loop through the array, because Oracle doesn't seem to work well with lists (in my limited observation). I'm trying to add a product, then add records for the management team.

    -- product table
    productName = 'foo'
    productDescription = 'bar'
    ...
    ... etc.

    The managementteam table just has the id of the product and the id of the users, selected from a drop-down. The user IDs are passed in via a list like "1,3,6,20". How should I go about adding the records to the managementteam table?

  • mysql subselect alternative

    - by Arnold
    Hi, let's say I am analyzing how high school sports records affect school attendance. I have a table in which each row corresponds to a high school basketball game. Each game has an away team id and a home team id (FKs to another "team" table), a home score, an away score, and a date. I am writing a query that matches attendance with this season's basketball games. My sample output will be:

    (#_students_missed_class, day_of_game, home_team, away_team, home_team_wins_this_season, away_team_wins_this_season)

    I now want to add how each team did the previous season to my analysis. I have their previous season stored in the game table, so I should be able to accomplish that with a subselect. So in my main select statement I add the subselect:

    ```sql
    SELECT COUNT(*)
    FROM game_table
    WHERE game_table.date BETWEEN 'start of previous season' AND 'end of previous season'
      AND ((game_table.home_team = team_table.id AND game_table.home_score > game_table.away_score)
        OR (game_table.away_team = team_table.id AND game_table.away_score > game_table.home_score))
    ```

    In this case team_table.id refers to the id of the home_team, so I now have all their wins calculated from the previous year. This method of calculation is neither time- nor resource-efficient. The EXPLAIN shows that I have ALL in the Type field, I am not using a Key, and the query times out. I'm not sure how I can accomplish a more efficient query with a subselect. It seems preposterously inefficient to have to write 4 of these queries (for home wins, home losses, away wins, away losses). I am sure this could be more lucid. I'll absolutely add color tomorrow if anyone has questions.
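
    Not from the post: one standard alternative to a correlated COUNT(*) subselect is to aggregate each team's previous-season wins once in a derived table (a UNION ALL of home wins and away wins) and join it by team id. A hedged sketch of the shape, on SQLite for brevity, with invented sample data:

    ```python
    # Hedged sketch: compute last season's wins per team in one pass, then join
    # the result instead of re-counting per output row.
    import sqlite3

    con = sqlite3.connect(":memory:")
    con.executescript("""
        CREATE TABLE game (home_team INT, away_team INT,
                           home_score INT, away_score INT, date TEXT);
        INSERT INTO game VALUES (1, 2, 50, 40, '2009-01-10'),
                                (2, 1, 60, 30, '2009-02-01');
    """)
    rows = con.execute("""
        SELECT team, COUNT(*) AS wins FROM (
            SELECT home_team AS team FROM game
            WHERE date BETWEEN '2009-01-01' AND '2009-12-31'
              AND home_score > away_score
            UNION ALL
            SELECT away_team FROM game
            WHERE date BETWEEN '2009-01-01' AND '2009-12-31'
              AND away_score > home_score
        ) GROUP BY team
    """).fetchall()
    print(rows)   # [(1, 1), (2, 1)]: each team's win count, computed once
    ```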

  • Updating a local sqlite db that is used for local metadata & caching from a service?

    - by Pharaun
    I've searched through the site and haven't found a question/answer that quite answers my question; the closest one I found was "Syncing objects between two disparate systems best approach".

    To begin: because there is no RSS feed available, I'm screen-scraping a webpage. The app does a fetch, then goes through the page to scrape out all of the information that I'm interested in and dumps it into a SQLite database, so that I can query the information at my leisure without repeatedly fetching from the website. However, I'm also storing various metadata on the data itself in the SQLite db, such as: have I looked at the data, is the data new/old, and a bookmark into a chunk of data (think of it as a collection of unrelated data, where the bookmark is just a pointer to where I am in processing/reading the said data).

    So right now my problem is trying to figure out how to update the local SQLite database with new and/or changed data from the website in a manner that is effective and straightforward. Here's my current idea:

    1. Download the page itself
    2. Create a temporary table for the parsed data to go into
    3. Do a comparison between the official table and the temporary table, and copy updates and/or new information to the official table (sketched below)

    This process seems kind of complicated, because I would have to figure out how to determine whether the data in the temporary table is new, updated, or unchanged. So I am wondering whether there isn't a better approach, or whether anyone has any suggestions on how to architect/structure such a system?
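
    A hedged sketch of step 3, not from the post: with the scraped rows in a temporary table, a single upsert can insert the brand-new keys and update only the rows whose content actually changed, leaving the metadata columns (seen flags, bookmarks) untouched. SQLite 3.24+ syntax, illustrative schema:

    ```python
    # Hedged sketch: diff a temp "scraped" table into the official table.
    import sqlite3

    con = sqlite3.connect(":memory:")
    con.executescript("""
        CREATE TABLE items (key TEXT PRIMARY KEY, payload TEXT,
                            seen INTEGER DEFAULT 0);
        CREATE TEMP TABLE scraped (key TEXT PRIMARY KEY, payload TEXT);
        INSERT INTO items   VALUES ('a', 'old', 1), ('b', 'same', 0);
        INSERT INTO scraped VALUES ('a', 'new'), ('b', 'same'), ('c', 'brand new');
    """)
    con.execute("""
        INSERT INTO items (key, payload)
        SELECT key, payload FROM scraped WHERE true
        ON CONFLICT(key) DO UPDATE SET payload = excluded.payload
        WHERE excluded.payload <> items.payload
    """)
    print(con.execute("SELECT * FROM items ORDER BY key").fetchall())
    # [('a', 'new', 1), ('b', 'same', 0), ('c', 'brand new', 0)]
    ```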

  • SQL: Join multiple tables and get a grouped sum

    - by Scienceprodigy
    I have a database with 3 tables that have related data. One table has transactions, and the other two relate to transaction categories. Basically it's financial data, so each transaction has a category (i.e. "gasoline" for a gas purchase transaction). A short version of my Transactions table looks like this:

    Transactions table:

    | ID | Type | Amount | Category |

    I also have two more tables relating a category to a category's parent. Basically, every Category entry in the Transactions table belongs to a parent category (i.e. "gasoline" would belong to, say, "Automotive Expenses"). For categories and their parents, I have two tables:

    Category Children:

    | ID | Parent Category ID | Child Category |

    Category Parent:

    | ID | Parent Category |

    What I'm trying to do is query the database and have it return total spending by parent category. To get "spending", the Type of the transactions must be "Debit". I tried the following statement:

    ```sql
    SELECT category_parents.parent_category, SUM(amount) AS totals
    FROM (transactions
        INNER JOIN category_children
            ON transactions.category = 'category_children.child_category')
        INNER JOIN category_parents
            ON category_children.parent_category_id = category_parents._id
    WHERE trans_type = 'Debit'
    GROUP BY parent_category
    ORDER BY totals DESC
    ```

    but it gives me the following exception:

    12-31 13:51:21.515: ERROR/Exception on query(4403): android.database.sqlite.SQLiteException: no such column: category_children.parent_category_id: , while compiling: SELECT category_parents.parent_category, SUM(amount) AS totals FROM (transactions INNER JOIN category_children ON transactions.category='category_children.child_category') INNER JOIN category_parents ON category_children.parent_category_id=category_parents._id where trans_type='Debit' group by parent_category order by totals desc

    Any help is appreciated. (EXTRA CREDIT: I also need to make another statement to get spending by child category, given the parent category.)

  • MVC : Checkboxes generated using JavaScript not appearing in FormCollection on postback

    - by Andy Evans
    I took over another project (written by one contractor, modified by another, and now it's not working) written using MVC/C#, where a view with a table (see below) is dynamically populated using JSON/JavaScript; the first column is a checkbox.

    View (Spark view engine):

    ```html
    <table id='component_list' name='component_list' cellpadding='0' border='0' cellspacing='0'>
        <thead>
            <tr>
                <th>&nbsp;</th>
                <th>Component</th>
                <th>Component Type</th>
                <th>Evenflo Part #</th>
                <th>Supplier Part #</th>
                <th>Supplier</th>
                <th>Requirement</th>
                <th>Location</th>
                <th>Region</th>
            </tr>
        </thead>
        <tbody>
        </tbody>
    </table>
    ```

    When the page is rendered, I look at the source for the page and do not see the table data (I wouldn't expect to see it). However, when the form is posted back, the FormCollection in the controller is empty. Supposedly this had been working before the last contractor got their hands on it, which is another post altogether. My goal right now is to get the checkboxes into the FormCollection. Any suggestions would be greatly appreciated. Thanks.

  • Is NSManagedObjectContext autosaved or am I looking at NSFetchedResultsController's cache?

    - by Andreas
    I'm developing an iPhone app where I use an NSFetchedResultsController in the main table view controller. I create it like this in the viewDidLoad of the main table view controller:

    ```objc
    NSSortDescriptor *sortDescriptorDate = [[NSSortDescriptor alloc] initWithKey:@"date" ascending:YES];
    NSSortDescriptor *sortDescriptorTime = [[NSSortDescriptor alloc] initWithKey:@"start" ascending:YES];
    NSArray *sortDescriptors = [[NSArray alloc] initWithObjects:sortDescriptorDate, sortDescriptorTime, nil];
    [fetchRequest setSortDescriptors:sortDescriptors];
    [sortDescriptorDate release];
    [sortDescriptorTime release];
    [sortDescriptors release];

    controller = [[NSFetchedResultsController alloc] initWithFetchRequest:fetchRequest
                                                     managedObjectContext:context
                                                       sectionNameKeyPath:@"date"
                                                                cacheName:nil];
    [fetchRequest release];

    NSError *error;
    BOOL success = [controller performFetch:&error];
    ```

    Then, in a subsequent view, I create a new object on the context:

    ```objc
    TestObject *testObject = [NSEntityDescription insertNewObjectForEntityForName:@"TestObject"
                                                           inManagedObjectContext:context];
    ```

    The TestObject has several related objects, which I create in the same way and add to the testObject using the provided add...Objects methods. Then, if, before saving the context, I press cancel and go back to the main table view, nothing is shown, as expected. However, if I restart the app, the object I created on the context shows up in the main table view. How come? At first I thought the NSFetchedResultsController was reading from the cache, but as you can see I set that to nil just to test. Also, [context hasChanges] returns true after I restart. What am I missing here?

  • Database nesting layout confusion

    - by arzon
    I'm no expert in databases and a beginner in Rails, so here goes something which kind of confuses me... Assume I have three classes as a sample (note that no effort has been made to address any possible Rails reserved-words issue in the sample):

    ```ruby
    class File < ActiveRecord::Base
      has_many :records, :dependent => :destroy
      accepts_nested_attributes_for :records, :allow_destroy => true
    end

    class Record < ActiveRecord::Base
      belongs_to :file
      has_many :users, :dependent => :destroy
      accepts_nested_attributes_for :users, :allow_destroy => true
    end

    class User < ActiveRecord::Base
      belongs_to :record
    end
    ```

    Upon entering records, the database contents will appear as below. My issue is that if there are a lot of Files for the same Record, there will be duplicate record names. This will also be true if there are multiple Records for the same user in the Users table. I was wondering whether there is a better way than this, so as to have one or more Files point to a single Record entry and one or more Records point to a single User. BTW, the File names are unique.

    Files table:

    id | name
    1  | name1
    2  | name2
    3  | name3
    4  | name4

    Records table:

    id | file_id | record_name | record_type
    1  | 1       | ForDaisy1   | ...
    2  | 2       | ForDonald1  | ...
    3  | 3       | ForDonald2  | ...
    4  | 4       | ForDaisy1   | ...

    Users table:

    id | record_id | username
    1  | 1         | Daisy
    2  | 2         | Donald
    3  | 3         | Donald
    4  | 4         | Daisy

    Is there any way to optimize the database to prevent duplication of entries, or is this really the correct and proper behavior? I spread out the database into different tables to be able to easily add new columns in the future.
