Search Results

Search found 7127 results on 286 pages for 'calculated columns'.


  • What changed in the DataGrid that means it won't work anymore?

    - by Jeff Yates
    I have a Silverlight app with a DataGrid containing some custom columns and all was working well. Then I updated to Silverlight 3 tools for VS 2008 SP1 and rebuilt it. Now it has the following problems: Rows aren't added when the collection is modified. The ItemsSource property is (and always has been) set to an ObservableCollection instance, which notifies when its contents change. This worked fine for Silverlight 2. However, in Silverlight 3 to get this working at all, I now have to null and then re-set ItemsSource - this seems like I'm hiding a bigger issue but I can't work out what that might be. I cannot select a row or a cell anymore. If I'm lucky, I can select one whole row before it stops working. I can't edit anything. I suspect this is related to the previous point. I'll post some source when I am able, but first I have to strip it down to the bare minimum. In the meantime, I was hoping someone might have some idea of what may be going on here. My gut feeling on the second two points is that my bindings are no longer working, but that's just a guess and if it is the case, I have no idea which ones. Thanks for any help anyone might be able to provide. Update So, I finally reduced my problem down to a simple works/doesn't work comparison. The problem seems to occur if I override Equals in my element type. As soon as I do that, something happens strangely in the ObservableCollection that contains that type, it seems, and my application breaks. To make it more interesting, there is a check to make sure that duplicate items don't even get close to being added to the collection. I don't exactly know why ObservableCollection needs to compare equality when inserting items (the stack trace indicates it is using IndexAt) but this seems to cause the issue. So, any thoughts?
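
    As a general .NET note, overriding Equals without also overriding GetHashCode breaks the usual equality contract and can confuse hash-based lookups. Whether or not that is the culprit here, a minimal consistent pair looks like the sketch below; the ElementType name and Id property are hypothetical, not taken from the question.

        public class ElementType
        {
            public int Id { get; set; }

            // Instances that compare equal must also report the same hash code,
            // or dictionaries and hash sets holding them will misbehave.
            public override bool Equals(object obj)
            {
                var other = obj as ElementType;
                return other != null && other.Id == Id;
            }

            public override int GetHashCode()
            {
                return Id.GetHashCode();
            }
        }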

    Read the article

  • In Excel 2010, how can I show a count of occurrences on a specific date within multiple time ranges?

    - by Justin
    Here's what I'm trying to do. I have three columns of data: ID, Date (MM/DD/YY), and Time (00:00). I need to create a chart or table that shows the number of occurrences on, say, 12/10/2010 between 00:00 and 00:59, 1:00 and 1:59, and so on for each hour of the day. I can use COUNTIF and get results for the date, but I cannot figure out how to show a summary of the count of occurrences per hour for the 24-hour period. I have months of data and many times each day. An example of the data set is below. Any help is greatly appreciated.

        ID   Date        Time
        221  12/10/2010  00:01
        223  12/10/2010  00:45
        227  12/10/2010  01:13
        334  12/11/2010  14:45

    I would like the results to read:

        Date        Time               Count
        12/10/2010  00:00AM - 00:59AM  2
        12/10/2010  01:00AM - 01:59AM  1
        12/10/2010  02:00AM - 02:59AM  0
        ...(continues for every hour of the day)
        12/11/2010  00:00AM - 00:59AM  0
        ...
        12/11/2010  14:00PM - 14:59PM  1

    And so on. Sorry for the length, but I wanted to be clear. EDIT: Here is a sample spreadsheet. Very little data, but I couldn't figure out a better way without having a huge file. Tested in Notepad for formatting and it imported fine as CSV.

        PID,Date,Time
        2888759,12/10/2010,0:10
        2888760,12/10/2010,0:10
        2888761,12/10/2010,0:10
        2888762,12/10/2010,0:11
        2889078,12/10/2010,15:45
        2889079,12/10/2010,15:57
        2889080,12/10/2010,15:57
        2889081,12/10/2010,15:58
        2889082,12/10/2010,16:10
        2889083,12/10/2010,16:11
        2889084,12/10/2010,16:11
        2889085,12/10/2010,16:12
        2889086,12/10/2010,16:12
        2889087,12/10/2010,16:12
        2889088,12/10/2010,16:13
        2891529,12/14/2010,16:21
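
    Outside Excel, the bucketing logic described above (group each record by date and hour, then emit a zero count for hours with no records) can be sketched in C# as follows. The record shape and sample values come from the question; the method and variable names are purely illustrative.

        using System;
        using System.Globalization;
        using System.Linq;

        class HourlyCount
        {
            static void Main()
            {
                // Sample rows in the (ID, Date, Time) shape from the question.
                var rows = new[]
                {
                    new { Id = 221, Stamp = DateTime.Parse("12/10/2010 00:01", CultureInfo.InvariantCulture) },
                    new { Id = 223, Stamp = DateTime.Parse("12/10/2010 00:45", CultureInfo.InvariantCulture) },
                    new { Id = 227, Stamp = DateTime.Parse("12/10/2010 01:13", CultureInfo.InvariantCulture) },
                    new { Id = 334, Stamp = DateTime.Parse("12/11/2010 14:45", CultureInfo.InvariantCulture) },
                };

                // Count occurrences per (date, hour) bucket.
                var counts = rows
                    .GroupBy(r => new { r.Stamp.Date, r.Stamp.Hour })
                    .ToDictionary(g => g.Key, g => g.Count());

                // Emit all 24 hours for each date so empty hours show up as 0.
                foreach (var date in rows.Select(r => r.Stamp.Date).Distinct().OrderBy(d => d))
                {
                    for (int hour = 0; hour < 24; hour++)
                    {
                        counts.TryGetValue(new { Date = date, Hour = hour }, out int n);
                        Console.WriteLine($"{date:MM/dd/yyyy}  {hour:00}:00 - {hour:00}:59  {n}");
                    }
                }
            }
        }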

    Read the article

  • (Rails) Creating multi-dimensional hashes/arrays from a data set...?

    - by humble_coder
    Hi All, I'm having a bit of an issue wrapping my head around something. I'm currently using a hacked version of Gruff in order to accommodate "Scatter Plots". That said, the data is entered in the form of: g.data("Person1",[12,32,34,55,23],[323,43,23,43,22]) ...where the first item is the ENTITY, the second item is X-COORDs, and the third item is Y-COORDs. I currently have a recordset of items from a table with the columns: POINT, VALUE, TIMESTAMP. Due to the "complex" calculations involved I must grab everything using a single query or risk way too much DB activity. That said, I have a list of items for which I need to dynamically collect all data from the recordset into a hash (or array of arrays) for the creation of the data items. I was thinking something like the following: @h={} e = Events.find_by_sql(my_query) e.each do |event| @h["#{event.Point}"][x] = event.timestamp @h["#{event.Point}"][y] = event.value end Obviously that's not the correct syntax, but that's where my brain is going. Could someone clean this up for me or suggest a more appropriate mechanism by which to accomplish this? Basically the main goal is to keep data for each pointname grouped (but remember the recordset has them all). Much appreciated. EDIT 1 g = Gruff::Scatter.new("600x350") g.title = self.name e = Event.find_by_sql(@sql) h ={} e.each do |event| h[event.Point.to_s] ||= {} h[event.Point.to_s].merge!({event.Timestamp.to_i,event.Value}) end h.each do |p| logger.info p[1].values.inspect g.data(p[0],p[1].keys,p[1].values) end g.write(@chart_file)

    Read the article

  • T4 trouble compiling transformation

    - by John Leidegren
    I can't figure this one out. Why doesn't T4 locate the IEnumerable type? I'm using Visual Studio 2010. And I just hope someone knows why? <#@ template debug="true" hostspecific="false" language="C#" #> <#@ assembly name="System.Data, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" #> <#@ import namespace="System" #> <#@ import namespace="System.Data" #> <#@ import namespace="System.Data.SqlClient" #> <#@ output extension=".cs" #> public static class Tables { <# var q = @" SELECT tbl.name 'table', col.name 'column' FROM sys.tables tbl INNER JOIN sys.columns col ON col.object_id = tbl.object_id "; // var source = Execute(q); #> } <#+ static IEnumerable Execute(string cmdText) { using (var conn = new SqlConnection(@"Data Source=.\SQLEXPRESS;Initial Catalog=t4build;Integrated Security=True;")) { conn.Open(); var cmd = new SqlCommand(cmdText, conn); using (var reader = cmd.ExecuteReader()) { while (reader.Read()) { } } } } #> Error 2 Compiling transformation: The type or namespace name 'IEnumerable' could not be found (are you missing a using directive or an assembly reference?) c:\Projects\T4BuildApp\T4BuildApp\TextTemplate1.tt 26 9
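
    The error simply means the template compiler cannot resolve IEnumerable, which lives in System.Collections / System.Collections.Generic; the template never imports either namespace. A hedged sketch of a fix: add the import directive and have Execute actually return something (the row formatting is only an illustration).

        <#@ import namespace="System.Collections.Generic" #>
        <#+
        static IEnumerable<string> Execute(string cmdText)
        {
            var rows = new List<string>();
            using (var conn = new SqlConnection(@"Data Source=.\SQLEXPRESS;Initial Catalog=t4build;Integrated Security=True;"))
            {
                conn.Open();
                using (var cmd = new SqlCommand(cmdText, conn))
                using (var reader = cmd.ExecuteReader())
                {
                    // The query returns one "table", "column" pair per row.
                    while (reader.Read())
                        rows.Add(reader.GetString(0) + "." + reader.GetString(1));
                }
            }
            return rows;
        }
        #>

    Alternatively, fully qualifying the return type as System.Collections.Generic.IEnumerable<string> avoids the extra directive.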

    Read the article

  • How to set an EditText in a certain column of a TableLayout?

    - by Nick
    I have a TableLayout in one Android Activity UI. It has two columns. Now I need to add a new row and put an EditText box in the second column of that new row. I also want that EditText to fill the whole cell. I have some code like this: TableRow tr = new TableRow(context); EditText et = new EditText(context); et.setMaxLines(4); et.setLayoutParams(new TableRow.LayoutParams(1)); //set it to the second column tr.addView(et); tl.addView(tr); //tl is the TableLayout It puts the EditText in the second column fine, but the EditText is too small. I tried to use et.setLayoutParams(new LayoutParams(LayoutParams.FILL_PARENT, LayoutParams.FILL_PARENT)); but that seems to disable the TableRow.LayoutParams setting. I guess each control can only have one LayoutParams setting. So, how do I make the EditText a 4-line text editor and also make sure it is in the second column of that row? Thanks.

    Read the article

  • For-Loop and LINQ's deferred execution don't play well together

    - by Tim Schmelter
    The title suggests that I already have an idea of what's going on, but I cannot explain it. I've tried to order a List<string[]> dynamically by each "column", beginning with the first and ending with the minimum Length of all arrays. So in this sample it is 2, because the last string[] has only two elements: List<string[]> someValues = new List<string[]>(); someValues.Add(new[] { "c", "3", "b" }); someValues.Add(new[] { "a", "1", "d" }); someValues.Add(new[] { "d", "4", "a" }); someValues.Add(new[] { "b", "2" }); Now I've tried to order everything by the first and second column. I could do it statically in this way: someValues = someValues .OrderBy(t => t[0]) .ThenBy(t => t[1]) .ToList(); But if I don't know the number of "columns" I could use this loop (that's what I thought): int minDim = someValues.Min(t => t.GetLength(0)); // 2 IOrderedEnumerable<string[]> orderedValues = someValues.OrderBy(t => t[0]); for (int i = 1; i < minDim; i++) { orderedValues = orderedValues.ThenBy(t => t[i]); } someValues = orderedValues.ToList(); // IndexOutOfRangeException But that doesn't work; it fails with an IndexOutOfRangeException at the last line. The debugger tells me that i is 2 at that time, so the for-loop condition seems to be ignored; i is already == minDim. Why is that so?
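
    The lambdas passed to ThenBy capture the loop variable i itself, not its value at the time each lambda was created, and because the ordering is deferred they only run when ToList() forces enumeration, by which point i has already reached minDim. The usual fix is to copy the loop variable into a per-iteration local before capturing it; a minimal sketch against the someValues list from the question:

        int minDim = someValues.Min(t => t.GetLength(0));
        IOrderedEnumerable<string[]> orderedValues = someValues.OrderBy(t => t[0]);
        for (int i = 1; i < minDim; i++)
        {
            int column = i;  // per-iteration copy, so each lambda sees its own value
            orderedValues = orderedValues.ThenBy(t => t[column]);
        }
        someValues = orderedValues.ToList();  // now the ThenBy keys are 1, 2, ... rather than minDim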

    Read the article

  • Stretching across 2 columns in TableLayout

    - by Will03uk
    How do I stretch across 2 columns in the Table Layout. I have 2 rows with a label and edit text on 1 row and I want to have a single button stretch across the whole second row. <?xml version="1.0" encoding="utf-8"?> <TableLayout xmlns:android="http://schemas.android.com/apk/res/android" android:layout_height="fill_parent" android:layout_width="fill_parent" android:background="#000000" android:stretchColumns="1"> <TableRow> <TextView android:text = "Name: " /> <EditText android:id = "@+id/txtAddName" android:gravity = "right" android:layout_width = "fill_parent" /> </TableRow> <TableRow> <TextView android:text = "Phone: " /> <EditText android:id = "@+id/txtAddPhone" android:gravity = "right" android:layout_width = "fill_parent" /> </TableRow> <TableRow> <Button android:id = "@+id/btnAdd" android:text = "Add Entrie" android:layout_width = "fill_parent" /> </TableRow> <TableRow> <Button android:id = "@+id/btnShow" android:text = "Show all Entries" android:layout_width = "fill_parent" /> </TableRow> <TableRow> <Button android:id = "@+id/btnDelete" android:text = "Delete all Entries" android:layout_width = "fill_parent" /> </TableRow> </TableLayout>

    Read the article

  • Building a survey to put in a WordPress website using Python/Django

    - by chiurox
    So I've been given a task to build a survey to get data regarding time slot preferences of prospective students for a particular course. I know there are really quick solutions to this like Google Forms, SurveyMonkey, but since it's not unusually hard, I want to implement the survey myself in a totally new language as an opportunity to get started with it and also be able to customize and provide dynamic info to the users who are voting. Although I have done some stuff in PHP, C++, javascript, etc, I'm pretty new to Python+Django framework but it's something I've been meaning to get into since a long time ago. Initially, what I want is to make a grid with the days of the week as columns and time-durations as rows. In each cell I want to provide users a way to choose how strong (high/medium/low) their preference for this particular day+time is. I also want to show how many "votes" have already been cast for this particular preference because this will influence a lot in their decisions and as a result make this process easier when we are going to define the classes. I'll probably store the data in MySQL. Could anyone point me to some really good Python+Django tutorials for my particular purpose? Does anyone think I'm wasting my time with this trivial task by choosing new tools and that I should just use something I already know (like PHP) or a free service or plugin for Wordpress? Thanks!

    Read the article

  • Indexing/Performance strategies for a vast amount of the same value

    - by DrColossos
    Base information: This is in the context of the indexing process of OpenStreetMap data. To simplify the question: the core information is divided into 3 main types with value "W", "R", "N" (VARCHAR(1)). The table has somewhere around ~75M rows; all rows with "W" make up ~42M rows. Existing indexes are not relevant to this question. Now the question itself: The indexing of the data is done via a procedure. Inside this procedure, there are some loops that do the following: [...] SELECT * FROM table WHERE the_key = "W"; [...] The results get looped over again, and the above query itself is also in a loop. This takes a lot of time and slows down the process massively. An index on the_key is obviously useless, since all the values that the index might use are the same ("W"). The script itself runs at a speed that is OK; only the SELECTing takes very long. Do I need to create a "special" kind of index that takes this into account and makes the SELECT quicker? If so, which one? Do I need to tune some of the server parameters (they are already tuned and the results they deliver seem to be good; if needed, I can post them)? Do I have to live with the speed and simply get more hardware to gain more power (Tim Taylor grunt grunt)? Are there any alternatives to the above points (except rewriting it or not using it)?

    Read the article

  • Sql Server 2005 Database Tables - Row Comparison Column By Column.

    - by Goober
    Scenario: I have TWO database tables of exactly the SAME STRUCTURE. The difference between these tables is that one contains data populated by one application and the other is populated by a different application. Each application is trying to produce the same result, but using two different methods of implementation. Proposed Idea: What I want to do is run both applications, which will produce roughly 35000 rows containing 10 columns each, so all in all 70000 rows of data. I then want to compare each row of data, COLUMN BY COLUMN, to check whether the values are the same or not. Current Thoughts: Since there is so much data to compare, I feel that the best way to do this would be to write an application, preferably in C# (but if necessary, T-SQL), to compare each row of data column by column and write out any failed comparisons to a text log file. Question: Could anybody suggest an efficient way to perform column-by-column row comparison for 70000 rows' worth of data? I'm struggling for ideas on how to tackle this problem. Extra Detail: The two applications are both written in C# .NET 3.5. The database is running on SQL Server 2005. Help greatly appreciated.
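
    One hedged sketch of the C# route: read both tables ordered by the same key and walk them with data readers, comparing field by field and logging any mismatch. The connection string, table names, and Id ordering column are placeholders, not from the question; 70000 rows is small enough for a single pass like this.

        using System;
        using System.Data.SqlClient;
        using System.IO;

        class RowComparer
        {
            static void Main()
            {
                const string connString = "Data Source=.;Initial Catalog=MyDb;Integrated Security=True";  // placeholder
                const string query = "SELECT * FROM {0} ORDER BY Id";  // order both sides by the same key

                using (var connA = new SqlConnection(connString))
                using (var connB = new SqlConnection(connString))
                using (var log = new StreamWriter("mismatches.log"))
                {
                    connA.Open();
                    connB.Open();
                    using (var readerA = new SqlCommand(string.Format(query, "TableA"), connA).ExecuteReader())
                    using (var readerB = new SqlCommand(string.Format(query, "TableB"), connB).ExecuteReader())
                    {
                        int row = 0;
                        while (readerA.Read() && readerB.Read())
                        {
                            row++;
                            for (int col = 0; col < readerA.FieldCount; col++)
                            {
                                object a = readerA.GetValue(col);
                                object b = readerB.GetValue(col);
                                if (!Equals(a, b))
                                    log.WriteLine($"Row {row}, column {readerA.GetName(col)}: '{a}' vs '{b}'");
                            }
                        }
                    }
                }
            }
        }

    A set-based alternative in T-SQL is SELECT * FROM TableA EXCEPT SELECT * FROM TableB (and the reverse), which reports whole rows that differ, though not which individual column differs.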

    Read the article

  • MySQL: Combining multiple where conditions

    - by Karl
    I'm working on a menu system that takes a url and then queries the db to build the menu. My menu table is: +---------+--------------+------+-----+---------+----------------+ | Field | Type | Null | Key | Default | Extra | +---------+--------------+------+-----+---------+----------------+ | id | int(11) | NO | PRI | NULL | auto_increment | | node_id | int(11) | YES | | NULL | | | parent | int(11) | YES | | NULL | | | weight | int(11) | YES | | NULL | | | title | varchar(250) | YES | | NULL | | | alias | varchar(250) | YES | | NULL | | | exclude | int(11) | YES | | NULL | | +---------+--------------+------+-----+---------+----------------+ The relevant columns for my question are alias, parent and node_id. So for a url like: http://example.com/folder1/folder2/filename Alias would potentially = "filename", "folder1", "folder2" Parent = the node_id of the parent folder. What I know is how to split the url up into an array and check the alias for a match to each part. What I don't know is how to have it then filter by parent whose alias matches "folder2" and whose parent alias matches "folder1". I'm imagining a query like so: select * from menu where alias='filename' and where parent = node_id where alias='folder2' and parent = node_id where alias='folder1' Except I know that the above is wrong. I'm hoping this can be done in a single query. Thanks for any help in advance!

    Read the article

  • MySQL left outer join is slow

    - by Ryan Doherty
    Hi, hoping to get some help with this query, I've worked at it for a while now and can't get it any faster: SELECT date, count(id) as 'visits' FROM dates LEFT OUTER JOIN visits ON (dates.date = DATE(visits.start) and account_id = 40 ) WHERE date >= '2010-12-13' AND date <= '2011-1-13' GROUP BY date ORDER BY date ASC That query takes about 8 seconds to run. I've added indexes on dates.date, visits.start, visits.account_id and visits.start+visits.account_id and can't get it to run any faster. Table structure (only showing relevant columns in visit table): create table visits ( `id` int(11) NOT NULL AUTO_INCREMENT, `account_id` int(11) NOT NULL, `start` DATETIME NOT NULL, `end` DATETIME NULL, PRIMARY KEY (`id`) ) ENGINE=MyISAM DEFAULT CHARSET=utf8; CREATE TABLE `dates` ( `date` date NOT NULL, PRIMARY KEY (`date`) ) ENGINE=MyISAM DEFAULT CHARSET=latin1; dates table contains all days from 2010-1-1 to 2020-1-1 (~3k rows). visits table contains about 400k rows dating from 2010-6-1 to yesterday. I'm using the date table so the join will return 0 visits for days there were no visits. Results I want for reference: +------------+--------+ | date | visits | +------------+--------+ | 2010-12-13 | 301 | | 2010-12-14 | 356 | | 2010-12-15 | 423 | | 2010-12-16 | 332 | | 2010-12-17 | 346 | | 2010-12-18 | 226 | | 2010-12-19 | 213 | | 2010-12-20 | 311 | | 2010-12-21 | 273 | | 2010-12-22 | 286 | | 2010-12-23 | 241 | | 2010-12-24 | 149 | | 2010-12-25 | 102 | | 2010-12-26 | 174 | | 2010-12-27 | 258 | | 2010-12-28 | 348 | | 2010-12-29 | 392 | | 2010-12-30 | 395 | | 2010-12-31 | 278 | | 2011-01-01 | 241 | | 2011-01-02 | 295 | | 2011-01-03 | 369 | | 2011-01-04 | 438 | | 2011-01-05 | 393 | | 2011-01-06 | 368 | | 2011-01-07 | 435 | | 2011-01-08 | 313 | | 2011-01-09 | 250 | | 2011-01-10 | 345 | | 2011-01-11 | 387 | | 2011-01-12 | 0 | | 2011-01-13 | 0 | +------------+--------+ Thanks in advance for any help!

    Read the article

  • Identity alternative for SQL Azure Federation : are Azure Queues or Service Bus Queues a good choice?

    - by JYL
    As many developers do, I'm looking for a way to integrate my existing app with SQL Azure Federations, and replacing the Identity columns (the primary keys of my tables) is a big problem. For many reasons, I do NOT want to use GUIDs for my primary keys (please don't open the debate about GUIDs or not; it's not my question: I just don't want a GUID, period). So I need to build a key provider to replace the "identity" feature of a standard SQL database. I'm using Entity Framework, so I can easily find one place to set the Id value just before the insert (by overriding the SaveChanges method of my ObjectContext class). I just need to find a "not too complicated" implementation for getting the current Id, which is "farm-ready". I've read this SO post: "ID Generation for Sharded Database (Azure Federated Database)" and "Synchronizing Multiple Nodes in Windows Azure from MSDN Magazine", but this solution sounds a bit complicated for me. I'm thinking about creating (automatically) one Azure queue for each SQL table, which contains a pre-loaded list of consecutive integers. When I want an Id value, I just have to get a message from the queue (which becomes invisible and is deleted on the way), which gives me the currently available Id. About the choice between "Windows Azure Queues" and "Windows Azure Service Bus Queues", I prefer "Windows Azure Queues", due to the "high" latency of Service Bus Queues. I don't think that the lack of "ordering guarantee" of Azure Queues is a problem. What do you think about that idea of using Azure Queues to provide Id values? Do you see any argument to give up that idea? Do you have a better idea, or even a good practice, to provide integer IDs in SQL Azure Federation databases? Thanks.
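
    A rough sketch of the consuming side of that idea, using the classic Windows Azure storage client library; the connection string, queue naming, and the assumption that each message body holds one pre-loaded integer are illustrative, and a real implementation still has to detect an empty queue and refill it safely.

        using System;
        using Microsoft.WindowsAzure;
        using Microsoft.WindowsAzure.StorageClient;

        static class IdProvider
        {
            // Assumes one queue per table, pre-filled with messages whose bodies are consecutive integers.
            public static int NextId(string tableName)
            {
                var account = CloudStorageAccount.Parse("DefaultEndpointsProtocol=https;AccountName=...;AccountKey=...");  // placeholder
                CloudQueue queue = account.CreateCloudQueueClient().GetQueueReference("ids-" + tableName.ToLowerInvariant());

                CloudQueueMessage msg = queue.GetMessage();  // the message becomes invisible to other consumers
                if (msg == null)
                    throw new InvalidOperationException("Id queue is empty and needs to be refilled.");

                int id = int.Parse(msg.AsString);
                queue.DeleteMessage(msg);  // consume it permanently
                return id;
            }
        }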

    Read the article

  • Entity Framework one-to-one relationship mapping flattened in code

    - by Josh Close
    I have a table structure like so. Address: AddressId int not null primary key identity ...more columns AddressContinental: AddressId int not null primary key identity foreign key to pk of Address County State AddressInternational: AddressId int not null primary key identity foreign key to pk of Address ProvinceRegion I don't have control over schema, this is just the way it is. Now, what I want to do is have a single Address object. public class Address { public int AddressId { get; set; } public County County { get; set; } public State State { get; set } public ProvinceRegion { get; set; } } I want to have EF pull it out of the database as a single entity. When saving, I want to save the single entity and have EF know to split it into the three tables. How would I map this in EF 4.1 Code First? I've been searching around and haven't found anything that meets my case yet. UPDATE An address record will have a record in Address and one in either AddressContinental or AddressInternational, but not both.
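
    For the literal "one class, three tables" part, EF 4.1 Code First supports entity splitting, sketched below with the column lists abbreviated and County, State, and ProvinceRegion treated as plain scalar properties for illustration. One caveat: entity splitting expects a row in every mapped table for each entity, so given the update (a row in Address plus only one of the other two tables) this will not fit as-is, and a TPT-style inheritance mapping with two derived address types may be the closer match.

        using System.Data.Entity;

        public class AddressContext : DbContext
        {
            public DbSet<Address> Addresses { get; set; }

            protected override void OnModelCreating(DbModelBuilder modelBuilder)
            {
                // Entity splitting: one Address entity spread over three tables that share the AddressId key.
                modelBuilder.Entity<Address>()
                    .Map(m => { m.Properties(a => new { a.AddressId /*, other Address columns */ }); m.ToTable("Address"); })
                    .Map(m => { m.Properties(a => new { a.AddressId, a.County, a.State }); m.ToTable("AddressContinental"); })
                    .Map(m => { m.Properties(a => new { a.AddressId, a.ProvinceRegion }); m.ToTable("AddressInternational"); });
            }
        }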

    Read the article

  • C# Convert string to nullable type (int, double, etc...)

    - by Nathan Koop
    I am attempting to do some data conversion. Unfortunately, much of the data is in strings where it should be ints or doubles, etc... So what I've got is something like: double? amount = Convert.ToDouble(strAmount); The problem with this approach is when strAmount is empty: in that case I want amount to be null, so when I add it into the database the column will be null. So I ended up writing this: double? amount = null; if(strAmount.Trim().Length>0) { amount = Convert.ToDouble(strAmount); } Now this works fine, but I now have five lines of code instead of one. This makes things a little more difficult to read, especially when I have a large number of columns to convert. I thought I'd use an extension to the string class and generics to pass in the type, because it could be a double, an int, or a long. So I tried this: public static class GenericExtension { public static Nullable<T> ConvertToNullable<T>(this string s, T type) where T: struct { if (s.Trim().Length > 0) { return (Nullable<T>)s; } return null; } } But I get the error: Cannot convert type 'string' to 'T?' Is there a way around this? I am not very familiar with creating methods using generics.
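
    The cast fails because there is no conversion defined from string to an arbitrary T?; routing the parse through Convert.ChangeType is one common way around it. A hedged sketch (the extension name is illustrative, and the unused type parameter argument is dropped):

        using System;
        using System.Globalization;

        public static class StringExtensions
        {
            // "123".ToNullable<int>() -> 123;  "".ToNullable<int>() -> null
            public static T? ToNullable<T>(this string s) where T : struct
            {
                if (s == null || s.Trim().Length == 0)
                    return null;
                return (T)Convert.ChangeType(s, typeof(T), CultureInfo.InvariantCulture);
            }
        }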

    Read the article

  • working with a csv with odd encapsulation // php

    - by Patrick
    I have a CSV file that I'm working with, and all the fields are comma separated. But some of the fields themselves contain commas. In the raw CSV file, the fields that contain commas are encapsulated with quotes, as seen here: "Doctor Such and Such, Medical Center","555 Scruff McGruff, Suite 103, Chicago IL 60652",(555) 555-5555,,,,something else The code I'm using is below: <?PHP $file_handle = fopen("file.csv", "r"); $i=0; while (!feof($file_handle) ) { $line = fgetcsv($file_handle, 1024); $c=0; foreach($line AS $key=>$value){ if($i != 0){ if($c == 0){ echo "[ROW $i][COL $c] - $value"; //First field in row, show row # }else{ echo "[COL $c] - $value"; // Remaining fields in row } } $c++; } echo "<br>"; // Line Break to next line $i++; } fclose($file_handle); ?> The problem is I'm getting the fields that contain commas split into two fields, which messes up the number of columns I'm supposed to have. Is there any way I could search for commas within quotes and convert them, or another way to deal with this?

    Read the article

  • iPhone - Using sql database - insert statement failing

    - by Satyam svv
    Hi, I'm using sqlite database in my iphone app. I've a table which has 3 integer columns. I'm using following code to write to that database table. -(BOOL)insertTestResult { NSArray* paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES); NSString* documentsDirectory = [paths objectAtIndex:0]; NSString* dataBasePath = [documentsDirectory stringByAppendingPathComponent:@"test21.sqlite3"]; BOOL success = NO; sqlite3* database = 0; if(sqlite3_open([dataBasePath UTF8String], &database) == SQLITE_OK) { BOOL res = (insertResultStatement == nil) ? createStatement(insertResult, &insertResultStatement, database) : YES; if(res) { int i = 1; sqlite3_bind_int(insertResultStatement, 0, i); sqlite3_bind_int(insertResultStatement, 1, i); sqlite3_bind_int(insertResultStatement, 2, i); int err = sqlite3_step(insertResultStatement); if(SQLITE_ERROR == err) { NSAssert1(0, @"Error while inserting Result. '%s'", sqlite3_errmsg(database)); success = NO; } else { success = YES; } sqlite3_finalize(insertResultStatement); insertResultStatement = nil; } } sqlite3_close(database); return success;} The command sqlite3_step is always giving err as 19. I'm not able to understand where's the issue. Tables are created using following queries: CREATE TABLE [Patient] (PID integer NOT NULL PRIMARY KEY AUTOINCREMENT UNIQUE,PFirstName text NOT NULL,PLastName text,PSex text NOT NULL,PDOB text NOT NULL,PEducation text NOT NULL,PHandedness text,PType text) CREATE TABLE PatientResult(PID INTEGER,PFreeScore INTEGER NOT NULL,PForcedScore INTEGER NOT NULL,FOREIGN KEY (PID) REFERENCES Patient(PID)) I've only one entry in Patient table with PID = 1 BOOL createStatement(const char* query, sqlite3_stmt** stmt, sqlite3* database){ BOOL res = (sqlite3_prepare_v2(database, query, -1, stmt, NULL) == SQLITE_OK); if(!res) NSLog( @"Error while creating %s => '%s'", query, sqlite3_errmsg(database)); return res;}

    Read the article

  • can this problem be solved with a single SQL query?

    - by PierrOz
    I have the two following tables (with some sample datas) LOGS: ID | SETID | DATE ======================== 1 | 1 | 2010-02-25 2 | 2 | 2010-02-25 3 | 1 | 2010-02-26 4 | 2 | 2010-02-26 5 | 1 | 2010-02-27 6 | 2 | 2010-02-27 7 | 1 | 2010-02-28 8 | 2 | 2010-02-28 9 | 1 | 2010-03-01 STATS: ID | OBJECTID | FREQUENCY | STARTID | ENDID ============================================= 1 | 1 | 0.5 | 1 | 5 2 | 2 | 0.6 | 1 | 5 3 | 3 | 0.02 | 1 | 5 4 | 4 | 0.6 | 2 | 6 5 | 5 | 0.6 | 2 | 6 6 | 6 | 0.4 | 2 | 6 7 | 1 | 0.35 | 3 | 7 8 | 2 | 0.6 | 3 | 7 9 | 3 | 0.03 | 3 | 7 10 | 4 | 0.6 | 4 | 8 11 | 5 | 0.6 | 4 | 8 7 | 1 | 0.45 | 5 | 9 8 | 2 | 0.6 | 5 | 9 9 | 3 | 0.02 | 5 | 9 Every day new logs are analyzed on different sets of objects and stored in table LOGS. Among other processes, some statistics are computed on the objects contained into these sets and the result are stored in table STATS. These statistic are computed through several logs (identified by the STARTID and ENDID columns). So, what could be the SQL query that would give me the latest computed stats for all the objects with the corresponding log dates. In the given example, the result rows would be: OBJECTID | SETID | FREQUENCY | STARTDATE | ENDDATE ====================================================== 1 | 1 | 0.45 | 2010-02-27 | 2010-03-01 2 | 1 | 0.6 | 2010-02-27 | 2010-03-01 3 | 1 | 0.02 | 2010-02-27 | 2010-03-01 4 | 2 | 0.6 | 2010-02-26 | 2010-02-28 5 | 2 | 0.6 | 2010-02-26 | 2010-02-28 So, the most recent stats for set 1 are computed with logs from feb 27 to march 1 whereas stats for set 2 are computed from feb 26 to feb 28. object 6 is not in the results rows as there is no stat on it within the last period of time. Last thing, I use MySQL. Any Idea ?

    Read the article

  • NHibernate class referencing discriminator based subclass

    - by Rich
    I have a generic class Lookup which contains code/value properties. The table PK is category/code. There are subclasses for each category of lookup, and I've set the discriminator column in the base class and its value in the subclass. See example below (only key pieces shown): public class Lookup { public string Category; public string Code; public string Description; } public class LookupClassMap { CompositeId() .KeyProperty(x = x.Category, "CATEGORY_ID") .KeyProperty(x = x.Code, "CODE_ID"); DiscriminateSubclassesBasedOnColumn("CATEGORY_ID"); } public class MaritalStatus: Lookup {} public class MartialStatusClassMap: SubclassMap { DiscriminatorValue(13); } This all works. Here's the problem. When a class has a property of type MaritalStatus, I create a reference based on the contained code ID column ("MARITAL_STATUS_CODE_ID"). NHibernate doesn't like it because I didn't map both primary key columns (Category ID & Code ID). But with the Reference being of type MaritalStatus, NHibernate should already know what the value of the category ID is going to be, because of the discriminator value. What am I missing?

    Read the article

  • Rollback doesn't work in MySQLdb

    - by Anton Barycheuski
    I have next code ... db = MySQLdb.connect(host=host, user=user, passwd=passwd, db=db, charset='utf8', use_unicode=True) db.autocommit(False) cursor = db.cursor() ... for col in ws.columns[1:]: data = (col[NUM_ROW_GENERATION].value, 1, type_topliv_dict[col[NUM_ROW_FUEL].value]) fullgeneration_id = data[0] type_topliv = data[2] if data in completions_set: compl_id = completions_dict[data] else: ... sql = u"INSERT INTO completions (type, mark, model, car_id, type_topliv, fullgeneration_id, mark_id, model_id, production_period, year_from, year_to, production_period_url) VALUES (1, '%s', '%s', 0, %s, %s, %s, %s, '%s', '%s', '%s', '%s')" % (marks_dict[mark_id], models_dict[model_id], type_topliv, fullgeneration_id, mark_id, model_id, production_period, year_from, year_to, production_period.replace(' ', '_').replace(u'?.?.', 'nv') ) inserted_completion += cursor.execute(sql) cursor.execute("SELECT fullgeneration_id, type, type_topliv, id FROM completions where fullgeneration_id = %s AND type_topliv = %s" % (fullgeneration_id, type_topliv)) row = cursor.fetchone() compl_id = row[3] if is_first_car: deleted_compl_rus = cursor.execute("delete from compl_rus where compl_id = %s" % compl_id) for param, row_id in params: sql = u"INSERT INTO compl_rus (compl_id, modification, groupparam, param, paramvalue) VALUES (%s, '%s', '%s', '%s', %s)" % (compl_id, col[NUM_ROW_MODIFICATION].value, param[0], param[1], col[row_id].value) inserted_compl_rus += cursor.execute(sql) is_first_car = False db.rollback() print '\nSTATISTICS:' print 'Inserted completion:', inserted_completion print 'Inserted compl_rus:', inserted_compl_rus print 'Deleted compl_rus:', deleted_compl_rus ans = raw_input('Commit changes? (y/n)') db.close() I has manually deleted records from table and than run script two times. See https://dpaste.de/MwMa . I think, that rollback in my code doesn't work. Why?

    Read the article

  • Shaping EF LINQ Query Results Using Multi-Table Includes

    - by sisdog
    I have a simple LINQ EF query below using the method syntax. I'm using my Include statement to join four tables: Event and Doc are the two main tables, EventDoc is a many-to-many link table, and DocUsage is a lookup table. My challenge is that I'd like to shape my results by only selecting specific columns from each of the four tables. But the compiler is giving me the following error: 'System.Data.Objects.DataClasses.EntityCollection does not contain a definition for "Doc' and no extension method 'Doc' accepting a first argument of type 'System.Data.Objects.DataClasses.EntityCollection' could be found. I'm sure this is something easy but I'm not figuring it out. I haven't been able to find an example of someone using a multi-table Include while also shaping the projection. Thx, Mark var qry= context.Event .Include("EventDoc.Doc.DocUsage") .Select(n => new { n.EventDate, n.EventDoc.Doc.Filename, //<=COMPILER ERROR HERE n.EventDoc.Doc.DocUsage.Usage }) .ToList(); EventDoc ed; Doc d = ed.Doc; //<=NO COMPILER ERROR SO I KNOW MY MODEL'S CORRECT DocUsage du = d.DocUsage;
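
    The error comes from n.EventDoc being an EntityCollection<EventDoc> (one event has many link rows), so .Doc cannot be dotted through it directly; flattening the collection with SelectMany is one way to shape the projection. A hedged sketch, assuming the navigation property names shown in the question; when projecting into an anonymous type the Include call is not needed, since the query loads exactly what it selects:

        var qry = context.Event
            .SelectMany(e => e.EventDoc, (e, ed) => new
            {
                e.EventDate,
                ed.Doc.Filename,
                ed.Doc.DocUsage.Usage
            })
            .ToList();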

    Read the article

  • Check the code and point out the mistake

    - by Vibha
    here is the code: Ext.onReady(function(){ alert("inside onReady"); Ext.QuickTips.init(); var employee = Ext.data.Record.create([ {name:'firstname'}, {name:'lastname'}]); var myReader = new Ext.data.JsonReader({ root:"EmpInfo", },employee); var store = new Ext.data.JsonStore({ id:'ID' ,root:'EmpInfo' ,totalProperty:'totalCount' ,url:'test.php' ,autoLoad:true ,fields:[ {name:'firstname', type:'string'} ,{name:'lastname', type:'string'} ] }); var myPanel = new Ext.grid.GridPanel({ store: store ,columns:[{ dataIndex:'firstname' ,header:'First Name' ,width:139 },{ dataIndex:'lastname' ,header:'Middle Name' ,width:139 } ] }); var myWindow = new Ext.Window({ width:300, height:300, layout:'fit', closable:false, resizable:false, items:[myPanel] }); myWindow.show(); }); And php code is: true, "data" = array( "firstname" = "ABC" , "lastname" = "MNO") ); $_SESSION["err"] = isset($_SESSION["err"]) ? !$_SESSION["err"] : true; header("Content-Type: application/json"); echo json_encode($o); ? I want to print the values ABC and MNO in the grid panel. i'm using extjs 2.3. please help me out. Thanks

    Read the article

  • Create unique identifier for different row-groups

    - by Max van der Heijden
    I want to number certain combinations of row in a dataframe (which is ordered on ID and on Time) tc <- textConnection(' id time end_yn number abc 10 0 1 abc 11 0 2 abc 12 1 3 abc 13 0 1 def 10 0 1 def 15 1 2 def 16 0 1 def 17 0 2 def 18 1 3 ') test <- read.table(tc, header=TRUE) The goal is to create a new column ("journey_nr") that give a unique number to each row based on the journey it belongs to. Journeys are defined as a sequence of rows per id up until to end_yn == 1, also if end_ynnever becomes 1, the journey should also be numbered (see the expected outcome example). It is only possible to have end_yn == 0 journeys at the end of a collection of rows for an ID (as shown at row 4 for id 3). So either no end_yn == 1 has occured for that ID or that happened before the end_yn == 0-journey (see id == abc in the example). I know how to number using the data.table package, but I do not know which columns to combine in order to get the expected outcome. I've searched the data.table-tag on SO, but could not find a similar problem. Expected outcome: id time end_yn number journey abc 10 0 1 1 abc 11 0 2 1 abc 12 1 3 1 abc 13 0 1 2 def 10 0 1 3 def 15 1 2 3 def 16 0 1 4 def 17 0 2 4 def 18 1 3 4

    Read the article

  • How can I manage a FIFO queue in a database with SQL?

    - by Jonas
    I have two tables in my database, one for In and one for Out. They have two columns, Quantity and Price. How can I write a SQL-query that selects the correct price? In example: If I have 3 items in for 75 and then 3 items in for 80. Then I have two out for 75, and the third out should be for 75 (X) and the fourth out should be for 80 (Y). How can I write the price query for X and Y? They should use the price from the third and forth row. In example, is there any way to SELECT the third row in the In-table? I can not use auto_increment as identifier for i.e. "third" row, because the tables will contain post for other items too. The rows will not be deleted, they will be saved for accountability reasons. SELECT Price FROM In WHERE ...? NEW database design: +----+ | In | +----+------+-------+ | Supply_ID | Price | +-----------+-------+ | 1 | 75 | | 1 | 75 | | 1 | 75 | | 2 | 80 | | 2 | 80 | +-----------+-------+ +-----+ | Out | +-----+-------+-------+ | Delivery_ID | Price | +-------------+-------+ | 1 | 75 | | 1 | 75 | | 2 | X | <- ? | 3 | Y | <- ? +-------------+-------+ OLD database design: +----+ | In | +----+------+----------+-------+ | Supply_ID | Quantity | Price | +-----------+----------+-------+ | 1 | 3 | 75 | | 2 | 3 | 80 | +-----------+----------+-------+ +-----+ | Out | +-----+-------+----------+-------+ | Delivery_ID | Quantity | Price | +-------------+----------+-------+ | 1 | 2 | 75 | | 2 | 1 | X | <- ? | 3 | 1 | Y | <- ? +-------------+----------+-------+

    Read the article

  • Telerik ASP.NET MVC2 Grid Delete Function with compound key.

    - by Dani
    I have a grid with a compound key: OrderId, ItemID. When I update the grid - the public ActionResult UpdateItemGridAjax(int OrderID, string ItemID) Gets both values from the grid. When I delete a row I get only the first one: public ActionResult DeleteItemGridAjax(int OrderID, string ItemID) Why is it happens and how can I get the ItemId value of the deleted row ? Grid Definition: <%= Html.Telerik().Grid<ItemsInOrderPOCO>() .Name("ItemsInOrderGrid") .DataKeys(dataKeys => { dataKeys.Add(e => e.OrderID); dataKeys.Add(e => e.ItemID); }) .ToolBar(commands => commands.Insert()) .DataBinding(dataBinding => { dataBinding.Ajax() //Ajax binding .Select("SelectItemGridAjax", "Orders", new { OrderID = Model.myOrder.OrderID }) .Insert("InsertItemGridAjax", "Orders", new { OrderID = Model.myOrder.OrderID }) .Update("UpdateItemGridAjax", "Orders") .Delete("DeleteItemGridAjax", "Orders"); }) .Columns(c => { c.Bound(o => o.ItemID); c.Bound(o => o.OrderID).Column.Visible = false; c.Bound(o => o.ItemDescription); c.Bound(o => o.NumOfItems); c.Bound(o => o.CostOfItem); c.Bound(o => o.TotalCost); c.Bound(o => o.SupplyDate); c.Command(commands => { commands.Edit(); commands.Delete(); }).Width(180).Title("Upadte"); })

    Read the article
