Search Results

Search found 3700 results on 148 pages for 'strongly typed dataset'.


  • Keystroke timing in Openmoko or other smartphones

    - by Adi
    Dear all, I am doing a project in which I am working on security issues related to smartphones. I want to develop an authentication scheme based on biometrics: every human being has a unique key-hold time, digraph time, and error rate. Key-hold time is the time between pressing and releasing a key. Digraph time is the time between releasing one key and pressing the next one. Error rate is the number of times backspace is pressed. I got these metrics from the paper "Keystroke-based User Identification on Smart Phones" by Saira Zahid, Muhammad Shahzad, Syed Ali Khayam, and Muddassar Farooq. I was planning to collect the data sets to test my algorithm from an Openmoko phone, but the phone is misbehaving and I am having trouble generating these timing data sets. If anyone can help me or point me to a good source of data sets for the three metrics I defined, it would be a great help. Thanks, Aditya

    Read the article
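
    A minimal sketch (not taken from the cited paper) of computing the three metrics from a list of (key, press_time, release_time) tuples, which is roughly what a key-event logger on the phone would produce:

      def keystroke_metrics(events):
          # events: list of (key, press_time, release_time), times in seconds
          hold_times = [release - press for _, press, release in events]
          # digraph time: next key's press minus previous key's release
          digraph_times = [events[i + 1][1] - events[i][2] for i in range(len(events) - 1)]
          error_rate = sum(1 for key, _, _ in events if key == 'BACKSPACE')
          return hold_times, digraph_times, error_rate

      # example: "hi" typed with one corrected keystroke
      sample = [('h', 0.00, 0.08), ('j', 0.30, 0.37), ('BACKSPACE', 0.60, 0.66), ('i', 0.90, 0.97)]
      print(keystroke_metrics(sample))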

  • DataAdapter Select string from base table schema?

    - by MattSlay
    When I built my .xsd, I had to choose the columns for each table, and it made a schema for the tables, right? So how can I get that SELECT string to use as a base select command for new instances of DataAdapters, and then just append a WHERE and ORDER BY clause to it as needed? That would keep me from having to keep each DataAdapter's field list (for the same table) in sync with the schema of that table in the .xsd file. Isn't it common to have several DataAdapters that work on a certain table schema, but with different params in the WHERE and ORDER BY clauses? Surely one does not have to maintain (or even redundantly build) the field-list part of the SELECT strings for half a dozen DataAdapters that all work off of the same table schema. I'm envisioning something like this pseudo code:

      BaseSelectString = MyTypedDataSet.JobsTable.GetSelectStringFromSchema() // Is there such a method or technique?
      WhereClause = " Where SomeField = @Param1 and SomeOtherField = @Param2"
      OrderByClause = " Order By Field1, Field2"
      SelectString = BaseSelectString + WhereClause + OrderByClause
      OleDbDataAdapter adapter = new OleDbDataAdapter(SelectString, MyConn)

    Read the article
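
    There is no generated GetSelectStringFromSchema() method as far as I know, but a minimal sketch of getting the same effect follows: the typed DataSet's tables are ordinary DataTables, so the base SELECT can be assembled from their column names at runtime and a WHERE/ORDER BY appended per adapter.

      using System.Data;
      using System.Linq;

      static class SelectBuilder
      {
          public static string BaseSelect(DataTable table)
          {
              string columns = string.Join(", ",
                  table.Columns.Cast<DataColumn>().Select(c => c.ColumnName).ToArray());
              return "SELECT " + columns + " FROM " + table.TableName;
          }
      }

      // usage, following the pseudo code above:
      // string select = SelectBuilder.BaseSelect(myTypedDataSet.JobsTable)
      //               + " WHERE SomeField = @Param1 AND SomeOtherField = @Param2"
      //               + " ORDER BY Field1, Field2";
      // OleDbDataAdapter adapter = new OleDbDataAdapter(select, myConn);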

  • problem with null column

    - by Iulian
    One column in my database (of type double) has some null values. I am executing a stored procedure to get the data in my app:

      wipDBTableAdapters.XLSFacturiTableAdapter TAFacturi = new wipDBTableAdapters.XLSFacturiTableAdapter();
      var dtfacturi = TAFacturi.GetData(CodProiect);

    Then I try to do something like this:

      if (dtfacturi[i].CANTITATE == null)
      {
          // do something
      }

    This gives a warning: "The result of the expression is always 'false' since a value of type 'double' is never equal to 'null' of type 'double?'". However, when I run my code I get the following exception: StrongTypingException: "The value for column 'CANTITATE' in table 'XLSFacturi' is DBNull." How am I supposed to resolve this?

    Read the article
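
    A minimal sketch of the usual typed-DataSet answer, assuming the designer generated the standard Is<Column>Null() accessor for the nullable column (variable names follow the question):

      for (int i = 0; i < dtfacturi.Count; i++)
      {
          if (dtfacturi[i].IsCANTITATENull())
          {
              // handle the missing value
          }
          else
          {
              double cantitate = dtfacturi[i].CANTITATE;   // safe to read now
          }
      }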

  • SQL Server 2005 Reporting Services -- how to use multiple datasets in a report

    - by larryq
    Hi everyone, I'm new to SQL Server Reporting Services and am trying to decipher an existing report. It's nothing too bad, but I notice it has two report datasets defined (they are generated via separate stored procedures). I'm trying to figure out where and how the report datasets are linked together so the Fields collection has both sets of columns available and the report has a single rowset to traverse. Is there a section in the report layout where a joining of datasets is defined? I'm using Visual Studio 2005 to design and preview the report, fwiw. Thanks for your help!

    Read the article

  • How to prune a data set?

    - by sakura90
    The MovieLens data set provides a table with columns: userid | movieid | tag | timestamp. I have trouble reproducing the way they pruned the MovieLens data set used in: http://www.cse.ust.hk/~yzhen/papers/tagicofi-recsys09-zhen.pdf. In section 4.1 (Data Set) of the above paper, it writes: "For the tagging information, we only keep those tags which are added on at least 3 distinct movies. As for the users, we only keep those users who used at least 3 distinct tags in their tagging history. For movies, we only keep those movies that are annotated by at least 3 distinct tags." I tried to query the database:

      select TMP.userid, count(*) as tagnum
      from (select distinct T.userid as userid, T.tag as tag from tags T) AS TMP
      group by TMP.userid
      having tagnum >= 3;

    I got a list of 1760 users who used at least 3 distinct tags. However, some of the tags are not added on at least 3 distinct movies. Any help is appreciated.

    Read the article
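
    A sketch of the missing step (not taken from the paper): restrict the tag set to tags that appear on at least 3 distinct movies before counting a user's distinct tags. Note that the three criteria interact, so the filters may need to be reapplied until the counts stabilize.

      SELECT T.userid, COUNT(DISTINCT T.tag) AS tagnum
      FROM tags T
      WHERE T.tag IN (
          SELECT tag
          FROM tags
          GROUP BY tag
          HAVING COUNT(DISTINCT movieid) >= 3
      )
      GROUP BY T.userid
      HAVING COUNT(DISTINCT T.tag) >= 3;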

  • Using set in Python inside a loop

    - by user210481
    I have the following list in Python: [[1, 2], [3, 4], [4, 6], [2, 7], [3, 9]]. I want to group it into [[1, 2, 7], [3, 4, 6, 9]]. My code to do this looks like this:

      l = [[1, 2], [3, 4], [4, 6], [2, 7], [3, 9]]
      lf = []
      for li in l:
          for lfi in lf:
              if lfi.intersection(set(li)):
                  lfi = lfi.union(set(li))
                  break
          else:
              lf.append(set(li))

    lf is my final list. I loop over l and lf, and when I find an intersection between an element from l and one from lf, I would like to merge them (union). But I can't figure out why this is not working. The first two elements of the list l are being inserted with the append command, but the union is not working. My final list lf looks like [set([1, 2]), set([3, 4])]. It seems to be something pretty basic, but I'm not familiar with sets. I appreciate any help. Thanks

    Read the article
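
    A sketch of the likely fix: lfi = lfi.union(...) rebinds the loop variable to a new set and leaves the set stored in lf untouched, whereas mutating the existing set (update, or |=) changes it in place. For inputs where two groups only become connected later a union-find pass would be needed, but for this data the in-place update is enough:

      l = [[1, 2], [3, 4], [4, 6], [2, 7], [3, 9]]
      lf = []
      for li in l:
          for lfi in lf:
              if lfi.intersection(li):
                  lfi.update(li)        # mutate the set inside lf instead of rebinding
                  break
          else:
              lf.append(set(li))

      print([sorted(s) for s in lf])    # [[1, 2, 7], [3, 4, 6, 9]]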

  • Save memory in Python. How to iterate over the lines and save them efficiently with a 2-million-line file

    - by skyl
    I have a tab-separated data file with a little over 2 million lines and 19 columns. You can find it (US.zip) at: http://download.geonames.org/export/dump/. I started by running the following, but with for l in f.readlines(). I understand that just iterating over the file is supposed to be more efficient, so I'm posting that version below. Still, with this small optimization, I'm using 10% of my memory on the process and have only done about 3% of the records. It looks like, at this pace, it will run out of memory like it did before. Also, the function I have is very slow. Is there anything obvious I can do to speed it up? Would it help to del the objects with each pass of the for loop?

      def run():
          from geonames.models import POI
          f = file('data/US.txt')
          for l in f:
              li = l.split('\t')
              try:
                  p = POI()
                  p.geonameid = li[0]
                  p.name = li[1]
                  p.asciiname = li[2]
                  p.alternatenames = li[3]
                  p.point = "POINT(%s %s)" % (li[5], li[4])
                  p.feature_class = li[6]
                  p.feature_code = li[7]
                  p.country_code = li[8]
                  p.ccs2 = li[9]
                  p.admin1_code = li[10]
                  p.admin2_code = li[11]
                  p.admin3_code = li[12]
                  p.admin4_code = li[13]
                  p.population = li[14]
                  p.elevation = li[15]
                  p.gtopo30 = li[16]
                  p.timezone = li[17]
                  p.modification_date = li[18]
                  p.save()
              except IndexError:
                  pass

      if __name__ == "__main__":
          run()

    Read the article
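
    One likely culprit, sketched below under the assumption that this is a Django model and DEBUG is on: with DEBUG=True, Django appends every executed query to django.db.connection.queries, so a 2-million-row import grows without bound no matter how the file is read. Clearing that log periodically (or running with DEBUG=False) keeps memory roughly flat:

      from django import db

      def run():
          from geonames.models import POI
          with open('data/US.txt') as f:
              for i, line in enumerate(f):
                  fields = line.rstrip('\n').split('\t')
                  if len(fields) < 19:
                      continue
                  p = POI()
                  p.geonameid, p.name, p.asciiname = fields[0], fields[1], fields[2]
                  # ... assign the remaining columns exactly as in the loop above ...
                  p.save()
                  if i % 10000 == 0:
                      db.reset_queries()   # drop the accumulated debug query log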

  • Enforce strong type checking in C (type strictness for typedefs)

    - by quinmars
    Is there a way to enforce an explicit cast between typedefs of the same underlying type? I have to deal with UTF-8, and sometimes I get confused between the indices for the character count and the byte count. So it would be nice to have some typedefs:

      typedef unsigned int char_idx_t;
      typedef unsigned int byte_idx_t;

    with the addition that you need an explicit cast between them:

      char_idx_t a = 0;
      byte_idx_t b;

      b = a;                // compile warning
      b = (byte_idx_t) a;   // ok

    I know that such a feature doesn't exist in C, but maybe you know a trick or a compiler extension (preferably gcc) that does that. EDIT: I still don't really like Hungarian notation in general, and I couldn't use it for this problem because of project coding conventions, but I have now used it in another similar case where the types are the same and the meanings are very similar. And I have to admit: it helps. I would never go and declare every integer with a leading "i", but as in Joel's example of overlapping types, it can be a lifesaver.

    Read the article
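
    A sketch of one common workaround (not a language feature): wrap each index type in a single-member struct, so accidental assignments between the two become compile errors and the only way across is a named conversion function.

      #include <stdio.h>

      typedef struct { unsigned int v; } char_idx_t;
      typedef struct { unsigned int v; } byte_idx_t;

      /* explicit, named conversions take the place of casts */
      static inline byte_idx_t byte_idx_from_char_idx(char_idx_t c)
      {
          byte_idx_t b = { c.v };
          return b;
      }

      int main(void)
      {
          char_idx_t a = { 0 };
          byte_idx_t b;
          /* b = a;   error: incompatible types */
          b = byte_idx_from_char_idx(a);   /* ok, the intent is explicit */
          printf("%u\n", b.v);
          return 0;
      }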

  • Looking for a solution to persist row sequence in a database table that allows efficient reordering at runtime

    - by Chau Chee Yang
    I have a database table with a field named "Sequence" that indicates the order in which rows are presented to the user in a report or grid. When the rows are retrieved and presented in a grid, there are a few UI gadgets that allow the user to reorder the rows arbitrarily, and the new sequence is persisted to the table when the user commits the changes. Here are some of the UI operations:

      - Move to first
      - Move to last
      - Move up 1 row
      - Move down 1 row
      - Multi-select rows and move up or down
      - Multi-select rows and drag to a new position

    For operations like "Move to first" or "Move to last", many rows are usually involved and the sequence of those rows would need to be updated accordingly, which may not be efficient enough at runtime. It is common practice to use INTEGER as the sequence's data type. Another option is DOUBLE or FLOAT, which could avoid the mass update of row sequences, but we will face problems if we reach the limit of precision.

    Read the article
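
    A sketch of the usual compromise, with made-up names: keep an integer Sequence but number rows with gaps (say, steps of 1024), so moving a single row is one UPDATE using the midpoint of its new neighbours, and a full renumbering is only needed on the rare occasion a gap runs out.

      using System.Collections.Generic;

      class MyRow { public int Sequence; }   // stand-in for the mapped table row

      static class Sequencing
      {
          const int Gap = 1024;

          // New sequence for a row dropped between two neighbours: only that row changes.
          public static int Midpoint(int prevSeq, int nextSeq)
          {
              return prevSeq + (nextSeq - prevSeq) / 2;
          }

          // Fallback when the neighbours are adjacent and there is no room left:
          // renumber everything once with fresh gaps.
          public static void Renumber(IList<MyRow> rowsInOrder)
          {
              for (int i = 0; i < rowsInOrder.Count; i++)
                  rowsInOrder[i].Sequence = (i + 1) * Gap;
          }
      }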

  • DataSets to POCOs - an inquiry regarding DAL architecture

    - by alexsome
    Hello all, I have to develop a fairly large ASP.NET MVC project very quickly, and I would like to get some opinions on my DAL design to make sure nothing will come back to bite me, since the BL is likely to get pretty complex. A bit of background: I am working with an Oracle backend, so the built-in LINQ to SQL is out; I also need to use production-level libraries, so the Oracle EF provider project is out; finally, I am unable to use any GPL or LGPL code (Apache, MS-PL, BSD are okay), so NHibernate/Castle Project are out. I would prefer, if at all possible, to avoid dishing out money, but I am more concerned about implementing the right solution. To summarize, these are my requirements: Oracle backend, rapid development, (L)GPL-free, free. I'm reasonably happy with DataSets, but I would benefit from using POCOs as an intermediary between DataSets and views. Who knows, maybe at some point another DAL solution will show up and I will get the time to switch it out (yeah, right). So, while I could use LINQ to convert my DataSets to IQueryable, I would like to have a generic solution so I don't have to write a custom query for each class. I'm tinkering with reflection right now, but in the meantime I have two questions: Are there any problems I overlooked with this solution? Are there any other approaches you would recommend to convert DataSets to POCOs? Thanks in advance.

    Read the article
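
    A minimal sketch of the reflection approach being considered, assuming the POCO's writable property names match the DataTable column names:

      using System;
      using System.Collections.Generic;
      using System.Data;
      using System.Reflection;

      static class DataTableMapper
      {
          public static List<T> ToList<T>(DataTable table) where T : new()
          {
              PropertyInfo[] props = typeof(T).GetProperties(BindingFlags.Public | BindingFlags.Instance);
              var result = new List<T>();
              foreach (DataRow row in table.Rows)
              {
                  var item = new T();
                  foreach (PropertyInfo prop in props)
                  {
                      if (!prop.CanWrite || !table.Columns.Contains(prop.Name))
                          continue;
                      object value = row[prop.Name];
                      if (value != DBNull.Value)
                          prop.SetValue(item, value, null);
                  }
                  result.Add(item);
              }
              return result;
          }
      }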

  • How to prune a data set by frequency to conform to a paper's description

    - by sakura90
    The MovieLens data set provides a table with columns: userid | movieid | tag | timestamp. I have trouble reproducing the way they pruned the MovieLens data set used in: Tag Informed Collaborative Filtering, by Zhen, Li and Yeung. In section 4.1 (Data Set) of the above paper, it writes: "For the tagging information, we only keep those tags which are added on at least 3 distinct movies. As for the users, we only keep those users who used at least 3 distinct tags in their tagging history. For movies, we only keep those movies that are annotated by at least 3 distinct tags." I tried to query the database:

      select TMP.userid, count(*) as tagnum
      from (select distinct T.userid as userid, T.tag as tag from tags T) AS TMP
      group by TMP.userid
      having tagnum >= 3;

    I got a list of 1760 users who used at least 3 distinct tags. However, some of the tags are not added on at least 3 distinct movies. Any help is appreciated.

    Read the article

  • Smartest way to import massive datasets into a Rails application?

    - by williamjones
    I've got multiple massive (multi-gigabyte) datasets I need to import into a Rails app. The datasets are currently each in their own database on my development machine, and I need to read from them and create rows in tables in my Rails database based on the information they contain. The tables in my Rails database will not be exactly the same as the tables in the source databases. What's the smartest way to go about this? I was thinking of migrations, but I'm not exactly sure how to connect the migration to the databases, and even if that is possible, is it going to be ridiculously slow?

    Read the article

  • How to find if a DataTable contains a column whose name starts with abc

    - by VilemRousi
    In my program I have a DataTable, and I'd like to know whether it contains a column whose name starts with abc. For example, there may be a column named abcdef, and I'd like to find it using something like this:

      DataTable.Columns.Contains(ColumnName.StartsWith("abc"))

    Because I know only part of the column name, I cannot use the Contains method directly. Is there any simple way to do that? Thanks a lot.

    Read the article
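
    A sketch of one way to do it with LINQ over the Columns collection (assumes .NET 3.5 or later for Cast/Any):

      using System.Data;
      using System.Linq;

      static class ColumnSearch
      {
          public static bool HasColumnStartingWith(DataTable table, string prefix)
          {
              return table.Columns
                          .Cast<DataColumn>()
                          .Any(c => c.ColumnName.StartsWith(prefix));
          }
      }

      // usage: bool found = ColumnSearch.HasColumnStartingWith(myTable, "abc");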

  • Combining two data sets and plotting in Matlab

    - by bautrey
    I am doing experiments with different operational amplifier circuits and I need to plot my measured results on a graph. I have two data sets:

      freq1 = [.1 .2 .5 .7 1 3 4 6 10 20 35 45 60 75 90 100];  % kHz
      Vo1 = [1.2 1.6 1.2 2 2 2.4 14.8 20.4 26.4 30.4 53.6 68.8 90 114 140 152];  % mV
      V1 = 19.6;
      Acm = Vo1/(1000*V1);

    and:

      freq2 = [.1 .5 1 30 60 70 85 100];  % kHz
      Vo1 = [3.96 3.96 3.96 3.84 3.86 3.88 3.88 3.88];  % V
      V1 = .96;
      Ad = Vo1/(2*V1);

    (I would show my plots, but apparently I need more reps for that.) I need to plot CMRR vs. frequency:

      CMRR = 20*log10(abs(Ad/Acm));

    The sizes of Ad and Acm are different and the frequency points do not match up, but the range of both is the same, 100 Hz to 100 kHz (x-axis). On the CMRR line, Matlab says that the Ad and Acm matrix dimensions do not agree. How I think I would solve this is to use freq1 as the x-axis for CMRR and take approximated points from Ad according to the values in freq1. Or I could do function approximations of Ad and Acm and then apply the divide operator to those. I do not know how to code up these two ideas. Any other ideas would be helpful, especially simpler ones. Thanks

    Read the article
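
    A sketch of the first idea using interp1 to resample Ad (measured at the freq2 points) onto the freq1 points; note that the division also needs to be element-wise (./), since / between two row vectors is matrix division and is what triggers the dimension error:

      Ad_on_freq1 = interp1(freq2, Ad, freq1);        % linear interpolation
      CMRR = 20*log10(abs(Ad_on_freq1 ./ Acm));
      semilogx(freq1, CMRR, '-o');
      xlabel('Frequency (kHz)');
      ylabel('CMRR (dB)');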

  • Store an image in a SQL Server CE database

    - by Vaccano
    Does anyone know of an example of how to store an image in a SQL Server CE database? What data type should the column be? (I am guessing binary.) I use LINQ to DataSets. Is it possible, using that, to put the image into the database and pull it out again later? Thanks for any advice.

    Read the article
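
    A sketch with made-up table and column names: declare the column as image (or varbinary) in SQL Server CE, which surfaces as byte[] on the DataTable side, and move whole files in and out with File.ReadAllBytes / File.WriteAllBytes. The same idea applies to a typed DataSet column of type System.Byte[]:

      using System;
      using System.Data;
      using System.IO;

      class ImageColumnSketch
      {
          static void Main()
          {
              var pictures = new DataTable("Pictures");
              pictures.Columns.Add("Id", typeof(int));
              pictures.Columns.Add("Image", typeof(byte[]));   // maps to image/varbinary

              // write: file -> byte[] -> row (a table adapter's Update would push it to the DB)
              pictures.Rows.Add(1, File.ReadAllBytes("photo.png"));

              // read: row -> byte[] -> file
              byte[] data = (byte[])pictures.Rows[0]["Image"];
              File.WriteAllBytes("copy.png", data);
          }
      }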

  • Errors with large data sources

    - by The Sheek Geek
    I'm doing some benchmarking on large data sources and binding/exporting data for reporting. I started by using a data set, filling it with 100,000 rows and then attempting to open a Crystal Report with the retrieved data. I noticed that the data set filled just fine (it took about 779 milliseconds); however, when attempting to export the data to the report, or even bind it to a GridView, the application would fail with an OutOfMemoryException. Has anyone experienced this before, or have an idea of how to get around it? It is very possible that clients will run reports for years' worth of data, and 100,000 rows are not inconceivable. The application and the benchmark code are written in C# against Oracle and SQL Server databases. I still have some data sources to test, but I would like to know how to get around this, just in case I don't find a better solution.

    Read the article

  • Typed DataSet connection - required to have one in the .xsd file?

    - by Kyralessa
    In the .xsd file for a typed DataSet in .NET, there's a <Connections> section that contains a list of any data connections I've used to set up the DataTables and TableAdapters. There are times when I'd prefer not to have those there. For instance, sometimes I prefer to pass in a connection string to a custom constructor and use that rather than look for one in settings, .config, etc. But it seems like if I remove the connection strings from that section (leaving it empty), or remove the section entirely, the DataSet code-generation tool freaks out. Whereas if I don't remove them, the DataSet gripes when I put it in a different project because it can't find the settings for those connection strings. Is there any way I can tell a typed DataSet not to worry about any connections? (Obviously I'll have to give it a connection if I change any TableAdapter SQL or stored procs, but that should be my problem.)

    Read the article
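
    A sketch of a common workaround for the connection-string half of the problem: the generated TableAdapters are partial classes, so a constructor that accepts a connection string can live alongside them and override whatever is stored in settings. The names and the OleDb provider below are assumptions, not from the question, and the partial class has to sit in the same project as the generated DataSet so the internal Connection property is reachable.

      using System.Data.OleDb;

      namespace MyDataSetTableAdapters        // the designer-generated namespace
      {
          public partial class JobsTableAdapter
          {
              public JobsTableAdapter(string connectionString) : this()
              {
                  this.Connection = new OleDbConnection(connectionString);
              }
          }
      }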

  • How to view a DataTable while debugging

    - by Eric
    I'm just getting started with ADO.NET, DataSets, and DataTables. One problem I'm having is that it seems pretty hard to tell what values are in a DataTable when debugging. What are some of the easiest ways of quickly seeing what values have been saved in a DataTable? Is there some way to see the contents in Visual Studio while debugging, or is the only option to write the data out to a file? I've created a little utility function that will write a DataTable out to a CSV file, yet the resulting CSV file was cut off: about three lines before what should have been the last line, in the middle of writing out a System.Guid, the file just stops. I can't tell if this is an issue with my CSV conversion method or with the original population of the DataTable. Update: forget the last part -- I just forgot to flush my stream writer.

    Read the article
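
    Two quick options, sketched below: in Visual Studio, hovering over a DataTable or DataSet in the debugger and clicking the magnifying-glass icon opens the built-in DataSet Visualizer; from code, dumping the table to XML is an easy way to inspect every value without writing a CSV converter.

      using System.Data;

      static class DebugDump
      {
          public static void Dump(DataTable table, string path)
          {
              // WriteSchema keeps the column definitions alongside the data
              table.WriteXml(path, XmlWriteMode.WriteSchema);
          }
      }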

  • Updating Cells in a DataTable

    - by Maxim Z.
    I'm writing a small app to do a little processing on some cells in a CSV file I have. I've figured out how to read and write CSV files with a library I found online, but I'm having trouble: the library parses CSV files into a DataTable, but when I try to change a cell of the table, the change isn't saved in the table! Below is the code in question. I've separated the process into multiple variables and renamed some of the things to make it easier to debug for this question.

    Code inside the loop:

      string debug1 = readIn.Rows[i].ItemArray[numColumnToCopyTo].ToString();
      string debug2 = readIn.Rows[i].ItemArray[numColumnToCopyTo].ToString().Trim();
      string debug3 = readIn.Rows[i].ItemArray[numColumnToCopyFrom].ToString().Trim();
      string towrite = debug2 + ", " + debug3;
      readIn.Rows[i].ItemArray[numColumnToCopyTo] = (object)towrite;

    After the loop:

      readIn.AcceptChanges();

    When I debug my code, I see that towrite is being formed correctly and everything's OK, except that the row isn't updated: why isn't it working? I have a feeling that I'm making a simple mistake here; the last time I worked with DataTables (quite a long time ago), I had similar problems. If you're wondering why I'm adding another comma in towrite, it's because I'm combining a street address field with a zip code field - I hope that's not messing anything up. My code is kind of messy, as I'm only trying to edit one file to make a small fix, so sorry.

    Read the article
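
    A sketch of the likely fix: DataRow.ItemArray returns a copy of the row's values, so assigning into that copy never reaches the table; indexing the row directly writes through to the DataTable (names follow the question's variables):

      using System.Data;

      static class CsvFix
      {
          public static void CombineColumns(DataTable readIn, int numColumnToCopyTo, int numColumnToCopyFrom)
          {
              foreach (DataRow row in readIn.Rows)
              {
                  string towrite = row[numColumnToCopyTo].ToString().Trim()
                                 + ", "
                                 + row[numColumnToCopyFrom].ToString().Trim();
                  row[numColumnToCopyTo] = towrite;   // the row indexer does update the table
              }
              readIn.AcceptChanges();
          }
      }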

  • 'Cross-referencing' DataTables

    - by Lee
    I have a DataGridView that is being filled with data from a table. Inside this table is a column called 'group' that holds the ID of a group in another table. What I would like is that, when the DataGridView is filled, instead of showing the ID contained in 'group', it displays the name of the group. Is there some type of VB.NET 'magic' that can do this, or do I need to cross-reference the data myself? Here is a breakdown of what the two tables look like:

      table1: id, group (holds the value of column id in table2), weight, last_update
      table2: id, description (this is what I would like displayed in the DGV)

    BTW - I am using Visual Studio Express.

    Read the article
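
    A sketch of one common approach (column and control names assumed from the question): bind a DataGridViewComboBoxColumn to table2, so the grid shows description while the underlying cell value remains the id stored in table1.group.

      Dim groupColumn As New DataGridViewComboBoxColumn()
      groupColumn.DataPropertyName = "group"      ' the table1 column the cell is bound to
      groupColumn.DataSource = table2             ' the lookup table
      groupColumn.DisplayMember = "description"   ' what the user sees
      groupColumn.ValueMember = "id"              ' what is actually stored
      groupColumn.ReadOnly = True                 ' display-only lookup, if editing isn't needed
      DataGridView1.Columns.Add(groupColumn)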
