Search Results

Search found 2704 results on 109 pages for 'strongly typed datasets'.

Page 37/109 | < Previous Page | 33 34 35 36 37 38 39 40 41 42 43 44  | Next Page >

  • Best way to store large dataset in SQL Server?

    - by gary
    I have a dataset which contains a string key field and up to 50 keywords associated with that information. Once the data has been inserted into the database there will be very few writes (INSERTs), but mostly queries for one or more keywords. I have read "Tagsystems: performance tests", which is MySQL based, and 2NF appears to be a good method for implementing this; however, I was wondering if anyone had experience doing this with SQL Server 2008 and very large datasets. I am likely to have 1 million key fields initially, each of which could have up to 50 keywords. Would a single wide table (keyfield, keyword1, keyword2, ..., keyword50) be the best solution, or would two tables, one holding the key fields (keyid, keyfield) and one holding the keywords (keyid, keyword) in a one-to-many relationship, be a better idea if my queries are mostly going to be looking for results that have one or more keywords?
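
    For readers weighing the two layouts, here is a minimal sketch of the normalised two-table design and the kind of multi-keyword query it supports; the table and column names are illustrative, not taken from the original post.

        -- Hypothetical 2NF layout: one row per key field, one row per (key, keyword) pair.
        CREATE TABLE KeyField (
            KeyId    INT IDENTITY(1,1) PRIMARY KEY,
            KeyValue NVARCHAR(200) NOT NULL
        );

        CREATE TABLE KeyFieldKeyword (
            KeyId   INT NOT NULL REFERENCES KeyField(KeyId),
            Keyword NVARCHAR(100) NOT NULL,
            PRIMARY KEY (KeyId, Keyword)
        );

        -- Key fields tagged with both of two keywords.
        SELECT kf.KeyValue
        FROM KeyField kf
        JOIN KeyFieldKeyword kw ON kw.KeyId = kf.KeyId
        WHERE kw.Keyword IN ('alpha', 'beta')
        GROUP BY kf.KeyValue
        HAVING COUNT(DISTINCT kw.Keyword) = 2;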

    Read the article

  • Keystroke time in Openmoko or other smart phones

    - by Adi
    Dear all, I am doing a project in which I am working on security issues related to smart phones. I want to develop an authentication scheme based on biometrics: every human being has a unique key-hold time, digraph time and error rate. Key-hold time: the time difference between pressing and releasing a key. Digraph time: the time difference between releasing one key and pressing the next one. Error rate: the number of times backspace is pressed. I got these metrics from the paper "Keystroke-based User Identification on Smart Phones" by Saira Zahid, Muhammad Shahzad, Syed Ali Khayam and Muddassar Farooq. I was planning to get the datasets to test my algorithm from an Openmoko phone, but the phone is misbehaving and I am having trouble generating these timing datasets. If anyone can help me or point me to a good source of datasets for the 3 metrics I defined, it would be a great help. Thanks, Aditya

    Read the article

  • Matrix multiplication in java (RE-POST)

    - by Chapax
    Apologies for the re-post; the earlier time I posted I did not have all the details. My colleague, who has since quit the firm, was a C# programmer forced to write Java code that involved (large, dense) matrix multiplication. He coded his own DataTable class in Java in order to be able to a) create indexes to sort and join with other DataTables and b) do matrix multiplication. The code in its current form is NOT maintainable/extensible. I want to clean up the code, and thought that using something like R within Java would let me focus on business logic rather than sorting, joining, matrix multiplication, etc. Plus, I'm very new to the concept of DataTable; I just want to replace the DataTable with 2D arrays and let R handle the rest. (I currently do not know how to join 2 large datasets in Java very efficiently.) Please let me know what you think. Also, are there any simple examples that I can take a look at?

    Read the article

  • My code works in Debug mode, but not in Release mode.

    - by Nima
    Hi, I have C++ code in Visual Studio 2008 that works with files using just fopen and fclose. Everything works perfectly in Debug mode, and I have tested it with several datasets. But it doesn't work in Release mode; it crashes all the time. I have turned off all the optimizations, there are no dependencies on anything (in the linker), and I have also set these options: Optimization: Disabled (/Od); Keep Unreferenced Data; Do Not Remove Redundant COMDATs; Optimize for Windows98: No. I still keep wondering how it can fail under these circumstances. What else should I turn off to make it work as it does in Debug mode? I could understand it working in Release mode but not in Debug mode, since that might be a coding fault, but the other way around looks weird, doesn't it? I appreciate any help. --Nima

    Read the article

  • T-SQL out of order insert

    - by tearman
    Basically I have a user-defined table type (for use as a table-valued variable) which I'm referencing in a stored procedure that effectively just calls two other stored procedures and inserts those values into the table type, i.e.

        INSERT INTO @tableValuedVariable (var1, var2, var3, var4, var5)
        EXEC [dbo].StoredProcedure1;

        INSERT INTO @tableValuedVariable (var1, var2, var5)
        EXEC [dbo].StoredProcedure2;

    You can probably already tell what I'm going to ask. StoredProcedure2 only returns a few of the columns the table type is defined to hold, and I'd like the other columns to just be null (as defined by default). However, SQL Server complains that I'm not specifying all the columns available to that table. The returned datasets can be quite sizable, so I'd like to avoid loops and such for obvious reasons. Thanks for any help.
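
    One workaround often suggested for this situation is to stage the narrower result set in a temporary table and then insert only the columns it supplies, letting the remaining table-type columns take their NULL defaults. A minimal sketch, reusing the column names from the question (the temp table name and column types are made up):

        CREATE TABLE #stage (var1 INT, var2 INT, var5 INT);   -- substitute the real column types

        INSERT INTO #stage (var1, var2, var5)
        EXEC [dbo].StoredProcedure2;

        -- Columns not listed here fall back to the table type's NULL defaults.
        INSERT INTO @tableValuedVariable (var1, var2, var5)
        SELECT var1, var2, var5 FROM #stage;

        DROP TABLE #stage;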

    Read the article

  • DB2 load partitioned data in parallel

    - by Erik Paulson
    I have a 10-node DB2 9.5 database, with raw data on each machine (i.e. node1:/scratch/data/dataset.1, node2:/scratch/data/dataset.2, ..., node10:/scratch/data/dataset.10). There is no shared NFS mount; none of my machines could handle all of the datasets combined. Each line of a dataset file is a long string of text, column delimited. The first column is the key. I don't know the hash function that DB2 will use, so the dataset is not pre-partitioned. Short of renaming all of my files, is there any way to get DB2 to load them all in parallel? I'm trying to do something like

        load from '/scratch/data/dataset' of del modified by coldel| fastparse
            messages /dev/null replace into TESTDB.data_table
            part_file_location '/scratch/data';

    but I have no idea how to suggest to DB2 that it should look for dataset.1 on the first node, and so on.

    Read the article

  • Conditional Join - join 1 table 2 ways

    - by Jon H
    I have a set of (not very well normalised or relational) tables named PLAN, GROUP, PRODUCT and CLIENT. Most have linkage, e.g. PLAN to CLIENT on CLNO, and GROUP to PRODUCT on PRODCD. However, the linkage between PLAN and GROUP is tricky. A plan has 2 fields of interest, GRPNO and PRODCD. What I want to do is: if GRPNO != 0, join GROUP on GRPNO; however, if GRPNO = 0, join GROUP on PRODCD. The frustrating thing is that the fields I want to return in my queries are the same across the board; I just need to be able to vary the join, or join the same table twice. The best I can come up with is 2 queries merged using datasets, or possibly a union (see the sketch below). Is there a nifty way to do this in one select? I should point out I am accessing FoxPro over ODBC to do this. Thank you!
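
    For what it is worth, the union the poster mentions can be written as a single statement along these lines; this is only a sketch using the table and column names from the question (the selected fields are placeholders, and GROUP may need quoting since it is a reserved word in most SQL dialects):

        SELECT p.somefield, g.otherfield
        FROM PLAN p JOIN GROUP g ON g.GRPNO = p.GRPNO
        WHERE p.GRPNO <> 0
        UNION ALL
        SELECT p.somefield, g.otherfield
        FROM PLAN p JOIN GROUP g ON g.PRODCD = p.PRODCD
        WHERE p.GRPNO = 0;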

    Read the article

  • .NET Database Apps: Your Preferred Setup

    - by mdvaldosta
    I'm struggling to settle into a pattern for developing typical database-driven apps in C# and Visual Studio. There are so many ways to set them up: drag/drop datasets and adapters, writing the queries manually in ADO.NET, LINQ to SQL, LINQ to Entities, to data bind or not to data bind, and so on. Where to store the connection string: in app.config, in a method, or both? So many tutorials, and all of them are different. Every time I write something I start hating the way it looks and works, so I scrap it and start over. It's getting a bit tedious. Maybe it's a little of the OCD in me. Would any of you professional developers out there share your method of setting up and structuring your database logic, and maybe some sample code? It's really the organisation of the code and the method(s) of interacting with SQL that I'm trying to get into a routine with, one that works and won't get me laughed at by someone reviewing it.

    Read the article

  • OutOfMemoryException, stack size is huge, large number of threads

    - by Captain Comic
    Hello, I was profiling my .net windows service. I was trying to discover OutOfMemoryException and discovered that my stack size is huge and is growing because the the number of threads keeps growing. Each thread gets 1024 KB on Windows x64 machine. Thus when my app has 754 threads the stack size would be 772 MB. The problem for me is that i don't know where these thread come from. Initially my app has a very limited number of threads and they keep growing with time. I have two suspicions - either these threads are created by WCF or by database connection. My application uses both WCF and datasets. Also I tried to profile my app in Ants do Trace i can see large number of System.ServiceModel.Channels.ClientReliableDuplexSessionChannel and this number is increasing with time. I can see thousands of these objects created. So what I want to know is who is creating threads (tools to discover, profilers) and if it is WCF who is creating these threads.

    Read the article

  • Is there a distributed VCS that can manage large files?

    - by joelhardi
    Is there a distributed version control system (git, bazaar, mercurial, darcs etc.) that can handle files larger than available RAM? I need to be able to commit large binary files (i.e. datasets, source video/images, archives), but I don't need to be able to diff them, just be able to commit and then update when the file changes. I last looked at this about a year ago, and none of the obvious candidates allowed this, since they're all designed to diff in memory for speed. That left me with a VCS for managing code and something else ("asset management" software or just rsync and scripts) for large files, which is pretty ugly when the directory structures of the two overlap.

    Read the article

  • asp.net webservice user management across pages

    - by nakori
    I'm developing a site that will display confidential read-only information, with data fetched from a WCF service. My question: what is the best approach to user management across the different information pages? The service returns a collection with customer info after a secure login. My idea is to have a Customer object class that is stored in session. Is it possible to use things like HttpContext.Current.User.Identity.IsAuthenticated followed by HttpContext.Current.Session["UserId"] without using a database with role-based security? Would I be better off with a combination of a local database, LINQ to SQL or datasets, rather than using just class objects for the data fetched from the service? thanks, nakori

    Read the article

  • Hanging Forms When I Drag And Move My Forms

    - by hosseinsinohe
    I am writing a Windows application in C#. I use a picture as the background of my form (MainForm), many pictures on the buttons in this form, and also some panels and labels with a transparent background colour. My forms, panels and buttons flicker. I solved this problem with a method from another thread. But still, when other forms are started over this form, my forms hang when I drag and move them over it. How can I solve this problem so that my forms can be dragged and moved smoothly and quickly? Edit: My forms load data from an Access 2007 database file. I use DataSets, DataGridViews and other components to load and show data in my forms.

    Read the article

  • DEADLOCK_WRAP error when using Berkeley Db in python (bsddb)

    - by JiminyCricket
    I am using a Berkeley DB to store a huge list of key-value pairs, but for some reason when I try to access some of the data later I get this error:

        try:
            key = 'scrape011201-590652'
            contenttext = contentdict[key]
        except:
            # print the error

        <type 'exceptions.KeyError'> 'scrape011201-590652'
          in contenttext = contentdict[key]
          File "/usr/lib64/python2.5/bsddb/__init__.py", line 223, in __getitem__
            return _DeadlockWrap(lambda: self.db[key])  # self.db[key]
          File "/usr/lib64/python2.5/bsddb/dbutils.py", line 62, in DeadlockWrap
            return function(*_args, **_kwargs)
          File "/usr/lib64/python2.5/bsddb/__init__.py", line 223, in <lambda>
            return _DeadlockWrap(lambda: self.db[key])  # self.db[key]

    I am not sure what DeadlockWrap is, but there isn't any other program or process accessing the Berkeley DB or writing to it (as far as I know), so I'm not sure how we could get a deadlock, if that's what it refers to. Is it possible that I am trying to access the data too rapidly? I have this function call in a loop, something like

        for i in hugelist:
            # try to get a value from the berkdb
            # do something with it

    I am running this with multiple datasets, and this error only occurs with one of them, the largest one, not the others.

    Read the article

  • DataReader Behaviour With SQL Server Locking

    - by Graham
    We are having some issues with our data layer when large datasets are returned from a SQL server query via a DataReader. As we use the DataReader to populate business objects and serialize them back to the client, the fetch can take several minutes (we are showing progress to the user :-)), but we've found that there's some pretty hard-core locking going on on the affected tables which is causing other updates to be blocked. So I guess my slightly naive question is, at what point are the locks which are taken out as a result of executing the query actually relinquished? We seem to be finding that the locks are remaining until the last row of the DataReader has been processed and the DataReader is actually closed - does that seem correct? A quick 101 on how the DataReader works behind the scenes would be great as I've struggled to find any decent information on it. I should say that I realise the locking issues are the main concern but I'm just concerned with the behaviour of the DataReader here.
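
    For context on the blocking side: under the default READ COMMITTED isolation level, shared locks taken by the query can stay in play while the result set is still being streamed to the DataReader, so a slow consumer can hold up writers. One commonly suggested mitigation, sketched here rather than taken from the original post (substitute the real database name), is to switch the database to row versioning so readers stop taking shared locks:

        -- Readers use row versions instead of shared locks once this is on,
        -- so a long-running DataReader no longer blocks writers.
        ALTER DATABASE MyDatabase SET READ_COMMITTED_SNAPSHOT ON;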

    Read the article

  • Migrating Shape SQL to something equally powerful

    - by daRoBBie
    Hi, we are currently investigating the migration of an application that doesn't meet company standards. The application is built using VB6 and Shape SQL/Access. It implements about 120 reports by storing Shape SQL strings in a database, which the user can modify using a wizard. Shape SQL is not allowed at this company. We have investigated plain SQL, LINQ and Entity Framework as alternatives, but all result in more complex solutions. Does anyone have another suggestion? Update: Shape SQL is an ADO command for retrieving hierarchical datasets; for further info see http://support.microsoft.com/kb/189657
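
    For readers who have not met it, a Shape SQL command nests child rowsets under parent rows, roughly like the following sketch (based on the pattern in the linked KB article; the table and column names are illustrative, not from the application being migrated):

        SHAPE {SELECT CustomerID, CompanyName FROM Customers}
        APPEND ({SELECT OrderID, CustomerID, OrderDate FROM Orders}
                RELATE CustomerID TO CustomerID) AS rsOrders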

    Read the article

  • Starting to construct a data access layer. Things to consider?

    - by Phil
    Our organisation uses inline SQL. We have been tasked with providing a suitable data access layer and are weighing up the pros and cons of which way to go: DataSets, ADO.NET, LINQ, Entity Framework, SubSonic, or something else? Some tutorials and articles I have been using for reference: http://www.asp.net/(S(pdfrohu0ajmwt445fanvj2r3))/learn/data-access/tutorial-01-vb.aspx http://www.simple-talk.com/dotnet/.net-framework/designing-a-data-access-layer-in-linq-to-sql/ http://msdn.microsoft.com/en-us/magazine/cc188750.aspx http://msdn.microsoft.com/en-us/library/aa697427(VS.80).aspx http://www.subsonicproject.com/ I'm extremely torn, and finding it very difficult to make a decision on which way to go. Our site is a series of 2 internal portals and a public web site. We are using VS2008 SP1 and framework version 3.5. Please can you give me advice on what factors to consider and any pros and cons you have faced with your data access layer? Thanks.

    Read the article

  • is there a tool to see the difference between two database tables in SQL Server?

    - by reinier
    What is a good tool to see the differences between 2 tables (or even better, the datasets returned by 2 queries)? EDIT: I'm not interested in the schema changes; just assume that the schemas are the same. Background as to why: I'm porting some legacy code which can fill a database with some pre-calculated data. The easiest way to see if I got everything right is to check the output of the old program against that of the new one. I was thinking that if there is some kind of 'diff' tool for databases, this might be great.
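
    If a dedicated tool does not turn up, one low-tech option is to let the database do the diff itself with EXCEPT; a minimal sketch, where OldOutput and NewOutput stand in for the tables (or queries) produced by the two programs:

        -- Rows produced by the old program that the new one did not produce.
        SELECT * FROM OldOutput
        EXCEPT
        SELECT * FROM NewOutput;

        -- And the reverse direction.
        SELECT * FROM NewOutput
        EXCEPT
        SELECT * FROM OldOutput;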

    Read the article

  • How to create a view of table that contains a timestamp column?

    - by Matt Faus
    This question is an extension of a previous one I have asked. I have a table (2014_05_31_transformed.Video) with a schema that looks like this; I have put up the JSON returned by the BigQuery API describing its schema in this gist. I am trying to create a view against this table with an API call that looks like this:

        {
            'view': {
                'query': u'SELECT deleted_mod_time FROM [2014_05_31_transformed.Video]'
            },
            'tableReference': {
                'datasetId': 'latest_transformed',
                'tableId': u'Video',
                'projectId': 'redacted'
            }
        }

    But the BigQuery API is returning this error:

        HttpError: https://www.googleapis.com/bigquery/v2/projects/124072386181/datasets/latest_transformed/tables?alt=json
        returned "Invalid field name "deleted_mod_time.usec". Fields must contain only letters, numbers, and
        underscores, start with a letter or underscore, and be at most 128 characters long."

    The schema the BigQuery API returns does not make any distinction between a TIMESTAMP data type and a regular nullable INTEGER data type, so I can't think of a way to programmatically correct this problem. Is there anything I can do, or is this a bug with BigQuery's view implementation?

    Read the article

  • Algorithms for modern hardware?

    - by Jurily
    Once again, I find myself with a set of broken assumptions. The article itself is about a 10x performance gain by modifying a proven-optimal algorithm to account for virtual memory: What good is an O(log2(n)) algorithm if those operations cause page faults and slow disk operations? For most relevant datasets an O(n) or even an O(n^2) algorithm, which avoids page faults, will run circles around it. Are there more such algorithms around? Should we re-examine all those fundamental building blocks of our education? What else do I need to watch out for when writing my own?

    Read the article

  • Accessing database problem

    - by anarche
    OK, so I've got an SQLiteOpenHelper that prepares and returns a SQLiteDatabase object; this is written as a singleton, so the database is opened once and only closed when the Activity closes. From there I have three database wrappers encapsulating the queries (some returning cursors to datasets, some inserting data) used by the Activity. I want to insert data from one View, then have that automatically update in another View on the same dialogue. I'm writing this on a notepad and testing on the phone (since the emulator crashes the notepad...), so I can't pull up stack traces at the moment. So, questions: 1) are there limitations on writing to a database while there are cursors open on it? 2) Does a Cursor.requery() call update the ListView that uses the cursor as its data connector?

    Read the article

  • CorePlot - Set y-Axis range to include all data points

    - by zdestiny
    I have implemented a CorePlot graph in my iOS application; however, I am unable to dynamically size the y-axis based on the data points in the set. Some datasets range from 10-50, while others range from 400-500. In both cases, I would like to have the y-origin (0) visible. I have tried using the scaleToFitPlots method: [graph.defaultPlotSpace scaleToFitPlots:[graph allPlots]]; but this has no effect on the graphs; they still show the default y-axis range of 0 to 1. The only way I can change the y-axis is to do it manually via this: graph.defaultPlotSpace.yRange = [CPTPlotRange plotRangeWithLocation:CPTDecimalFromFloat(0.0f) length:CPTDecimalFromFloat(500.0f)]; however, this doesn't work well for graphs with smaller ranges (10-50), as you can imagine. Is there any method to retrieve the highest y-value from my dataset so that I can manually set the maximum of the y-axis to that value? Or is there a better alternative?

    Read the article

  • Transferring Data Between Server and Client (Mobile)

    - by Byron
    Scenario: the client (mobile) runs .NET CF 2.0 with SQL CE 3.0; the server runs .NET 2.0 with SQL Server 2005 and a web service. The client and server database schemas differ. From the server, only certain columns from certain tables need to be synced; from the client, everything will need to be synced once the client has made changes. The client will continually poll the web service to download and upload data. A framework will be developed to package and unpackage the data, used by both client and server. How would you develop the packaging and unpackaging? Use datasets, serialise strongly typed objects? All suggestions welcome. Thanks

    Read the article

  • What data should I use to create an autofill "destination" field like Facebook or the Trip Advisor s

    - by sbar
    In order to create a "destination" autofill input field on our website, I need a data source that provides a hierarchical data set of Region, Country, County/State, City and Town (plus an area like the Peak District National Park, if at all possible). I know sites like Trip Advisor and Facebook seem to have very robust datasets for this. When you type, it brings up a match list with the hierarchy displayed (e.g. if you type Boston you get 6 results, as there are multiple places called Boston; the hierarchy allows you to pick the correct option). There are many data sources out there, but they either lack hierarchy or do not seem to be easily updatable or complete. I had expected this to be an easy task given the number of sites that have a destination or location autofill field. However, I cannot find a data source or method that works. Any help would be much appreciated. Thanks,
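
    Whatever source is eventually chosen, such a hierarchy is commonly stored as a self-referencing table, with the autofill implemented as a prefix match; a hypothetical sketch (none of these names come from the question):

        -- Each place points at its parent (Region > Country > State > City > Town).
        CREATE TABLE Place (
            PlaceId   INT PRIMARY KEY,
            ParentId  INT NULL REFERENCES Place(PlaceId),
            Name      NVARCHAR(200) NOT NULL,
            PlaceType NVARCHAR(50)  NOT NULL   -- 'Region', 'Country', 'State', 'City', 'Town', 'Area'
        );

        -- Candidate matches for what the user has typed so far.
        SELECT PlaceId, Name, PlaceType
        FROM Place
        WHERE Name LIKE 'Bosto%';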

    Read the article

  • function to find common rows between more than two data frames in R

    - by biohazard
    I have 4 data frames, and would like to find the rows whose values in a certain column do not exist in any of the other data frames. I wrote this function:

        # function to test presence of $Name in 3 other datasets
        common <- function(a, b, c, d) {
            is.B <- is.numeric(a$Name %in% b$Name) == 1
            is.C <- is.numeric(a$Name %in% c$Name) == 1
            is.D <- is.numeric(a$Name %in% d$Name) == 1
            t <- as.numeric(is.B & is.C & is.D)
            t
        }

    However, the output is always t = 0. This means it tells me that there are no unique rows in any data set, even though the data frames have very different numbers of rows. Since there are no duplicate rows in any of the data frames, I should be getting t = 1 for at least some rows in the biggest dataset. Can someone figure out what I got wrong?

    Read the article
