Search Results

Search found 356 results on 15 pages for 'datasets'.

Page 10 of 15

  • How to create a view of table that contains a timestamp column?

    - by Matt Faus
    This question is an extension of a previous one I have asked. I have a table (2014_05_31_transformed.Video) with a schema that looks like this; I have put the JSON returned by the BigQuery API describing its schema in this gist. I am trying to create a view against this table with an API call that looks like this:

        {
            'view': {
                'query': u'SELECT deleted_mod_time FROM [2014_05_31_transformed.Video]'
            },
            'tableReference': {
                'datasetId': 'latest_transformed',
                'tableId': u'Video',
                'projectId': 'redacted'
            }
        }

    But the BigQuery API returns this error: HttpError: https://www.googleapis.com/bigquery/v2/projects/124072386181/datasets/latest_transformed/tables?alt=json returned "Invalid field name "deleted_mod_time.usec". Fields must contain only letters, numbers, and underscores, start with a letter or underscore, and be at most 128 characters long." The schema that the BigQuery API returns does not make any distinction between a TIMESTAMP data type and a regular nullable INTEGER data type, so I can't think of a way to programmatically correct this problem. Is there anything I can do, or is this a bug in BigQuery's view implementation?
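
    One possible workaround to sketch (untested; whether it helps depends on how BigQuery expands the timestamp internally, and the alias name below is hypothetical) is to have the view expose the value under an explicit alias rather than the original field name:

        # Hypothetical workaround sketch: alias the problematic field so the view's
        # output column name cannot contain a dot. Untested against the BigQuery API.
        view_body = {
            'view': {
                'query': 'SELECT deleted_mod_time AS deleted_mod_time_alias '
                         'FROM [2014_05_31_transformed.Video]'
            },
            'tableReference': {
                'datasetId': 'latest_transformed',
                'tableId': 'Video',
                'projectId': 'redacted'
            }
        }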

    Read the article

  • Algorithms for modern hardware?

    - by Jurily
    Once again, I find myself with a set of broken assumptions. The article itself is about a 10x performance gain by modifying a proven-optimal algorithm to account for virtual memory: What good is an O(log2(n)) algorithm if those operations cause page faults and slow disk operations? For most relevant datasets an O(n) or even an O(n^2) algorithm, which avoids page faults, will run circles around it. Are there more such algorithms around? Should we re-examine all those fundamental building blocks of our education? What else do I need to watch out for when writing my own?

    Read the article

  • Accessing database problem

    - by anarche
    Ok, so I've got an SQLiteOpenHelper that prepares and returns a SQLiteDatabase object; this is written using the singleton pattern, so the database is opened once and only closed when the Activity closes. From there I have three database wrappers encapsulating the queries (some returning cursors to datasets, some inserting data) used by the Activity. I want to insert data from one View, then have that automatically update another View on the same Dialogue. I'm writing this on a notepad and testing on the phone (since the emulator crashes the notepad...), so I can't pull up stack traces at the moment. So, my questions: 1) Are there limitations on writing to a database while there are cursors open on it? 2) Does a Cursor.requery() call update a ListView that uses the cursor as its data source?

    Read the article

  • CorePlot - Set y-Axis range to include all data points

    - by zdestiny
    I have implemented a CorePlot graph in my iOS application, but I am unable to dynamically size the y-axis based on the data points in the set. Some datasets range from 10-50, while others range from 400-500. In both cases, I would like to have the y-origin (0) visible. I have tried using the scaleToFitPlots method:

        [graph.defaultPlotSpace scaleToFitPlots:[graph allPlots]];

    but this has no effect on the graphs; they still show the default y-axis range of 0 to 1. The only way I can change the y-axis is to do it manually, via this:

        graph.defaultPlotSpace.yRange = [CPTPlotRange plotRangeWithLocation:CPTDecimalFromFloat(0.0f) length:CPTDecimalFromFloat(500.0f)];

    but as you can imagine, this doesn't work well for graphs with smaller ranges (10-50). Is there any method to retrieve the highest y-value from my dataset so that I can manually set the maximum of the y-axis to that value? Or is there a better alternative?

    Read the article

  • Transferring Data Between Server and Client (Mobile)

    - by Byron
    Scenario: Client (mobile): .NET CF 2.0, SQL CE 3.0. Server: .NET 2.0, SQL Server 2005, web service. The client and server database schemas differ. From the server, only certain columns from certain tables need to be synced; from the client, everything will need to be synced once the client has made changes. The client will continually poll a web service to download and upload data. A framework will be developed to package and unpackage the data, used by both client and server. How would you develop the packaging and unpackaging? Use DataSets, or serialise strongly typed objects? All suggestions welcome. Thanks

    Read the article

  • function to find common rows between more than two data frames in R

    - by biohazard
    I have 4 data frames, and would like to find the rows whose values in a certain column do not exist in any of the other data frames. I wrote this function:

        # function to test presence of $Name in 3 other datasets
        common <- function(a, b, c, d) {
          is.B <- is.numeric(a$Name %in% b$Name) == 1
          is.C <- is.numeric(a$Name %in% c$Name) == 1
          is.D <- is.numeric(a$Name %in% d$Name) == 1
          t <- as.numeric(is.B & is.C & is.D)
          t
        }

    However, the output is always t = 0. This means it tells me there are no unique rows in any of the data sets, even though the data frames have very different numbers of rows. Since there are no duplicate rows in any of the data frames, I should be getting t = 1 for at least some rows in the biggest dataset. Can someone figure out what I got wrong?
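
    For comparison, here is a rough sketch of the intended membership test in Python/pandas rather than R (the frames and the Name values are invented for illustration):

        import pandas as pd

        # Invented frames standing in for the four data frames in the question.
        a = pd.DataFrame({"Name": ["alpha", "beta", "gamma", "delta"]})
        b = pd.DataFrame({"Name": ["alpha"]})
        c = pd.DataFrame({"Name": ["beta", "alpha"]})
        d = pd.DataFrame({"Name": ["alpha", "epsilon"]})

        # Keep rows of a whose Name appears in none of the other frames.
        not_in_any_other = (
            ~a["Name"].isin(b["Name"])
            & ~a["Name"].isin(c["Name"])
            & ~a["Name"].isin(d["Name"])
        )
        print(a[not_in_any_other])   # gamma and delta are unique to a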

    Read the article

  • is there a tool to see the difference between two database tables in mssql?

    - by reinier
    What is a good tool for seeing the differences between two tables (or, even better, the datasets returned by two queries)? EDIT: I'm not interested in schema changes; just assume that the schemas are the same. Background as to why: I'm porting some legacy code which can fill a database with some pre-calculated data. The easiest way to see whether I got everything right is to check the output of the old program against the new one. I was thinking that if there is some kind of 'diff' tool for databases, that might be great.
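
    A low-tech sketch of one way to diff two result sets from Python, assuming both sides return identical column lists (the connection string, driver, and table names below are placeholders):

        import pyodbc

        # Placeholder connection string; adjust driver, server, and database.
        conn = pyodbc.connect(
            "DRIVER={SQL Server};SERVER=localhost;DATABASE=mydb;Trusted_Connection=yes;"
        )
        cur = conn.cursor()

        # EXCEPT returns rows present in one result set but not the other;
        # both SELECTs must return the same columns in the same order.
        only_in_old = cur.execute(
            "SELECT * FROM old_table EXCEPT SELECT * FROM new_table"
        ).fetchall()
        only_in_new = cur.execute(
            "SELECT * FROM new_table EXCEPT SELECT * FROM old_table"
        ).fetchall()

        print(len(only_in_old), "rows only in old_table")
        print(len(only_in_new), "rows only in new_table")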

    Read the article

  • What data should I use to create an autofill "destination" field like Facebook or the Trip Advisor s

    - by sbar
    In order to create a "destination" auto-fill input field on our website, I need a data source that provides a hierarchical data set of region, country, county/state, city and town (plus an area like the Peak District National Park, if at all possible). I know sites like TripAdvisor and Facebook seem to have very robust datasets for this. When you type, a match list comes up with the hierarchy displayed (e.g. if you type Boston you get 6 results, as there are multiple places called Boston; the hierarchy lets you pick the correct option). There are many data sources out there, but they either lack hierarchy or do not seem to be easily updatable or complete. I had expected this to be an easy task given the number of sites that have a destination or location auto-fill field. However, I cannot find a data source or method that works. Any help would be much appreciated. Thanks
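
    Whatever data source ends up being used, the matching side is straightforward; a toy sketch of prefix matching against a hierarchical place list (the places below are invented for illustration, not a real gazetteer):

        # Toy autocomplete over a hierarchical place list.
        places = [
            ("Boston", "Lincolnshire", "England", "United Kingdom"),
            ("Boston", "Massachusetts", "United States", "North America"),
            ("Bostonia", "California", "United States", "North America"),
        ]

        def suggest(prefix, limit=10):
            """Return places whose town name starts with the typed prefix,
            formatted with their hierarchy so the user can disambiguate."""
            prefix = prefix.strip().lower()
            hits = [p for p in places if p[0].lower().startswith(prefix)]
            return [", ".join(p) for p in hits[:limit]]

        print(suggest("bos"))
        # ['Boston, Lincolnshire, England, United Kingdom', ...]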

    Read the article

  • which is better, creating a materialized view or a new table?

    - by Carson
    I have some demanding MySQL queries that need to grab the same up-to-date datasets from 5-7 MySQL tables. I am thinking of creating a table or materialized view that gathers all the heavily used columns from the other tables, so as to increase performance. If I create that table, I may need to do extra insert/update/delete operations each time the other tables are updated. If I create a materialized view, I doubt the performance can be greatly improved, because the data in the other tables changes very frequently; most likely, the view would need to be rebuilt every time before selecting from it. Any ideas? E.g. how to cache, or other extra measures I can take?
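
    MySQL has no built-in materialized views, so one common approach is a summary table that is rebuilt on a schedule (or maintained incrementally by triggers or application code). A minimal sketch of the scheduled-rebuild variant, with placeholder credentials and table/column names:

        import mysql.connector

        # Placeholder connection details and table/column names.
        conn = mysql.connector.connect(
            host="localhost", user="app", password="secret", database="mydb"
        )
        cur = conn.cursor()

        # Rebuild the summary table from scratch; cheap to run from cron, but the
        # table briefly disappears, so incremental maintenance may suit hot data better.
        cur.execute("DROP TABLE IF EXISTS demanding_summary")
        cur.execute(
            """
            CREATE TABLE demanding_summary AS
            SELECT a.id, a.col1, b.col2, c.col3
            FROM table_a a
            JOIN table_b b ON b.a_id = a.id
            JOIN table_c c ON c.a_id = a.id
            """
        )
        conn.commit()
        cur.close()
        conn.close()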

    Read the article

  • Build SUM based daily record

    - by ximarin
    I have a problem building an aggregate function. Here's my problem: I have a table like this:

        id  action  day         isSum  difference
        1   ping    2012-01-01  1      500   (sum of the differences from last year)
        2   ping    2012-01-01  0      -2
        3   ping    2012-01-02  0      1
        4   ping    2012-01-03  0      -4
        5   ping    2012-01-04  0      -2
        6   ping    2012-01-05  0      3
        7   ping    2012-01-06  0      2
        8   pong    2012-01-01  1      0     (sum of the differences from last year, now for pong)
        9   pong    2012-01-01  0      -5
        10  pong    2012-01-02  0      2
        11  pong    2012-01-03  0      -2
        12  pong    2012-01-04  0      -8
        13  pong    2012-01-05  0      3
        14  pong    2012-01-06  0      4

    I now need to select the action, the day, and the running sum of the differences since 01-01 for every day, so that my result looks like this:

        action  day         total
        ping    2012-01-01  498
        ping    2012-01-02  499
        ping    2012-01-03  495
        ping    2012-01-04  493
        ping    2012-01-05  496
        ping    2012-01-06  498
        pong    2012-01-01   -5
        pong    2012-01-02   -3
        pong    2012-01-03   -5
        pong    2012-01-04  -13
        pong    2012-01-05  -10
        pong    2012-01-06   -6

    How can I do this? There are a lot of rows (~1 million), so the query needs to be pretty cheap. I don't know how to use SUM to get a daily running total per action from daily records.
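
    If the target engine supports window functions (SQL Server 2012+, PostgreSQL, Oracle, MySQL 8+), the running total per action is a single query. A sketch using the question's column names, wrapped in a Python string purely for presentation ("my_table" is a placeholder):

        # Per-action running total: collapse the two 01-01 rows per action with GROUP BY,
        # then accumulate the grouped sums with a window function.
        RUNNING_TOTAL_SQL = """
        SELECT
            action,
            day,
            SUM(SUM(difference)) OVER (PARTITION BY action ORDER BY day) AS total
        FROM my_table
        GROUP BY action, day
        ORDER BY action, day
        """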

    Read the article

  • Why does tokyo tyrant slow down exponentially even after adjusting bnum?

    - by HenryL
    Has anyone successfully used Tokyo Cabinet / Tokyo Tyrant with large datasets? I am trying to upload a subgraph of the Wikipedia data source. After hitting about 30 million records, I get an exponential slowdown. This occurs with both the HDB and BDB databases. I adjusted bnum to 2-4x the expected number of records for the HDB case, with only a slight speedup. I also set xmsiz to 1 GB or so, but ultimately I still hit a wall. It seems that Tokyo Tyrant is basically an in-memory database, and after you exceed xmsiz or your RAM, you get a barely usable database. Has anyone else encountered this problem before? Were you able to solve it?

    Read the article

  • Methodologies or algorithms for filling in missing data

    - by tbone
    I am dealing with datasets with missing data and need to be able to fill forward, fill backward, and fill gaps. So, for example, if I have data from Jan 1, 2000 to Dec 31, 2010, and some days are missing, then when a user requests a timespan that begins before, ends after, or encompasses the missing data points, I need to "fill in" these missing values. Is there a proper term for this concept of filling in data? Imputation is one term; I don't know if it is "the" term for it, though. I presume there are multiple algorithms and methodologies for filling in missing data (use the last measured value, use the median/average/moving average between two known values, etc.). Does anyone know the proper term for this problem, any online resources on the topic, or, ideally, links to open-source implementations of some algorithms (C# preferably, but any language would be useful)?
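
    For reference, a minimal pandas sketch of the fill strategies mentioned above (the series is invented for illustration; equivalents exist in most data libraries regardless of language):

        import pandas as pd

        # Invented daily series with gaps, standing in for the real data.
        idx = pd.date_range("2000-01-01", periods=6, freq="D")
        s = pd.Series([1.0, None, None, 4.0, None, 6.0], index=idx)

        filled_forward = s.ffill()        # carry the last measured value forward
        filled_backward = s.bfill()       # carry the next measured value backward
        interpolated = s.interpolate()    # linear interpolation between known points

        print(interpolated)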

    Read the article

  • PHP / Zend Framework: Which object would handle a complex table join?

    - by Thomas
    I think one of the more difficult concepts to understand in the Zend Framework is how the Table Data Gateway pattern is supposed to handle multi-table joins. Most of the suggestions I've seen claim that you simply handle the joins using a $db->select(); see, for example:

        Zend DB Select with multiple table joins
        Joining Tables With Zend Framework
        PHP Joining tables wthin a model in Zend
        Php Zend Framework Db Select Join table help

    My question is: which object is best suited to handle this kind of multi-table select statement? I feel like putting it in the model would break the 1-1 Table Data Gateway mapping between the class and the db table. Yet putting it in the controller seems wrong, because why would a controller handle a SQL statement? Anyway, I feel like ZF makes handling datasets from multiple tables more difficult than it needs to be. Any help you can provide is great... Thanks!

    Read the article

  • sqlite - Foreign keys in VS2008 Designer

    - by rene marxis
    Hello, I'm starting out with strongly typed datasets in VS 2008 with SQLite and running into a problem. I have some tables that have foreign keys already defined in the database; I can see those in the Server Explorer. Now I create a new strongly typed DataSet with the designer and add only one table from that relation to the dataset. Then, when I try to add the second one, I get an error message: "Unexpected error ... Source: Microsoft.VSDesigner; ErrorCode: -1", with no additional info. The error does not occur if I add both tables at the same time (say, by dragging them from the Server Explorer together). Is there any way to add tables to a dataset after the fact when they are related to already-added ones? Many thanks, _rene

    Read the article

  • update each recordset with datareader

    - by knittl
    Hi, my situation is the following: I have a DataReader and loop over all records returned by a SELECT statement, calling a function with a value from each row. But now I need to update a column in each row after the function has been called. Issuing a separate UPDATE statement per row seems like huge overkill. What's the best method to do this? I've heard about DataAdapters and DataSets, but the only thing I know is that they exist, not how to use them (properly) in this case. The platform is C# with SQL Server.

    Read the article

  • CakePHP repeats same queries

    - by Rytis
    I have a model structure: Category hasMany Product hasMany Stockitem belongsTo Warehouse, Manufacturer. I fetch data with this code, using Containable to be able to filter deeper in the associated models:

        $this->Category->find('all', array(
            'conditions' => array('Category.id' => $category_id),
            'contain' => array(
                'Product' => array(
                    'Stockitem' => array(
                        'conditions' => array('Stockitem.warehouse_id' => $warehouse_id),
                        'Warehouse',
                        'Manufacturer',
                    )
                )
            ),
        ));

    The data structure is returned just fine; however, I get the same query repeated many times, sometimes hundreds in a row, depending on the dataset:

        SELECT `Warehouse`.`id`, `Warehouse`.`title` FROM `beta_warehouses` AS `Warehouse` WHERE `Warehouse`.`id` = 2

    Basically, when building the data structure Cake is fetching data from MySQL over and over again, for each row. We have datasets of several thousand rows, and I have a feeling that this is going to impact performance. Is it possible to make it cache results and not repeat the same queries?

    Read the article

  • Generate a commutative hash based on three sets of numbers?

    - by DarkAmgine
    I need to generate a commutative hash based on three "score" structs. Each score has a "start", an "end" and a "number". Both start and end are usually huge numbers (8-9 digits), but number is just from 1 to 4. I need the hash to be commutative so that the order of the three scores does not matter. I'm using XOR at the moment, but it seems to be giving bad results. Since I'm working with large datasets, I'd prefer a performance-friendly solution. Any suggestions? Thanks =]

        public static int getCustomHash(cnvRegion c1, cnvRegion c2, cnvRegion c3)
        {
            int part1 = (c1.startLocation * c2.startLocation * c3.startLocation);
            int part2 = (c1.endLocation * c2.endLocation * c3.endLocation);
            int part3 = (c1.copyNumber + c2.copyNumber + c3.copyNumber) * 23735160;
            return part1 ^ part2 ^ part3;
        }
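
    One common trick for order-independence is to sort the items before hashing, rather than relying on XOR; a small Python sketch of the idea (the tuples stand in for the (start, end, number) structs in the question):

        # Order-independent hash: sort the three scores first, then hash the sorted
        # sequence. Sorting three items is cheap, and this avoids the heavy collisions
        # that XOR of products tends to produce.
        def commutative_hash(score1, score2, score3):
            ordered = tuple(sorted([score1, score2, score3]))
            return hash(ordered)

        # Any permutation of the same three scores hashes identically:
        a = (123456789, 123500000, 2)
        b = (223456789, 223500000, 1)
        c = (323456789, 323500000, 4)
        assert commutative_hash(a, b, c) == commutative_hash(c, a, b)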

    Read the article

  • Efficient way to combine results of two database queries.

    - by ensnare
    I have two tables on different servers, and I'd like some help finding an efficient way to combine and match the datasets. Here's an example. From server 1, which holds our stories, I perform a query like:

        query = """SELECT author_id, title, text
                   FROM stories
                   ORDER BY timestamp_created DESC
                   LIMIT 10"""
        results = DB.getAll(query)
        for i in range(len(results)):
            # Build a string of author_ids, e.g. '1314,4134,2624,2342'

    Then I'd like to fetch some info about each author_id from server 2:

        query = """SELECT id, avatar_url
                   FROM members
                   WHERE id IN (%s)"""
        values = (uid_list)
        results = DB.getAll(query, values)

    Now I need some way to combine these two queries so that I have a dict containing the story as well as the avatar_url and member_id. If this data were on one server, it would be a simple join that would look like:

        SELECT * FROM members, stories WHERE members.id = stories.author_id

    But since we store the data on multiple servers, this is not possible. What is the most efficient way to do this? Thanks.
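
    A minimal sketch of the merge step, assuming the two queries above have already run and that each result is a list of dicts (an assumption; adapt the key access if DB.getAll returns tuples):

        # Invented rows standing in for the results of the two queries.
        stories = [
            {"author_id": 1314, "title": "A", "text": "..."},
            {"author_id": 4134, "title": "B", "text": "..."},
        ]
        members = [
            {"id": 1314, "avatar_url": "http://example.com/a.png"},
            {"id": 4134, "avatar_url": "http://example.com/b.png"},
        ]

        # Index members by id once, then decorate each story in O(1) per row.
        members_by_id = {m["id"]: m for m in members}
        combined = []
        for story in stories:
            member = members_by_id.get(story["author_id"], {})
            row = dict(story)
            row["member_id"] = member.get("id")
            row["avatar_url"] = member.get("avatar_url")
            combined.append(row)

        print(combined[0])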

    Read the article

  • How do I effectively store a connection string in machine.config only?

    - by Scott Bedwell
    We are moving to an environment with multiple engines of MS SQL running on the same server (a test engine and a production engine). We also have separate test and production web servers, and would like our ASP.NET applications to "magically" use the test database engine on the test web server and the production database engine on the production web servers. We would like to store the connection strings in machine.config rather than in web.config, but when we put them in machine.config, Visual Studio's IDE (particularly with datasets) does not recognize that machine.config contains the connection. Does anyone know of a solution for displaying these machine.config connection strings in Visual Studio, or of a different solution that would accommodate this? Thanks.

    Read the article

  • How to obtain multiple lines in a single density plot, with a corrected scale?

    - by user1677055
    I have recently started working with microarray datasets and am trying to get my hands on R. I wish to make some plots from my result data, but I am stuck at the following. I have this data (myData):

        cpg    samp1  samp2  samp3
        cpg1   0.43   0.32   0.21
        cpg2   0.43   0.22   1.00
        cpg3   0.11   0.99   0.78
        cpg4   0.65   0.32   0.12
        cpg5   0.11   0.43   0.89

    and I wish to obtain a density plot for it. I did the following:

        plot(density(MyData$samp1), col="red")
        lines(density(MyData$samp2), col="green")
        lines(density(MyData$samp3), col="blue")

    But doing this does not give me correct plots, because not all sample curves fit within the plot limits. I did try looking for answers, but honestly I am still not able to work this out. Can you help me figure out how to set the scale for the above, or what I should add to the code so that all the curves are in range? I have many samples, so I also need something that can automatically assign a different colour to each sample's curve, after scaling correctly. Thanks in advance.
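
    Whatever the plotting language, the underlying fix is the same: compute every curve's density first, take the global maximum, and set the y-limits before drawing. A sketch of that pattern in Python/matplotlib, with made-up data standing in for the samples:

        import numpy as np
        import matplotlib.pyplot as plt
        from scipy.stats import gaussian_kde

        # Made-up sample columns standing in for samp1..samp3.
        rng = np.random.default_rng(0)
        samples = {"samp1": rng.normal(0.4, 0.10, 200),
                   "samp2": rng.normal(0.5, 0.30, 200),
                   "samp3": rng.normal(0.6, 0.05, 200)}

        xs = np.linspace(0, 1, 200)
        densities = {name: gaussian_kde(vals)(xs) for name, vals in samples.items()}

        # Scale the y-axis from the tallest curve so every curve fits,
        # and assign one colour per sample automatically.
        ymax = max(d.max() for d in densities.values())
        colors = plt.cm.tab10(np.linspace(0, 1, len(densities)))

        for (name, dens), color in zip(densities.items(), colors):
            plt.plot(xs, dens, label=name, color=color)
        plt.ylim(0, ymax * 1.05)
        plt.legend()
        plt.show()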

    Read the article

  • Return no records if FindKey results in False?

    - by jwilfong
    Using TDataSet.FindKey you can locate records. When it returns True, the dataset's cursor is positioned on the found record. When it returns False, the cursor is not moved, which means data-aware components keep displaying the record that was current before FindKey was issued. How can I code the result of FindKey so that an empty record is shown instead?

        if not tblSomeTable.FindKey([SomeSearchData]) then
        begin
          < code to return empty or move data cursor to neutral position >
        end;

    Thanks, John

    Read the article

  • Dynamically add rows to listbox

    - by Ivan S
    I have a list box that displays information from a column of a dataset. I would like the number of rows displayed to match the number of rows in the dataset (which varies). I figured it has something to do with

        ListBox.Rows = Dataset.Tables[0].Rows.Count;

    but it seems to always default to 4, even when there are only 2 rows. This is what I have in my .aspx.cs file:

        pirateBox.DataTextField = pirateship.Tables[0].Columns["displayName"].ToString();
        pirateBox.DataValueField = pirateship.Tables[0].Columns["PKID"].ToString();
        pirateBox.DataSource = pirateship.Tables[0];
        pirateBox.DataBind();
        pirateBox.Rows = pirateship.Tables[0].Rows.Count;

    I've been trying a few things, and this is what I have so far in the .aspx:

        <asp:ListBox ID="pirateBox" runat="server" Rows="1"></asp:ListBox>

    Read the article

  • How to view a DataTable while debugging

    - by Eric
    I'm just getting started using ADO.NET with DataSets and DataTables. One problem I'm having is that it seems pretty hard to tell what values are in a DataTable while debugging. What are some of the easiest ways of quickly seeing what values have been saved in a DataTable? Is there some way to see the contents in Visual Studio while debugging, or is the only option to write the data out to a file? I've created a little utility function that writes a DataTable out to a CSV file, yet the resulting CSV file was cut off: about 3 lines from what should have been the last line, in the middle of writing out a System.Guid, the file just stops. I can't tell whether this is an issue with my CSV conversion method or with the original population of the DataTable. Update: forget the last part; I just forgot to flush my stream writer.

    Read the article

  • Design guidelines for writing a Typed SQL Statement API?

    - by this. __curious_geek
    Last night I came across something interesting while designing my new project, which brought me to ask this question here. My project is supposed to follow the Table Gateway pattern, using traditional ADO.NET datasets for data access. I don't want to write plain queries in my data-access classes, so I came up with the idea of writing a parser-like API that exposes objects and methods to generate queries on the fly, based on my domain objects. Later I want to hook this API up to my business objects and provide a typed SQL generator API right on the business object instances. Any ideas or references on how I can do this? This feels broad enough that I'm compelled to ask for your opinions here. Does anything already exist that can do this?
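
    To make the idea concrete, here is a toy sketch of what a fluent, object-driven query generator can look like (written in Python purely for illustration; the class, table, and column names are invented):

        # Toy fluent SQL generator: domain classes describe their table and columns,
        # and a small builder produces parameterized SELECT statements from them.
        class Customer:
            __table__ = "Customers"
            __columns__ = ("Id", "Name", "City")

        class Query:
            def __init__(self, entity):
                self.entity = entity
                self._wheres = []
                self._params = []

            def where(self, column, value):
                if column not in self.entity.__columns__:
                    raise ValueError("unknown column: " + column)  # the "typed" check
                self._wheres.append(column + " = ?")
                self._params.append(value)
                return self  # allow chaining

            def to_sql(self):
                sql = "SELECT " + ", ".join(self.entity.__columns__) + " FROM " + self.entity.__table__
                if self._wheres:
                    sql += " WHERE " + " AND ".join(self._wheres)
                return sql, tuple(self._params)

        sql, params = Query(Customer).where("City", "Oslo").where("Name", "Ada").to_sql()
        print(sql)     # SELECT Id, Name, City FROM Customers WHERE City = ? AND Name = ?
        print(params)  # ('Oslo', 'Ada')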

    Read the article

  • query in query builder in a Table Adapter

    - by Sony
    I am working with .NET datasets. I have an Oracle query that works fine, but when I copy the query as a SQL statement into the TableAdapter wizard and then click the Query Builder button, it reports a SQL syntax error. The query is below:

        SELECT lead_id, NAME, ADDRESS, CITY, EMAIL, PHONE, PINCODE, STATE, QUALIFICATION, DOB, status
        FROM (SELECT l.lead_id, l.NAME, l.ADDRESS, l.CITY, l.EMAIL, l.PHONE, l.PINCODE, l.STATE,
                     l.QUALIFICATION, l.DOB,
                     CASE WHEN s.status IS NULL THEN 'Not Updated !' ELSE s.status END status,
                     row_number() over(PARTITION BY l.lead_id ORDER BY t.CREATED_DATE DESC) rn
              FROM LEADS l
              JOIN Leads lc
                ON l.USER_ID = lc.USER_ID
               AND l.USER_ID = :iuser_id
               AND (l.CREATED_DATE BETWEEN TO_DATE(:ifrom_date, 'dd-mm-yyyy')
                                       AND TO_DATE(:ito_date, 'dd-mm-yyyy'))
              LEFT JOIN LEADTRANSACTION t ON l.lead_id = t.lead_id
              LEFT JOIN STATUS s ON s.STATUS_ID = t.STATUS_ID)
        WHERE rn = 1;

    Read the article
