Search Results

Search found 356 results on 15 pages for 'datasets'.

  • Open source GIS tools

    - by TRV_SQL
    Hi, I've been looking at open source GIS tools, in particular MapServer and GeoServer. The problem I'm seeing is that to actually deploy these to the public you can't use a regular $5/month (or free) hosting service, because you have to install these services on the server in ways that are not possible under the average hosting scheme. So you either have to use a host that has MapServer pre-installed (many of which look unreliable) or get a dedicated server or VPS, and all of those options have a significant cost barrier ($30 - $200/month). I'm just doing this for fun. Are there any free or inexpensive ways to have your GIS services hosted? Or are there any products that install in a way that doesn't require root access to the server? I have tried OpenLayers and GeoExt, but I don't think a client-side option will work for me because of the size of the datasets I am using. My base data will be vector data, not WMS data (or something similar). I haven't tried Google Maps yet, but I will be looking into it. Also, any thoughts on using SVG for GIS purposes? Thanks

    Read the article

  • SVM Classification - minimum number of input sets for each class

    - by Amol Joshi
    I'm trying to build an app to detect images which are advertisements on web pages. Once I detect those, I'll prevent them from being displayed on the client side. From the help I got here on Stack Overflow, I concluded that SVM is the best approach for my aim, so I have coded an SVM with SMO myself. The dataset I got from the UCI data repository has 3,280 instances (link to dataset: http://archive.ics.uci.edu/ml/datasets/Internet+Advertisements), of which around 400 are from the class representing advertisement images and the rest represent non-advertisement images. Right now I'm taking the first 2,800 input sets and training the SVM on them. But after looking at the accuracy rate I realised that most of those 2,800 input sets are from the non-advertisement image class, so I'm only getting very good accuracy for that class. So what can I do here? How many input sets should I give the SVM for training, and how many of them from each class? Thanks. Cheers. (I made this a new question because the context is different from my previous question: http://stackoverflow.com/questions/1991113/optimization-of-neural-network-input-data)
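
    A common first step for this kind of class imbalance is to build the training set with a fixed ratio between the classes, rather than taking the first N rows of the file. A minimal C# sketch of such a stratified draw (the Sample class and the per-class count are assumptions for illustration):

        using System;
        using System.Collections.Generic;
        using System.Linq;

        class Sample
        {
            public double[] Features;
            public bool IsAd;   // true = advertisement class
        }

        static class Stratify
        {
            // Shuffle each class independently and draw up to perClass
            // samples from each, so the training data is balanced instead
            // of dominated by the ~2800 non-ad instances.
            public static List<Sample> BalancedTrainingSet(
                IList<Sample> all, int perClass, Random rng)
            {
                return all.GroupBy(s => s.IsAd)
                          .SelectMany(g => g.OrderBy(_ => rng.Next())
                                            .Take(perClass))
                          .ToList();
            }
        }

    With only ~400 ad instances available, drawing e.g. 350 from each class for training and holding the remainder out for testing gives a far more honest accuracy figure than training on the first 2,800 rows.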

    Read the article

  • Database structure - is MySQL the right choice?

    - by Industrial
    Hi everyone, we are currently planning the database structure of a quite complex e-commerce web app that has flexibility as its main cornerstone. Our app features a large amount of data (products), and we have run into a slight headache trying to keep performance high without compromising normalization rules in the database, or leaving our highly beloved flexibility concept behind when integrating product options (also widely known as product attributes or parameters). Based on various references and sources available, we have made up lists of the pros and cons of all the major and well-known database patterns that solve this. After comparing these, we have come up with two final alternatives. EAV (entity-attribute-value model): Pros: the database is used for all sorting. Cons: every related query involves a number of joins between multiple tables in order to assemble the complete data. SLOB (serialized LOB, also known as Facade?): Pros: very flexible; keeps the number of necessary joins low compared to an EAV design pattern; easy to update/add/remove data for each product. Cons: all sorting must be done by the application instead of the database, which will use a lot of memory when big datasets are processed by a large number of users. Our main questions: Which pattern/structure would you use, or maybe even a different solution? Are there better databases than MySQL available nowadays to accomplish what we want? Thanks a lot! Reference: http://stackoverflow.com/questions/695752/product-table-many-kinds-of-product-each-product-has-many-parameters

    Read the article

  • Displaying single-instance business object data on SSRS

    - by Lachlan
    I have a SQL Server Reporting Services local (i.e. RDLC) report displayed in a ReportViewer, with two subreports. I am using business objects to populate the datasets. What is the best way to populate single-instance data on my report, e.g. a dynamic title, or a text box that shows a calculated value not based on the report data? I am currently displaying data using classes of the following shape:

        public class MyRecordList
        {
            string Name { get; set; }
            List<MyRecord> Records { get; set; }
        }

        public class MyRecord
        {
            string Description { get; set; }
            string Value { get; set; }
        }

    I set the data source to the Records of an instance of MyRecordList, and they print out fine in a table. But adding a textbox and referring to Name displays nothing. I also tried turning Name into a List and referring to the first element in the list, using:

        =First(Fields!Name.Value, "Report1_MyRecordList")

    but still nothing is printed on the report.
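
    For single-instance values like a title, one hedged alternative is to pass them to the RDLC as report parameters instead of through a data source. A sketch against the ReportViewer API (the parameter name "ReportTitle" is an assumption and must also be declared as a parameter in the RDLC itself; reportViewer1 is the designer-generated control name):

        using Microsoft.Reporting.WinForms;

        // A textbox in the report then references this value as
        // =Parameters!ReportTitle.Value - no data region involved.
        reportViewer1.LocalReport.SetParameters(new[] {
            new ReportParameter("ReportTitle", myRecordList.Name)
        });
        reportViewer1.RefreshReport();

    Parameters can be forwarded to the two subreports the same way from the SubreportProcessing event, which keeps single-instance values out of the record-list classes entirely.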

    Read the article

  • OleDb database to DataSet and back in c#?

    - by Troy
    I'm writing a program that lets a user: connect to an (arbitrary) database; view all of the tables in that database in separate DataGridViews; edit them in the program, generate random data, and see the results; and choose to commit those changes or revert. So I discovered the DataSet class, which looks like it's capable of holding everything that a database would, and I decided that the best thing to do here would be to load everything into one dataset, let the user edit it, and then save the dataset back to the database. The problem is that the only way I can find to load the database tables is this:

        set = new DataSet();
        DataTable schema = connection.GetOleDbSchemaTable(
            OleDbSchemaGuid.Tables,
            new string[] { null, null, null, "TABLE" });
        foreach (DataRow row in schema.Rows)
        {
            string tableName = row.Field<string>("TABLE_NAME");
            DataTable dTable = new DataTable();
            new OleDbDataAdapter("SELECT * FROM " + tableName, connection).Fill(dTable);
            dTable.TableName = tableName;
            set.Tables.Add(dTable);
        }

    while it seems like there should be a simpler way, given that datasets appear to be designed for exactly this purpose. The real problem, though, is saving these things. In order to use the OleDbDataAdapter.Update() method, I'm told that I have to provide valid INSERT queries. Doesn't that negate the whole point of having a class to handle this stuff for me? Anyway, I'm hoping somebody can either explain how to load and save a database into a dataset, or give me a better idea of how to do what I'm trying to do. I could always assemble the commands myself, but that doesn't seem like the best solution.
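
    For what it's worth, ADO.NET can generate the missing INSERT/UPDATE/DELETE commands for simple single-table SELECTs via OleDbCommandBuilder. A hedged sketch (this only works when the underlying table has a primary key the builder can see):

        using System.Data;
        using System.Data.OleDb;

        var adapter = new OleDbDataAdapter("SELECT * FROM " + tableName, connection);
        // Derives INSERT/UPDATE/DELETE commands from the SELECT's schema.
        var builder = new OleDbCommandBuilder(adapter);
        adapter.Fill(set, tableName);

        // ... user edits the table in a DataGridView ...

        adapter.Update(set, tableName);   // commit changed rows to the database
        // to revert instead of committing, call set.RejectChanges()

    Keeping one adapter (and its builder) per table alive for the session gives you the load/edit/commit-or-revert cycle without hand-writing any commands.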

    Read the article

  • Updating multiple related tables in SQLite

    - by PerryJ
    Just some background, sorry it's so long-winded. I'm using the System.Data.SQLite ADO.NET adapter to create a local SQLite database, and this will be the only process hitting the database, so I don't need to worry about concurrency. I'm building the database from various sources and don't want to build it all in memory using datasets or data adapters or anything like that; I want to do this using SQL (DbCommands). I'm not very good with SQL and a complete noob in SQLite. I'm basically using SQLite as a local database / save-file structure. The database has a lot of related tables. The data has nothing to do with people or regions or districts, but to use a simple analogy, imagine: a Region table with an autoincrement RegionID, a RegionName column and various optional columns; a District table with an autoincrement DistrictID, DistrictName, RegionID, and various optional columns; and a Person table with an autoincrement PersonID, PersonName, DistrictID, and various optional columns. Now I get some data representing RegionName, DistrictName, PersonName, and other Person-related data, and the Region, District and/or Person may or may not exist at that point. Once again, not being the greatest at this, my thoughts would be something like: check whether the Region exists, and if so get the RegionID, else create it and get the RegionID; check whether the District exists, and if so get the DistrictID, else create it (filling in the RegionID from above) and get the DistrictID; check whether the Person exists, and if so get the PersonID, else create it (filling in the DistrictID from above) and get the PersonID; then update the Person with the rest of the data. In MS SQL Server I would create a stored procedure to handle all this. The only way I can see to do this with SQLite is a lot of commands, so I'm sure I'm not getting it. I've spent hours looking around on various sites but just don't feel like I'm going down the right road. Any suggestions would be greatly appreciated.
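
    That lookup-or-insert chain really is just a couple of short parameterized commands per level in SQLite; there are no stored procedures, but a small helper per table keeps it tidy. A minimal sketch with System.Data.SQLite (names follow the analogy above; District and Person work the same way with their extra foreign-key parameter):

        using System.Data.SQLite;

        // Returns the existing RegionID, or inserts the row and returns
        // the new autoincrement id via SQLite's last_insert_rowid().
        static long GetOrCreateRegion(SQLiteConnection conn, string name)
        {
            using (var find = new SQLiteCommand(
                "SELECT RegionID FROM Region WHERE RegionName = @name", conn))
            {
                find.Parameters.AddWithValue("@name", name);
                object id = find.ExecuteScalar();
                if (id != null) return (long)id;
            }
            using (var insert = new SQLiteCommand(
                "INSERT INTO Region (RegionName) VALUES (@name)", conn))
            {
                insert.Parameters.AddWithValue("@name", name);
                insert.ExecuteNonQuery();
            }
            using (var rowid = new SQLiteCommand(
                "SELECT last_insert_rowid()", conn))
            {
                return (long)rowid.ExecuteScalar();
            }
        }

    Wrapping the Region/District/Person resolution for one incoming record in a single transaction keeps it atomic and, in SQLite, dramatically faster.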

    Read the article

  • Practical size limitations for RDBMS

    - by grenade
    I am working on a project that must store very large datasets and associated reference data. I have never come across a project that required tables quite this large. I have proved that at least one development environment cannot cope at the database tier with the processing required by the complex queries against views that the application layer generates (views with multiple inner and outer joins, grouping, summing and averaging against tables with 90 million rows). The RDBMS that I have tested against is DB2 on AIX. The dev environment that failed was loaded with 1/20th of the volume that will be processed in production. I am assured that the production hardware is superior to the dev and staging hardware but I just don't believe that it will cope with the sheer volume of data and complexity of queries. Before the dev environment failed, it was taking in excess of 5 minutes to return a small dataset (several hundred rows) that was produced by a complex query (many joins, lots of grouping, summing and averaging) against the large tables. My gut feeling is that the db architecture must change so that the aggregations currently provided by the views are performed as part of an off-peak batch process. Now for my question. I am assured by people who claim to have experience of this sort of thing (which I do not) that my fears are unfounded. Are they? Can a modern RDBMS (SQL Server 2008, Oracle, DB2) cope with the volume and complexity I have described (given an appropriate amount of hardware) or are we in the realm of technologies like Google's BigTable? I'm hoping for answers from folks who have actually had to work with this sort of volume at a non-theoretical level.

    Read the article

  • Detecting an XML namespace fast

    - by Anna Tjsoken
    Hello there, this may be a very trivial problem I'm trying to solve, but I'm sure there's a better way of doing it, so please go easy on me. I have a bunch of XSD files that are internal to our application, and we have about 20-30 XML files that implement datasets based on those XSDs. Some XML files are small (<100Kb); others are about 3-4Mb, with a few being over 10Mb. I need a way of working out what namespace each XML file uses in order to provide (something like) IntelliSense based on the XSD. The implementation of that is not an issue - another developer has written the code for it. But I'm not sure what the best (and fastest!) way of detecting the namespace is without using XmlDocument (which does a full parse). I'm using C# 3.5 and the documents come through as a Stream (some are remote files). All the files are *.xml (I could detect the type from the extension if that were enough), but unfortunately the XML namespace is the only reliable indicator. Right now I've tried XmlDocument, but I've found it inefficient and slow, since even the 100Kb documents wait on a full parse. My method signature looks something like this (overloads include a string for the content):

        public string GetNamespaceForDocument(Stream document);

    Would a (compiled) RegEx pattern be good? How does Visual Studio manage this so efficiently? Another colleague has told me to find a fast XML parser in C/C++, parse the content there, and have a stub that gives back the namespace, since parsing is supposedly slower in .NET - is this a good idea?
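
    XmlReader is probably the tool here: it is a streaming pull parser, so reading just to the document element and stopping touches only the first few hundred bytes of even a 10Mb file. A minimal sketch (assumes the namespace wanted is that of the root element, and leaves out error handling):

        using System.IO;
        using System.Xml;

        public string GetNamespaceForDocument(Stream document)
        {
            using (XmlReader reader = XmlReader.Create(document))
            {
                reader.MoveToContent();     // skips prolog/comments to the root element
                return reader.NamespaceURI; // namespace of the document element
            }
        }

    Because nothing past the root tag is read, this should be close to the floor for managed code, which would make the C/C++ stub unnecessary.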

    Read the article

  • Viewstate in a .ashx Handler?

    - by Matt Dawdy
    I've got a handler (list.ashx for example) that has a method that retrieves a large dataset, then grabs only the records that will be shown on any given "page" of data. We are allowing the users to sort these results. So, on any given page run, I will be retrieving a dataset that I just got a few seconds/minutes ago, but reordering it, or showing the next page of data, etc. My point is that my dataset really hasn't changed. Normally the dataset would be stuck into the ViewState of a page, but since I'm using a handler, I don't have that convenience - at least I don't think so. So, what is a common way to store the state associated with a given user's current page when using a handler? Is there a way to take the dataset, encode it somehow, send it back to the user, and then on the next call have it passed back so I can rehydrate a dataset from those bits? I don't think Session would be a good place to store it, since we might have 1000 users all viewing different datasets of different data, and that could bring the server to its knees - at least I think it could. Does anyone have any experience with this kind of situation, and can you give me any advice?
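
    A common middle ground here is the ASP.NET application cache keyed per user and query, with a sliding expiration so idle entries fall out on their own - and, unlike Session, cache entries can be evicted under memory pressure, which addresses the thousand-users worry at the cost of an occasional re-fetch. A hedged sketch inside the handler (the key scheme, timeout, and LoadDataSet helper are assumptions):

        using System;
        using System.Data;
        using System.Web;
        using System.Web.Caching;

        public void ProcessRequest(HttpContext context)
        {
            // One cache slot per user+query; expires 5 minutes after last use.
            string key = "list:" + context.Request.QueryString["user"]
                       + ":" + context.Request.QueryString["q"];
            DataSet ds = context.Cache[key] as DataSet;
            if (ds == null)
            {
                ds = LoadDataSet(context);   // hypothetical expensive fetch
                context.Cache.Insert(key, ds, null,
                    Cache.NoAbsoluteExpiration, TimeSpan.FromMinutes(5));
            }
            // ... sort/page ds and write the response ...
        }

    Re-sorting and paging then work against the cached copy, so the expensive query runs once per user rather than once per page or sort click.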

    Read the article

  • Replicating SQL's 'Join' in Python

    - by Daniel Mathews
    I'm in the process of trying to switch from R to Python (mainly for issues around general flexibility). With NumPy, matplotlib and IPython, I've been able to cover all my use cases save for merging 'datasets'. I would like to simulate SQL's join clause (inner, outer, full) purely in Python. R handles this with the 'merge' function. I've tried numpy.lib.recfunctions' join_by, but it has critical issues with duplicates along the 'key':

        join_by(key, r1, r2, jointype='inner', r1postfix='1', r2postfix='2',
                defaults=None, usemask=True, asrecarray=False)

        Join arrays r1 and r2 on key key. The key should be either a string or
        a sequence of strings corresponding to the fields used to join the
        array. An exception is raised if the key field cannot be found in the
        two input arrays. Neither r1 nor r2 should have any duplicates along
        key: the presence of duplicates will make the output quite unreliable.
        Note that duplicates are not looked for by the algorithm.

    source: http://presbrey.mit.edu:1234/numpy.lib.recfunctions.html Any pointers or help will be most appreciated!

    Read the article

  • Neural Network with softmax activation

    - by Cambium
    This is more or less a research project for a course, and my understanding of NNs is very/fairly limited, so please be patient :) I am currently building a neural network that attempts to examine an input dataset and output the probability/likelihood of each classification (there are 5 different classifications). Naturally, the sum of all output nodes should add up to 1. Currently I have two layers, and I set the hidden layer to contain 10 nodes. I came up with two different implementations: 1) logistic sigmoid for hidden layer activation, softmax for output activation; 2) softmax for both hidden layer and output activation. I am using gradient descent to find local optima in order to adjust the hidden nodes' weights and the output nodes' weights. I am certain that I have this correct for the sigmoid. I am less certain with softmax (or whether I can use gradient descent at all); after a bit of research, I couldn't find the answer and decided to compute the derivative myself, obtaining softmax'(x) = softmax(x) - softmax(x)^2 (this returns a column vector of size n). I have also looked into the MATLAB NN toolkit; the derivative of softmax provided by the toolkit returns a square matrix of size nxn, where the diagonal coincides with the softmax'(x) that I calculated by hand, and I am not sure how to interpret that output matrix. I ran each implementation with a learning rate of 0.001 and 1000 iterations of backpropagation. However, my NN returns 0.2 (an even distribution) for all five output nodes, for any subset of the input dataset. My conclusions:
    o I am fairly certain that my gradient descent is implemented incorrectly, but I have no idea how to fix it.
    o Perhaps I am not using enough hidden nodes.
    o Perhaps I should increase the number of layers.
    Any help would be greatly appreciated! The dataset I am working with can be found here (processed Cleveland): http://archive.ics.uci.edu/ml/datasets/Heart+Disease
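
    For reference, the full derivative of softmax is a Jacobian rather than a vector, which is exactly what the MATLAB toolkit is returning. Writing s = softmax(x), the standard result is

        \frac{\partial s_i}{\partial x_j} = s_i(\delta_{ij} - s_j)
        \qquad\Longrightarrow\qquad
        J = \operatorname{diag}(s) - s s^{\top}

    The diagonal entries are s_i(1 - s_i) = s_i - s_i^2, i.e. the hand-computed column vector; the off-diagonal terms -s_i s_j are what the vector form drops, and backpropagation through a softmax layer needs the whole Jacobian. (A standard simplification: when softmax output is paired with a cross-entropy loss, the combined gradient at the output layer collapses to s - t for target vector t, and no explicit Jacobian is needed.)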

    Read the article

  • VS2008 DataSet Wizard doesn't match tables for updating

    - by James H
    Hi all, first question ever on this site. I've been having a really stubborn problem using Visual Studio 2008, and I'm hoping someone has figured this out before. I have 2 libraries and 1 project that use strongly typed datasets (MSSQL backend) that I generated using the "Configure DataSet with Wizard" option in Data Sources. I've had them working just fine for a while, and I've written a lot of code in the non-designer file for the row classes. I've also specified a lot of custom queries using the dataset designer. This is all work I can't afford to lose. I recently made some changes to reorganize my libraries, which included changing the names of the libraries themselves. I also changed the connection string to point to a different database, a development copy (same exact schema). The problem is that now, when I open up "Configure DataSet with Wizard" to pick up a new column I've added to one of the tables, it no longer matches the tables correctly. The wizard displays all of the tables in the database, and none of them have check boxes next to them (i.e. they are not part of this dataset). Below those it shows all of the tables again, but with red Xs, and these are checked. Basically, Visual Studio sees all of the tables it currently has in the DataSet and sees all of the tables in the database, but believes they are no longer the same and thus do not match! I had this same thing happen quite a while back, and I think I just rebuilt the xsd from scratch, manually copied the code over, and then had to redefine all of the custom queries I had built in the dataset designer. That's not a good solution. I'm looking for 2 answers: 1. What causes this to happen and how do I prevent it? 2. How do I fix this so that the wizard once again believes the tables in its xsd are the same tables that are in the database (yes, they still have exactly the same names)? Thanks.

    Read the article

  • DataReader-DataSet Hybrid solution

    - by G33kKahuna
    My solution architects and I have exhausted both pure DataSet and pure DataReader solutions. Basically we have a Microsoft .NET 2.0 Windows service application that pulls data based on a query and processes additional tasks per record; almost a poor man's workflow system. The recordsets are broad (in terms of columns) and deep (in terms of the number of records). We observed that DataSet performs much better, but runs into constraints as the number of records increases, say 100K+: we start seeing System.OutOfMemoryException on a 4G machine with processModel configured to run with memoryLimit set to 85. Since this is a multi-threaded app, multiple threads can be processing different queries and building different DataSets, so we run into the exception even sooner in that case. DataReader, on the other hand, works, but is a lot slower and hits other constraints: if there is some sort of disconnect it has to start over again, it can leave open connections on the DB side, and in the worst case it takes down the service completely, etc. So we decided the best option would be some sort of hybrid solution. I'm open to guidance and suggestions. Are there any hybrid solutions available? Any other suggestions?
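
    One hybrid worth sketching is windowed fills: page through the query with the DataAdapter overload that materializes only a bounded slice, so each worker thread holds one small DataSet at a time instead of the whole result. A hedged sketch (the page size and the ProcessRecords helper are assumptions):

        using System.Data;
        using System.Data.SqlClient;

        const int PageSize = 5000;
        var adapter = new SqlDataAdapter(query, connection);
        int start = 0;
        while (true)
        {
            var page = new DataSet();
            // Fills only rows [start, start + PageSize) of the result set.
            int fetched = adapter.Fill(page, start, PageSize, "Work");
            if (fetched == 0) break;
            ProcessRecords(page.Tables["Work"]);   // hypothetical per-record tasks
            start += fetched;
        }

    One caveat: this Fill overload re-executes the query and skips the leading rows on each call, so for very deep result sets keyset paging in the query itself (WHERE Id > @lastId ORDER BY Id) scales better while keeping the same small-DataSet shape.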

    Read the article

  • How to automatically read in calculated values with PHPExcel?

    - by Edward Tanguay
    I have an Excel file that I read in by looping over every cell and getting the value with getCell(...)->getValue():

        $highestColumnAsLetters = $this->objPHPExcel->setActiveSheetIndex(0)->getHighestColumn(); //e.g. 'AK'
        $highestRowNumber = $this->objPHPExcel->setActiveSheetIndex(0)->getHighestRow();
        $highestColumnAsLetters++;
        for ($row = 1; $row < $highestRowNumber + 1; $row++) {
            $dataset = array();
            for ($columnAsLetters = 'A'; $columnAsLetters != $highestColumnAsLetters; $columnAsLetters++) {
                $dataset[] = $this->objPHPExcel->setActiveSheetIndex(0)->getCell($columnAsLetters.$row)->getValue();
                if ($row == 1) {
                    $this->column_names[] = $columnAsLetters;
                }
            }
            $this->datasets[] = $dataset;
        }

    However, although it reads the plain data in fine, it reads calculated cells in literally, as formula text. I understand from discussions like this one that I can use getCalculatedValue() for calculated cells. The problem is that in the Excel sheets I am importing, I do not know beforehand which cells are calculated and which are not. Is there a way for me to read the value of a cell so that it automatically returns the plain value if the cell has one, and the result of the calculation if it is a formula? Answer: it turns out that getCalculatedValue() works for all cells, which makes me wonder why this isn't the default for getValue(), since one would usually want the results of the calculations rather than the formula text. In any case, this works:

        ...->getCell($columnAsLetters.$row)->getCalculatedValue();

    Read the article

  • How can I provide users with the functionality of the DBUnit DatabaseOperation methods from a web interface?

    - by reckoner
    I am currently updating a Java-based web application which allows database developers to create stored procedure regression test suites for database testing. Currently, for the test setup, execution and clean-up stages, the user is provided with text boxes where they can enter SQL code which is executed by the isql command. I would like to extend the application to use DBUnit's DatabaseOperation methods, to provide more ways of setting up the state of the database than just SQL statements. The main reason for using DBUnit rather than plain SQL statements is to be able to create and store XML and XLS DataSets on a server, where they can be associated with their test cases and used for data setup. My question is: how can I provide users with the functionality of the DBUnit DatabaseOperation methods from a web interface? I have considered:
    1. Creating a simple scripting language and a parser to read some simple syntax involving the DBUnit method names, each taking as a parameter the file location of an XML or XLS DataSet. I was thinking of letting the user register the files they need with the web app, which would catalogue them and give each file an identifier that could be passed as a parameter to the methods of this simple language.
    2. Creating an XML DTD which lets the user specify operations and parameters. If I went this route, how would I execute the methods and parameters that I parse from the XML document?
    3. Creating a table in the database which stores the method and an FK relation to a catalogued DataSet file; however, I don't think this would be a good solution, because data entry would be tedious.
    Thanks for your help.

    Read the article

  • DropDownList in DataGrid produces error

    - by S Nash
    I have a DataGrid, and every time I change the value of its embedded DropDownList I get: "System.Web.HttpException: The IListSource does not contain any data sources." The data grid itself loads fine, and the Name column is editable via a DropDownList. The DataGrid gets its values from a table (SELECT Name, Address FROM Persons), and the dropdown list also gets its list of names from the same table, so the DataGrid and the DropDownList are bound to 2 different datasets. Here is my markup for the DataGrid:

        <Columns>
            <ASP:ButtonColumn Text="Delete" CommandName="Delete"></ASP:ButtonColumn>
            <asp:EditCommandColumn ButtonType="LinkButton" UpdateText="Update"
                CancelText="Cancel" EditText="Edit"></asp:EditCommandColumn>
            <ASP:TemplateColumn HeaderText="Name" SortExpression="FY"
                HeaderStyle-HorizontalAlign="center" HeaderStyle-Wrap="True">
                <ItemStyle Wrap="false" HorizontalAlign="left" />
                <ItemTemplate>
                    <ASP:Label ID="Name" Text='<%# DataBinder.Eval(Container.DataItem, "Name") %>' runat="server"/>
                </ItemTemplate>
                <EditItemTemplate>
                    <ASP:DropDownList id="ddlName" cssClass="DropDownList" runat="server"
                        datasource="<%#allNames%>" DataTextField="Name" DataValueField="ID"
                        Defaultvalue='<%# DataBinder.Eval(Container.DataItem, "ID") %>'
                        OnPreRender="SetDefaultListItem" accessKey="I" AutoPostBack="true" />
                </EditItemTemplate>
            </ASP:TemplateColumn>
            <asp:BoundColumn DataField="Address" ReadOnly="True" HeaderText="Address"></asp:BoundColumn>
        </Columns>

    The error happens here:

        dg.DataSource = ds
        dg.DataBind()

    Any ideas how to solve this issue? All I want is a DataGrid with an editable column, which can be edited by choosing one of the values in a dropdownlist.
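
    That exception is typically thrown when the object being bound cannot yield a list - most often a DataSet that is empty at the moment DataBind() runs (e.g. the re-bind triggered by the dropdown's AutoPostBack fires before the DataSet has been re-filled). A hedged sketch of the usual checks (in C# for brevity; the adapter and table name are assumptions):

        // Re-fill on every request (postbacks included) so the grid never
        // binds against an empty DataSet, and bind a concrete table
        // rather than the bare DataSet.
        if (ds.Tables.Count == 0)
            adapter.Fill(ds, "Persons");

        dg.DataSource = ds.Tables["Persons"].DefaultView;
        dg.DataBind();

    The same applies to allNames for the dropdownlist: it must be populated again on the postback raised by AutoPostBack, not only on the first load.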

    Read the article

  • .NET - Is there a way to programmatically fill all tables in a strongly-typed dataset?

    - by Mike Loux
    Hello, all! I have a SQL Server database for which I have created a strongly-typed DataSet (using the DataSet Designer in Visual Studio 2008), so all the adapters and select commands and whatnot were created for me by the wizard. It's a small database with largely static data, so I would like to pull the contents of this DB in its entirety into my application at startup, and then grab individual pieces of data as needed using LINQ. Rather than hard-code each adapter Fill call, I would like to see if there is a way to automate this (possibly via Reflection). So, instead of:

        Dim _ds As New dsTest
        dsTestTableAdapters.Table1TableAdapter.Fill(_ds.Table1)
        dsTestTableAdapters.Table2TableAdapter.Fill(_ds.Table2)
        <etc etc etc>

    I would prefer to do something like:

        Dim _ds As New dsTest
        For Each tableName As String In _ds.Tables
            Dim adapter As Object = <routine to grab adapter associated with the table>
            adapter.Fill(tableName)
        Next

    Is that even remotely doable? I have done a fair amount of searching, and I wouldn't think this would be an uncommon request, but I must be either asking the wrong question, or I'm just weird to want to do this. I will admit that I usually prefer to use unbound controls and not go with strongly-typed datasets (I prefer to write SQL directly), but my company wants to go this route, so I'm researching it. I think the idea is that as tables are added, we can just refresh the DataSet using the Designer in Visual Studio and not have to make too many underlying DB code changes. Any help at all would be most appreciated. Thanks in advance!
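
    It is doable with reflection against the designer's naming convention. A hedged sketch (in C# for brevity; it assumes the generated convention <Namespace>.<DataSetName>TableAdapters.<TableName>TableAdapter and a Fill method taking the typed table, which is worth verifying against the designer output):

        using System;
        using System.Data;
        using System.Reflection;

        static void FillAll(DataSet ds)
        {
            Assembly asm = ds.GetType().Assembly;
            foreach (DataTable table in ds.Tables)
            {
                // e.g. "MyApp.dsTestTableAdapters.Table1TableAdapter"
                string typeName = ds.GetType().FullName + "TableAdapters."
                                + table.TableName + "TableAdapter";
                Type adapterType = asm.GetType(typeName, true);
                object adapter = Activator.CreateInstance(adapterType);
                // Invoke the generated Fill(<typed table>) overload.
                adapterType.GetMethod("Fill").Invoke(adapter, new object[] { table });
            }
        }

    Each DataTable in a typed DataSet is already an instance of its typed table class, so passing it straight to the generated Fill overload works; connection strings come from the adapters' own settings, just as in the hard-coded version.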

    Read the article

  • How to handle large dataset with JPA (or at least with Hibernate)?

    - by Roman
    I need to make my web app work with really huge datasets. At the moment I get either an OutOfMemoryException or output that takes 1-2 minutes to generate. Let's keep it simple and suppose we have 2 tables in the DB: Worker and WorkLog, with about 1,000 rows in the first one and 10,000,000 rows in the second one. The latter table has several fields, including 'workerId' and 'hoursWorked', among others. What we need is: (1) the total hours worked by each user, and (2) the list of work periods for each user. The most straightforward approach (IMO) for each task in plain SQL is:

        1) select Worker.name, sum(hoursWorked) from Worker, WorkLog
           where Worker.id = WorkLog.workerId
           group by Worker.name;
           // results of this query should be transformed to Multimap<Worker, Long>

        2) select Worker.name, WorkLog.start, WorkLog.hoursWorked
           from Worker, WorkLog
           where Worker.id = WorkLog.workerId;
           // results of this query should be transformed to Multimap<Worker, Period>
           // if this were JDBC, it would be vital
           // to call resultSet.setFetchSize(someSmallNumber), ~100

    So, I have two questions: how do I implement each of these approaches with JPA (or at least with Hibernate), and how would you handle this problem (with JPA or Hibernate, of course)?

    Read the article

  • How to use a VB usercontrol on a C# page?

    - by ks78
    Hopefully someone will be able to point me in the right direction. I've created a usercontrol in VB that handles paging more efficiently than the DataPager (at least for very large datasets). I'd like to use it in a C# project, but I've been having trouble getting it to work. I've tried simply adding PagingControl.ascx to the C# project, but when I do that, the markup and the VB code-behind don't seem to see each other. --Is this a namespace issue? I've also tried adding PagingControl.ascx to its own VB project, then adding that project to the C# project's solution, as well as a reference. --That almost works: I can register the PagingControl usercontrol in the markup, and I can access the usercontrol's properties in the code-behind, but any property that involves the UI of the usercontrol fails. It seems as if the usercontrol's form hasn't had a chance to load by the time the C# page's Page_Load event handler fires. --Maybe this is an "order of operations" problem? At what point in the C# page's lifetime should a usercontrol's form be loaded? If anyone has any ideas or insight, I'd really appreciate it. Thanks in advance.

    Read the article

  • R : remove columns from dataframe where ALL values are NA

    - by Sophomore
    Hello everybody! I'm having some trouble with my huge data frame and couldn't really resolve the question myself. The data frame has some properties as columns, and each row represents one dataset. I've done some sanitizing of this data frame (e.g. getting rid of datasets which are not to be included in the evaluation). (For whoever might be interested: beforehand I aggregate around 5,000 single text files and put them in a TSV; some of the properties have a sequence number like "button.pressed.1" ... "button.pressed.n". Some of the excluded sets had really high numbers for n; all the sets left have much smaller numbers for n, but a column like "button.pressed.50" is still there, and all remaining sets have an NA in that column. It's actually a different property, but the example should clarify my intention...) So the question is quite simple (for some sophisticated R pro): I need to get rid of the columns where the values of ALL rows are NA. Could someone please help me out? (All I have managed is to get rid of columns where at least one NA exists, which dropped about half my columns.)

    Read the article

  • SqlCeResultSet Problem

    - by Vlad
    Hello, I have a SmartDevice project (.NET CF 2.0) configured to be tested on the USA Windows Mobile 5.0 Pocket PC R2 emulator. My project uses SqlCe 3.0. Understanding that a SmartDevice project has to be "more careful" with the device's memory, I am using SqlCeResultSets. The result sets are strongly typed, autogenerated by Visual Studio 2008 using the custom tool MSResultSetGenerator. The problem I am facing is that the result set does not recognize any column names; the autogenerated code for the fields does not work. In the client code I am using:

        InfoResultSet rs = new InfoResultSet();
        rs.Open();
        rs.ReadFirst();
        string myFormattedDate = rs.MyDateColumn.ToString("dd/MM/yyyy");

    When execution on the emulator reaches rs.MyDateColumn, the application throws a System.IndexOutOfRangeException. Investigating the stack trace:

        at System.Data.SqlServerCe.FieldNameLookup.GetOrdinal()
        at System.Data.SqlServerCe.SqlCeDataReader.GetOrdinal()

    I've tested the GetOrdinal method (in my autogenerated class that inherits SqlCeResultSet):

        this.GetOrdinal("MyDateColumn");  // throws an exception
        this.GetName(1);                  // returns "MyDateColumn"
        this.GetOrdinal(this.GetName(1)); // throws an exception :)

    [edit added] The table exists and it's filled with data. Using typed DataSets works like a charm. Regenerating the SqlCeResultSet does not solve the issue; the problem remains. The problem, basically, is that I am not able to access a column by its name. The data can be accessed through this.GetDateTime(1), using the column ordinal, but the application fails executing this.GetOrdinal("MyDateColumn"). Also, I have updated Visual Studio 2008 to Service Pack 1. Additionally, I am developing the project on a virtual machine with Windows XP SP2, but in my opinion whether the environment is virtual or not should have no effect on development. Am I doing something wrong, or am I missing something? Thank you.

    Read the article

  • Incremental way of computing quantiles for a large set of data

    - by Gacek
    I need to compute quantiles for a large set of data. Let's assume we can get the data only in portions (i.e. one row of a large matrix at a time). To compute the Q3 quantile, one needs to get all the portions of the data, store them somewhere, then sort the whole thing and take the quantile:

        List<double> allData = new List<double>();
        // this is only an example; in fact the portions of data are not rows of some matrix
        foreach (var row in matrix)
        {
            allData.AddRange(row);
        }
        allData.Sort();
        double p = 0.75 * allData.Count;
        int idQ3 = (int)Math.Ceiling(p) - 1;
        double Q3 = allData[idQ3];

    Now, I would like to find a way of computing this without storing all the data in a separate variable. The best solution would be to compute some intermediate parameters for the first row and then adjust them step by step for the next rows. Notes: these datasets are really big (ca. 5,000 elements in each row); Q3 can be estimated, it doesn't have to be an exact value; I call the portions of data "rows", but they can have different lengths - usually it varies not so much (+/- a few hundred samples), but it varies! This question is similar to http://stackoverflow.com/questions/1058813/on-line-iterator-algorithms-for-estimating-statistical-median-mode-skewness but I need quantiles. There are also a few articles on this topic, e.g. http://web.cs.wpi.edu/~hofri/medsel.pdf and http://portal.acm.org/citation.cfm?id=347195&dl but before I try to implement these, I wanted to ask whether there are maybe any other, quicker ways of computing the 0.25/0.75 quantiles?
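
    One cheap way to get an estimate in a single pass, hedged as a sketch rather than anything rigorous: keep a fixed-size uniform random sample of the stream (reservoir sampling) and take the quantile of the sample. Memory stays constant no matter how many rows arrive, and accuracy grows with the reservoir size:

        using System;
        using System.Collections.Generic;

        class ReservoirQuantile
        {
            private readonly List<double> reservoir;
            private readonly int capacity;
            private readonly Random rng = new Random();
            private long seen;

            public ReservoirQuantile(int capacity)
            {
                this.capacity = capacity;
                reservoir = new List<double>(capacity);
            }

            // Feed one portion ("row") of the stream; rows may differ in length.
            public void Add(IEnumerable<double> row)
            {
                foreach (double x in row)
                {
                    seen++;
                    if (reservoir.Count < capacity)
                    {
                        reservoir.Add(x);
                    }
                    else
                    {
                        // Each element survives with probability capacity/seen.
                        long j = (long)(rng.NextDouble() * seen);
                        if (j < capacity) reservoir[(int)j] = x;
                    }
                }
            }

            // Estimate quantile q (e.g. 0.75 for Q3) from the current sample.
            public double Estimate(double q)
            {
                var sorted = new List<double>(reservoir);
                sorted.Sort();
                int id = (int)Math.Ceiling(q * sorted.Count) - 1;
                return sorted[Math.Max(0, id)];
            }
        }

    The P2 algorithm (Jain and Chlamtac) is the classic constant-memory quantile estimator if more accuracy is needed; the reservoir version above is just the cheapest thing that respects the "no full copy of the data" constraint.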

    Read the article

  • Interpolating data points in Excel

    - by Niels Basjes
    Hi, I'm sure this is the kind of problem others have solved many times before. A group of people are going to take measurements (home energy usage, to be exact). All of them will do that at different times and at different intervals, so what I'll get from each person is a set of {date, value} pairs with some dates missing from the set. What I need is a complete set of {date, value} pairs in which a value is known for each date within the range, either measured or calculated. I expect that a simple linear interpolation will suffice for this project. Assuming it must be done in Excel: what is the best way to interpolate in such a dataset (so I have a value for every day)? Thanks. NOTE: when these datasets are complete, I'll determine the slope (i.e. usage per day), and from that we can start doing home-to-home comparisons. ADDITIONAL INFO after the first few suggestions: I do not want to manually figure out where the holes are in my measurement sets (too many incomplete measurement sets!!). I'm looking for something (existing) automatic to do that for me. So if my input is

        {2009-06-01, 10}
        {2009-06-03, 20}
        {2009-06-06, 110}

    then I expect to automatically get

        {2009-06-01, 10}
        {2009-06-02, 15}
        {2009-06-03, 20}
        {2009-06-04, 50}
        {2009-06-05, 80}
        {2009-06-06, 110}

    Yes, I can write software that does this. I am just hoping that someone already has a "ready to run" (Excel) feature for this (rather generic) problem.
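
    For reference, the linear interpolation behind that expected output: treating dates as day numbers, a missing day x between known measurements (x_1, y_1) and (x_2, y_2) gets

        y = y_1 + (y_2 - y_1)\,\frac{x - x_1}{x_2 - x_1}

    e.g. 2009-06-04 lies between {2009-06-03, 20} and {2009-06-06, 110}, so y = 20 + 90 * (1/3) = 50, matching the list above.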

    Read the article
