Search Results

Search found 3135 results on 126 pages for 'typed dataset'.

  • How to convert Big Endian and how to flip the highest bit?

    - by Robert Frank
    I am using a TStream to read binary data (thanks to this post: http://stackoverflow.com/questions/2878180/how-to-use-a-tfilestream-to-read-2d-matrices-into-dynamic-array). My next problem is that the data is Big Endian. From my reading, the Swap() method is seemingly deprecated. How would I swap the types below?

      - 16-bit two's complement binary integer
      - 32-bit two's complement binary integer
      - 64-bit two's complement binary integer
      - IEEE single precision floating-point (are IEEE floats affected by Big Endian?)

    And finally, since the data is unsigned, the creators of this dataset have stored the unsigned values as signed integers (excluding the IEEE). They instruct that one need only add an offset (2^15, 2^31, or 2^63) to recover the unsigned data, but they note that flipping the most significant bit is the fastest way to do that. How does one efficiently flip the most significant bit of a 16-, 32-, or 64-bit integer? So, if the data on disk (16-bit) is "85 FB", the desired result after reading, swapping, and bit flipping would be 1531. Is there a way to accomplish the swapping and bit flipping with generics so it fits into the generic answer at the link above?

    Yes, kids, THIS is how scientific astronomical data is stored by NASA, ESO, and all professional astronomers. The FITS standard is considered by some to be one of the most successful standards ever created, given its proliferation and flexibility!
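
    The question targets Delphi, but the arithmetic is language-neutral. A minimal sketch of the 16-bit case, rendered in C# for illustration (an assumption, not the asker's code; the helper name is hypothetical), where XOR-ing the top bit is the same as adding 2^15:

        // Sketch: assemble a big-endian 16-bit word, then flip its MSB.
        // Bytes { 0x85, 0xFB } -> 0x85FB; 0x85FB ^ 0x8000 = 0x05FB = 1531.
        static ushort ReadOffsetUInt16BE(byte[] buf, int i)
        {
            ushort be = (ushort)((buf[i] << 8) | buf[i + 1]); // byte swap
            return (ushort)(be ^ 0x8000);                     // flip MSB = add 2^15
        }

    The 32- and 64-bit versions have the same shape with masks 0x80000000 and 0x8000000000000000UL; the IEEE floats still need the byte swap (endianness affects them too) but not the bit flip, since they were not offset.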

  • Elegant code question: How to avoid creating unneeded object

    - by SeaDrive
    The root of my question is that the C# compiler is too smart. It detects a path via which an object could be undefined, so it demands that I fill it. In the code, I look at the tables in a DataSet to see if there is one that I want. If not, I create a new one. I know that dtOut will always be assigned a value, but the compiler is not happy unless it's assigned a value when declared. This is inelegant. How do I rewrite this in a more elegant way?

        System.Data.DataTable dtOut = new System.Data.DataTable();
        // ...
        // find table with tablename = grp
        // if none, create new table
        bool bTableFound = false;
        foreach (System.Data.DataTable d1 in dsOut.Tables)
        {
            string d1_name = d1.TableName;
            if (d1_name.Equals(grp))
            {
                dtOut = d1;
                bTableFound = true;
                break;
            }
        }
        if (!bTableFound)
            dtOut = RptTable(grp);
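
    One idiomatic alternative, sketched here on the assumption that grp holds the table name: DataTableCollection can already look a table up by name, so both the loop and the dummy initializer disappear.

        // Sketch: dtOut is assigned on every path, so no placeholder instance is needed.
        System.Data.DataTable dtOut = dsOut.Tables.Contains(grp)
            ? dsOut.Tables[grp]
            : RptTable(grp);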

  • When should I implement globalization and localization in C#?

    - by Geo Ego
    I am cleaning up some code in a C# app that I wrote and really trying to focus on best practices and coding style. As such, I am running my assembly through FxCop and trying to research each message it gives me to decide what should and shouldn't be changed. What I am currently focusing on are locale settings. For instance, the two errors that I have currently are that I should be specifying the IFormatProvider parameter for Convert.ToString(int), and setting the DataSet and DataTable locale. This is something that I've never done, and never put much thought into. I've always just left that overload out.

    The current app that I am working on is an internal app for a small company that will very likely never need to run in another country. As such, it is my opinion that I do not need to set these at all. On the other hand, doing so would not be such a big deal, but it seems like it is unnecessary and could hinder readability to a degree. I understand that Microsoft's contention is to use it if it's there, period. Well, I'm technically supposed to call Dispose() on every object that implements IDisposable, but I don't bother doing that with DataSets and DataTables, so I wonder what the practice is "in the wild."
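
    For reference, a minimal sketch of what satisfying those two rules looks like; CultureInfo.InvariantCulture is just one choice of IFormatProvider (CurrentCulture is the other common one), and someValue and the "Foo" table are hypothetical placeholders:

        using System.Globalization;

        string text = Convert.ToString(someValue, CultureInfo.InvariantCulture);
        ds.Locale = CultureInfo.InvariantCulture;                // DataSet-wide default
        ds.Tables["Foo"].Locale = CultureInfo.InvariantCulture;  // per-table override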

  • Database for managing large volumes of (system) metrics

    - by symcbean
    Hi, I'm looking at building a system for managing and reporting stats on web page performance. I'll be collecting a lot more stats than are available in the standard log formats (approx 20 metrics), but compared to most types of database applications, the base data structure will be very simple. My problem is that I'll be accumulating a lot of data - in the region of 100,000 records (i.e. sets of metrics) per hour. Of course, resources are very limited!

    So that it's possible to sensibly interact with the data, I'd need to consolidate each metric into one-minute bins, broken down by URL; then, for anything more than 1 day old, consolidate into 10-minute bins; then, at 1 week, hourly bins. At the front end, I want to provide a view (preferably as plots) of the last hour of data, with the facility for users to drill up/down through defined hierarchies of URLs (which do not always map directly to the hierarchy expressed in the path of the URL) and to view different time frames.

    Rather than coding all this myself on top of a relational database, I was wondering if there were tools available which would facilitate both the management of the data and the reporting. I had a look at Mondrian, however I can't see from the documentation I've looked at whether it's possible to drop the more granular information while maintaining the consolidated views of the data. RRDTool looks promising in terms of managing the data consolidation, but seems to be rather limited in terms of querying the dataset as a multi-dimensional/relational database. What else should I be looking at?
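
    The consolidation scheme itself is easy to express regardless of which tool ends up owning the data. A sketch of flooring a timestamp into its bin, in C# purely for illustration (the helper is hypothetical):

        // Sketch: floor a timestamp to an N-minute bin (1, 10, or 60 above).
        static DateTime Bin(DateTime t, int minutes)
        {
            long binTicks = TimeSpan.FromMinutes(minutes).Ticks;
            return new DateTime((t.Ticks / binTicks) * binTicks, t.Kind);
        }

    Consolidation is then a GROUP BY on Bin(timestamp, n) and the URL, aggregating each of the ~20 metrics.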

  • Reporting Services 2005 - Parameter reliant on cascading parameters

    - by sHr0oMaN
    Good day. In a SSRS 2005 report I have three report parameters: FinancialPeriodType ("Month" or "Week" in a DropDownList), FinancialPeriod (a cascading DropDownList populated depending on the first selection) and another parameter, OpeningBalance, of type float. The first two parameters are cascading, i.e. the first parameter is used by the query populating the second's available values. This works fine.

    What I'm attempting to do is default the value of OpeningBalance to a value from a dataset populated by a stored procedure which takes in the first two parameters. However, as soon as I select a value for the first parameter, I get the following error: "An error occurred during report processing. The value for the report parameter 'OpeningBalance' is not valid for its type."

    I've tried setting the default of the second parameter to a meaningful value (something like 200901), as well as defaulting the second parameter in the SQL stored procedure, with no effect. Using SQL Profiler I've noticed that selecting a value for the first parameter doesn't even execute the SQL used to obtain available values for the second parameter.

  • reporting tool/viewer for large datasets

    - by FrustratedWithFormsDesigner
    I have a data processing system that generates very large reports on the data it processes. By "large" I mean that a "small" execution of this system produces about 30 MB of reporting data when dumped into a CSV file, and a large dataset is about 130-150 MB (I'm sure someone out there has a bigger idea of "large", but that's not the point ;)

    Excel has the ideal interface for the report consumers in the form of its Data Lists: users can filter and segment the data on-the-fly to see the specific details that they are interested in - they can also add notes and markup to the reports, create charts, graphs, etc. They know how to do all this and it's much easier to let them do it if we just give them the data. Excel was great for the small test datasets, but it cannot handle these large ones. Does anyone know of a tool that can provide a similar interface to Excel data lists, but that can handle much larger files?

    The next tool I tried was MS Access. I found that the Access file bloats hugely (a 30 MB input file leads to about a 70 MB Access file, and when I open the file, run a report and close it, the file's at 120-150 MB!), and the import process is slow and very manual (currently, the CSV files are created by the same PL/SQL script that runs the main process, so there's next to no intervention on my part). I also tried an Access database with linked tables to the database tables that store the report data, and that was many times slower (for some reason, SQL*Plus could query and generate the report file in a minute or so, while Access would take anywhere from 2-5 minutes for the same data).

    (If it helps, the data processing system is written in PL/SQL and runs on Oracle 10g.)

  • Out-of-memory algorithms for addressing large arrays

    - by reve_etrange
    I am trying to deal with a very large dataset. I have k = ~4200 matrices (varying sizes) which must be compared combinatorially, skipping non-unique and self comparisons. Each of the k(k-1)/2 comparisons produces a matrix, which must be indexed against its parents (i.e. one can find out where it came from). The convenient way to do this is to (triangularly) fill a k-by-k cell array with the result of each comparison. These are ~100 x ~100 matrices, on average. Using single precision floats, it works out to 400 GB overall.

    I need to 1) generate the cell array or pieces of it without trying to place the whole thing in memory and 2) access its elements (and their elements) in like fashion. My attempts have been inefficient due to reliance on MATLAB's eval() as well as save and clear occurring in loops.

        for i=1:k
            [~,m] = size(data{i});
            cur_var = ['H' int2str(i)];
            %# if i == 1; save('FileName'); end;  %# If using a single MAT file and need to create it.
            eval([cur_var ' = cell(1,k-i);']);
            for j=i+1:k
                [~,n] = size(data{j});
                eval([cur_var '{i,j} = zeros(m,n,''single'');']);
                eval([cur_var '{i,j} = compare(data{i},data{j});']);
            end
            save(cur_var,cur_var);  %# Add '-append' when using a single MAT file.
            clear(cur_var);
        end

    The other thing I have done is to perform the split when mod((i+j-1)/2, max(factor(k*(k-1)/2))) == 0. This divides the result into the largest number of same-size pieces, which seems logical. The indexing is a little more complicated, but not too bad, because a linear index could be used. Does anyone know/see a better way?

  • How to programmatically set the text of a database cell in VB.NET?

    - by manuel
    I have a Microsoft Access database file connected to VB.NET. In the database table, I have a 'No.' and a 'Status' column. Then I have a textbox where I can input an integer. I also have a button, which, when I click it, should change the data cell in 'Status' where 'No.' = txtTitleNo.Text(). Let's say I want the 'Status' cell to be changed to "Reserved". What code should I put in BtnReserve_Click? Here is the code:

        Public Class theForm
            Dim con As New OleDb.OleDbConnection
            Dim ds As New DataSet
            Dim daTitles As OleDb.OleDbDataAdapter

            Private Sub theForm_Load(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles MyBase.Load
                con.ConnectionString = "Provider=Microsoft.Jet.OLEDB.4.0; Data Source=|DataDirectory|\db1.mdb"
                con.Open()
                daTitles = New OleDb.OleDbDataAdapter("SELECT * FROM Titles", con)
                daTitles.Fill(ds, "Titles")
                DataGridView1.DataSource = ds.Tables("Titles")
                DataGridView1.AutoResizeColumns()
            End Sub

            Private Sub BtnReserve_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles BtnReserve.Click
                Dim daReserve As OleDb.OleDbCommand
                For i As Integer = 1 To DataGridView1.RowCount()
                    If i = Val(txtTitleNo.Text()) Then
                        daReserve = New OleDb.OleDbCommand("UPDATE Titles SET Status = 'Reserved' WHERE No = '" & i & "'", con)
                        daReserve.ExecuteNonQuery()
                    End If
                Next
                Dim cb As New OleDb.OleDbCommandBuilder(daTitles)
                daTitles.Update(ds, "Titles")
            End Sub
        End Class

    This code still does not change the 'Status'. What should I do?
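
    One likely culprit, offered as an assumption: 'No' looks like a numeric column, yet the WHERE clause quotes it as text ('1', '2', ...), so no row matches. A hedged sketch of the parameterized shape, shown in C# for illustration (the OleDb types are identical in VB.NET, and the loop is unnecessary since the textbox already holds the number):

        // Sketch: update one row, passing No as a number via a positional parameter.
        // Requires System.Data.OleDb; assumes the column really is numeric.
        using (var cmd = new OleDbCommand(
            "UPDATE Titles SET Status = 'Reserved' WHERE [No] = ?", con))
        {
            cmd.Parameters.AddWithValue("?", int.Parse(txtTitleNo.Text));
            cmd.ExecuteNonQuery();
        }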

  • How to duplicate all data in a table except for a single column that should be changed.

    - by twiga
    I have a question regarding a unified insert query against tables with different data structures (Oracle). Let me elaborate with an example:

        tb_customers (
            id NUMBER(3),
            name VARCHAR2(40),
            archive_id NUMBER(3)
        )

        tb_suppliers (
            id NUMBER(3),
            name VARCHAR2(40),
            contact VARCHAR2(40),
            xxx,
            xxx,
            archive_id NUMBER(3)
        )

    The only column that is present in all tables is archive_id, and it is always part of the primary key. The plan is to create a new archive of the dataset by copying (duplicating) all records to a different database partition and incrementing the archive_id for those records accordingly.

    My problem is with the SELECT statements to do the actual duplication of the data. Because the columns vary, I am struggling to come up with a unified SELECT statement that will copy the data and update the archive_id. One solution (that works) is to iterate over all the tables in a stored procedure and do:

        CREATE TABLE temp AS (SELECT * FROM original_table);
        UPDATE temp SET archive_id = something;
        INSERT INTO original_table (SELECT * FROM temp);
        DROP TABLE temp;

    I do not like this solution very much, as the DDL commands muck up all restore points. Does anyone else have any solution?

  • Dimension Reduction in Categorical Data with missing values

    - by user227290
    I have a regression model in which the dependent variable is continuous, but ninety percent of the independent variables are categorical (both ordered and unordered), and around thirty percent of the records have missing values (to make matters worse, they are missing randomly without any pattern; that is, more than forty-five percent of the data have at least one missing value). There is no a priori theory to choose the specification of the model, so one of the key tasks is dimension reduction before running the regression. While I am aware of several methods for dimension reduction for continuous variables, I am not aware of a similar statistical literature for categorical data (except, perhaps, as a part of correspondence analysis, which is basically a variation of principal component analysis on a frequency table). Let me also add that the dataset is of moderate size: 500,000 observations with 200 variables. I have two questions.

      1. Is there a good statistical reference out there for dimension reduction for categorical data, along with robust imputation? (I think the first issue is imputation and then dimension reduction.)
      2. This is linked to implementation of the above problem. I have used R extensively earlier and tend to use the transcan and impute functions heavily for continuous variables, and I use a variation of the tree method to impute categorical values. I have a working knowledge of Python, so if something nice is out there for this purpose then I will use it. Any implementation pointers in Python or R will be of great help.

    Thank you.

  • Move SELECT to SQL Server side

    - by noober
    Hello all, I have an SQLCLR trigger. It contains a large and messy SELECT inside, with parts like:

        (CASE WHEN EXISTS(SELECT * FROM INSERTED I WHERE I.ID = R.ID)
              THEN '1' ELSE '0' END) AS IsUpdated  -- Is selected row just added?

    as well as JOINs etc. I like to have the result as a single table with everything included.

    Question 1: Can I move this SELECT to the SQL Server side? If yes, how do I do this? By "move", I mean creating a stored procedure or something else that can be executed before reading the dataset in the while cycle. The two following questions make sense only if the answer is "yes".

    Why do I want to move the SELECT? First off, I don't like mixing SQL with C# code. Secondly, I suppose that server-side queries run faster, since the server has more chances to cache them.

    Question 2: Am I right? Is this some sort of optimization?

    Also, the SELECT contains constant strings, but they are localizable. For instance, in

        WHERE R.Status = 'Enabled'

    'Enabled' should be changed for French, German, etc. So, I want to write two static methods -- OnCreate and OnDestroy -- and mark them as stored procedures. When registering/unregistering my assembly on the server side, I would just call them respectively. In OnCreate I would format the SELECT string, replacing {0}, {1}... with the required values from the assembly resources. Then I can localize the resources only, not every script.

    Question 3: Is this a good idea? Is there an existing attribute to mark methods to be executed by SQL Server automatically after (un)registration of an assembly?

    Regards
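
    On question 3: to my knowledge there is no attribute that runs a method automatically on (un)registration; CREATE ASSEMBLY only loads the code, and deployment scripts call the setup procedure explicitly. A minimal sketch of exposing such a method (the class name is the conventional SQLCLR boilerplate, and the body is left to the asker's resource logic):

        // Sketch: a static method exposed as a SQLCLR stored procedure,
        // to be invoked from the deployment script right after CREATE ASSEMBLY.
        using Microsoft.SqlServer.Server;

        public partial class StoredProcedures
        {
            [SqlProcedure]
            public static void OnCreate()
            {
                // format the localized SELECT from assembly resources here
            }
        }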

  • NHibernate / multiple sessions and nested objects

    - by bernhardrusch
    We are using NHibernate in a rich client application. It is a pretty open application: the user searches for a dataset or creates a new one, changes the data, and saves the dataset. We leave the session open, because sometimes we have to lazy load some properties of the object (nested object structure).

    This causes one big problem: if we leave the session open, the db (MySQL) closes the connection, and we are not able to find this out, so it throws an exception (socket communication error) when accessing the database. (We are thinking about testing the db connection before accessing the object, but this is not really optimal either; the other option would be to adjust the timeout of the db connection, but that doesn't seem too good either.) So - is it possible to reconnect the session to a new database connection?

    Another problem: is it possible to get an object from one session and then re-attach it to another session? (I often hear that session.Lock should work for this, but it doesn't work so well in our application, so I ended up getting a "fresh" object from the session and copying the data over manually, which is a little bit cumbersome.) Any ideas for this?
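
    A hedged sketch of the session-per-unit-of-work idiom this usually converges on: ISession.Lock with LockMode.None re-attaches a clean detached instance, and NHibernateUtil.Initialize forces the lazy load while the short-lived session is open. Entity and property names below are placeholders:

        // Sketch (using NHibernate;): open a short-lived session, reattach, lazy-load, done.
        using (var session = sessionFactory.OpenSession())
        using (var tx = session.BeginTransaction())
        {
            session.Lock(dataset, LockMode.None);        // reattach, no version check
            NHibernateUtil.Initialize(dataset.Details);  // pull the nested objects now
            tx.Commit();
        }

    This sidesteps the stale-connection problem entirely, since no connection is held between user actions.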

  • Why is a DataViewManager not sorting?

    - by Alarion
    First off, a quick rundown of what the code is doing. I have two tables: Companies and Clients. There is a one-to-many relationship between Companies and Clients (one company can have many clients). In some processing logic, I have both tables loaded into a DataSet, each as a DataTable. I have added a DataRelation to set the relationship, and it works when I use GetChildRows on a record from the Companies table. However, I need to sort the records returned. After googling around, it looks like the DataViewManager is the way to go, and I have examined several basic examples. However, I cannot get my rows sorted. Am I missing something, or am I not using this how it is supposed to be used? Example code:

        Dim ldvmManager As New DataViewManager(mdsData)
        ldvmManager.DataViewSettings("Companies").Sort = "comName ASC"
        ldvmManager.DataViewSettings("Clients").ApplyDefaultSort = False
        ldvmManager.DataViewSettings("Clients").Sort = "entLastName ASC, entFirstName ASC"

        For Each mdrCurrentCompany In ldvmManager.DataViewSettings("Companies").Table.Rows
            Dim ldrClients() As DataRow = mdrCurrentCompany.GetChildRows("CompaniesClients")
        Next

    No matter what I do, the first Client returned has a last name that starts with a 'B' and the second has one that starts with an 'A'. From there, the order is all mixed up. If I use my entire dataset instead of the subset I was using to test with, the first Client returned has a last name that starts with 'J'. It seems like it is still using whatever default sort was being used before I tried to use the DataViewManager. Any ideas?
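
    The loop may be the problem as much as the settings: DataViewSettings("Companies").Table.Rows and GetChildRows both bypass the views that the DataViewManager manages, so the sorts never apply. A hedged sketch of the view-based traversal, shown in C# for illustration (CreateDataView and DataRowView.CreateChildView are the same calls in VB.NET):

        // Sketch: iterate the managed views, not the raw rows.
        DataView companies = ldvmManager.CreateDataView(mdsData.Tables["Companies"]);
        foreach (DataRowView company in companies)                    // sorted by comName
        {
            DataView clients = company.CreateChildView("CompaniesClients");
            foreach (DataRowView client in clients)                   // sorted by last, first name
            {
                // work with client["entLastName"] etc.
            }
        }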

  • Looping through covariates in regression using R

    - by Kyle Peyton
    I'm trying to run 96 regressions and save the results as 96 different objects. To complicate things, I want the subscript on one of the covariates in the model to also change 96 times. I've almost solved the problem, but I've unfortunately hit a wall. The code so far is:

        for(i in 1:96){
            assign(paste("z.out", i, sep=""),
                   lm(rMonExp_EGM ~ TE_i + Month2+Month3+Month4+Month5+Month6+Month7+Month8+Month9+
                      Month10+Month11+Month12+Yrs_minus_2004 + as.factor(LGA), data=Pokies))
        }

    This works on the object creation side (e.g. I have z.out1 - z.out96), but I can't seem to get the subscript on the covariate to change as well. I have 96 variables called TE_1, TE_2 ... TE_96 in the dataset. As such, the subscript on TE_, the "i", needs to change to correspond to each of the objects I create. That is, z.out1 should hold the results from this model:

        z.out1 <- lm(rMonExp_EGM ~ TE_1 + Month2+Month3+Month4+Month5+Month6+Month7+Month8+Month9+
                     Month10+Month11+Month12+Yrs_minus_2004 + as.factor(LGA), data=Pokies)

    And z.out96 should be:

        z.out96 <- lm(rMonExp_EGM ~ TE_96 + Month2+Month3+Month4+Month5+Month6+Month7+Month8+Month9+
                      Month10+Month11+Month12+Yrs_minus_2004 + as.factor(LGA), data=Pokies)

    Hopefully this makes sense. I'm grateful for any tips/advice. Cheers, Kyle

  • Converting json data to an HTML table

    - by dnaluz
    I have an array of data in PHP and I need to display this data in an HTML table. Here is what an example data set looks like:

        Array(
            Array (
                [comparisonFeatureId] => 1182
                [comparisonFeatureType] => Category
                [comparisonValues] => Array (
                    [0] => Not Available
                    [1] => Standard
                    [2] => Not Available
                    [3] => Not Available
                )
                [featureDescription] => Rear Seat Heat Ducts
            ),
        )

    The dataset is a comparison of 3 items (shown in the comparisonValues index of the array). In the end I need the row to look similar to this:

        <tr class="alt2 section_1">
            <td><strong>$record['featureDescription']</strong></td>
            <td>$record['comparisonValues'][0]</td>
            <td>$record['comparisonValues'][1]</td>
            <td>$record['comparisonValues'][2]</td>
            <td>$record['comparisonValues'][3]</td>
        </tr>

    The problem I am coming across is how best to do this. Should I create the entire table HTML on the server side, pass it over an ajax call, and just dump pre-rendered HTML into a div, or pass the JSON data and render the table client side? Any elegant suggestions? Thanks in advance.

  • Choropleth mapping issue in R

    - by chasec
    I am trying to follow the tutorial described here: http://www.thisisthegreenroom.com/2009/choropleths-in-r/

    The below code executes, but it is either not matching my dataset with the maps_counties data properly, or it isn't plotting it in the order I would expect. For example, the resulting areas for the greater NYC area show no density, while random counties in PA show the highest density. The general format of my data table is:

        county       state         count
        fairfield    connecticut   17
        hartford     connecticut   6
        litchfield   connecticut   3
        new haven    connecticut   12
        ...          ...           ...
        westchester  new york      70
        yates        new york      1
        luzerne      pennsylvania  1

    Note this data is in order by state and then county, and includes data for CT, NJ, NY, & PA. First, I read in my data set:

        library(maps)
        library(RColorBrewer)

        d <- read.table("gissum.txt", sep="\t", header=TRUE)

        # Concatenate state and county info to match maps library
        d$stcon <- paste(d$state, d$county, sep=",")

        # Color bins
        colors = brewer.pal(5, "PuBu")
        d$colorBuckets <- as.factor(as.numeric(cut(d$count, c(0,10,20,30,40,50,300))))

    Here is my matching:

        mapnames <- map("county", plot=FALSE)[4]$names
        colorsmatched <- d$colorBuckets[na.omit(match(mapnames, d$stcon))]

    Plotting:

        map("county", c("new york","new jersey","connecticut","pennsylvania"),
            col = colors[d$colorBuckets[na.omit(match(mapnames, d$stcon))]],
            fill = TRUE, resolution = 0, lty = 0, lwd = 0.5)
        map("state", c("new york","new jersey","connecticut","pennsylvania"),
            col = "black", fill = FALSE, add = TRUE, lty = 1, lwd = 2)
        map("county", c("new york","new jersey","connecticut","pennsylvania"),
            col = "black", fill = FALSE, add = TRUE, lty = 1, lwd = .5)
        title(main="Respondent Home ZIP Codes by County")

    I am sure I am missing something basic re: the order in which the maps function plots items, but I can't seem to figure it out. Thanks for the help. Please let me know if you need any more information.

  • Data confusion - Selecting data in one DataGridView based on selection in another

    - by Logan Young
    This probably isn't the best way to do what I want, but I can't think of anything else to try... NB: I'm using Visual Basic .NET.

    My form has 2 DataGridView controls. One of these is bound to a DataSet; the other isn't visible - at least not until the user selects a uniqueidentifier cell in the 1st grid. When the user makes this selection, the 2nd grid will become visible and display the row from another table with the same id as the one selected in the 1st grid. So, basically, I want to dynamically display data in one grid based on user selection in another grid. My code looks like this so far:

        Private Sub RulesGrid_CellClick(ByVal sender As System.Object, ByVal e As System.Windows.Forms.DataGridViewCellEventArgs) Handles RulesGrid.CellClick
            Try
                FlagsGrid.Visible = True
                ''// MsgBox(RulesGrid.CurrentCell.Value.ToString())
                Dim sql As String = "SELECT * FROM ctblMKA_Status_Flags " + _
                    "WHERE intStatusID = '" & RulesGrid.CurrentCell.Value & "'"
                DSFlags = GetDS(sql)
                DSFlags.DataSetName = "FlagsDataSet"
                FlagsGrid.DataSource = DSFlags
                ProgressBar.Visible = False
            Catch ex As Exception
                MsgBox(ex.ToString)
            End Try
        End Sub

    I feel like I'm missing something here... Any ideas, anyone?

  • Handling large datasets with PHP/Drupal

    - by jo
    Hi all, I have a report page that deals with ~700k records from a database table. I can display this on a webpage using paging to break up the results. However, my export to PDF/CSV functions rely on processing the entire data set at once, and I'm hitting my 256 MB memory limit at around 250k rows. I don't feel comfortable increasing the memory limit, and I haven't got the ability to use MySQL's SELECT ... INTO OUTFILE to just serve a pre-generated CSV. However, I can't really see a way of serving up large data sets with Drupal using something like:

        $form = array();
        $table_headers = array();
        $table_rows = array();

        $data = db_query("a query to get the whole dataset");
        while ($row = db_fetch_object($data)) {
            $table_rows[] = $row->some_attribute;
        }
        $form['report'] = array('#value' => theme('table', $table_headers, $table_rows));
        return $form;

    Is there a way of getting around what is essentially appending to a giant array of arrays? At the moment I don't see how I can offer any meaningful report pages with Drupal due to this. Thanks

  • Need help optimizing this Django aggregate query

    - by Chris Lawlor
    I have the following model:

        class Plugin(models.Model):
            name = models.CharField(max_length=50)
            # more fields

    which represents a plugin that can be downloaded from my site. To track downloads, I have:

        class Download(models.Model):
            plugin = models.ForeignKey(Plugin)
            timestamp = models.DateTimeField(auto_now=True)

    So to build a view showing plugins sorted by downloads, I have the following query:

        # pbd is plugins by download - commented here to prevent scrolling
        pbd = Plugin.objects.annotate(dl_total=Count('download')).order_by('-dl_total')

    Which works, but is very slow. With only 1,000 plugins, the avg. response is 3.6-3.9 seconds (devserver with local PostgreSQL db), where a similar view with a much simpler query (sorting by plugin release date) takes 160 ms or so.

    I'm looking for suggestions on how to optimize this query. I'd really prefer that the query return Plugin objects (as opposed to using values), since I'm sharing the same template with the other views (plugins by rating, plugins by release date, etc.), so the template is expecting Plugin objects - plus I'm not sure how I would get things like the absolute_url without a reference to the plugin object.

    Or is my whole approach doomed to failure? Is there a better way to track downloads? I ultimately want to provide users some nice download statistics for the plugins they've uploaded - like downloads per day/week/month. Will I have to calculate and cache downloads at some point?

    EDIT: In my test dataset, there are somewhere between 10-20 Download instances per Plugin - in production I expect this number would be much higher for many of the plugins.

  • model.matrix() with na.action=NULL?

    - by Vincent
    I have a formula and a data frame, and I want to extract the model.matrix(). However, I need the resulting matrix to include the NAs that were found in the original dataset. If I were to use model.frame() to do this, I would simply pass it na.action=NULL. However, the output I need is of the model.matrix() format. Specifically, I need only the right-hand-side variables, I need the output to be a matrix (not a data frame), and I need factors to be converted to a series of dummy variables. I'm sure I could hack something together using loops or something, but I was wondering if anyone could suggest a cleaner and more efficient workaround. Thanks a lot for your time! And here's an example:

        dat <- data.frame(matrix(rnorm(20),5,4), gl(5,2))
        dat[3,5] <- NA
        names(dat) <- c(letters[1:4], 'fact')
        ff <- a ~ b + fact

        # This omits the row with a missing observation on the factor
        model.matrix(ff, dat)

        # This keeps the NA, but it gives me a data frame and does not dichotomize the factor
        model.frame(ff, dat, na.action=NULL)

    Here is what I would like to obtain:

           (Intercept)          b fact2 fact3 fact4 fact5
        1            1  0.7266086     0     0     0     0
        2            1 -0.6088697     0     0     0     0
        3           NA  0.4643360    NA    NA    NA    NA
        4            1 -1.1666248     1     0     0     0
        5            1 -0.7577394     0     1     0     0
        6            1  0.7266086     0     1     0     0
        7            1 -0.6088697     0     0     1     0
        8            1  0.4643360     0     0     1     0
        9            1 -1.1666248     0     0     0     1
        10           1 -0.7577394     0     0     0     1

  • .NET remoting: System references wrong .NET dll, but how to cure?

    - by Quandary
    Question: I defined an interface like below. The problem now is that when I add the DLL (API.dll) as a reference in an ASP.NET project, it references a wrong API.dll, though I referenced the correct DLL. In turn, it doesn't find GetLDAPlookup, but there is another method that is not defined here, but in an older version of API.dll... I rebuilt the DLL I referenced, so it is definitely the latest version that I added as a reference. Do I have to add another GUID, or something?

        Imports System.Runtime.InteropServices

        Namespace RemoteObject

            ''' <summary>
            ''' Defines server interface which will be deployed on every client
            ''' </summary>
            <GuidAttribute("921DE547-32FA-40BB-961A-EA390B7AE27D")> _
            Public Interface IServerMethods

                ''' <summary>
                ''' Function to call the server from the client
                ''' </summary>
                ''' <param name="strMessage">Some text</param>
                Sub ServerPrint(ByVal strMessage As String)

                ''' <summary>
                ''' Function to call the server from the client
                ''' </summary>
                ''' <param name="strMessage">Some text</param>
                ''' <returns>Some interesting text</returns>
                Function GetLDAPlookup(ByVal strMessage As String) As System.Data.DataSet

            End Interface

        End Namespace

  • .Net SQL Parameter for String List Problem

    - by JK
    I am writing a C# program in which I send a query to SQL Server to be processed, and a dataset returns. I am using parameters to pass information to the query before it is sent to SQL Server. This works fine except in the situation below. The query looks like this:

        reportQuery = " Select * From tableName Where Account_Number in (@AccountNum) and Account_Date = @AccountDate ";

    The AccountDate parameter works fine, but not the AccountNum parameter. I need the final query to execute like this:

        Select * From tableName Where Account_Number in ('AX3456','YZYL123','ZZZ123') and Account_Date = '1-Jan-2010'

    The problem is that I have these account numbers (actually text) in a C# string list. To feed it to the parameter, I have been declaring the parameter as a string. I turn the list into one string and feed it to the parameter. I think the problem is that I am feeding the parameter this: "'AX3456','YZYL123','ZZZ123'" when it wants this: 'AX3456','YZYL123','ZZZ123'. How do I get the string list into the query using a parameter and have it execute as shown above? This is how I am declaring and assigning the parameter:

        SqlParameter AccountNumsParam = new SqlParameter();
        AccountNumsParam.ParameterName = "@AccountNums";
        AccountNumsParam.SqlDbType = SqlDbType.NVarChar;
        AccountNumsParam.Value = AccountNumsString;

    FYI, AccountNumString == "'AX3456','YZYL123','ZZZ123'"
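
    A parameter always binds as a single value, so one NVarChar parameter makes the IN clause compare against the literal string 'AX3456','YZYL123','ZZZ123' as a whole. A hedged sketch of the standard workaround, generating one parameter per account number (accountNums is assumed to be the List<string> mentioned above, and cmd the SqlCommand being prepared):

        // Sketch: build @p0, @p1, ... and splice only the placeholder names into the SQL.
        // Requires System.Linq for Select.
        string[] names = accountNums.Select((_, i) => "@p" + i).ToArray();
        cmd.CommandText =
            "SELECT * FROM tableName WHERE Account_Number IN (" + string.Join(",", names) + ")" +
            " AND Account_Date = @AccountDate";
        for (int i = 0; i < accountNums.Count; i++)
            cmd.Parameters.AddWithValue(names[i], accountNums[i]);
        cmd.Parameters.AddWithValue("@AccountDate", accountDate);

    On SQL Server 2008 and later, a table-valued parameter is the other common route.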

  • How do I subtract two dates in Django/Python?

    - by Ryan
    Hi! I'm working on a little fitness tracker in order to teach myself Django. I want to graph my weight over time, so I've decided to use the Python Google Charts wrapper. Google Charts requires that you convert your dates into x coordinates. To do this I want to take the number of days in my dataset (by subtracting the first weigh-in from the last weigh-in) and then use that to figure out the x coords (for example, I could divide 100 by the result and increment the x coord by the resulting number for each y coord). Anyway, I need to figure out how to subtract Django datetime objects from one another, and so far I am striking out on both Google and here at the Stack. I know PHP, but have never gotten a handle on OO programming, so please excuse my ignorance. Here is what my models look like:

        class Goal(models.Model):
            goal_weight = models.DecimalField("Goal Weight", max_digits=4, decimal_places=1)
            target_date = models.DateTimeField("Target Date to Reach Goal")
            set_date = models.DateTimeField("When did you set your goal?")
            comments = models.TextField(blank=True)

            def __unicode__(self):
                return unicode(self.goal_weight)

        class Weight(models.Model):
            """Weight at a given date and time"""
            goal = models.ForeignKey(Goal)
            weight = models.DecimalField("Current Weight", max_digits=4, decimal_places=1)
            weigh_date = models.DateTimeField("Date of Weigh-In")
            comments = models.TextField(blank=True)

            def __unicode__(self):
                return unicode(self.weight)

            def recorded_today(self):
                return self.date.date() == datetime.date.today()

    Any ideas on how to proceed in the view? Thanks so much!

  • DataGridViewComboBoxColumn with DataSource issue?

    - by Sarrrva
    I have a problem with a DataGridViewComboBoxColumn that has a custom DataSource property in VB.NET. When I add the DataSource, it does not populate the combo box column in the DataGridView; it gives nothing. Can anyone please help me out with this problem? Here is the combo box cell code:

        Public Overrides Sub InitializeEditingControl(ByVal rowIndex As Integer, ByVal initialFormattedValue As Object, ByVal dataGridViewCellStyle As DataGridViewCellStyle)
            ' Set the value of the editing control to the current cell value.
            MyBase.InitializeEditingControl(rowIndex, initialFormattedValue, dataGridViewCellStyle)
            Dim ctl As ComboEditingControl = CType(DataGridView.EditingControl, ComboEditingControl)
            ctl.DropDownStyle = ComboBoxStyle.DropDown
            ctl.AutoCompleteSource = AutoCompleteSource.ListItems
            ctl.AutoCompleteMode = System.Windows.Forms.AutoCompleteMode.Suggest
            If (Me.DataGridView.Rows(rowIndex).Cells(0).Value <> Nothing) Then
                Dim GetValueFromRowToUseForBuildingCombo As String = Me.DataGridView.Rows(rowIndex).Cells(0).Value.ToString()
                ctl.Items.Clear()
                Dim dt As New DataTable()
                Try
                    dt = TryCast(DirectCast(Me.DataGridView.Columns(ColumnIndex), ComboColumn).DataSource, DataTable)
                Catch ex As Exception
                    MsgBox("error")
                End Try
                If (dt Is Nothing) Then
                    ctl.Items.Add("")
                Else
                    Dim thing As DataRow
                    For Each thing In dt.Rows
                        ctl.Items.Add(thing(0).ToString)
                    Next
                End If
                If Me.Value Is Nothing Then
                    ctl.SelectedIndex = -1
                Else
                    ctl.SelectedItem = Me.Value
                End If
                ctl.EditingControlDataGridView = Me.DataGridView
            End If
        End Sub

    And the form code:

        Dim widgets As New WidgetDataHandler
        Dim obj = widgets.GetAllWigetTypes()
        Dim dt As New DataTable
        Dim ListofmyObjects As New List(Of widget_types)(obj)
        Dim objList As New cObjectToTable(Of widget_types)(ListofmyObjects)
        dt = objList.GetTable()

        Dim obj1
        For Each obj1 In obj
            blPersons.Add(obj1)
        Next

        Dim col1 As New DataGridViewTextBoxColumn
        col1.DisplayIndex = 0
        col1.DataPropertyName = "Id"
        col1.HeaderText = "Id"
        dgvi00.Columns.Add(col1)

        Dim col2 As New ComboColumn
        col2.DisplayIndex = 1
        col2.SortMode = DataGridViewColumnSortMode.Automatic
        col2.HeaderText = "Name"
        col2.DataPropertyName = "Name"
        col2.ToolTipText = "Select something from my combo"

        Dim dst As New DataSet
        'Dim dt1 As New DataTable
        'dt1.Columns.Add(col2.HeaderText)
        'For Each thing In dt.Rows
        '    MsgBox(thing(1).ToString)
        '    dt1.Rows.Add(thing(1).ToString)
        'Next
        dst.Tables.Add(dt)
        col2.DataSource = dst.Tables(0)
        col2.DisplayMember = "Name"
        Me.dgvi00.Columns.AddRange(col2)

        dgvi00.DataSource = blPersons.BindingSource

        ' Set up the bindings for the binding navigator
        Dim bn As New _365_Media_Library.BindingNavigatorWithFilter
        bn.Dock = DockStyle.Bottom
        bn.GripStyle = ToolStripGripStyle.Hidden
        Me.Controls.Add(bn)
        bn.BindingSource = blPersons.BindingSource

    Note: it works fine in a standalone application. Regards and thanks, Sarva

  • Reporting Services Sum of Inner Group in Outer Group

    - by Spoonybard
    I have a report in Reporting Services 2008, using ASP.NET 3.5 and SQL Server 2008. The report has two groupings and a detail row. This is the current format:

        Outer Group
            Inner Group
                Detail Row

    The detail row represents an item on a receipt, and a receipt can have multiple items. Each receipt was paid with a certain payment method. So the outer group is grouped by payment type, the inner group is grouped by the receipt's ID, and the detail row is each item for the given receipt. My raw data result set has two important columns: Amount Received and Amount Applied. Amount Received is how much money in total was collected for all the items on the receipt. Amount Applied is how much money each item got from the total amount received. Sample result set:

        ReceiptID  Item      ItemID  AmountReceived  AmountApplied  PaymentMethod
        ---------  --------  ------  --------------  -------------  -------------
        1          Book      1       $200.00         $40.00         Cash
        1          CD        2       $200.00         $20.00         Cash
        1          Software  3       $200.00         $100.00        Cash
        1          Backpack  4       $200.00         $40.00         Cash

    The inner group displays the AmountReceived correctly as $200. However, the outer group displays the AmountReceived as $800, because I believe it is summing over each detail row, which in this case is a count of 4 items. What I want to see in the outer group is that the AmountReceived is $200. I tried restricting the scope in my SUM function to the inner group, but I get the error "The scope parameter must be set to a string constant that is equal to either the name of a containing group, the name of a containing data region, or the name of a dataset." Does anyone have any suggestions on how to solve this issue? Thanks.
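
    A hedged suggestion: Reporting Services 2008 allows nested aggregates with an explicit scope, so taking one AmountReceived per receipt and summing those per payment method is usually written as =Sum(Max(Fields!AmountReceived.Value, "InnerGroup")) in the outer group's row, where "InnerGroup" is an assumed name for the receipt-ID group. Max works here because AmountReceived repeats identically on every detail row of a receipt.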
