Search Results

Search found 1402 results on 57 pages for 'dataset'.

Page 51 of 57

  • Can I get the amount of time for which a key is pressed on a keyboard

    - by Adi
    Dear all, I am working on a project in which I have to develop bio-passwords based on a user's keystroke style. Suppose a user types a password 20 times; his keystrokes are recorded as:

    hold time: the time for which a particular key is held down.
    digraph time: the time between pressing one key and pressing the next.

    Suppose a user types the password "COMPUTER". I need to know the time for which every key is pressed, something like this for the hold times of the above password:

        C -- 200ms
        O -- 130ms
        M -- 150ms
        P -- 175ms
        U -- 320ms
        T -- 230ms
        E -- 120ms
        R -- 300ms

    The rationale behind this is that every user will have a different hold time. Say an old person is typing the password: he will take more time than a student, and the timing will be unique to a particular person. To do this project, I need to record the time for each key pressed. I would greatly appreciate it if anyone could guide me on how to get these times. Edit: the language is not important, but I would prefer C. I am more interested in getting the dataset.
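
    A minimal sketch of capturing hold times (Python rather than the asker's preferred C, to show the event flow; it assumes the third-party pynput package, and Esc ends the recording):

        import time
        from pynput import keyboard

        press_times = {}   # key -> perf_counter() value when it went down
        hold_times = []    # (key, hold time in ms), in release order

        def on_press(key):
            # OS auto-repeat re-fires on_press; keep only the first press
            press_times.setdefault(key, time.perf_counter())

        def on_release(key):
            down = press_times.pop(key, None)
            if down is not None:
                hold_times.append((key, (time.perf_counter() - down) * 1000.0))
            if key == keyboard.Key.esc:
                return False   # returning False stops the listener

        with keyboard.Listener(on_press=on_press, on_release=on_release) as listener:
            listener.join()

        for key, ms in hold_times:
            print(f"{key} -- {ms:.0f}ms")

    In C the same approach works against the platform's raw key-down/key-up events (e.g. WM_KEYDOWN/WM_KEYUP on Windows), and digraph times fall out of the same log by subtracting successive press timestamps.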

    Read the article

  • Calculating a consecutive streak in data

    - by Jura25
    I'm trying to calculate the maximum winning and losing streak in a dataset (i.e. the highest number of consecutive positive or negative values). I've found a somewhat related question here on StackOverflow, and even though that gave me some good suggestions, the angle of that question is different, and I'm not (yet) experienced enough to translate and apply that information to this problem. So I was hoping you could help me out; even a suggestion would be great. My data set looks like this:

        > subRes
           Instrument TradeResult.Currency.
        1         JPM                    -3
        2         JPM                   264
        3         JPM                   284
        4         JPM                    69
        5         JPM                   283
        6         JPM                  -219
        7         JPM                   -91
        8         JPM                   165
        9         JPM                   -35
        10        JPM                  -294
        11        KFT                    -8
        12        KFT                   -48
        13        KFT                   125
        14        KFT                  -150
        15        KFT                  -206
        16        KFT                   107
        17        KFT                   107
        18        KFT                    56
        19        KFT                   -26
        20        KFT                   189

        > split(subRes[,2],subRes[,1])
        $JPM
        [1]   -3  264  284   69  283 -219  -91  165  -35 -294
        $KFT
        [1]   -8  -48  125 -150 -206  107  107   56  -26  189

    In this case, the maximum (winning) streak for JPM is four (namely the consecutive positive results 264, 284, 69 and 283) and for KFT this value is 3 (107, 107, 56). My goal is to create a function which gives the maximum winning streak per instrument (i.e. JPM: 4, KFT: 3). To achieve that, R needs to compare the current result with the previous result, and if both are positive there is a streak of at least 2 consecutive positive results. Then R needs to look at the next value, and if this is also positive, add 1 to the already-found value of 2. If it isn't positive, R needs to move on to the next value, while remembering 2 as the intermediate maximum. I've tried cumsum and cummax combined with conditional summing (like cumsum(c(TRUE, diff(subRes[,2]) > 0))), which didn't work out. Combining rle with lapply (like lapply(rle(subRes$TradeResult.Currency.), function(x) diff(x) > 0)) didn't work either. How can I make this work?
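
    For what it's worth, the run-length idea translates directly: group consecutive values by their sign and take the longest positive run. A sketch in Python (the question is about R, but the algorithm is the same):

        from itertools import groupby

        results = {
            "JPM": [-3, 264, 284, 69, 283, -219, -91, 165, -35, -294],
            "KFT": [-8, -48, 125, -150, -206, 107, 107, 56, -26, 189],
        }

        def max_winning_streak(values):
            # lengths of runs of consecutive positive values, rle-style
            runs = (sum(1 for _ in group)
                    for positive, group in groupby(values, key=lambda v: v > 0)
                    if positive)
            return max(runs, default=0)

        for instrument, values in results.items():
            print(instrument, max_winning_streak(values))   # JPM 4, KFT 3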

    Read the article

  • SSRS code variable resetting on new page

    - by edmicman
    In SSRS 2008 I am trying to maintain a SUM of SUMs on a group using custom code. The reason is that I have a table of data, grouped and returning SUMs of the data, with a filter on the group to remove lines where the group sums are zero. Everything works except the group totals: they should sum the visible group totals but instead sum the entire dataset. There are tons of articles about how to work around this, usually using custom code, so I've made custom functions and variables to maintain a counter:

        Public GroupMedTotal As Integer
        Public GrandMedTotal As Integer

        Public Function CalcMedTotal(ThisValue As Integer) As Integer
            GroupMedTotal = GroupMedTotal + ThisValue
            GrandMedTotal = GrandMedTotal + ThisValue
            Return ThisValue
        End Function

        Public Function ReturnMedSubtotal() As Integer
            Dim ThisValue As Integer = GroupMedTotal
            GroupMedTotal = 0
            Return ThisValue
        End Function

    Basically CalcMedTotal is fed the SUM of a group and maintains a running total of that sum. Then in the group total line I output ReturnMedSubtotal, which is supposed to give me the accumulated total and reset it for the next group. This actually works great, EXCEPT that it resets the GroupMedTotal value on each page break. I don't have page breaks explicitly set; it's just the natural break in the SSRS viewer, and if I export the results to Excel everything works and looks correct. If I output Code.GroupMedTotal on each group row, I can see it count correctly, and then, if a group spans multiple pages, on the next page GroupMedTotal is reset and begins counting from zero again. Any help with what's going on or how to work around this? Thanks!

    Read the article

  • I need help editing this data file into SOM_PAK format

    - by Mola
    Hi experts, I am working on a Self Organizing Map (SOM) implementation and I have a microarray dataset which I am trying to read in using the som_read_data function, but I keep getting errors when I edit it into the SOM_PAK form that SOM recognises, such as:

        ??? Error using ==> somtoolbox\som_read_data.m
        Only 69 vector components on input file data line 1 (dimension is 70)

        Error in ==> SomMainFunction at 3
        sD = som_read_data('B_r2.txt');

    But when I try to read the data without editing, using the original file shown here: http://rapidshare.com/files/376239367/DLBCL.txt.html it indicates "Data read OK", but then I get the following error:

        ??? Error using ==> unknown
        Out of memory. Type HELP MEMORY for your options.

        Error in ==> somtoolbox\som_bmus.m at 189
        Bmus = zeros(dlen,length(which_bmus));

        Error in ==> somvis\somvis_p_matrix.m at 41
        [dummy dists] = som_bmus (dat, dat, 2:datlen);

        Error in ==> SomMainFunction at 16
        [pheight rad_real perc] = somvis_p_matrix(sM,sD);

    You can get the data file from here: http://rapidshare.com/files/376239367/DLBCL.txt.html I need someone to help me correct this data and put it in SOM_PAK format. I have tried getting it into SOM_PAK format, but it still gives me errors.
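
    The first error suggests one row has fewer columns than the declared dimension. A hedged sketch of the conversion in Python (SOM_PAK files start with a line giving the vector dimension, followed by one whitespace-separated vector per line; file names are the ones from the question, and this assumes every line of the source file is one data vector -- real microarray exports often carry header rows or gene labels that must be stripped first):

        rows = []
        with open("DLBCL.txt") as f:
            for line in f:
                parts = line.split()
                if parts:
                    rows.append(parts)

        dim = len(rows[0])
        with open("B_r2.txt", "w") as out:
            out.write(f"{dim}\n")
            for i, r in enumerate(rows, start=1):
                if len(r) != dim:
                    raise ValueError(f"row {i} has {len(r)} components, expected {dim}")
                out.write(" ".join(r) + "\n")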

    Read the article

  • Matching innermost braces with regex or strpos?

    - by rich97
    I have a sort of mini parsing syntax I made up to help me streamline my view code in CakePHP. Basically I have created a table helper which, when given a dataset and (optionally) a set of options for how to format the data, will render out a table, as opposed to me looping through the data and formatting it manually. It allows the user to be as complex or as simple as they like, and it can get pretty powerful. However, in order to achieve this I had to make a simple parsing syntax. As a quick example, the user would do something like so:

        $this->Table->data = $userData;
        $this->Table->elements['td']['data'] = array(
            '{:User.username:}',
            '{:User.created:}' => array('Time::nice')
        );
        echo $this->Table->render();

    And when rendering, the helper would then generate:

        <table>
          <tbody>
            <tr><td>rich97</td><td>Sun 21st 02:30pm</td></tr>
          </tbody>
        </table>

    The problem occurs when I try to nest the braces, like so: {:User.levels.iconClasses.{:User.access:}:} Is there any way I can get only the innermost brackets on the first time round and loop through until there are no matches? Or even do it in one go? Or, even better, use strpos? Here is my regex as it stands:

        '/\{\:([^}]+)\:\}/'
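
    One common trick: if the pattern's body may not contain braces at all, it can only ever match innermost tokens, so repeated substitution resolves the nesting from the inside out. A sketch of that loop in Python (the PHP version would use preg_replace_callback with the analogous pattern; the lookup table here is purely illustrative):

        import re

        TOKEN = re.compile(r"\{:([^{}]+?):\}")

        data = {
            "User.access": "admin",
            "User.levels.iconClasses.admin": "icon-star",
        }

        def expand(template, data):
            while True:
                # [^{}] keeps the match from spanning an outer token
                template, n = TOKEN.subn(lambda m: data.get(m.group(1), ""), template)
                if n == 0:
                    return template

        print(expand("{:User.levels.iconClasses.{:User.access:}:}", data))  # icon-star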

    Read the article

  • MongoDB complex MapReduce of video logs

    - by Justin Hourigan
    I have a dataset from video streaming logs. Each video is identified by a FileGUID. The log entries record the FileGUID, the fragment of the video watched and the bandwidth it was watched at. I would like to create a mapreduce outputting, for each video, a count per fragment, both in total and per bandwidth. Ideally it would look like:

        {"FileGUID": "50acb3a5796634df0e073285",
          {
            "1": {"total": 76, "0832": 34, "1028": 42},
            "2": {"total": 42, "0832": 28, "1028": 14},
            ...
          }
        }

    Is this possible with one mapreduce, or is it a multi-step process, or should I use a different method? Here is a sample of the data:

        {
          "_id": ObjectId("50acb3a5796634df0e073285"),
          "IP": "46.7.1.88",
          "DateTime": ISODate("2012-10-24T22:59:57.0Z"),
          "FileGUID": "8cdde821fb934a6da7c125a012a26612",
          "Bandwidth": NumberInt(1028),
          "Segment": NumberInt(1),
          "Fragment": NumberInt(237),
          "Status": NumberInt(200),
          "Size": NumberInt(576790),
          "UserAgent": "Mozilla\/5.0 (Windows NT 6.1; WOW64; rv:16.0) Gecko\/20100101 Firefox\/16.0"
        }
        {
          "_id": ObjectId("50acb3a5796634df0e073284"),
          "IP": "46.7.1.88",
          "DateTime": ISODate("2012-10-24T22:59:52.0Z"),
          "FileGUID": "8cdde821fb934a6da7c125a012a26612",
          "Bandwidth": NumberInt(1028),
          "Segment": NumberInt(1),
          "Fragment": NumberInt(236),
          "Status": NumberInt(200),
          "Size": NumberInt(577100),
          "UserAgent": "Mozilla\/5.0 (Windows NT 6.1; WOW64; rv:16.0) Gecko\/20100101 Firefox\/16.0"
        }
        {
          "_id": ObjectId("50acb3a5796634df0e073283"),
          "IP": "46.7.1.88",
          "DateTime": ISODate("2012-10-24T22:59:47.0Z"),
          "FileGUID": "8cdde821fb934a6da7c125a012a26612",
          "Bandwidth": NumberInt(0832),
          "Segment": NumberInt(1),
          "Fragment": NumberInt(234),
          "Status": NumberInt(200),
          "Size": NumberInt(576664),
          "UserAgent": "Mozilla\/5.0 (Windows NT 6.1; WOW64; rv:16.0) Gecko\/20100101 Firefox\/16.0"
        }
        {
          "_id": ObjectId("50acb3a5796634df0e073282"),
          "IP": "46.7.1.88",
          "DateTime": ISODate("2012-10-24T22:59:42.0Z"),
          "FileGUID": "8cdde821fb934a6da7c125a012a26612",
          "Bandwidth": NumberInt(0832),
          "Segment": NumberInt(1),
          "Fragment": NumberInt(233),
          "Status": NumberInt(200),
          "Size": NumberInt(575692),
          "UserAgent": "Mozilla\/5.0 (Windows NT 6.1; WOW64; rv:16.0) Gecko\/20100101 Firefox\/16.0"
        }
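
    A sketch of the target aggregation in plain Python, just to pin down the shape before committing to map-reduce (MongoDB's aggregation framework, grouping on FileGUID/Fragment/Bandwidth, is the usual alternative to mapReduce for pure counting; the entries below are trimmed copies of the sample documents):

        from collections import defaultdict

        logs = [
            {"FileGUID": "8cdde821fb934a6da7c125a012a26612", "Bandwidth": 1028, "Fragment": 237},
            {"FileGUID": "8cdde821fb934a6da7c125a012a26612", "Bandwidth": 1028, "Fragment": 236},
            {"FileGUID": "8cdde821fb934a6da7c125a012a26612", "Bandwidth": 832, "Fragment": 234},
        ]

        counts = defaultdict(dict)   # FileGUID -> fragment -> {"total": n, "<bw>": n}
        for entry in logs:
            frag = counts[entry["FileGUID"]].setdefault(entry["Fragment"], {"total": 0})
            frag["total"] += 1
            bw = str(entry["Bandwidth"])
            frag[bw] = frag.get(bw, 0) + 1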

    Read the article

  • Wasteful Ajax Page Loading

    - by Matt Dawdy
    I've started a new job, and the portion of the project I'm working on has a very odd structure. Every page is a .NET .aspx page, and it loads just fine, but nothing is really done at load time. Everything is really loaded from a jQuery document-ready handler. What is even more...interesting...is that the ready handler makes ajax calls that drop entire .aspx pages into divs on the page, but first it strips out several parts of the returned page. This is the "magic" script the previous programmer ran on all the HTML returned from his ajax calls:

        function CleanupResponseText(responseText, uniqueName) {
            responseText = responseText.replace("theForm.submit();", "SubmitSubForm(theForm, $(theForm).parent());");
            responseText = responseText.replace(new RegExp("theForm", "g"), uniqueName);
            responseText = responseText.replace(new RegExp("doPostBack", "g"), "doPostBack" + uniqueName);
            return responseText;
        }

    He then intercepts any kind of form postback and runs his own form submission function:

        function SubmitSubForm(form, container) {
            //ShowLoading(container);
            $(form).ajaxSubmit({
                url: $(form).attr("action"),
                success: function(responseText) {
                    $(container).html(CleanupResponseText(responseText, form.id));
                    $("form", container).css("margin-top", "0").css("padding-top", "0");
                    //HideLoading(container);
                }
            });
        }

    Am I way off base in thinking that this is less than optimal? I mean, how is a browser supposed to deal with the html and head and other tags that have nothing to do with what you are really trying to drop into that div? Also, he's returning things like asp:GridView controls, and the associated viewstate, which can be quite large if his dataset is big. Has anyone seen this before?

    Read the article

  • Export the datagrid data to text in asp.net+c#.net

    - by SRIRAM
    Problem: it complains that there is no assembly reference/namespace for Database.

        // Database/DatabaseFactory live in the Enterprise Library Data Access block;
        // this likely needs a reference to Microsoft.Practices.EnterpriseLibrary.Data
        // and a matching using directive.
        Database db = DatabaseFactory.CreateDatabase();
        DBCommandWrapper selectCommandWrapper = db.GetStoredProcCommandWrapper("sp_GetLatestArticles");
        DataSet ds = db.ExecuteDataSet(selectCommandWrapper);

        StringBuilder str = new StringBuilder();
        for (int i = 0; i <= ds.Tables[0].Rows.Count - 1; i++)
        {
            for (int j = 0; j <= ds.Tables[0].Columns.Count - 1; j++)
            {
                str.Append(ds.Tables[0].Rows[i][j].ToString());
            }
            str.Append("<BR>");
        }

        Response.Clear();
        Response.AddHeader("content-disposition", "attachment;filename=FileName.txt");
        Response.Charset = "";
        Response.Cache.SetCacheability(HttpCacheability.NoCache);
        Response.ContentType = "application/vnd.text";
        // stringWrite/htmlWrite are created here but never actually used below
        System.IO.StringWriter stringWrite = new System.IO.StringWriter();
        System.Web.UI.HtmlTextWriter htmlWrite = new HtmlTextWriter(stringWrite);
        Response.Write(str.ToString());
        Response.End();

    Read the article

  • How to convert Big Endian and how to flip the highest bit?

    - by Robert Frank
    I am using a TStream to read binary data (thanks to this post: http://stackoverflow.com/questions/2878180/how-to-use-a-tfilestream-to-read-2d-matrices-into-dynamic-array). My next problem is that the data is big-endian. From my reading, the Swap() method is seemingly deprecated. How would I swap the types below?

        16-bit two's complement binary integer
        32-bit two's complement binary integer
        64-bit two's complement binary integer
        IEEE single-precision floating point (is IEEE data affected by endianness too?)

    And, finally, since the data is unsigned, the creators of this dataset have stored the unsigned values as signed integers (excluding the IEEE values). They instruct that one need only add an offset (2^15, 2^31, or 2^63) to recover the unsigned data, but they note that flipping the most significant bit is the fastest way to do that. How does one efficiently flip the most significant bit of a 16-, 32-, or 64-bit integer? So, if the data on disk (16-bit) is "85 FB", the desired result after reading, swapping and bit-flipping would be 1531. Is there a way to accomplish the swapping and bit flipping with generics so it fits into the generic answer at the link above? Yes, kids, THIS is how scientific astronomical data is stored by NASA, ESO, and all professional astronomers. This FITS standard is considered by some to be one of the most successful standards ever created, in its proliferation and flexibility!
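
    The arithmetic, sketched in Python rather than Delphi (struct handles the byte order; XOR with the sign bit undoes the signed-offset encoding, matching the 0x85FB -> 1531 example):

        import struct

        raw = bytes([0x85, 0xFB])                # big-endian bytes from disk

        (signed,) = struct.unpack(">h", raw)     # -31237 as a signed 16-bit value
        (unsigned,) = struct.unpack(">H", raw)   # 34299 read as unsigned instead

        print(unsigned ^ 0x8000)                 # flip the MSB: 1531
        print(signed + 2**15)                    # same thing via the offset: 1531

        # 32/64-bit analogues: ">i"/">I" with 0x80000000, ">q"/">Q" with 1 << 63.
        # IEEE floats need the byte swap (">f") but no bit flip.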

    Read the article

  • Best way to migrate export/import from SQL Server to oracle

    - by matao
    Hi guys! I'm faced with needing access, for reporting, to some data that lives in Oracle and other data that lives in a SQL Server 2000 database. For various reasons these live on different sides of a firewall. Now we're looking at doing an export/import from SQL Server to Oracle, and I'd like some advice on the best way to go about it. The procedure will need to be fully automated and run nightly, so that excludes using the SQL developer tools. I also can't make a live link between databases from our (Oracle) side, as the firewall is in the way. The data needs to be transformed in the process from a star schema to a de-normalised table ready for reporting. What I'm thinking about is writing a monster query for SQL Server (which I mostly have already) that will denormalise and read out the data into a flat file, using the SQL Server equivalent of sqlplus as a scheduled task, dump it into a well-known location, then on the Oracle side have a cron job that copies down the file, loads it with SQL*Loader and rebuilds indexes, etc. This is all doable, but very manual. Is there one tool, or a combination of FOSS or standard Oracle/SQL Server tools, that could automate this for me? The irreducible complexity is the query on one side and building indexes on the other, but I would love to not have to write the CSV-dumping detail or the SQL*Loader script: just say "dump this view out to CSV" on one side, and on the other "truncate and insert into this table from CSV", without worrying about mapping column names and all the other arcane sqlldr voodoo. Best practices? Thoughts? Comments? Edit: I have 50+ columns, all of varying types and lengths, in my dataset, which is why I'd prefer not to have to write out how to generate and map each single column.
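
    Failing a turnkey tool, the control file itself can be generated rather than hand-written. A hedged sketch in Python that derives a SQL*Loader control file from the CSV header row, so the 50+ column mappings never get typed out (table and file names are placeholders; columns needing date masks or length overrides would still want hand-tuning):

        import csv

        def write_ctl(csv_path, table, ctl_path):
            with open(csv_path, newline="") as f:
                columns = next(csv.reader(f))   # header row names the columns
            with open(ctl_path, "w") as out:
                out.write(
                    "LOAD DATA\n"
                    f"INFILE '{csv_path}'\n"
                    f"TRUNCATE INTO TABLE {table}\n"
                    "FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '\"'\n"
                    "TRAILING NULLCOLS\n"
                    "(" + ", ".join(columns) + ")\n"
                )

        write_ctl("report_dump.csv", "REPORT_FLAT", "report_flat.ctl")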

    Read the article

  • Displaying a collection of objects in a .Net grid on a smartphone without data binding.

    - by Xav
    I know there's no DataGridView in the CF, but I've got a collection of in-memory objects that I want to display in a grid on a phone. Options I have thought of:

    1. Stick all the objects into a SQL CE database and use a bound DataGrid. This'll mean pulling my classes apart and separating the data from the functionality, which may or may not be a bad thing, but seems a little overkill.
    2. Write my own dataset and binding code so that I can bind my collection of objects to a bound DataGrid. No idea how practical or possible this is, but it seems like it's either do-able or impossible, and I'm hoping someone here knows which!
    3. Find a third-party unbound grid control. The only one I've seen mentioned is OpenNETCF, which I'm downloading as I type. Are there others? Are any of them any good?
    4. Do something very nasty with dynamically loading labels and textboxes into a scrolling region on the form. REALLY don't want to go there.

    I'm not much experienced with data-bound controls, other than occasionally making use of the very vanilla functionality in WinForms or ASP.NET, and that quite a long time ago, so if any of the above are silly, please be gentle. Thanks, Xav

    Read the article

  • Elegant code question: How to avoid creating unneeded object

    - by SeaDrive
    The root of my question is that the C# compiler is too smart. It detects a path via which an object could be undefined, so it demands that I fill it. In the code, I look at the tables in a DataSet to see if there is one that I want. If not, I create a new one. I know that dtOut will always be assigned a value, but the compiler is not happy unless it's assigned a value when declared. This is inelegant. How do I rewrite this in a more elegant way?

        System.Data.DataTable dtOut = new System.Data.DataTable();
        ...
        // find table with tablename = grp
        // if none, create new table
        bool bTableFound = false;
        foreach (System.Data.DataTable d1 in dsOut.Tables)
        {
            string d1_name = d1.TableName;
            if (d1_name.Equals(grp))
            {
                dtOut = d1;
                bTableFound = true;
                break;
            }
        }
        if (!bTableFound)
            dtOut = RptTable(grp);

    Read the article

  • Reporting Services 2005 - Parameter reliant on cascading parameters

    - by sHr0oMaN
    Good day. I have the following: in an SSRS 2005 report I have three report parameters: FinancialPeriodType ("Month" or "Week" in a drop-down list), FinancialPeriod (a cascading drop-down list populated depending on the first selection) and another parameter, OpeningBalance, of type float. The first two parameters are cascading, i.e. the first parameter is used by the query populating the second's available values. This works fine. What I'm attempting to do is default the value of OpeningBalance to a value from a dataset populated by a stored procedure which takes in the first two parameters. However, as soon as I select a value for the first parameter, I get the following error:

        An error occurred during report processing.
        The value for the report parameter 'OpeningBalance' is not valid for its type.

    I've tried setting the default of the second parameter to a meaningful value (something like 200901), as well as defaulting the second parameter in the SQL stored procedure, with no effect. Using SQL Profiler I've noticed that selecting a value for the first parameter doesn't even execute the SQL used to obtain the available values for the second parameter.

    Read the article

  • reporting tool/viewer for large datasets

    - by FrustratedWithFormsDesigner
    I have a data processing system that generates very large reports on the data it processes. By "large" I mean that a "small" execution of this system produces about 30 MB of reporting data when dumped into a CSV file, and a large dataset is about 130-150 MB (I'm sure someone out there has a bigger idea of "large", but that's not the point... ;) Excel has the ideal interface for the report consumers in the form of its data lists: users can filter and segment the data on the fly to see the specific details they are interested in, and they can also add notes and markup to the reports, create charts, graphs, etc. They know how to do all this, and it's much easier to let them do it if we just give them the data. Excel was great for the small test datasets, but it cannot handle these large ones. Does anyone know of a tool that can provide a similar interface to Excel data lists, but that can handle much larger files? The next tool I tried was MS Access, and found that the Access file bloats hugely (a 30 MB input file leads to about a 70 MB Access file, and when I open the file, run a report and close it, the file's at 120-150 MB!), and the import process is slow and very manual (currently, the CSV files are created by the same PL/SQL script that runs the main process, so there's next to no intervention on my part). I also tried an Access database with tables linked to the database tables that store the report data, and that was many times slower (for some reason, sqlplus could query and generate the report file in a minute or so, while Access would take anywhere from 2-5 minutes for the same data). (If it helps, the data processing system is written in PL/SQL and runs on Oracle 10g.)

    Read the article

  • Database for managing large volumes of (system) metrics

    - by symcbean
    Hi, I'm looking at building a system for managing and reporting stats on web page performance. I'll be collecting a lot more stats than are available in the standard log formats (approx 20 metrics), but compared to most types of database applications, the base data structure will be very simple. My problem is that I'll be accumulating a lot of data: in the region of 100,000 records (i.e. sets of metrics) per hour. Of course, resources are very limited! So that it's possible to sensibly interact with the data, I'd need to consolidate each metric into one-minute bins, broken down by URL; then, for anything more than 1 day old, consolidate into 10-minute bins; then, at 1 week, hourly bins. At the front end, I want to provide a view (preferably as plots) of the last hour of data, with the facility for users to drill up/down through defined hierarchies of URLs (which do not always map directly to the hierarchy expressed in the path of the URL) and to view different time frames. Rather than coding all this myself and using a relational database, I was wondering if there were tools available which would facilitate both the management of the data and the reporting. I had a look at Mondrian, however I can't see from the documentation I've looked at whether it's possible to drop the more granular information while maintaining the consolidated views of the data. RRDTool looks promising in terms of managing the data consolidation, but seems rather limited in terms of querying the dataset as a multi-dimensional/relational database. What else should I be looking at?
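
    For reference, the tiered binning itself is a few lines if it does end up hand-rolled. A sketch with pandas (an assumption -- the question asks for off-the-shelf tools in the RRDTool mould; this only shows the per-URL consolidation step with toy values):

        import pandas as pd

        # raw: one row per observation, timestamp index, one column per metric
        raw = pd.DataFrame(
            {"url": ["/home", "/home", "/cart", "/home"],
             "response_ms": [120, 95, 300, 80]},
            index=pd.to_datetime([
                "2010-05-01 10:00:05", "2010-05-01 10:00:40",
                "2010-05-01 10:01:10", "2010-05-01 10:12:00",
            ]),
        )

        per_minute = raw.groupby("url").resample("1min").mean()    # under a day old
        per_10min  = raw.groupby("url").resample("10min").mean()   # a day to a week
        hourly     = raw.groupby("url").resample("1h").mean()      # older than a week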

    Read the article

  • When should I implement globalization and localization in C#?

    - by Geo Ego
    I am cleaning up some code in a C# app that I wrote, really trying to focus on best practices and coding style. As such, I am running my assembly through FxCop and trying to research each message it gives me to decide what should and shouldn't be changed. What I am currently focusing on are locale settings. For instance, the two errors I currently have are that I should be specifying the IFormatProvider parameter for Convert.ToString(int), and setting the DataSet and DataTable locale. This is something that I've never done and never put much thought into; I've always just left that overload out. The current app that I am working on is an internal app for a small company that will very likely never need to run in another country. As such, it is my opinion that I do not need to set these at all. On the other hand, doing so would not be such a big deal, but it seems unnecessary and could hinder readability to a degree. I understand that Microsoft's contention is to use it if it's there, period. Well, I'm technically supposed to call Dispose() on every object that implements IDisposable, but I don't bother doing that with DataSets and DataTables, so I wonder what the practice is "in the wild."

    Read the article

  • Out-of-memory algorithms for addressing large arrays

    - by reve_etrange
    I am trying to deal with a very large dataset. I have k = ~4200 matrices (of varying sizes) which must be compared combinatorially, skipping non-unique and self comparisons. Each of the k(k-1)/2 comparisons produces a matrix, which must be indexed against its parents (i.e. one can find out where it came from). The convenient way to do this is to (triangularly) fill a k-by-k cell array with the result of each comparison. These are ~100 x ~100 matrices on average; using single-precision floats, it works out to 400 GB overall. I need to 1) generate the cell array or pieces of it without trying to place the whole thing in memory and 2) access its elements (and their elements) in like fashion. My attempts have been inefficient due to reliance on MATLAB's eval() as well as save and clear occurring in loops.

        for i=1:k
            [~,m] = size(data{i});
            cur_var = ['H' int2str(i)];
            %# if i == 1; save('FileName'); end;  %# If using a single MAT file and need to create it.
            eval([cur_var ' = cell(1,k-i);']);
            for j=i+1:k
                [~,n] = size(data{j});
                eval([cur_var '{i,j} = zeros(m,n,''single'');']);
                eval([cur_var '{i,j} = compare(data{i},data{j});']);
            end
            save(cur_var,cur_var);  %# Add '-append' when using a single MAT file.
            clear(cur_var);
        end

    The other thing I have done is to perform the split when mod((i+j-1)/2, max(factor(k(k-1)/2))) == 0. This divides the result into the largest number of same-size pieces, which seems logical. The indexing is a little more complicated, but not too bad, because a linear index could be used. Does anyone know/see a better way?
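
    For scale, a sketch of the same layout outside MATLAB: a single memory-mapped array on disk with one slot per pair, plus the triangular linear index that ties each result back to its parents (Python/NumPy purely to illustrate; sizes are toy values, and the fixed slot shape glosses over the fact that the real matrices vary in size):

        import numpy as np

        k, m, n = 5, 100, 100
        n_pairs = k * (k - 1) // 2

        # One on-disk array, one m-by-n slot per comparison, never fully in RAM.
        store = np.lib.format.open_memmap(
            "comparisons.npy", mode="w+", dtype=np.float32, shape=(n_pairs, m, n)
        )

        def pair_index(i, j, k):
            # Linear index of comparison (i, j) with i < j, row-major over the
            # upper triangle -- recovers where each result came from.
            return i * k - i * (i + 1) // 2 + (j - i - 1)

        store[pair_index(0, 3, k)] = np.ones((m, n), dtype=np.float32)
        store.flush()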

    Read the article

  • How to programmatically set the text of a cell of database in VB.Net?

    - by manuel
    I have a Microsoft Access database file connected to VB.NET. In the database table, I have a 'No.' and a 'Status' column. Then I have a textbox where I can input an integer. I also have a button which, when I click it, should change the 'Status' cell of the row where 'No.' = txtTitleNo.Text(). Let's say I want the 'Status' cell to be changed to "Reserved". What code should I put in BtnReserve_Click? Here is the code:

        Public Class theForm
            Dim con As New OleDb.OleDbConnection
            Dim ds As New DataSet
            Dim daTitles As OleDb.OleDbDataAdapter

            Private Sub theForm_Load(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles MyBase.Load
                con.ConnectionString = "Provider=Microsoft.Jet.OLEDB.4.0; Data Source=|DataDirectory|\db1.mdb"
                con.Open()
                daTitles = New OleDb.OleDbDataAdapter("SELECT * FROM Titles", con)
                daTitles.Fill(ds, "Titles")
                DataGridView1.DataSource = ds.Tables("Titles")
                DataGridView1.AutoResizeColumns()
            End Sub

            Private Sub BtnReserve_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles BtnReserve.Click
                Dim daReserve As OleDb.OleDbCommand
                For i As Integer = 1 To DataGridView1.RowCount()
                    If i = Val(txtTitleNo.Text()) Then
                        daReserve = New OleDb.OleDbCommand("UPDATE Titles SET Status = 'Reserved' WHERE No = '" & i & "'", con)
                        daReserve.ExecuteNonQuery()
                    End If
                Next
                Dim cb As New OleDb.OleDbCommandBuilder(daTitles)
                daTitles.Update(ds, "Titles")
            End Sub
        End Class

    This code still does not change the 'Status'. What should I do?

    Read the article

  • How to duplicate all data in a table except for a single column that should be changed.

    - by twiga
    I have a question regarding a unified insert query against tables with different data structures (Oracle). Let me elaborate with an example:

        tb_customers (
            id NUMBER(3),
            name VARCHAR2(40),
            archive_id NUMBER(3)
        )

        tb_suppliers (
            id NUMBER(3),
            name VARCHAR2(40),
            contact VARCHAR2(40),
            xxx,
            xxx,
            archive_id NUMBER(3)
        )

    The only column that is present in all tables is archive_id, and it is always part of the primary key. The plan is to create a new archive of the dataset by copying (duplicating) all records to a different database partition and incrementing the archive_id for those records accordingly. My problem is with the SELECT statements to do the actual duplication of the data. Because the columns vary by table, I am struggling to come up with a unified SELECT statement that will copy the data and update the archive_id. One solution (that works) is to iterate over all the tables in a stored procedure and do:

        CREATE TABLE temp AS (SELECT * FROM original_table);
        UPDATE temp SET archive_id = something;
        INSERT INTO original_table (SELECT * FROM temp);
        DROP TABLE temp;

    I do not like this solution very much, as the DDL commands muck up all restore points. Does anyone else have a solution?
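
    A hedged alternative sketch: build each table's INSERT ... SELECT dynamically from the Oracle catalog, substituting only archive_id, so no per-table column lists are written and no DDL runs (Python over a DB-API cursor; names are illustrative, and the same loop could equally live in PL/SQL with EXECUTE IMMEDIATE):

        def archive_sql(cursor, table, old_archive, new_archive):
            # column names in declared order, from the data dictionary
            cursor.execute(
                "SELECT column_name FROM user_tab_columns "
                "WHERE table_name = :t ORDER BY column_id",
                {"t": table.upper()},
            )
            cols = [row[0] for row in cursor.fetchall()]
            # select every column as-is, except archive_id which gets the new value
            select_list = ", ".join(
                str(new_archive) if c == "ARCHIVE_ID" else c for c in cols
            )
            return (
                f"INSERT INTO {table} ({', '.join(cols)}) "
                f"SELECT {select_list} FROM {table} "
                f"WHERE archive_id = {old_archive}"
            )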

    Read the article

  • Dimension Reduction in Categorical Data with missing values

    - by user227290
    I have a regression model in which the dependent variable is continuous, but ninety percent of the independent variables are categorical (both ordered and unordered), and around thirty percent of the records have missing values (to make matters worse, they are missing randomly, without any pattern; that is, more than forty-five percent of the data have at least one missing value). There is no a priori theory for choosing the specification of the model, so one of the key tasks is dimension reduction before running the regression. While I am aware of several methods for dimension reduction for continuous variables, I am not aware of a similar statistical literature for categorical data (except, perhaps, as a part of correspondence analysis, which is basically a variation of principal component analysis on a frequency table). Let me also add that the dataset is of moderate size: 500,000 observations with 200 variables. I have two questions. Is there a good statistical reference out there for dimension reduction for categorical data, along with robust imputation? (I think the first issue is imputation and then dimension reduction.) This is linked to the implementation of the above problem: I have used R extensively earlier and tend to use the transcan and impute functions heavily for continuous variables, and I use a variation of the tree method to impute categorical values. I have a working knowledge of Python, so if something nice is out there for this purpose then I will use it. Any implementation pointers in Python or R will be of great help. Thank you.
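
    Since Python was mentioned: a hedged sketch of one pragmatic pipeline -- mode imputation, one-hot encoding, then truncated SVD on the indicator matrix (roughly the correspondence-analysis idea; a serious treatment would swap in proper multiple imputation for the first step):

        import numpy as np
        import pandas as pd
        from sklearn.impute import SimpleImputer
        from sklearn.preprocessing import OneHotEncoder
        from sklearn.decomposition import TruncatedSVD
        from sklearn.pipeline import make_pipeline

        df = pd.DataFrame({
            "color": ["red", np.nan, "blue", "red"],
            "size":  ["S", "M", np.nan, "L"],
        })

        pipe = make_pipeline(
            SimpleImputer(strategy="most_frequent"),   # mode-fill the gaps
            OneHotEncoder(handle_unknown="ignore"),    # categorical -> indicators
            TruncatedSVD(n_components=2),              # reduce the indicator matrix
        )
        reduced = pipe.fit_transform(df)   # dense inputs for the regression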

    Read the article

  • NHibernate / multiple sessions and nested objects

    - by bernhardrusch
    We are using NHibernate in a rich client application. It is a pretty open application (the user searches for a dataset or creates a new one, changes the data and saves the dataset). We leave the session open because we sometimes have to lazy-load some properties of the object (it's a nested object structure). This causes one big problem: if we leave the session open, MySQL closes the connection, we have no way to find this out, and an exception (socket communication error) is thrown when we next access the database. (We are thinking about testing the db connection before accessing the object, but this is not really optimal either; the other option would be to raise the timeout of the db connection, but that doesn't seem right either.) So: is it possible to reconnect the session to a new database connection? Another problem: is it possible to get an object from one session and then re-attach it to another session? (I often hear that session.Lock should work for this, but it doesn't work so well in our application, so I ended up getting a "fresh" object from the new session and copying the data over manually, which is a little cumbersome.) Any ideas?

    Read the article

  • Move SELECT to SQL Server side

    - by noober
    Hello all, I have a SQLCLR trigger. It contains a large and messy SELECT inside, with parts like:

        (CASE WHEN EXISTS(SELECT * FROM INSERTED I WHERE I.ID = R.ID)
              THEN '1' ELSE '0' END) AS IsUpdated  -- Is selected row just added?

    as well as JOINs etc. I like to have the result as a single table with everything included.

    Question 1. Can I move this SELECT to the SQL Server side? If yes, how do I do this? By "move", I mean creating a stored procedure or something else that can be executed before reading the dataset in the while cycle. The two following questions make sense only if the answer is "yes".

    Why do I want to move the SELECT? First off, I don't like mixing SQL with C# code. Second, I suppose that server-side queries run faster, since the server has more chances to cache them.

    Question 2. Am I right? Is this a sort of optimization?

    Also, the SELECT contains constant strings, but they are localizable. For instance, in

        WHERE R.Status = "Enabled"

    "Enabled" should be changed for French, German, etc. So, I want to write two static methods -- OnCreate and OnDestroy -- and mark them as stored procedures. When registering/unregistering my assembly on the server side, I would just call them respectively. In OnCreate I would format the SELECT string, replacing {0}, {1}... with the required values from the assembly resources. Then I can localize the resources only, not every script.

    Question 3. Is this a good idea? Is there an existing attribute that marks methods to be executed by SQL Server automatically after (un)registration of an assembly?

    Regards,

    Read the article

  • Choropleth mapping issue in R

    - by chasec
    I am trying to follow the tutorial described here: http://www.thisisthegreenroom.com/2009/choropleths-in-r/ The code below executes, but it is either not matching my dataset with the maps package's county data properly, or it isn't plotting it in the order I would expect. For example, the resulting areas for the greater NYC area show no density, while random counties in PA show the highest density. The general format of my data table is:

        county       state         count
        fairfield    connecticut   17
        hartford     connecticut   6
        litchfield   connecticut   3
        new haven    connecticut   12
        ...          ...
        westchester  new york      70
        yates        new york      1
        luzerne      pennsylvania  1

    Note this data is in order by state and then county, and includes data for CT, NJ, NY, & PA. First, I read in my data set:

        library(maps)
        library(RColorBrewer)

        d <- read.table("gissum.txt", sep="\t", header=TRUE)

        #Concatenate state and county info to match maps library
        d$stcon <- paste(d$state, d$county, sep=",")

        #Color bins
        colors = brewer.pal(5, "PuBu")
        d$colorBuckets <- as.factor(as.numeric(cut(d$count,c(0,10,20,30,40,50,300))))

    Here is my matching:

        mapnames <- map("county",plot=FALSE)[4]$names
        colorsmatched <- d$colorBuckets[na.omit(match(mapnames,d$stcon))]

    Plotting:

        map("county"
            ,c("new york","new jersey", "connecticut", "pennsylvania")
            ,col = colors[d$colorBuckets[na.omit(match(mapnames,d$stcon))]]
            ,fill = TRUE
            ,resolution = 0
            ,lty = 0
            ,lwd = 0.5
        )
        map("state"
            ,c("new york","new jersey", "connecticut", "pennsylvania")
            ,col = "black"
            ,fill = FALSE
            ,add = TRUE
            ,lty = 1
            ,lwd = 2
        )
        map("county"
            ,c("new york","new jersey", "connecticut", "pennsylvania")
            ,col = "black"
            ,fill = FALSE
            ,add = TRUE
            ,lty = 1
            ,lwd = .5
        )
        title(main="Respondent Home ZIP Codes by County")

    I am sure I am missing something basic re: the order in which the maps function plots items, but I can't seem to figure it out. Thanks for the help. Please let me know if you need any more information.

    Read the article

  • Why is a DataViewManager not sorting?

    - by Alarion
    First off, a quick rundown of what the code is doing. I have two tables, Companies and Clients, with a one-to-many relationship between them (one company can have many clients). In some processing logic, I have both tables loaded into a DataSet, each as a DataTable. I have added a DataRelation to set up the relationship, and it works when I use GetChildRows on a record from the Companies table. However, I need the returned records sorted. After googling around, it looks like the DataViewManager is the way to go, and I have examined several basic examples. However, I cannot get my rows sorted. Am I missing something, or am I not using this how it is supposed to be used? Example code:

        Dim ldvmManager As New DataViewManager(mdsData)
        ldvmManager.DataViewSettings("Companies").Sort = "comName ASC"
        ldvmManager.DataViewSettings("Clients").ApplyDefaultSort = False
        ldvmManager.DataViewSettings("Clients").Sort = "entLastName ASC, entFirstName ASC"

        For Each mdrCurrentCompany In ldvmManager.DataViewSettings("Companies").Table.Rows
            Dim ldrClients() As DataRow = mdrCurrentCompany.GetChildRows("CompaniesClients")
        Next

    No matter what I do, the first client returned has a last name that starts with a 'B' and the second has one that starts with an 'A'. From there, the order is all mixed up. If I use my entire dataset instead of the subset I was testing with, the first client returned has a last name that starts with 'J'. It seems like it is still using whatever default sort was in effect before I tried to use the DataViewManager. Any ideas?

    Read the article

  • Converting json data to an HTML table

    - by dnaluz
    I have an array of data in PHP and I need to display this data in an HTML table. Here is what an example data set looks like:

        Array(
            Array (
                [comparisonFeatureId] => 1182
                [comparisonFeatureType] => Category
                [comparisonValues] => Array (
                    [0] => Not Available
                    [1] => Standard
                    [2] => Not Available
                    [3] => Not Available
                )
                [featureDescription] => Rear Seat Heat Ducts
            ),
        )

    The dataset is a comparison of 3 items (shown in the comparisonValues index of the array). In the end I need each row to look similar to this:

        <tr class="alt2 section_1">
            <td><strong>$record['featureDescription']</strong></td>
            <td>$record['comparisonValues'][0]</td>
            <td>$record['comparisonValues'][1]</td>
            <td>$record['comparisonValues'][2]</td>
            <td>$record['comparisonValues'][3]</td>
        </tr>

    The problem I am coming across is how best to do this. Should I create the entire table HTML on the server side, pass it over an ajax call and just dump pre-rendered HTML into a div, or pass the JSON data and render the table client side? Any elegant suggestions? Thanks in advance.

    Read the article
