Search Results

Search found 115 results on 5 pages for 'variance'.

Page 4/5

  • .Net Reflector 6.5 EAP now available

    - by CliveT
    With the release of CLR 4 being so close, we’ve been working hard on getting the new C# and VB language features implemented inside Reflector. The work isn’t complete yet, but we have some of the features working. Most importantly, there are going to be changes to the Reflector object model, and we though it would be useful for people to see the changes and have an opportunity to comment on them. Before going any further, we should tell you what the EAP contains that’s different from the released version. A number of bugs have been fixed, mainly bugs that were raised via the forum. This is slightly offset by the fact that this EAP hasn’t had a whole lot of testing and there may have been new bugs introduced during the development work we’ve been doing. The C# language writer has been changed to display in and out co- and contra-variance markers on interfaces and delegates, and to display default values for optional parameters in method definitions. We also concisely display values passed by reference into COM calls. However, we do not change callsites to display calls using named parameters; this looks like hard work to get right. The forthcoming version of the C# language introduces dynamic types and dynamic calls. The new version of Reflector should display a dynamic call rather than the generated C#: dynamic target = MyTestObject(); target.Hello("Mum"); We have a few bugs in this area where we are not casting to dynamic when necessary. These have been fixed on a branch and should make their way into the next EAP. To support the dynamic features, we’ve added the types IDynamicMethodReferenceExpression, IDynamicPropertyIndexerExpression, and IDynamicPropertyReferenceExpression to the object model. These types, based on the versions without “Dynamic” in the name, reflect the fact that we don’t have full information about the method that is going to be called, but only have its name (as a string). These interfaces are going to change – in an internal version, they have been extended to include information about which parameter positions use runtime types and which use compile time types. There’s also the interface, IDynamicVariableDeclaration, that can be used to determine if a particular variable is used at dynamic call sites as a target. A couple of these language changes have also been added to the Visual Basic language writer. The new features are exposed only when the optimization level is set to .NET 4. When the level is set this high, the other standard language writers will simply display a message to say that they do not handle such an optimization level. Reflector Pro now has 4.0 as an optional compilation target and we have done some work to get the pdb generation right for these new features. The EAP version of Reflector no longer installs the add-in on startup. The first time you run the EAP, it displays the integration options dialog. You can use the checkboxes to select the versions of Visual Studio into which you want to install the EAP version. Note that you can only have one version of Reflector Pro installed in Visual Studio; if you install into a Visual Studio that has another version installed, the previous version will be removed. Please try it out and send your feedback to the EAP forum.
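
    For readers less familiar with the C# 4 features mentioned above, here is a minimal, hypothetical sketch (my own illustration, not Reflector output) of what variance markers, an optional parameter with a default value, and a dynamic call look like in source form:

      // Covariance (out) and contravariance (in) markers on a generic interface and delegate.
      public interface IProducer<out T> { T Produce(); }
      public delegate void Consumer<in T>(T item);

      public class Greeter
      {
          // Optional parameter with a default value, shown in the method definition.
          public void Hello(string name, string greeting = "Hi")
          {
              System.Console.WriteLine(greeting + ", " + name);
          }
      }

      public static class Demo
      {
          public static void Main()
          {
              // A dynamic call: the member to invoke is resolved by name at run time.
              dynamic target = new Greeter();
              target.Hello("Mum");
          }
      }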

    Read the article

  • What is the right way to process inconsistent data files?

    - by Tahabi
    I'm working at a company that uses Excel files to store product data, specifically, test results from products before they are shipped out. There are a few thousand spreadsheets with anywhere from 50-100 relevant data points per file. Over the years, the schema for the spreadsheets has changed significantly, but not unidirectionally - in the sense that, changes often get reverted and then re-added in the space of a few dozen to few hundred files. My project is to convert about 8000 of these spreadsheets into a database that can be queried. I'm using MongoDB to deal with the inconsistency in the data, and Python. My question is, what is the "right" or canonical way to deal with the huge variance in my source files? I've written a data structure which stores the data I want for the latest template, which will be the final template used going forward, but that only helps for a few hundred files historically. Brute-forcing a solution would mean writing similar data structures for each version/template - which means potentially writing hundreds of schemas with dozens of fields each. This seems very inefficient, especially when sometimes a change in the template is as little as moving a single line of data one row down or splitting what used to be one data field into two data fields. A slightly more elegant solution I have in mind would be writing schemas for all the variants I can find for pre-defined groups in the source files, and then writing a function to match a particular series of files with a series of variants that matches that set of files. This is because, more often that not, most of the file will remain consistent over a long period, only marred by one or two errant sections, but inside the period, which section is inconsistent, is inconsistent. For example, say a file has four sections with three data fields, which is represented by four Python dictionaries with three keys each. For files 7000-7250, sections 1-3 will be consistent, but section 4 will be shifted one row down. For files 7251-7500, 1-3 are consistent, section 4 is one row down, but a section five appears. For files 7501-7635, sections 1 and 3 will be consistent, but section 2 will have five data fields instead of three, section five disappears, and section 4 is still shifted down one row. For files 7636-7800, section 1 is consistent, section 4 gets shifted back up, section 2 returns to three cells, but section 3 is removed entirely. Files 7800-8000 have everything in order. The proposed function would take the file number and match it to a dictionary representing the data mappings for different variants of each section. For example, a section_four_variants dictionary might have two members, one for the shifted-down version, and one for the normal version, a section_two_variants might have three and five field members, etc. The script would then read the matchings, load the correct mapping, extract the data, and insert it into the database. Is this an accepted/right way to go about solving this problem? Should I structure things differently? I don't know what to search Google for either to see what other solutions might be, though I believe the problem lies in the domain of ETL processing. I also have no formal CS training aside from what I've taught myself over the years. If this is not the right forum for this question, please tell me where to move it, if at all. Any help is most appreciated. Thank you.
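
    As a rough illustration of the "variant mapping" approach described above, a minimal Python sketch might look like the following; every name, range, and field in it is hypothetical and only stands in for the real templates:

      # Hypothetical sketch: map file-number ranges to a variant per section,
      # then pick the right extraction mapping for each file.

      SECTION_VARIANTS = {
          # section name -> list of (first_file, last_file, variant_key)
          "section_four": [(1, 6999, "normal"), (7000, 7635, "shifted_down"), (7636, 8000, "normal")],
          "section_two":  [(1, 7500, "three_fields"), (7501, 7635, "five_fields"), (7636, 8000, "three_fields")],
      }

      VARIANT_MAPPINGS = {
          # (section, variant) -> {field name: (row, column)} used by the extractor
          ("section_four", "normal"):       {"pressure": (40, "B"), "temperature": (41, "B")},
          ("section_four", "shifted_down"): {"pressure": (41, "B"), "temperature": (42, "B")},
          ("section_two", "three_fields"):  {"weight": (10, "B"), "length": (11, "B"), "width": (12, "B")},
          ("section_two", "five_fields"):   {"weight": (10, "B"), "length": (11, "B"), "width": (12, "B"),
                                             "height": (13, "B"), "density": (14, "B")},
      }

      def variant_for(section, file_number):
          """Return the variant key that applies to this file number."""
          for first, last, variant in SECTION_VARIANTS[section]:
              if first <= file_number <= last:
                  return variant
          raise ValueError("No variant defined for %s in file %d" % (section, file_number))

      def mapping_for(section, file_number):
          """Return the field mapping to use when extracting this section from this file."""
          return VARIANT_MAPPINGS[(section, variant_for(section, file_number))]

      # Example: which cells hold section four's fields in spreadsheet number 7300?
      print(mapping_for("section_four", 7300))

    The point of the sketch is that only the handful of genuinely distinct variants need schemas; the per-file bookkeeping then reduces to a range lookup.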

    Read the article

  • Entity Framework with MySQL - Timeout Expired while Generating Model

    - by Nathan Taylor
    I've constructed a database in MySQL and I am attempting to map it out with Entity Framework, but I start running into "GenerateSSDLException"s whenever I try to add more than about 20 tables to the EF context. An exception of type 'Microsoft.Data.Entity.Design.VisualStudio.ModelWizard.Engine.ModelBuilderEngine+GenerateSSDLException' occurred while attempting to update from the database. The exception message is: 'An error occurred while executing the command definition. See the inner exception for details.' Fatal error encountered during command execution. Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding. There's nothing special about the affected tables, and it's never the same table(s); it's just that after a certain (unspecified) number of tables have been added, the context can no longer be updated without the "Timeout expired" error. Sometimes it's only one table left over, and sometimes it's three; results are pretty unpredictable. Furthermore, the variance in the number of tables which can be added before the error suggests to me that the problem lies in the size of the query being generated to update the context, which includes both the existing table definitions and the new tables being added to it. Essentially, the SQL query is getting too large and it's failing to execute for some reason. If I generate the model with EdmGen2 it works without any errors, but the generated EDMX file cannot be updated within Visual Studio without producing the aforementioned exception. In all likelihood the source of this problem lies in the tool within Visual Studio, given that EdmGen2 works fine, but I'm hoping that others could offer some advice on how to approach this unusual issue, because it seems like I'm not the only person experiencing it. One suggestion a colleague offered was maintaining two separate EDMX files with some table crossover, but that seems like a pretty ugly fix in my opinion. I suppose this is what I get for trying to use "new technology". :(

    Read the article

  • Are Large iPhone Ping Times Indicative of Application Latency?

    - by yar
    I am contemplating creating a realtime app where an iPod Touch/iPhone/iPad talks to a server-side component (which produces MIDI, and sends it onward within the host). When I ping my iPod Touch on Wifi I get huge latency (and a enormous variance, too): 64 bytes from 192.168.1.3: icmp_seq=9 ttl=64 time=38.616 ms 64 bytes from 192.168.1.3: icmp_seq=10 ttl=64 time=61.795 ms 64 bytes from 192.168.1.3: icmp_seq=11 ttl=64 time=85.162 ms 64 bytes from 192.168.1.3: icmp_seq=12 ttl=64 time=109.956 ms 64 bytes from 192.168.1.3: icmp_seq=13 ttl=64 time=31.452 ms 64 bytes from 192.168.1.3: icmp_seq=14 ttl=64 time=55.187 ms 64 bytes from 192.168.1.3: icmp_seq=15 ttl=64 time=78.531 ms 64 bytes from 192.168.1.3: icmp_seq=16 ttl=64 time=102.342 ms 64 bytes from 192.168.1.3: icmp_seq=17 ttl=64 time=25.249 ms Even if this is double what the iPhone-Host or Host-iPhone time would be, 15ms+ is too long for the app I'm considering. Is there any faster way around this (e.g., USB cable)? If not, would building the app on Android offer any other options? Traceroute reports more workable times: traceroute to 192.168.1.3 (192.168.1.3), 64 hops max, 52 byte packets 1 192.168.1.3 (192.168.1.3) 4.662 ms 3.182 ms 3.034 ms can anyone decipher this difference between ping and traceroute for me, and what they might mean for an application that needs to talk to (and from) a host?

    Read the article

  • Positioning problem in Vista when Aero Theme is enabled

    - by Adeel Rehman
    Hi, I have a Windows Form which has tab pages. On a tab page, I have a combo box. When I click the combo box, a drop-down opens up just below it to give the impression of a drop-down list. The drop-down is itself a Windows Form, and this is how I set its position: Popup.Location = control.PointToScreen(parentcontrol.PointToClient(new System.Drawing.Point(0, control.Height))); Popup = the drop-down that opens below the combo box (Windows Form) control = my combo box (ComboBox) parentcontrol = the Windows Form on which the control is present (parent form) Problem: the X coordinate is not plotted correctly on the screen with the Aero theme enabled. This works perfectly fine on XP (with some variance in Y due to the TableLayoutPanel, I assume). But when I use the same code on Vista with the Aero theme enabled, the X coordinate wanders away by about 20-30 pixels. If I turn the Aero theme off on Vista, it works fine. I have found that the X and Y coordinates calculated in both cases are the same, but the way Vista draws these coordinates on the screen (when the Aero theme is enabled) is different. Is there any solution to this problem? Thanks, Adeel Rehman.
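
    For comparison, the usual way to express "just below the combo box, in screen coordinates" is a single conversion from the control's own client space; this is only a sketch of that idiom (with my own names), not a verified fix for the Aero offset described above:

      using System.Drawing;
      using System.Windows.Forms;

      static class PopupPositioner
      {
          // Sketch: show a popup Form directly below a combo box, in screen coordinates.
          // 'popup' and 'comboBox' stand in for the Popup form and combo box in the question.
          public static void ShowBelow(Form popup, Control comboBox)
          {
              popup.StartPosition = FormStartPosition.Manual;
              popup.Location = comboBox.PointToScreen(new Point(0, comboBox.Height));
              popup.Show();
          }
      }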

    Read the article

  • Automatic web form testing/filling

    - by Polatrite
    I recently became lead on getting an inordinate amount of testing done in a very short period of time. We have many different web forms, using custom (Telerik) controls that need to be tested for proper data validation and sensible handling of the data. Some of the forms are several pages long with 30-80 different controls for data entry. I am looking for a software solution (that is free) that would allow me to automate the process of filling in these forms by designing a script, or using a UI. The other requirement is that I can't use any browsers but IE6 (terrible, I know). I have previously used AutoHotkey to great success for automatic Windows form testing, since Autohotkey's API allows you to directly reference controls on the Windows form. However Autohotkey does not have similar support for web forms (everything is just one big "InternetExplorer" control). While I would prefer that I could script some variance in the data to help serialize each test, it's not necessary, as I could go back through and manually edit a field or two (plus "break" whatever control I'm currently testing) to serialize each test. If you've ever seen Spawner: http://forge.mysql.com/projects/project.php?id=214 It's almost exactly the sort of thing I'm looking for (Spawner generates dummy SQL data, as opposed to dummy webform data) - but I won't be picky, I've got a really short deadline to meet and had this thrust in my lap just today. ;) Edit1: One of the challenges of just using Autohotkey to simulate keyboard input (tabbing through controls) is that some controls don't currently have tab index (bug), and some controls cause a page reload after modification, resulting in inconsistent control focus (tabbing screwed up). Our application makes heavy use of page reloads to populate fields (select a location, it auto-populates a city, for example).

    Read the article

  • What's the compelling reason to upgrade to Visual Studio 2010 from VS2008?

    - by Cheeso
    Are there new features in Visual Studio 2010 that are must-haves? If so, which ones? For me, the big draws for VS2008 as compared to VS2005 were LINQ, .NET Framework multitargeting, WCF (REST + Syndication), and general devenv.exe reliability. Granted, some of these features are framework things, and not tool things. For the purposes of this discussion, I'm willing to combine them into one bucket. What is the list of must-have features for VS2010 versus VS2008? Are there any? I am particularly interested in C#. Update: I know how to google, so I can get the official list from Microsoft. I guess what I really wanted was, the assessment from people using it, as to which things are really notable. Microsoft went on for 3 pages about 2008/3.5 features, and many people sort of boiled it down to LINQ, and a few other things. What is that short list for VS2010? Summary so far, what people think is cool or compelling: Visual Studio engine multi-monitor support new extensibility model based on WPF, prettier and more usable new TFS stuff, incl automated test tools parallel debugging .NET Framework parallel extensions for .NET C# 4.0 generic variance optional and named params easier interop with non-managed environments, like COM or Javascript VB 10.0 collection and array literals / initializers automatic properties anonymous methods / statement lambdas I read up on these at Zander's blog. He described these and other features. Nobody on this list said anything about: Visual Studio engine F# support Javascript code-completion JQuery is now included UML better Sharepoint capabilities C++ moves to msbuild project files
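
    As a quick illustration of two of the C# 4.0 items summarized above (generic variance, plus optional and named parameters), here is a minimal sketch:

      using System;
      using System.Collections.Generic;

      class CSharp4Sketch
      {
          // Optional parameters with default values.
          static void Connect(string host, int port = 80, bool useSsl = false)
          {
              Console.WriteLine("{0}:{1} ssl={2}", host, port, useSsl);
          }

          static void Main()
          {
              Connect("example.org");               // port and useSsl fall back to their defaults
              Connect("example.org", useSsl: true); // a named argument skips over port

              // Generic covariance: IEnumerable<out T> lets a sequence of strings
              // be used where a sequence of objects is expected.
              IEnumerable<string> names = new List<string> { "a", "b" };
              IEnumerable<object> objects = names;
              foreach (object o in objects) Console.WriteLine(o);
          }
      }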

    Read the article

  • R: How to remove outliers from a smoother in ggplot2?

    - by John
    I have the following data set that I am trying to plot with ggplot2, it is a time series of three experiments A1, B1 and C1 and each experiment had three replicates. I am trying to add a stat which detects and removes outliers before returning a smoother (mean and variance?). I have written my own outlier function (not shown) but I expect there is already a function to do this, I just have not found it. I've looked at stat_sum_df("median_hilow", geom = "smooth") from some examples in the ggplot2 book, but I didn't understand the help doc from Hmisc to see if it removes outliers or not. Is there a function to remove outliers like this in ggplot, or where would I amend my code below to add my own function? library (ggplot2) data = data.frame (day = c(1,3,5,7,1,3,5,7,1,3,5,7,1,3,5,7,1,3,5,7,1,3,5,7,1,3,5,7,1,3,5,7,1,3,5,7), od = c( 0.1,1.0,0.5,0.7 ,0.13,0.33,0.54,0.76 ,0.1,0.35,0.54,0.73 ,1.3,1.5,1.75,1.7 ,1.3,1.3,1.0,1.6 ,1.7,1.6,1.75,1.7 ,2.1,2.3,2.5,2.7 ,2.5,2.6,2.6,2.8 ,2.3,2.5,2.8,3.8), series_id = c( "A1", "A1", "A1","A1", "A1", "A1", "A1","A1", "A1", "A1", "A1","A1", "B1", "B1","B1", "B1", "B1", "B1","B1", "B1", "B1", "B1","B1", "B1", "C1","C1", "C1", "C1", "C1","C1", "C1", "C1", "C1","C1", "C1", "C1"), replicate = c( "A1.1","A1.1","A1.1","A1.1", "A1.2","A1.2","A1.2","A1.2", "A1.3","A1.3","A1.3","A1.3", "B1.1","B1.1","B1.1","B1.1", "B1.2","B1.2","B1.2","B1.2", "B1.3","B1.3","B1.3","B1.3", "C1.1","C1.1","C1.1","C1.1", "C1.2","C1.2","C1.2","C1.2", "C1.3","C1.3","C1.3","C1.3")) > data day od series_id replicate 1 1 0.10 A1 A1.1 2 3 1.00 A1 A1.1 3 5 0.50 A1 A1.1 4 7 0.70 A1 A1.1 5 1 0.13 A1 A1.2 6 3 0.33 A1 A1.2 7 5 0.54 A1 A1.2 8 7 0.76 A1 A1.2 9 1 0.10 A1 A1.3 10 3 0.35 A1 A1.3 11 5 0.54 A1 A1.3 12 7 0.73 A1 A1.3 13 1 1.30 B1 B1.1 This is what I have so far and is working nicely, but outliers are not removed: r <- ggplot(data = data, aes(x = day, y = od)) r + geom_point(aes(group = replicate, color = series_id)) + # add points geom_line(aes(group = replicate, color = series_id)) + # add lines geom_smooth(aes(group = series_id)) # add smoother, average of each replicate
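
    There may well be a ready-made stat for this, but one way to approximate "remove outliers, then smooth" is to filter the data per series before handing it to geom_smooth. The sketch below assumes a simple 1.5 * IQR rule per series_id; the remove_outliers helper is a hypothetical stand-in for the poster's own function:

      # Sketch: drop points outside 1.5 * IQR of their series, then smooth only the kept points.
      library(ggplot2)

      remove_outliers <- function(df) {
        q <- quantile(df$od, c(0.25, 0.75))
        iqr <- q[2] - q[1]
        df[df$od >= q[1] - 1.5 * iqr & df$od <= q[2] + 1.5 * iqr, ]
      }

      cleaned <- do.call(rbind, lapply(split(data, data$series_id), remove_outliers))

      ggplot(data, aes(x = day, y = od)) +
        geom_point(aes(group = replicate, color = series_id)) +   # raw points stay visible
        geom_line(aes(group = replicate, color = series_id)) +
        geom_smooth(data = cleaned, aes(group = series_id))       # smoother fits the filtered data only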

    Read the article

  • Generating lognormally distributed random number from mean, coeff of variation

    - by Richie Cotton
    Most functions for generating lognormally distributed random numbers take the mean and standard deviation of the associated normal distribution as parameters. My problem is that I only know the mean and the coefficient of variation of the lognormal distribution. It is reasonably straight forward to derive the parameters I need for the standard functions from what I have: If mu and sigma are the mean and standard deviation of the associated normal distribution, we know that coeffOfVar^2 = variance / mean^2 = (exp(sigma^2) - 1) * exp(2*mu + sigma^2) / exp(mu + sigma^2/2)^2 = exp(sigma^2) - 1 We can rearrange this to sigma = sqrt(log(coeffOfVar^2 + 1)) We also know that mean = exp(mu + sigma^2/2) This rearranges to mu = log(mean) - sigma^2/2 Here's my R implementation rlnorm0 <- function(mean, coeffOfVar, n = 1e6) { sigma <- sqrt(log(coeffOfVar^2 + 1)) mu <- log(mean) - sigma^2 / 2 rlnorm(n, mu, sigma) } It works okay for small coefficients of variation r1 <- rlnorm0(2, 0.5) mean(r1) # 2.000095 sd(r1) / mean(r1) # 0.4998437 But not for larger values r2 <- rlnorm0(2, 50) mean(r2) # 2.048509 sd(r2) / mean(r2) # 68.55871 To check that it wasn't an R-specific issue, I reimplemented it in MATLAB. (Uses stats toolbox.) function y = lognrnd0(mean, coeffOfVar, sizeOut) if nargin < 3 || isempty(sizeOut) sizeOut = [1e6 1]; end sigma = sqrt(log(coeffOfVar.^2 + 1)); mu = log(mean) - sigma.^2 ./ 2; y = lognrnd(mu, sigma, sizeOut); end r1 = lognrnd0(2, 0.5); mean(r1) % 2.0013 std(r1) ./ mean(r1) % 0.5008 r2 = lognrnd0(2, 50); mean(r2) % 1.9611 std(r2) ./ mean(r2) % 22.61 Same problem. The question is, why is this happening? Is it just that the standard deviation is not robust when the variation is that wide? Or have a screwed up somewhere?
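
    A quick check that separates "wrong derivation" from "non-robust sample moments" is to recover the theoretical mean and coefficient of variation from mu and sigma and compare them with the sample versions; a sketch in R, reusing the derivation above:

      # Sketch: theoretical vs sample moments for a large coefficient of variation.
      m <- 2
      coeffOfVar <- 50
      sigma <- sqrt(log(coeffOfVar^2 + 1))   # about 2.8, i.e. an extremely skewed lognormal
      mu <- log(m) - sigma^2 / 2

      # Recovered from mu and sigma, these match the targets exactly,
      # which suggests the parameter derivation is fine.
      print(c(theoretical_mean = exp(mu + sigma^2 / 2),
              theoretical_cv   = sqrt(exp(sigma^2) - 1)))

      # Sample moments, by contrast, converge very slowly for such a heavy right tail:
      # the sample CV is dominated by whether a few extreme draws happen to show up.
      x <- rlnorm(1e6, mu, sigma)
      print(c(sample_mean = mean(x), sample_cv = sd(x) / mean(x)))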

    Read the article

  • Accumulator doesn't compile

    - by Abruzzo Forte e Gentile
    Hi all, I am using Boost accumulators. These two lines used to work fine with the current version of Boost on Linux: accumulator_set< double, stats< tag::covariance<double, tag::covariate1> > > acc_cov; accumulator_set< double, stats< tag::variance > > acc_var; When I moved to a Sun machine where Boost v1.40 is installed, I get this build error: "/opt/boost/boost/accumulators/framework/depends_on.hpp", line 276: Error:<no tag> cannot be initialized in a constructor. "/opt/boost/boost/fusion/container/list/cons.hpp", line 85: Where: While instantiating "boost::accumulators::detail::accumulator_wrapper<int, int>::accumulator_wrapper(const boost::accumulators::detail::accumulator_wrapper<int, int>&)". "/opt/boost/boost/fusion/container/list/cons.hpp", line 85: Where: Instantiated from non-template code. 1 Error(s) Do you know how I can fix these errors, and why I have this issue? Thanks, AFG
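
    For reference, a minimal self-contained example of those two accumulators, with the statistics headers pulled in explicitly, looks roughly like this; it is only a sketch against recent Boost releases and may not by itself resolve the Sun compiler error above:

      #include <iostream>
      #include <boost/accumulators/accumulators.hpp>
      #include <boost/accumulators/statistics/stats.hpp>
      #include <boost/accumulators/statistics/mean.hpp>
      #include <boost/accumulators/statistics/variance.hpp>
      #include <boost/accumulators/statistics/covariance.hpp>
      #include <boost/accumulators/statistics/variates/covariate.hpp>

      using namespace boost::accumulators;

      int main()
      {
          accumulator_set< double, stats< tag::covariance<double, tag::covariate1> > > acc_cov;
          accumulator_set< double, stats< tag::variance > > acc_var;

          for (int i = 0; i < 5; ++i)
          {
              acc_cov(i * 1.0, covariate1 = i * 2.0);  // second series passed as a covariate
              acc_var(i * 1.0);
          }

          std::cout << covariance(acc_cov) << " " << variance(acc_var) << std::endl;
          return 0;
      }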

    Read the article

  • How can I superimpose modified loess lines on a ggplot2 qplot?

    - by briandk
    Background Right now, I'm creating a multiple-predictor linear model and generating diagnostic plots to assess regression assumptions. (It's for a multiple regression analysis stats class that I'm loving at the moment :-) My textbook (Cohen, Cohen, West, and Aiken 2003) recommends plotting each predictor against the residuals to make sure that: The residuals don't systematically covary with the predictor The residuals are homoscedastic with respect to each predictor in the model On point (2), my textbook has this to say: Some statistical packages allow the analyst to plot lowess fit lines at the mean of the residuals (0-line), 1 standard deviation above the mean, and 1 standard deviation below the mean of the residuals....In the present case {their example}, the two lines {mean + 1sd and mean - 1sd} remain roughly parallel to the lowess {0} line, consistent with the interpretation that the variance of the residuals does not change as a function of X. (p. 131) How can I modify loess lines? I know how to generate a scatterplot with a "0-line,": # First, I'll make a simple linear model and get its diagnostic stats library(ggplot2) data(cars) mod <- fortify(lm(speed ~ dist, data = cars)) attach(mod) str(mod) # Now I want to make sure the residuals are homoscedastic qplot (x = dist, y = .resid, data = mod) + geom_smooth(se = FALSE) # "se = FALSE" Removes the standard error bands But does anyone know how I can use ggplot2 and qplot to generate plots where the 0-line, "mean + 1sd" AND "mean - 1sd" lines would be superimposed? Is that a weird/complex question to be asking?
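
    One way to get the two extra lines, assuming a constant +/- 1 standard deviation offset of the residuals is an acceptable stand-in for lowess fits of the spread itself, is simply to add two more smoothers on shifted copies of the residuals:

      # Sketch: 0-line smoother plus smoothers shifted by +/- 1 SD of the residuals.
      library(ggplot2)
      data(cars)
      mod <- fortify(lm(speed ~ dist, data = cars))
      s <- sd(mod$.resid)

      qplot(x = dist, y = .resid, data = mod) +
        geom_smooth(se = FALSE) +                                         # loess through the residuals
        geom_smooth(aes(y = .resid + s), se = FALSE, linetype = "dashed") +
        geom_smooth(aes(y = .resid - s), se = FALSE, linetype = "dashed")

    If the bands need to follow the local spread rather than sit at a fixed offset, the same layering pattern works with a loess fit of the absolute residuals added to and subtracted from the 0-line fit.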

    Read the article

  • When to choose which machine learning classifier?

    - by LM
    Suppose I'm working on some classification problem. (Fraud detection and comment spam are two problems I'm working on right now, but I'm curious about any classification task in general.) How do I know which classifier I should use? (Decision tree, SVM, Bayesian, logistic regression, etc.) In which cases is one of them the "natural" first choice, and what are the principles for choosing that one? Examples of the type of answers I'm looking for (from Manning et al.'s "Introduction to Information Retrieval book": http://nlp.stanford.edu/IR-book/html/htmledition/choosing-what-kind-of-classifier-to-use-1.html): a. If your data is labeled, but you only have a limited amount, you should use a classifier with high bias (for example, Naive Bayes). [I'm guessing this is because a higher-bias classifier will have lower variance, which is good because of the small amount of data.] b. If you have a ton of data, then the classifier doesn't really matter so much, so you should probably just choose a classifier with good scalability. What are other guidelines? Even answers like "if you'll have to explain your model to some upper management person, then maybe you should use a decision tree, since the decision rules are fairly transparent" are good. I care less about implementation/library issues, though. Also, for a somewhat separate question, besides standard Bayesian classifiers, are there 'standard state-of-the-art' methods for comment spam detection (as opposed to email spam)? [Not sure if stackoverflow is the best place to ask this question, since it's more machine learning than actual programming -- if not, any suggestions for where else?]

    Read the article

  • Robust DateTime parser library for .NET

    - by Frank Krueger
    Hello, I am writing an RSS and Mail reader app in C# (technically MonoTouch). I have run into the issue of parsing DateTimes. I see a lot of variance in how dates are presented in the wild and have begun writing a function like this: public static DateTime ParseTime(string timeStr) { var formats = new string[] { "ddd, d MMM yyyy H:mm:ss \"GMT+00:00\"", "d MMM yyyy H:mm:ss \"EST\"", "yyyy-MM-dd\"T\"HH:mm:ss\"Z\"", "ddd MMM d HH:mm:ss \"+0000\" yyyy", }; try { return DateTime.Parse(timeStr); } catch (Exception) { } foreach (var f in formats) { try { var t = DateTime.ParseExact(timeStr, f, CultureInfo.InvariantCulture); return t; } catch (Exception) { } } return DateTime.MinValue; } This, well, makes me sick. Three points. (1) It's silly of me to think that I can actually collect a format list that will cover everything out there. (2) It's wrong! Notice that I'm treating an EST date time as UTC (since .NET seems oblivious to time zones). (3) I don't like using exceptions for logic. I am looking for an existing library (source only please) that is known to handle a bunch of these formats. Also, I would like to keep using UTC DateTimes throughout my code so whatever library is suggested should be able to produce DateTimes. Is there anything out there like this?
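
    Short of a dedicated library, the exception-driven control flow can at least be removed with the TryParse/TryParseExact overloads that accept an array of formats; a rough sketch, with the caveat that named zones such as EST still need hand-written handling:

      using System;
      using System.Globalization;

      static class FeedDateParser
      {
          static readonly string[] Formats =
          {
              "ddd, d MMM yyyy H:mm:ss \"GMT+00:00\"",
              "d MMM yyyy H:mm:ss \"EST\"",
              "yyyy-MM-dd\"T\"HH:mm:ss\"Z\"",
              "ddd MMM d HH:mm:ss \"+0000\" yyyy",
          };

          public static DateTime ParseTime(string timeStr)
          {
              DateTime result;
              const DateTimeStyles utc =
                  DateTimeStyles.AdjustToUniversal | DateTimeStyles.AssumeUniversal;

              // General parse first, normalising to UTC rather than local time.
              if (DateTime.TryParse(timeStr, CultureInfo.InvariantCulture, utc, out result))
                  return result;

              // Then the known odd formats, again treated as UTC.
              if (DateTime.TryParseExact(timeStr, Formats, CultureInfo.InvariantCulture, utc, out result))
                  return result;

              return DateTime.MinValue;
          }
      }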

    Read the article

  • Weird stuttering issues not related to GC.

    - by Smills
    I am getting some odd stuttering issues with my game even though my FPS never seems to drop below 30. About every 5 seconds my game stutters. I was originally getting stuttering every 1-2 seconds due to my garbage collection issues, but I have sorted those and will often go 15-20 seconds without a garbage collection. Despite this, my game still stutters periodically even when there is no GC listed in logcat anywhere near the stutter. Even when I take out most of my code and simply make my "physics" code the below code I get this weird slowdown issue. I feel that I am missing something or overlooking something. Shouldn't that "elapsed" code that I put in stop any variance in the speed of the main character related to changes in FPS? Any input/theories would be awesome. Physics: private void updatePhysics() { //get current time long now = System.currentTimeMillis(); //added this to see if I could speed it up, it made no difference Thread myThread = Thread.currentThread(); myThread.setPriority(Thread.MAX_PRIORITY); //work out elapsed time since last frame in seconds double elapsed = (now - mLastTime2) / 1000.0; mLastTime2 = now; //measures FPS and displays in logcat once every 30 frames fps+=1/elapsed; fpscount+=1; if (fpscount==30) { fps=fps/fpscount; Log.i("myActivity","FPS: "+fps+" Touch: "+touch); fpscount=0; } //this should make the main character (theoretically) move upwards at a steady pace mY-=100*elapsed; //increase amount I translate the draw to = main characters Y //location if the main character goes upwards if (mY<=viewY) { viewY=mY; } }

    Read the article

  • How do I get 2-way data binding to work for nested asp.net Repeater controls

    - by jimblanchard
    I have the following (trimmed) markup: <asp:Repeater ID="CostCategoryRepeater" runat="server"> <ItemTemplate> <div class="costCategory"> <asp:Repeater ID="CostRepeater" runat="server" DataSource='<%# Eval("Costs")%>'> <ItemTemplate> <tr class="oddCostRows"> <td class="costItemTextRight"><span><%# Eval("Variance", "{0:c0}")%></span></td> <td class="costItemTextRight"><input id="SupplementAmount" class="costEntryRight" type="text" value='<%# Bind("SupplementAmount")%>' runat="server" /></td> </tr> </ItemTemplate> </asp:Repeater> </div> </ItemTemplate> </asp:Repeater> The outer repeater's DataSource is set in the code-beside. I've snipped them, but there are Eval statements that wire up to the properties in the outer Repeater. Anyway, one of the fields in the inner Repeater needs to be a Bind instead of an Eval, as I want to get the values that the user types in. The SupplementAmount input element correctly receives it's value when the page loads, but on the other side, when I inspect the contents of the Costs List when the form posts back, the changes the user made aren't present. Thanks.
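
    Bind inside a nested Repeater will not push the posted values back into the Costs list by itself, so a common workaround is to walk the repeater items on postback and pull the values out with FindControl. A hedged sketch of the code-behind side, with the control names taken from the markup above and everything else assumed:

      using System;
      using System.Web.UI;
      using System.Web.UI.HtmlControls;
      using System.Web.UI.WebControls;

      public partial class CostEntryPage : Page
      {
          // CostCategoryRepeater is the outer repeater declared in the markup;
          // the designer file is assumed to declare the field for it.
          protected void Save_Click(object sender, EventArgs e)
          {
              foreach (RepeaterItem categoryItem in CostCategoryRepeater.Items)
              {
                  var costRepeater = (Repeater)categoryItem.FindControl("CostRepeater");
                  foreach (RepeaterItem costItem in costRepeater.Items)
                  {
                      // <input ... runat="server"> surfaces as an HtmlInputText control.
                      var supplementBox = (HtmlInputText)costItem.FindControl("SupplementAmount");
                      decimal amount;
                      if (decimal.TryParse(supplementBox.Value, out amount))
                      {
                          // Write 'amount' back into the matching Cost object here.
                      }
                  }
              }
          }
      }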

    Read the article

  • R Random Data Sets within loops

    - by jugossery
    Here is what I want to do: I have a time series data frame with let us say 100 time-series of length 600 - each in one column of the data frame. I want to pick up 4 of the time-series randomly and then assign them random weights that sum up to one (ie 0.1, 0.5, 0.3, 0.1). Using those I want to compute the mean of the sum of the 4 weighted time series variables (e.g. convex combination). I want to do this let us say 100k times and store each result in the form ts1.name, ts2.name, ts3.name, ts4.name, weight1, weight2, weight3, weight4, mean so that I get a 9*100k df. I tried some things already but R is very bad with loops and I know vector oriented solutions are better because of R design. Thanks Here is what I did and I know it is horrible The df is in the form v1,v2,v2.....v100 1,5,6,.......9 2,4,6,.......10 3,5,8,.......6 2,2,8,.......2 etc e=NULL for (x in 1:100000) { s=sample(1:100,4)#pick 4 variables randomly a=sample(seq(0,1,0.01),1) b=sample(seq(0,1-a,0.01),1) c=sample(seq(0,(1-a-b),0.01),1) d=1-a-b-c e=c(a,b,c,d)#4 random weights average=mean(timeseries.df[,s]%*%t(e)) e=rbind(e,s,average)#in the end i get the 9*100k df } The procedure runs way to slow. EDIT: Thanks for the help i had,i am not used to think R and i am not very used to translate every problem into a matrix algebra equation which is what you need in R. Then the problem becomes a little bit complex if i want to calculate the standard deviation. i need the covariance matrix and i am not sure i can if/how i can pick random elements for each sample from the original timeseries.df covariance matrix then compute the sample variance (t(sampleweights)%*%sample_cov.mat%*%sampleweights) to get in the end the ts.weighted_standard_dev matrix Last question what is the best way to proceed if i want to bootstrap the original df x times and then apply the same computations to test the robustness of my datas thanks
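
    The main cost in the loop above is growing e with rbind on every pass; building the pieces as whole matrices removes most of the pain. A sketch of one way to do that, where timeseries.df stands for the 100-column data frame and the weights are drawn uniformly on the simplex via normalised exponentials (a deliberate change from the sequential a/b/c/d construction, which does not give uniform weights):

      # Sketch: 100k draws of 4 random series and 4 random weights, built as whole matrices.
      set.seed(1)
      n.draws  <- 100000
      n.series <- ncol(timeseries.df)
      ts.mat   <- as.matrix(timeseries.df)

      # 4 distinct series per draw: an n.draws x 4 matrix of column indices.
      picks <- t(replicate(n.draws, sample(n.series, 4)))

      # 4 weights per draw that sum to one.
      w <- matrix(rexp(n.draws * 4), ncol = 4)
      w <- w / rowSums(w)

      # Mean of each weighted combination: row i uses columns picks[i, ] with weights w[i, ].
      avg <- vapply(seq_len(n.draws),
                    function(i) mean(ts.mat[, picks[i, ]] %*% w[i, ]),
                    numeric(1))

      result <- data.frame(picks, w, avg)
      colnames(result) <- c(paste0("ts", 1:4), paste0("w", 1:4), "mean")

    The vapply still iterates, but nothing is re-allocated on each pass, and the series names can be recovered afterwards with matrix(colnames(timeseries.df)[picks], ncol = 4).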

    Read the article

  • Fill lower matrix with vector by row, not column

    - by mhermans
    I am trying to read in a variance-covariance matrix written out by LISREL in the following format in a plain text, whitespace separated file: 0.23675E+01 0.86752E+00 0.28675E+01 -0.36190E+00 -0.36190E+00 0.25381E+01 -0.32571E+00 -0.32571E+00 0.84425E+00 0.25598E+01 -0.37680E+00 -0.37680E+00 0.53136E+00 0.47822E+00 0.21120E+01 -0.37680E+00 -0.37680E+00 0.53136E+00 0.47822E+00 0.91200E+00 0.21120E+01 This is actually a lower diagonal matrix (including diagonal): 0.23675E+01 0.86752E+00 0.28675E+01 -0.36190E+00 -0.36190E+00 0.25381E+01 -0.32571E+00 -0.32571E+00 0.84425E+00 0.25598E+01 -0.37680E+00 -0.37680E+00 0.53136E+00 0.47822E+00 0.21120E+01 -0.37680E+00 -0.37680E+00 0.53136E+00 0.47822E+00 0.91200E+00 0.21120E+01 I can read in the values correctly with scan() or read.table(fill=T). I am however not able to correctly store the read-in vector in a matrix. The following code S <- diag(6) S[lower.tri(S,diag=T)] <- d fills the lower matrix by column, while it should fill it by row. Using matrix() does allow for the option byrow=TRUE, but this will fill in the whole matrix, not just the lower half (with diagonal). Is it possible to have both: only fill the lower matrix (with diagonal) and do it by row? (separate issue I'm having: LISREL uses 'D+01' while R only recognises 'E+01' for scientific notation. Can you change this in R to accept also 'D'?)
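
    One way to get a row-wise fill, assuming d holds the scanned values in reading order, is to fill the upper triangle column-wise and then transpose, which amounts to the same thing; LISREL's 'D' exponents can be handled with a gsub before conversion (the file name below is hypothetical):

      # Sketch: read the LISREL lower-triangle values and fill the matrix by row.
      txt <- scan("lisrel_cov.txt", what = character())
      d   <- as.numeric(gsub("D", "E", txt, fixed = TRUE))  # 0.23675D+01 -> 0.23675E+01

      S <- matrix(0, 6, 6)
      S[upper.tri(S, diag = TRUE)] <- d   # column-wise fill of the upper triangle...
      S <- t(S)                           # ...transposed, gives a row-wise fill of the lower triangle

      # If the full symmetric covariance matrix is wanted:
      S[upper.tri(S)] <- t(S)[upper.tri(S)]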

    Read the article

  • Troubleshooting Network Speeds -- The Age Old Inquiry

    - by John K
    I'm looking for help with what I'm sure is an age old question. I've found myself in a situation of yearning to understand network throughput more clearly, but I can't seem to find information that makes it "click" We have a few servers distributed geographically, running various versions of Windows. Assuming we always use one host (a desktop) as the source, when copying data from that host to other servers across the country, we see a high variance in speed. In some cases, we can copy data at 12MB/s consistently, in others, we're seeing 0.8 MB/s. It should be noted, after testing 8 destinations, we always seem to be at either 0.6-0.8MB/s or 11-12 MB/s. In the building we're primarily concerned with, we have an OC-3 connection to our ISP. I know there are a lot of variables at play, but I guess I was hoping the experts here could help answer a few basic questions to help bolster my understanding. 1.) For older machines, running Windows XP, server 2003, etc, with a 100Mbps Ethernet card and 72 ms typical latency, does 0.8 MB/s sound at all reasonable? Or do you think that slow enough to indicate a problem? 2.) The classic "mathematical fastest speed" of "throughput = TCP window / latency," is, in our case, calculated to 0.8 MB/s (64Kb / 72 ms). My understanding is that is an upper bounds; that you would never expect to reach (due to overhead) let alone surpass that speed. In some cases though, we're seeing speeds of 12.3 MB/s. There are Steelhead accelerators scattered around the network, could those account for such a higher transfer rate? 3.) It's been suggested that the use SMB vs. SMB2 could explain the differences in speed. Indeed, as expected, packet captures show both being used depending on the OS versions in play, as we would expect. I understand what determines SMB2 being used or not, but I'm curious to know what kind of performance gain you can expect with SMB2. My problem simply seems to be a lack of experience, and more importantly, perspective, in terms of what are and are not reasonable network speeds. Could anyone help impart come context/perspective?
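
    For what it's worth, the window/latency arithmetic above works out roughly as follows, taking the classic 64 KB window and the observed 72 ms round trip as given: 65,536 bytes / 0.072 s is about 0.9 MB/s, which lines up with the 0.6-0.8 MB/s transfers. Turned around, a sustained 12 MB/s over the same 72 ms path implies roughly 12 MB/s x 0.072 s, or about 0.86 MB, in flight at once, which is only plausible with TCP window scaling in effect or with a WAN accelerator such as the Steelheads terminating the connection locally.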

    Read the article

  • What’s Your Tax Strategy? Automate the Tax Transfer Pricing Process!

    - by tobyehatch
    Does your business operate in multiple countries? Well, whether you like it or not, many local and international tax authorities inspect your tax strategy.  Legal, effective tax planning is perceived as a “moral” issue. CEOs are being asked to testify on their process of tax transfer pricing between multinational legal entities.  Marc Seewald, Senior Director of Product Management for EPM Applications specializing in all tax subjects and Product Manager for Oracle Hyperion Tax Provisioning, and Bart Stoehr, Senior Director of Product Strategy for Oracle Hyperion Profitability and Cost Management joined me for a discussion/podcast on this interesting subject.  So what exactly is “tax transfer pricing”? Marc defined it this way. “Tax transfer pricing is a profit allocation methodology required to be used by multinational corporations. Specifically, the ultimate goal of the transfer pricing is to ensure that the global multinational pays their fair share of income tax in each of their local markets. Specifically, it prevents companies from unfairly moving profit from ‘high tax’ countries to ‘low tax’ countries.” According to Marc, in today’s global economy, profitability can be significantly impacted by goods and services exchanged between the related divisions within a single multinational company.  To ensure that these cost allocations are done fairly, there are rules that govern the process. These rules ensure that intercompany allocations fairly represent the actual nature of the businesses activity- as if two divisions were unrelated - and provide a clear audit trail of how the costs have been allocated to prove that allocations fall within reasonable ranges.  What are the repercussions of improper tax transfer pricing? How important is it? Tax transfer pricing allocations can materially impact the amount of overall corporate income taxes paid by a company worldwide, in some cases by hundreds of millions of dollars!  Since so much tax revenue is at stake, revenue agencies like the IRS, and international regulatory bodies like the Organization for Economic Cooperation and Development (OECD) are pushing to reform and clarify reporting for tax transfer pricing. Most recently the OECD announced an “Action Plan for Base Erosion and Profit Shifting”. As Marc explained, the times are changing and companies need to be responsive to this issue. “It feels like every other week there is another company being accused of avoiding taxes,” said Marc. Most recently, Caterpillar was accused of avoiding billions of dollars in taxes. In the last couple of years, Apple, GE, Ikea, and Starbucks, have all been accused of tax avoidance. It’s imperative that companies like these have a clear and auditable tax transfer process that enables them to justify tax transfer pricing allocations and avoid steep penalties and bad publicity. Transparency and efficiency are what is needed when it comes to the tax transfer pricing process. Bart explained that tax transfer pricing is driving a deeper inspection of profit recognition specifically focused on the tax element of profit.  However, allocations needed to support tax profitability are nearly identical in process to allocations taking place in other parts of the finance organization. For example, the methods and processes necessary to arrive at tax profitability by legal entity are no different than those used to arrive at fully loaded profitability for a product line. 
In fact, there is a great opportunity for alignment across these two different functions.So it seems that tax transfer pricing should be reflected in profitability in general. Bart agreed and told us more about some of the critical sub-processes of an overall tax transfer pricing process within the Oracle solution for tax transfer pricing.  “First, there is a ton of data preparation, enrichment and pre-allocation data analysis that is managed in the Oracle Hyperion solution. This serves as the “data staging” to the next, critical sub-processes.  From here, we leverage the Oracle EPM platform’s ability to re-use dimensions and legal entity driver data and financial data with Oracle Hyperion Profitability and Cost Management (HPCM).  Within HPCM, we manage the driver data, define the legal entity to legal entity allocation rules (like cost plus), and have the option to test out multiple, simultaneous tax transfer pricing what-if scenarios.  Once processed, a tax expert can evaluate the effectiveness of any one scenario result versus another via a variance analysis configured with HPCM’s pre-packaged reporting capability known as Oracle Hyperion SmartView for Office.”   Further, Bart explained that the ability to visibly demonstrate how a cost or revenue has been allocated is really helpful and auditable.  “HPCM’s Traceability Maps are that visual representation of all allocation flows that have been executed and is the tax transfer analyst’s best friend in maintaining clear documentation for tax transfer pricing audits. Simply click and drill as you inspect the chain of allocation definitions and results. Once final, the post-allocated tax data can be compared to the GL to create invoices and journal entries for posting to your GL system of choice.  Of course, there is a framework for overall governance of the journal entries, allocation percentages, and reporting to include necessary approvals.” Lastly, Marc explained that the key value in using the Oracle Hyperion solution for tax transfer pricing is that it keeps everything in alignment in one single place. Specifically, Oracle Hyperion effectively becomes the single book of record for the GAAP, management, and the tax set of books. There are many benefits to having one source of the truth. These include EFFICIENCY, CONTROLS and TRANSPARENCY.So, what’s your tax strategy? Why not automate the tax transfer pricing process!To listen to the entire podcast, click here.To learn more about Oracle Hyperion Profitability and Cost Management (HPCM), click here.

    Read the article

  • NET Math Libraries

    - by JoshReuben
    NET Mathematical Libraries   .NET Builder for Matlab The MathWorks Inc. - http://www.mathworks.com/products/netbuilder/ MATLAB Builder NE generates MATLAB based .NET and COM components royalty-free deployment creates the components by encrypting MATLAB functions and generating either a .NET or COM wrapper around them. .NET/Link for Mathematica www.wolfram.com a product that 2-way integrates Mathematica and Microsoft's .NET platform call .NET from Mathematica - use arbitrary .NET types directly from the Mathematica language. use and control the Mathematica kernel from a .NET program. turns Mathematica into a scripting shell to leverage the computational services of Mathematica. write custom front ends for Mathematica or use Mathematica as a computational engine for another program comes with full source code. Leverages MathLink - a Wolfram Research's protocol for sending data and commands back and forth between Mathematica and other programs. .NET/Link abstracts the low-level details of the MathLink C API. Extreme Optimization http://www.extremeoptimization.com/ a collection of general-purpose mathematical and statistical classes built for the.NET framework. It combines a math library, a vector and matrix library, and a statistics library in one package. download the trial of version 4.0 to try it out. Multi-core ready - Full support for Task Parallel Library features including cancellation. Broad base of algorithms covering a wide range of numerical techniques, including: linear algebra (BLAS and LAPACK routines), numerical analysis (integration and differentiation), equation solvers. Mathematics leverages parallelism using .NET 4.0's Task Parallel Library. Basic math: Complex numbers, 'special functions' like Gamma and Bessel functions, numerical differentiation. Solving equations: Solve equations in one variable, or solve systems of linear or nonlinear equations. Curve fitting: Linear and nonlinear curve fitting, cubic splines, polynomials, orthogonal polynomials. Optimization: find the minimum or maximum of a function in one or more variables, linear programming and mixed integer programming. Numerical integration: Compute integrals over finite or infinite intervals, over 2D and higher dimensional regions. Integrate systems of ordinary differential equations (ODE's). Fast Fourier Transforms: 1D and 2D FFT's using managed or fast native code (32 and 64 bit) BigInteger, BigRational, and BigFloat: Perform operations with arbitrary precision. Vector and Matrix Library Real and complex vectors and matrices. Single and double precision for elements. Structured matrix types: including triangular, symmetrical and band matrices. Sparse matrices. Matrix factorizations: LU decomposition, QR decomposition, singular value decomposition, Cholesky decomposition, eigenvalue decomposition. Portability and performance: Calculations can be done in 100% managed code, or in hand-optimized processor-specific native code (32 and 64 bit). Statistics Data manipulation: Sort and filter data, process missing values, remove outliers, etc. Supports .NET data binding. Statistical Models: Simple, multiple, nonlinear, logistic, Poisson regression. Generalized Linear Models. One and two-way ANOVA. Hypothesis Tests: 12 14 hypothesis tests, including the z-test, t-test, F-test, runs test, and more advanced tests, such as the Anderson-Darling test for normality, one and two-sample Kolmogorov-Smirnov test, and Levene's test for homogeneity of variances. 
Multivariate Statistics: K-means cluster analysis, hierarchical cluster analysis, principal component analysis (PCA), multivariate probability distributions. Statistical Distributions: 25 29 continuous and discrete statistical distributions, including uniform, Poisson, normal, lognormal, Weibull and Gumbel (extreme value) distributions. Random numbers: Random variates from any distribution, 4 high-quality random number generators, low discrepancy sequences, shufflers. New in version 4.0 (November, 2010) Support for .NET Framework Version 4.0 and Visual Studio 2010 TPL Parallellized – multicore ready sparse linear program solver - can solve problems with more than 1 million variables. Mixed integer linear programming using a branch and bound algorithm. special functions: hypergeometric, Riemann zeta, elliptic integrals, Frensel functions, Dawson's integral. Full set of window functions for FFT's. Product  Price Update subscription Single Developer License $999  $399  Team License (3 developers) $1999  $799  Department License (8 developers) $3999  $1599  Site License (Unlimited developers in one physical location) $7999  $3199    NMath http://www.centerspace.net .NET math and statistics libraries matrix and vector classes random number generators Fast Fourier Transforms (FFTs) numerical integration linear programming linear regression curve and surface fitting optimization hypothesis tests analysis of variance (ANOVA) probability distributions principal component analysis cluster analysis built on the Intel Math Kernel Library (MKL), which contains highly-optimized, extensively-threaded versions of BLAS (Basic Linear Algebra Subroutines) and LAPACK (Linear Algebra PACKage). Product  Price Update subscription Single Developer License $1295 $388 Team License (5 developers) $5180 $1554   DotNumerics http://www.dotnumerics.com/NumericalLibraries/Default.aspx free DotNumerics is a website dedicated to numerical computing for .NET that includes a C# Numerical Library for .NET containing algorithms for Linear Algebra, Differential Equations and Optimization problems. The Linear Algebra library includes CSLapack, CSBlas and CSEispack, ports from Fortran to C# of LAPACK, BLAS and EISPACK, respectively. Linear Algebra (CSLapack, CSBlas and CSEispack). Systems of linear equations, eigenvalue problems, least-squares solutions of linear systems and singular value problems. Differential Equations. Initial-value problem for nonstiff and stiff ordinary differential equations ODEs (explicit Runge-Kutta, implicit Runge-Kutta, Gear's BDF and Adams-Moulton). Optimization. Unconstrained and bounded constrained optimization of multivariate functions (L-BFGS-B, Truncated Newton and Simplex methods).   Math.NET Numerics http://numerics.mathdotnet.com/ free an open source numerical library - includes special functions, linear algebra, probability models, random numbers, interpolation, integral transforms. A merger of dnAnalytics with Math.NET Iridium in addition to a purely managed implementation will also support native hardware optimization. constants & special functions complex type support real and complex, dense and sparse linear algebra (with LU, QR, eigenvalues, ... 
decompositions) non-uniform probability distributions, multivariate distributions, sample generation alternative uniform random number generators descriptive statistics, including order statistics various interpolation methods, including barycentric approaches and splines numerical function integration (quadrature) routines integral transforms, like fourier transform (FFT) with arbitrary lengths support, and hartley spectral-space aware sequence manipulation (signal processing) combinatorics, polynomials, quaternions, basic number theory. parallelized where appropriate, to leverage multi-core and multi-processor systems fully managed or (if available) using native libraries (Intel MKL, ACMS, CUDA, FFTW) provides a native facade for F# developers

    Read the article

  • CLR via C# 3rd Edition is out

    - by Abhijeet Patel
    Time for some book news update. CLR via C#, 3rd Edition seems to have been out for a little while now. The book was released in early Feb this year, and needless to say my copy is on it’s way. I can barely wait to dig in and chew on the goodies that one of the best technical authors and software professionals I respect has in store. The 2nd edition of the book was an absolute treat and this edition promises to be no less. Here is a brief description of what’s new and updated from the 2nd edition. Part I – CLR Basics Chapter 1-The CLR’s Execution Model Added about discussion about C#’s /optimize and /debug switches and how they relate to each other. Chapter 2-Building, Packaging, Deploying, and Administering Applications and Types Improved discussion about Win32 manifest information and version resource information. Chapter 3-Shared Assemblies and Strongly Named Assemblies Added discussion of TypeForwardedToAttribute and TypeForwardedFromAttribute. Part II – Designing Types Chapter 4-Type Fundamentals No new topics. Chapter 5-Primitive, Reference, and Value Types Enhanced discussion of checked and unchecked code and added discussion of new BigInteger type. Also added discussion of C# 4.0’s dynamic primitive type. Chapter 6-Type and Member Basics No new topics. Chapter 7-Constants and Fields No new topics. Chapter 8-Methods Added discussion of extension methods and partial methods. Chapter 9-Parameters Added discussion of optional/named parameters and implicitly-typed local variables. Chapter 10-Properties Added discussion of automatically-implemented properties, properties and the Visual Studio debugger, object and collection initializers, anonymous types, the System.Tuple type and the ExpandoObject type. Chapter 11-Events Added discussion of events and thread-safety as well as showing a cool extension method to simplify the raising of an event. Chapter 12-Generics Added discussion of delegate and interface generic type argument variance. Chapter 13-Interfaces No new topics. Part III – Essential Types Chapter 14-Chars, Strings, and Working with Text No new topics. Chapter 15-Enums Added coverage of new Enum and Type methods to access enumerated type instances. Chapter 16-Arrays Added new section on initializing array elements. Chapter 17-Delegates Added discussion of using generic delegates to avoid defining new delegate types. Also added discussion of lambda expressions. Chapter 18-Attributes No new topics. Chapter 19-Nullable Value Types Added discussion on performance. Part IV – CLR Facilities Chapter 20-Exception Handling and State Management This chapter has been completely rewritten. It is now about exception handling and state management. It includes discussions of code contracts and constrained execution regions (CERs). It also includes a new section on trade-offs between writing productive code and reliable code. Chapter 21-Automatic Memory Management Added discussion of C#’s fixed state and how it works to pin objects in the heap. Rewrote the code for weak delegates so you can use them with any class that exposes an event (the class doesn’t have to support weak delegates itself). Added discussion on the new ConditionalWeakTable class, GC Collection modes, Full GC notifications, garbage collection modes and latency modes. I also include a new sample showing how your application can receive notifications whenever Generation 0 or 2 collections occur. 
Chapter 22-CLR Hosting and AppDomains Added discussion of side-by-side support allowing multiple CLRs to be loaded in a single process. Added section on the performance of using MarshalByRefObject-derived types. Substantially rewrote the section on cross-AppDomain communication. Added section on AppDomain Monitoring and first chance exception notifications. Updated the section on the AppDomainManager class. Chapter 23-Assembly Loading and Reflection Added section on how to deploy a single file with dependent assemblies embedded inside it. Added section comparing reflection invoke vs bind/invoke vs bind/create delegate/invoke vs C#’s dynamic type. Chapter 24-Runtime Serialization This is a whole new chapter that was not in the 2nd Edition. Part V – Threading Chapter 25-Threading Basics Whole new chapter motivating why Windows supports threads, thread overhead, CPU trends, NUMA Architectures, the relationship between CLR threads and Windows threads, the Thread class, reasons to use threads, thread scheduling and priorities, foreground thread vs background threads. Chapter 26-Performing Compute-Bound Asynchronous Operations Whole new chapter explaining the CLR’s thread pool. This chapter covers all the new .NET 4.0 constructs including cooperative cancelation, Tasks, the aralle class, parallel language integrated query, timers, how the thread pool manages its threads, cache lines and false sharing. Chapter 27-Performing I/O-Bound Asynchronous Operations Whole new chapter explaining how Windows performs synchronous and asynchronous I/O operations. Then, I go into the CLR’s Asynchronous Programming Model, my AsyncEnumerator class, the APM and exceptions, Applications and their threading models, implementing a service asynchronously, the APM and Compute-bound operations, APM considerations, I/O request priorities, converting the APM to a Task, the event-based Asynchronous Pattern, programming model soup. Chapter 28-Primitive Thread Synchronization Constructs Whole new chapter discusses class libraries and thread safety, primitive user-mode, kernel-mode constructs, and data alignment. Chapter 29-Hybrid Thread Synchronization Constructs Whole new chapter discussion various hybrid constructs such as ManualResetEventSlim, SemaphoreSlim, CountdownEvent, Barrier, ReaderWriterLock(Slim), OneManyResourceLock, Monitor, 3 ways to solve the double-check locking technique, .NET 4.0’s Lazy and LazyInitializer classes, the condition variable pattern, .NET 4.0’s concurrent collection classes, the ReaderWriterGate and SyncGate classes.

    Read the article

  • Subterranean IL: Generics and array covariance

    - by Simon Cooper
Arrays in .NET are curious beasts. They are the only built-in collection types in the CLR, and SZ-arrays (single-dimension, zero-indexed) have their own instructions and IL syntax. One of their stranger properties is that they have had a kind of built-in covariance since long before generic variance was added in .NET 4. However, this causes a subtle but important problem with generics. First of all, we need a brief recap of array covariance. SZ-array covariance To demonstrate, I'll tweak the classes I introduced in my previous posts: public class IncrementableClass { public int Value; public virtual void Increment(int incrementBy) { Value += incrementBy; } } public class IncrementableClassx2 : IncrementableClass { public override void Increment(int incrementBy) { base.Increment(incrementBy); base.Increment(incrementBy); } } In the CLR, SZ-arrays of reference types are implicitly convertible to arrays of the element's supertypes, all the way up to object (note that this does not apply to value types). That is, an instance of IncrementableClassx2[] can be used wherever an IncrementableClass[] or object[] is required. Because an SZ-array can be used in this fashion, a run-time type check is performed whenever you store an object into the array, to make sure you're not trying to insert an instance of IncrementableClass into an IncrementableClassx2[]. This check means that the following code will compile fine but will fail at run-time: IncrementableClass[] array = new IncrementableClassx2[1]; array[0] = new IncrementableClass(); // throws ArrayTypeMismatchException These checks are enforced by the stelem* IL instructions, ensuring you can't insert an IncrementableClass into an IncrementableClassx2[]. For the rest of this post, however, I'm going to concentrate on the ldelema instruction. ldelema This instruction pops the array index (int32) and array reference (O) off the stack, and pushes a pointer (&) to the corresponding array element. However, unlike the ldelem instruction, ldelema's type argument must match the run-time element type of the array exactly. This is because, once you've got a managed pointer, you can use that pointer to both load and store values in that array element using the ldind* and stind* (load/store indirect) instructions. As the same pointer can be used for both input and output to the array, the type argument to ldelema must be invariant. At the time, this was a perfectly reasonable restriction, and it maintained array type safety within managed code. However, along came generics, and with them the constrained. callvirt instruction. So, what happens when we combine array covariance and constrained callvirt? .method public static void CallIncrementArrayValue() { // IncrementableClassx2[] arr = new IncrementableClassx2[1] ldc.i4.1 newarr IncrementableClassx2 // arr[0] = new IncrementableClassx2(); dup ldc.i4.0 newobj instance void IncrementableClassx2::.ctor() stelem.ref // IncrementArrayValue<IncrementableClass>(arr, 0) // here, we're treating an IncrementableClassx2[] as IncrementableClass[] dup ldc.i4.0 call void IncrementArrayValue<class IncrementableClass>(!!0[],int32) // ... ret } .method public static void IncrementArrayValue<(IncrementableClass) T>( !!T[] arr, int32 index) { // arr[index].Increment(1) ldarg.0 ldarg.1 ldelema !!T ldc.i4.1 constrained. !!T callvirt instance void IIncrementable::Increment(int32) ret } And the result: Unhandled Exception: System.ArrayTypeMismatchException: Attempted to access an element as a type incompatible with the array. at IncrementArrayValue[T](T[] arr, Int32 index) at CallIncrementArrayValue() Hmm. We're instantiating the generic method as IncrementArrayValue<IncrementableClass>, but passing in an IncrementableClassx2[], so the ldelema instruction fails: it expects an IncrementableClass[]. On features and feature conflicts What we've got here is a conflict between existing behaviour (ldelema ensuring type safety on covariant arrays) and new behaviour (managed pointers to object references used for every constrained callvirt on generic type instances). And, although this is an edge case, there is no general workaround. The generic method could be hidden behind several layers of assemblies, wrappers and interfaces that make it a requirement to use array covariance when calling the generic method. Furthermore, this only fails at run-time, whereas compile-time safety is what generics were designed for! The solution is the readonly. prefix instruction. This modifies the ldelema instruction to ignore the exact type check for arrays of reference types, and so lets us take the address of an array element using a type that is covariant with the actual run-time element type of the array: .method public static void IncrementArrayValue<(IncrementableClass) T>( !!T[] arr, int32 index) { // arr[index].Increment(1) ldarg.0 ldarg.1 readonly. ldelema !!T ldc.i4.1 constrained. !!T callvirt instance void IIncrementable::Increment(int32) ret } But what about type safety? In return for ignoring the type check, the resulting controlled-mutability pointer can only be used in the following situations: as the object parameter to ldfld, ldflda, stfld, call and constrained callvirt instructions; as the pointer parameter to ldobj or ldind*; or as the source parameter to cpobj. In other words, the pointer cannot be used to overwrite the array element itself; stind* and similar store instructions that write through the pointer are banned. This ensures that the array element we're pointing to won't be changed to anything untoward, and so type safety within the array is maintained. This is a typical example of the maxim that whenever you add a feature to a program, you have to consider how that feature interacts with every single one of the existing features. Although it addresses an edge case, the readonly. prefix instruction ensures that generics and array covariance work together and that type safety is maintained. Tune in next time for a look at the .ctor generic type constraint, and what it means.
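To make the run-time covariance check concrete from C#, here is a minimal sketch that reuses the post's classes (the demo program itself is not from the post); it compiles cleanly but the store is rejected at run time:

using System;

public class IncrementableClass
{
    public int Value;
    public virtual void Increment(int incrementBy) { Value += incrementBy; }
}

public class IncrementableClassx2 : IncrementableClass
{
    public override void Increment(int incrementBy)
    {
        base.Increment(incrementBy);
        base.Increment(incrementBy);
    }
}

public static class CovarianceDemo
{
    public static void Main()
    {
        // SZ-array covariance: an IncrementableClassx2[] is usable as an IncrementableClass[].
        IncrementableClass[] array = new IncrementableClassx2[1];
        try
        {
            // The per-store run-time check (stelem) rejects the base-class instance.
            array[0] = new IncrementableClass();
        }
        catch (ArrayTypeMismatchException)
        {
            Console.WriteLine("Store of an incompatible element type was rejected at run time.");
        }
    }
}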

    Read the article

  • Easing the Journey to the Private Cloud with Oracle Consulting

    - by MichaelM-Oracle
By Sanjai Marimadaiah, Senior Director, Strategy & Business Development – Cloud Solutions, Oracle Consulting Services. Business leaders are now leading the charge on how their firms can profit from cloud solutions. Agility and innovation are becoming the primary drivers of the business case for the cloud, even more than the anticipated cost savings. Leaders need to find the right strategy and optimize the use of cloud-based applications across their enterprise-computing infrastructure. The Problem – Current State With prevalent IT practices, many organizations find that they run multiple IT solutions serving similar business needs. This has led to the proliferation of technology stacks, for example: Oracle 10g on Sun T4 running Solaris 9; Oracle 11g on Exadata running Linux; or Oracle 12c on commodity x86 servers. This variance has a huge impact on an organization’s agility and expenses, and it requires IT professionals with varied skills as well as ongoing training for different systems and tools. Fortunately, there is a practical business strategy to overcome this unneeded redundancy. Thus begins a journey to the right cloud computing solution. The Solution – Cloud Services from Oracle Consulting Services (OCS) Oracle Consulting Services (OCS) works closely with our clients as trusted advisors to proactively respond to business needs and IT concerns. OCS understands that making the transition to cloud solutions begins with a strategic conversation, based on its deep expertise in successfully completing private cloud service engagements with several companies. For a journey to the cloud, Oracle Consulting Services leads the client through four phases – standardization, consolidation, service delivery, and enterprise cloud – to achieve optimal returns. Phase 1 - Standardization Oracle Consulting Services (OCS) works with clients to evaluate their business requirements and propose a set of standard solution stacks for various IT solutions. This is an opportune time to evaluate cloud-ready solutions, such as Oracle 12c, Oracle Exadata, and the Oracle Database Appliance (ODA). The OCS consultants, together with the delivery team, then turn to upgrading and migrating existing solution stacks to standardized offerings. OCS has the expertise and tools to complete this stage in a fraction of the time required by other IT services companies. Clients quickly realize cost savings in tools, processes, and the type and number of resources required. This standardization also improves the agility of IT organizations and their ability to respond to the needs of various business units. Phase 2 - Consolidation During the consolidation phase, OCS consultants programmatically consolidate hundreds of databases onto a smaller number of servers to improve utilization, reduce floor space, and optimize maintenance costs. Consolidation helps clients realize huge savings in CapEx investments and shrink OpEx costs. The use of engineered systems, such as Oracle Exadata, greatly reduces the client’s risk of moving to a new solution stack. OCS recommends that clients pursue Phase 1 (Standardization) and Phase 2 (Consolidation) simultaneously to reduce the overall time, effort, and expense of the cloud journey. Phase 3 - Service Delivery Once a client is on the path of standardization and consolidation, OCS consultants create Service Catalogues based on SLA requirements and the criticality of the solutions. The number and types of Service Catalogues (Platinum, Gold, Silver, Bronze, etc.) vary from client to client.
OCS consultants also implement a variety of value-added cloud solutions, including monitoring, metering, and charge-back solutions. At this stage, clients are able to achieve a high level of understanding of their cloud journey. Their IT organizations are operating efficiently and are more agile in responding to the needs of business units. Phase 4 - Enterprise Cloud In the final phase of the cloud journey, the economics of the IT organization change. Business units can request services on demand; applications can be deployed and consumed on a pay-as-you-go model. OCS has the expertise and capabilities to establish the processes, programs, and solutions required for IT organizations to transform how they interact with business units. The Promise of Cloud Solutions Depending on the size and complexity of their business model, some clients are able to abbreviate some phases of their cloud journey. Cloud solutions are still evolving, and there is a rapid pace of innovation transforming how IT organizations operate. The lesson is clear. Cloud solutions hold a lot of promise for business agility. Business leaders can now leverage an additional set of capabilities and services. They can ramp up their pace of innovation. With cloud maturity, they can compete more effectively in their respective markets. But there are certainly challenges ahead. A skilled consulting services partner can play a pivotal role as a trusted advisor in the successful adoption of cloud solutions. Oracle Consulting Services has the expertise and a portfolio of services to help clients succeed on their journey to the cloud.

    Read the article

  • Please give the solution of the following programs in R Programming

    - by NEETHU
The table below gives data concerning the performance of 28 National Football League teams in 1976. It is suspected that the number of yards gained rushing by opponents (x8) has an effect on the number of games won by a team (y). (a) Fit a simple linear regression model relating games won (y) to yards gained rushing by opponents (x8). (b) Construct the analysis of variance table and test for significance of regression. (c) Find a 95% CI on the slope. (d) What percent of the total variability in y is explained by this model? (e) Find a 95% CI on the mean number of games won if opponents' yards rushing is limited to 2000 yards. Team y x8 1 10 2205 2 11 2096 3 11 1847 4 13 1803 5 10 1457 6 11 1848 7 10 1564 8 11 1821 9 4 2577 10 2 2476 11 7 1984 12 10 1917 13 9 1761 14 9 1709 15 6 1901 16 5 2288 17 5 2072 18 5 2861 19 6 2411 20 4 2289 21 3 2203 22 3 2592 23 4 2053 24 10 1979 25 6 2048 26 8 1786 27 2 2876 28 0 2560 Suppose we would like to use the model developed in Problem 1 to predict the number of games a team will win if it can limit opponents' yards rushing to 1800 yards. Find a point estimate of the number of games won when x8 = 1800. Find a 90% prediction interval on the number of games won. The purity of Oxygen produced by a fractionation process is thought to be related to the percentage of Hydrocarbon in the main condenser of the processing unit. 20 samples are shown below. Purity(%) Hydrocarbon(%) 86.91 1.02 89.85 1.11 90.28 1.43 86.34 1.11 92.58 1.01 87.33 0.95 86.29 1.11 91.86 0.87 95.61 1.43 89.86 1.02 96.73 1.46 99.42 1.55 98.66 1.55 96.07 1.55 93.65 1.4 87.31 1.15 95 1.01 96.85 0.99 85.2 0.95 90.56 0.98 (a) Fit a simple linear regression model to the data. (b) Test the hypothesis H0: β = 0. (c) Calculate R². (d) Find a 95% CI on the slope. (e) Find a 95% CI on the mean purity when the Hydrocarbon % is 1. Consider the Oxygen plant data in Problem 3 and assume that purity and Hydrocarbon percentage are jointly normally distributed random variables. (a) What is the correlation between Oxygen purity and Hydrocarbon %? (b) Test the hypothesis that ρ = 0. (c) Construct a 95% CI for ρ.
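For reference (added here, not part of the original question), the standard simple-linear-regression quantities these parts ask for are:

\[
\hat{\beta}_1 = \frac{\sum_{i=1}^{n}(x_i - \bar{x})(y_i - \bar{y})}{\sum_{i=1}^{n}(x_i - \bar{x})^2},
\qquad
\hat{\beta}_0 = \bar{y} - \hat{\beta}_1\,\bar{x},
\qquad
\hat{y}(x_0) = \hat{\beta}_0 + \hat{\beta}_1 x_0,
\]
\[
R^2 = 1 - \frac{\sum_i (y_i - \hat{y}_i)^2}{\sum_i (y_i - \bar{y})^2},
\qquad
\hat{\beta}_1 \pm t_{\alpha/2,\,n-2}\,\mathrm{se}(\hat{\beta}_1) \quad \text{(CI on the slope)},
\]

so the point estimate requested in the second part is \(\hat{y}(1800)\), and the mean-response CI in part (e) of the first problem is evaluated at \(x_0 = 2000\).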

    Read the article

  • ! Extra }, or forgotten \endgroup. latex

    - by gzou
Hey, I've run into a LaTeX formatting problem; can anyone offer some help? The .tex file: \begin{table}{} \renewcommand{\arraystretch}{1.1} \caption{Cambridge Flow feature definition and description} \label{cambridge-feature}} \centering \begin{tabular}{|c|c|} \hline\bfseries Abbreviation &\bfseries Description\\ \hline serv-port & Server port\\ \hline clnt-port & Client port\\ \hline push-pkts-serv & count of all packets with\\ & push bit set in TCP header (server to client)\\ \hline init-win-bytes-clnt & the total number of bytes \\ & sent in initial window (client to server)\\ \hline init-win-bytes-serv & the total number of bytes sent\\ & in initial window (server to client)\\ \hline avg-seg-size-clnt & average segment size: \\ & data bytes devided by number of packets\\ \hline IP-bytes-med-clnt & median of total bytes in IP packet\\ \hline act-data-pkt-serv & count of packet with at least one byte \\ & of TCP data playload (server to client)\\ \hline data-bytes-var-clnt & variance of total \\ & bytes in packets (client to server)\\ \hline min-seg-size-serv & minimum segment size \\ & observed (server to client)\\ \hline RTT-samples-serv & total number of RTT samples\\ & found (server to client),\\ & {\bf see also \cite{Moore05discriminators}}\\ \hline push-pkts-clnt & count of all packets with push bit set \\ & in TCP header (server to client)\\ \hline \end{tabular} \end{table} And the error message: ! Extra }, or forgotten \endgroup. \@endfloatbox ...pagefalse \outer@nobreak \egroup \color@endbox l.892 \end{table} I've deleted a group-closing symbol because it seems to be spurious, as in `$x}$'. But perhaps the } is legitimate and you forgot something else, as in `\hbox{$x}'. In such cases the way to recover is to insert both the forgotten and the deleted material, e.g., by typing `I$}'. There is no $ in my table, the { braces appear to match the } braces, and the error remains even after I comment out the citation. Can anyone help? I really appreciate all the comments! ! Extra }, or forgotten \endgroup.

    Read the article

< Previous Page | 1 2 3 4 5  | Next Page >