Search Results

Search found 5903 results on 237 pages for 'generic variance'.


  • How to make game objects contain other objects?

    - by user3161621
    I'm pretty undecided about what path to take here. The simplest option that comes to mind is just adding a List field to the GameObject class where I would store the things contained in it. But would that be elegant and well thought out? And would it be good performance-wise? (Of course, 99.9% of my objects wouldn't have anything stored in them...) What about deriving a ContainerObject from GameObject? That way only containers would have the additional fields, but then I would have to change many things: my collections of objects would have to become generic, and I'd have to cast to GameObject very often...
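
    A minimal sketch of the first option (in C# here, as an assumption - the post doesn't name a language, and all names are illustrative). The list is allocated lazily, so the 99.9% of objects that contain nothing pay only for a null field:

        using System.Collections.Generic;
        using System.Linq;

        public class GameObject
        {
            // Created on first use; objects that contain nothing carry only a null field.
            private List<GameObject> contents;

            public void Store(GameObject item)
            {
                if (contents == null) contents = new List<GameObject>();
                contents.Add(item);
            }

            public IEnumerable<GameObject> Contents
            {
                get { return contents ?? Enumerable.Empty<GameObject>(); }
            }
        }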

    Read the article

  • Can't install Ubuntu on MSI computer

    - by Uruz36
    I have been trying to install Ubuntu alongside Windows but failed. The issue is that I get a graphical glitch in the colours of the Ubuntu loading screen while installing from USB and no progress is made; the PC freezes. I've tried the nomodeset solution, but when I have to reboot I get the same problem. I've tried Ubuntu 13.10, 13.04, 12.04, etc., both generic and for AMD: nothing worked... Any idea how this could be resolved, or which distribution/ISO I should try for my system? Thank you. I have the following system: AMD FX-4170 Quad-Core 4.20 GHz, 8 GB RAM, MSI 760GM-P21 (FX) mainboard, 1 TB hard disk, Samsung LED TV as monitor (I connect with HDMI), OS Windows 7 64-bit Ultimate.
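
    One thing worth checking (an assumption on my part, since the post doesn't say how nomodeset was applied): adding the flag at the boot menu only lasts for that single boot. Once the system is installed, making it persistent means editing /etc/default/grub and regenerating the config, roughly:

        # /etc/default/grub
        GRUB_CMDLINE_LINUX_DEFAULT="quiet splash nomodeset"

        # then apply the change
        sudo update-grub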

    Read the article

  • Should my game handle collisions in the Player object?

    - by user1264811
    I'm making a 2D platform game. Right now I'm just working on making a very generic Player class. I'm wondering if it would be more efficient/better practice to have an ActionListener within the Player class to detect collisions with Enemy objects (which would also have an ActionListener), or to handle all the collisions in the main world. Furthermore, I'm thinking ahead about how I will handle collisions with the platforms themselves. I've looked into using 2D boolean arrays to mark which tiles players can go to and which they can't, but I don't understand how to use that structure and the Player class at the same time.
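
    For what it's worth, the common pattern is the second option: the world owns collision detection and entities stay passive. A hedged sketch (written in C# for illustration; the idea maps directly onto a Java game loop, and every name below is mine, not from the post):

        using System.Collections.Generic;
        using System.Drawing;

        public abstract class Entity
        {
            public Rectangle Bounds { get; protected set; }
            public virtual void Move() { }
            public abstract void OnCollision(Entity other);
        }

        public class World
        {
            private readonly List<Entity> entities = new List<Entity>();

            public void Add(Entity e) { entities.Add(e); }

            public void Update()
            {
                foreach (var e in entities) e.Move();

                // One centralized pairwise pass; Player and Enemy need no listener wiring.
                for (int i = 0; i < entities.Count; i++)
                    for (int j = i + 1; j < entities.Count; j++)
                        if (entities[i].Bounds.IntersectsWith(entities[j].Bounds))
                        {
                            entities[i].OnCollision(entities[j]);
                            entities[j].OnCollision(entities[i]);
                        }
            }
        }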

    Read the article

  • Ubuntu 11.10 boot: xhost: unable to open display

    - by paulus_almighty
    I've had this papercut for a while now, it's time it was fixed. When I boot up Ubuntu, choosing "Ubuntu...generic" from the grub screen, Ubuntu fails to load. It just sits at the driver/module loading screen. What seems to be the most significant line in this output is "xhost: unable to open display". If I choose "Ubuntu...(recovery mode)" from grub then it loads OK. I don't get why this is. Out of interest I tried enabling boot error logging with

        # /etc/default/bootlogd
        BOOTLOGD_ENABLE=Yes

    but I'm not seeing anything in that file.

    ETA: I've had this problem since fresh install of 11.10. Here's lshw:

        $ sudo lshw -C display
          *-display
               description: VGA compatible controller
               product: GF104 [GeForce GTX 460]
               vendor: nVidia Corporation
               physical id: 0
               bus info: pci@0000:03:00.0
               version: a1
               width: 64 bits
               clock: 33MHz
               capabilities: pm msi pciexpress vga_controller bus_master cap_list rom
               configuration: driver=nvidia latency=0
               resources: irq:16 memory:f6000000-f7ffffff memory:e0000000-e7ffffff memory:ec000000-efffffff ioport:bf00(size=128) memory:e8000000-e807ffff

    Read the article

  • Dell Inspiron N5110 Touchpad not detected

    - by Shahidh
        sanju@sanju-Inspiron-N5110:~$ xinput --list
        ⎡ Virtual core pointer                        id=2    [master pointer  (3)]
        ⎜   ↳ Virtual core XTEST pointer              id=4    [slave  pointer  (2)]
        ⎜   ↳ PS/2 Generic Mouse                      id=12   [slave  pointer  (2)]
        ⎣ Virtual core keyboard                       id=3    [master keyboard (2)]
            ↳ Virtual core XTEST keyboard             id=5    [slave  keyboard (3)]
            ↳ Power Button                            id=6    [slave  keyboard (3)]
            ↳ Video Bus                               id=7    [slave  keyboard (3)]
            ↳ Power Button                            id=8    [slave  keyboard (3)]
            ↳ Sleep Button                            id=9    [slave  keyboard (3)]
            ↳ Laptop_Integrated_Webcam_HD             id=10   [slave  keyboard (3)]
            ↳ AT Translated Set 2 keyboard            id=11   [slave  keyboard (3)]
            ↳ Dell WMI hotkeys                        id=13   [slave  keyboard (3)]

    As shown above, my touchpad is not detected by the system. The version is Ubuntu 12.04. Can anyone help me?

    Read the article

  • Do We Indeed Have a Future? George Takei on Star Wars.

    - by Bil Simser
    George Takei (rhymes with Okay), probably best known for playing Hikaru Sulu on the original Star Trek, has always had deep concerns for the present and the future. Whether on Earth or among the stars, he has the welfare of humanity very much at heart. I was digging through my old copies of Famous Monsters of Filmland, a great publication on monsters and films that I grew up with, and came across this. This was his reaction to STAR WARS from issue 139 of Famous Monsters of Filmland, written June 6, 1977. It is reprinted here without permission, but since the message is still valid to this day and has never been reprinted anywhere, I hope nobody will mind me sharing it.

    STAR WARS is the most preposterously diverting galactic escape and at the same time the most hideously credible portent of the future yet. While I thrilled to the exploits that reminded me of the heroics of Errol Flynn as Robin Hood, Burt Lancaster as the Crimson Pirate and Buster Crabbe as Flash Gordon, I was at the same time aghast at the phantasmagoric violence technology can place at our disposal. STAR WARS raised in my mind the question - do we indeed have a future?

    It seems to me what George Lucas has done is to masterfully guide us on a journey through space and time and bring us back face to face with today's reality. STAR WARS is more than science fiction, I think it is science fictitious reality.

    Just yesterday, June 7, 1977, I read that the United States will embark on the production of a neutron bomb - a bomb that will kill people on a gigantic scale but will not destroy buildings. A few days before that, I read that the Pentagon is fearful that the Soviets may have developed a warhead that could neutralize ours - that have a capacity for that irrational concept, overkill to the nth power. Already, it seems we have the technology to realize the awesome special effects simulations that we saw in the film.

    The political scene of STAR WARS is that of government by force and power, of revolutions based on some unfathomable grievance, survival through a combination of cunning and luck and success by the harnessing of technology - a picture not very much at variance from the political headlines that we read today.

    And most of all, look at the people; both the heroes in the film and the reaction of the audience. First, the heroes: Luke Skywalker is a pretty but easily led youth. Without any real philosophy to guide him, he easily falls under the influence of a mystical old man believed previously to be an eccentric hermit. Recognize a 1960's hippie or a 1970's moonie? Han Solo has a philosophy coupled with courage and skill. His philosophy is money. His proficiency comes for a price - the highest. Solo is a thoroughly avaricious mercenary. And the Princess, a decisive, strong, self-confident and chilly woman. The audience cheered when she wielded a gun. In all three, I missed qualities that could be called humane - love, kindness, yes, I missed sensuality. I also missed a sense of ideals and faith. In this regard the machines seemed more human. They demonstrated real affection for each other and an occasional poutiness. They exhibited a sense of fidelity and constancy. The machines were humanized and the humans conversely seemed mechanical.

    As a member of the audience, I was swept up by the sheer romantic escapism of it all. The derring-do, the rope-swing escape across the pit, the ray gun battles and especially the swashbuckling with the ray swords. Great fun!

    But I just hope that we weren't too intoxicated by the escapism to be able to focus on the recognizable. I hope the beauty of the effects didn't narcotize our sensitivity to violence. I hope the people see through the fantastically well done futuristic mirrors to the disquieting reflection of our own society. I hope they enjoy STAR WARS without being "purely entertained".

    Read the article

  • .Net Reflector 6.5 EAP now available

    - by CliveT
    With the release of CLR 4 being so close, we've been working hard on getting the new C# and VB language features implemented inside Reflector. The work isn't complete yet, but we have some of the features working. Most importantly, there are going to be changes to the Reflector object model, and we thought it would be useful for people to see the changes and have an opportunity to comment on them.

    Before going any further, we should tell you what the EAP contains that's different from the released version. A number of bugs have been fixed, mainly bugs that were raised via the forum. This is slightly offset by the fact that this EAP hasn't had a whole lot of testing and there may have been new bugs introduced during the development work we've been doing.

    The C# language writer has been changed to display in and out co- and contra-variance markers on interfaces and delegates, and to display default values for optional parameters in method definitions. We also concisely display values passed by reference into COM calls. However, we do not change call sites to display calls using named parameters; this looks like hard work to get right.

    The forthcoming version of C# introduces dynamic types and dynamic calls. The new version of Reflector should display a dynamic call rather than the generated C#:

        dynamic target = MyTestObject();
        target.Hello("Mum");

    We have a few bugs in this area where we are not casting to dynamic when necessary. These have been fixed on a branch and should make their way into the next EAP.

    To support the dynamic features, we've added the types IDynamicMethodReferenceExpression, IDynamicPropertyIndexerExpression, and IDynamicPropertyReferenceExpression to the object model. These types, based on the versions without "Dynamic" in the name, reflect the fact that we don't have full information about the method that is going to be called, but only have its name (as a string). These interfaces are going to change - in an internal version, they have been extended to include information about which parameter positions use runtime types and which use compile-time types. There's also the interface IDynamicVariableDeclaration, which can be used to determine if a particular variable is used at dynamic call sites as a target.

    A couple of these language changes have also been added to the Visual Basic language writer. The new features are exposed only when the optimization level is set to .NET 4. When the level is set this high, the other standard language writers will simply display a message to say that they do not handle such an optimization level. Reflector Pro now has 4.0 as an optional compilation target and we have done some work to get the pdb generation right for these new features.

    The EAP version of Reflector no longer installs the add-in on startup. The first time you run the EAP, it displays the integration options dialog. You can use the checkboxes to select the versions of Visual Studio into which you want to install the EAP version. Note that you can only have one version of Reflector Pro installed in Visual Studio; if you install into a Visual Studio that has another version installed, the previous version will be removed.

    Please try it out and send your feedback to the EAP forum.
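
    For readers who haven't met the C# 4 syntax yet, here is a hedged illustration (my example, not from the EAP notes) of the kind of declarations the updated language writer renders - variance markers on a generic interface and a default value on an optional parameter:

        // Contravariant (in) and covariant (out) type parameters, new in C# 4 / CLR 4:
        public interface IConverter<in TInput, out TOutput>
        {
            TOutput Convert(TInput value);
        }

        public class Greeter
        {
            // An optional parameter whose default value the new writer can display:
            public string Greet(string name, string greeting = "Hello")
            {
                return greeting + ", " + name;
            }
        }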

    Read the article

  • What is the right way to process inconsistent data files?

    - by Tahabi
    I'm working at a company that uses Excel files to store product data, specifically test results from products before they are shipped out. There are a few thousand spreadsheets with anywhere from 50-100 relevant data points per file. Over the years, the schema for the spreadsheets has changed significantly, but not unidirectionally - in the sense that changes often get reverted and then re-added in the space of a few dozen to a few hundred files.

    My project is to convert about 8000 of these spreadsheets into a database that can be queried. I'm using MongoDB to deal with the inconsistency in the data, and Python. My question is, what is the "right" or canonical way to deal with the huge variance in my source files?

    I've written a data structure which stores the data I want for the latest template, which will be the final template used going forward, but that only helps for a few hundred files historically. Brute-forcing a solution would mean writing similar data structures for each version/template - which means potentially writing hundreds of schemas with dozens of fields each. This seems very inefficient, especially when sometimes a change in the template is as little as moving a single line of data one row down or splitting what used to be one data field into two data fields.

    A slightly more elegant solution I have in mind would be writing schemas for all the variants I can find for pre-defined groups in the source files, and then writing a function to match a particular series of files with a series of variants that matches that set of files. This is because, more often than not, most of the file will remain consistent over a long period, only marred by one or two errant sections; but inside the period, which section is inconsistent is itself inconsistent.

    For example, say a file has four sections with three data fields, which is represented by four Python dictionaries with three keys each. For files 7000-7250, sections 1-3 will be consistent, but section 4 will be shifted one row down. For files 7251-7500, 1-3 are consistent, section 4 is one row down, but a section five appears. For files 7501-7635, sections 1 and 3 will be consistent, but section 2 will have five data fields instead of three, section five disappears, and section 4 is still shifted down one row. For files 7636-7800, section 1 is consistent, section 4 gets shifted back up, section 2 returns to three cells, but section 3 is removed entirely. Files 7800-8000 have everything in order.

    The proposed function would take the file number and match it to a dictionary representing the data mappings for different variants of each section. For example, a section_four_variants dictionary might have two members, one for the shifted-down version and one for the normal version; a section_two_variants might have three- and five-field members; etc. The script would then read the matchings, load the correct mapping, extract the data, and insert it into the database.

    Is this an accepted/right way to go about solving this problem? Should I structure things differently? I don't know what to search Google for either to see what other solutions might be, though I believe the problem lies in the domain of ETL processing. I also have no formal CS training aside from what I've taught myself over the years. If this is not the right forum for this question, please tell me where to move it, if at all. Any help is most appreciated. Thank you.
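
    A compact sketch of that dispatch idea (heavily hedged: written in C# here purely for illustration, since the post's project is Python; every name and range below just echoes the example above):

        using System;
        using System.Collections.Generic;
        using System.Linq;

        public static class SectionSchemas
        {
            // File-number ranges mapped to the variant section 4 uses in each range;
            // the ranges echo the example in the post, the variant names are mine.
            private static readonly List<Tuple<int, int, string>> SectionFour =
                new List<Tuple<int, int, string>>
                {
                    Tuple.Create(7000, 7635, "shifted-one-row-down"),
                    Tuple.Create(7636, 8000, "normal"),
                };

            public static string VariantFor(int fileNumber)
            {
                // Pick the variant whose range contains this file number.
                return SectionFour
                    .First(r => fileNumber >= r.Item1 && fileNumber <= r.Item2)
                    .Item3;
            }
        }

    The loader then asks VariantFor(fileNumber) per section, looks up that variant's field-to-cell mapping, and extracts the data - so each new quirk costs one new range entry rather than a whole new schema.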

    Read the article

  • Entity Framework with MySQL - Timeout Expired while Generating Model

    - by Nathan Taylor
    I've constructed a database in MySQL and I am attempting to map it out with Entity Framework, but I start running into "GenerateSSDLException"s whenever I try to add more than about 20 tables to the EF context:

        An exception of type 'Microsoft.Data.Entity.Design.VisualStudio.ModelWizard.Engine.ModelBuilderEngine+GenerateSSDLException'
        occurred while attempting to update from the database. The exception message is:
        'An error occurred while executing the command definition. See the inner exception for details.'
        Fatal error encountered during command execution.
        Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding.

    There's nothing special about the affected tables, and it's never the same table(s); it's just that after a certain (unspecific) number of tables have been added, the context can no longer be updated without the "Timeout expired" error. Sometimes it's only one table left over, and sometimes it's three; the results are pretty unpredictable. Furthermore, the variance in the number of tables which can be added before the error indicates to me that perhaps the problem lies in the size of the query being generated to update the context, which includes both the existing table definitions and also the new tables that are being added to it. Essentially, the SQL query is getting too large and it's failing to execute for some reason.

    If I generate the model with EdmGen2 it works without any errors, but the generated EDMX file cannot be updated within Visual Studio without producing the aforementioned exception. In all likelihood the source of this problem lies in the tool within Visual Studio, given that EdmGen2 works fine, but I'm hoping that perhaps others could offer some advice on how to approach this very unique issue, because it seems like I'm not the only person experiencing it. One suggestion a colleague offered was maintaining two separate EDMX files with some table crossover, but that seems like a pretty ugly fix in my opinion. I suppose this is what I get for trying to use "new technology". :(
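
    One hedged thing to try (my assumption - the post doesn't confirm the designer honors it): MySQL Connector/NET accepts a "default command timeout" option in the connection string, so raising it on the connection the wizard uses may give the generation query room to finish:

        // Connection string with a longer command timeout (value in seconds);
        // server/database/credentials here are placeholders, not from the post.
        var cs = "server=localhost;database=mydb;uid=me;pwd=secret;default command timeout=300";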

    Read the article

  • Are Large iPhone Ping Times Indicative of Application Latency?

    - by yar
    I am contemplating creating a realtime app where an iPod Touch/iPhone/iPad talks to a server-side component (which produces MIDI, and sends it onward within the host). When I ping my iPod Touch on Wifi I get huge latency (and an enormous variance, too):

        64 bytes from 192.168.1.3: icmp_seq=9 ttl=64 time=38.616 ms
        64 bytes from 192.168.1.3: icmp_seq=10 ttl=64 time=61.795 ms
        64 bytes from 192.168.1.3: icmp_seq=11 ttl=64 time=85.162 ms
        64 bytes from 192.168.1.3: icmp_seq=12 ttl=64 time=109.956 ms
        64 bytes from 192.168.1.3: icmp_seq=13 ttl=64 time=31.452 ms
        64 bytes from 192.168.1.3: icmp_seq=14 ttl=64 time=55.187 ms
        64 bytes from 192.168.1.3: icmp_seq=15 ttl=64 time=78.531 ms
        64 bytes from 192.168.1.3: icmp_seq=16 ttl=64 time=102.342 ms
        64 bytes from 192.168.1.3: icmp_seq=17 ttl=64 time=25.249 ms

    Even if this is double what the iPhone-Host or Host-iPhone time would be, 15ms+ is too long for the app I'm considering. Is there any faster way around this (e.g., a USB cable)? If not, would building the app on Android offer any other options? Traceroute reports more workable times:

        traceroute to 192.168.1.3 (192.168.1.3), 64 hops max, 52 byte packets
         1  192.168.1.3 (192.168.1.3)  4.662 ms  3.182 ms  3.034 ms

    Can anyone decipher this difference between ping and traceroute for me, and what they might mean for an application that needs to talk to (and from) a host?

    Read the article

  • Positioning problem in Vista when Aero theme is enabled

    - by Adeel Rehman
    Hi, I have a Windows Form which has tab pages. On a tab page, I have a combo box. When I click the combo box, a drop-down opens up just below it to give the impression of a drop-down list. The drop-down is itself a Windows Form, and this is how I set its position:

        Popup.Location = control.PointToScreen(parentcontrol.PointToClient(new System.Drawing.Point(0, control.Height)));

    Popup = the drop-down that opens below the combo box (a Windows Form); control = my combo box (ComboBox); parentcontrol = the Windows Form on which the control is present (the parent form).

    Problem: the X coordinate is not plotted correctly on the screen with the Aero theme enabled. This works perfectly fine on XP (with some variance in Y due to the TableLayoutPanel, I assume). But when I use the same code on Vista with the Aero theme enabled, the X coordinate wanders away about 20-30 pixels. If I turn the Aero theme off on Vista, it works fine. I have found that the X and Y coordinates calculated in both cases are the same, but the way Vista draws these coordinates on the screen (when the Aero theme is enabled) is different. Is there any solution to this problem? Thanks, Adeel Rehman.
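
    A hedged guess worth trying: since the popup is a top-level form, its Location is already in screen coordinates, so the round trip through the parent's client space may be what Aero's extended window frame throws off. Mapping straight from the combo box avoids that conversion entirely:

        // Map (0, control.Height) in the combo box's client space directly to screen:
        Popup.Location = control.PointToScreen(new System.Drawing.Point(0, control.Height));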

    Read the article

  • Automatic web form testing/filling

    - by Polatrite
    I recently became lead on getting an inordinate amount of testing done in a very short period of time. We have many different web forms, using custom (Telerik) controls that need to be tested for proper data validation and sensible handling of the data. Some of the forms are several pages long with 30-80 different controls for data entry. I am looking for a software solution (that is free) that would allow me to automate the process of filling in these forms by designing a script, or using a UI. The other requirement is that I can't use any browsers but IE6 (terrible, I know). I have previously used AutoHotkey to great success for automatic Windows form testing, since Autohotkey's API allows you to directly reference controls on the Windows form. However Autohotkey does not have similar support for web forms (everything is just one big "InternetExplorer" control). While I would prefer that I could script some variance in the data to help serialize each test, it's not necessary, as I could go back through and manually edit a field or two (plus "break" whatever control I'm currently testing) to serialize each test. If you've ever seen Spawner: http://forge.mysql.com/projects/project.php?id=214 It's almost exactly the sort of thing I'm looking for (Spawner generates dummy SQL data, as opposed to dummy webform data) - but I won't be picky, I've got a really short deadline to meet and had this thrust in my lap just today. ;) Edit1: One of the challenges of just using Autohotkey to simulate keyboard input (tabbing through controls) is that some controls don't currently have tab index (bug), and some controls cause a page reload after modification, resulting in inconsistent control focus (tabbing screwed up). Our application makes heavy use of page reloads to populate fields (select a location, it auto-populates a city, for example).

    Read the article

  • What's the compelling reason to upgrade to Visual Studio 2010 from VS2008?

    - by Cheeso
    Are there new features in Visual Studio 2010 that are must-haves? If so, which ones? For me, the big draws for VS2008 as compared to VS2005 were LINQ, .NET Framework multitargeting, WCF (REST + Syndication), and general devenv.exe reliability. Granted, some of these features are framework things, and not tool things. For the purposes of this discussion, I'm willing to combine them into one bucket. What is the list of must-have features for VS2010 versus VS2008? Are there any? I am particularly interested in C#.

    Update: I know how to google, so I can get the official list from Microsoft. I guess what I really wanted was the assessment from people using it, as to which things are really notable. Microsoft went on for 3 pages about 2008/3.5 features, and many people sort of boiled it down to LINQ, and a few other things. What is that short list for VS2010?

    Summary so far, what people think is cool or compelling:

    Visual Studio engine
      - multi-monitor support
      - new extensibility model based on WPF, prettier and more usable
      - new TFS stuff, incl automated test tools
      - parallel debugging
    .NET Framework
      - parallel extensions for .NET
    C# 4.0
      - generic variance
      - optional and named params
      - easier interop with non-managed environments, like COM or Javascript
    VB 10.0
      - collection and array literals / initializers
      - automatic properties
      - anonymous methods / statement lambdas

    I read up on these at Zander's blog. He described these and other features. Nobody on this list said anything about:

    Visual Studio engine
      - F# support
      - Javascript code-completion
      - JQuery is now included
      - UML
      - better Sharepoint capabilities
      - C++ moves to msbuild project files
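
    To make the C# 4.0 bullets concrete, a small hedged illustration (my own example, showing only the headline features named above - generic variance and named/optional parameters):

        using System.Collections.Generic;

        class Demo
        {
            // Optional parameters with defaults; call sites may name arguments.
            static void Schedule(string job, int priority = 0, bool recurring = false) { }

            static void Main()
            {
                Schedule("backup", recurring: true);  // named argument skips 'priority'

                // Generic variance: IEnumerable<out T> is covariant in C# 4,
                // so a sequence of strings is usable as a sequence of objects.
                IEnumerable<string> names = new List<string> { "a", "b" };
                IEnumerable<object> objects = names;
            }
        }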

    Read the article

  • R: How to remove outliers from a smoother in ggplot2?

    - by John
    I have the following data set that I am trying to plot with ggplot2. It is a time series of three experiments A1, B1 and C1, and each experiment had three replicates. I am trying to add a stat which detects and removes outliers before returning a smoother (mean and variance?). I have written my own outlier function (not shown) but I expect there is already a function to do this, I just have not found it. I've looked at stat_sum_df("median_hilow", geom = "smooth") from some examples in the ggplot2 book, but I didn't understand the help doc from Hmisc to see if it removes outliers or not. Is there a function to remove outliers like this in ggplot, or where would I amend my code below to add my own function?

        library(ggplot2)
        data = data.frame(
          day = c(1,3,5,7, 1,3,5,7, 1,3,5,7, 1,3,5,7, 1,3,5,7, 1,3,5,7, 1,3,5,7, 1,3,5,7, 1,3,5,7),
          od  = c(0.1,1.0,0.5,0.7,    0.13,0.33,0.54,0.76, 0.1,0.35,0.54,0.73,
                  1.3,1.5,1.75,1.7,   1.3,1.3,1.0,1.6,     1.7,1.6,1.75,1.7,
                  2.1,2.3,2.5,2.7,    2.5,2.6,2.6,2.8,     2.3,2.5,2.8,3.8),
          series_id = c("A1","A1","A1","A1", "A1","A1","A1","A1", "A1","A1","A1","A1",
                        "B1","B1","B1","B1", "B1","B1","B1","B1", "B1","B1","B1","B1",
                        "C1","C1","C1","C1", "C1","C1","C1","C1", "C1","C1","C1","C1"),
          replicate = c("A1.1","A1.1","A1.1","A1.1", "A1.2","A1.2","A1.2","A1.2", "A1.3","A1.3","A1.3","A1.3",
                        "B1.1","B1.1","B1.1","B1.1", "B1.2","B1.2","B1.2","B1.2", "B1.3","B1.3","B1.3","B1.3",
                        "C1.1","C1.1","C1.1","C1.1", "C1.2","C1.2","C1.2","C1.2", "C1.3","C1.3","C1.3","C1.3"))

        > data
           day   od series_id replicate
        1    1 0.10        A1      A1.1
        2    3 1.00        A1      A1.1
        3    5 0.50        A1      A1.1
        4    7 0.70        A1      A1.1
        5    1 0.13        A1      A1.2
        6    3 0.33        A1      A1.2
        7    5 0.54        A1      A1.2
        8    7 0.76        A1      A1.2
        9    1 0.10        A1      A1.3
        10   3 0.35        A1      A1.3
        11   5 0.54        A1      A1.3
        12   7 0.73        A1      A1.3
        13   1 1.30        B1      B1.1

    This is what I have so far and it is working nicely, but outliers are not removed:

        r <- ggplot(data = data, aes(x = day, y = od))
        r + geom_point(aes(group = replicate, color = series_id)) +  # add points
            geom_line(aes(group = replicate, color = series_id)) +   # add lines
            geom_smooth(aes(group = series_id))                      # add smoother, average of each replicate

    Read the article

  • Generating lognormally distributed random numbers from mean, coeff of variation

    - by Richie Cotton
    Most functions for generating lognormally distributed random numbers take the mean and standard deviation of the associated normal distribution as parameters. My problem is that I only know the mean and the coefficient of variation of the lognormal distribution. It is reasonably straightforward to derive the parameters I need for the standard functions from what I have. If mu and sigma are the mean and standard deviation of the associated normal distribution, we know that

        coeffOfVar^2 = variance / mean^2
                     = (exp(sigma^2) - 1) * exp(2*mu + sigma^2) / exp(mu + sigma^2/2)^2
                     = exp(sigma^2) - 1

    We can rearrange this to

        sigma = sqrt(log(coeffOfVar^2 + 1))

    We also know that

        mean = exp(mu + sigma^2/2)

    This rearranges to

        mu = log(mean) - sigma^2/2

    Here's my R implementation:

        rlnorm0 <- function(mean, coeffOfVar, n = 1e6)
        {
          sigma <- sqrt(log(coeffOfVar^2 + 1))
          mu <- log(mean) - sigma^2 / 2
          rlnorm(n, mu, sigma)
        }

    It works okay for small coefficients of variation:

        r1 <- rlnorm0(2, 0.5)
        mean(r1)            # 2.000095
        sd(r1) / mean(r1)   # 0.4998437

    But not for larger values:

        r2 <- rlnorm0(2, 50)
        mean(r2)            # 2.048509
        sd(r2) / mean(r2)   # 68.55871

    To check that it wasn't an R-specific issue, I reimplemented it in MATLAB. (Uses the stats toolbox.)

        function y = lognrnd0(mean, coeffOfVar, sizeOut)
          if nargin < 3 || isempty(sizeOut)
            sizeOut = [1e6 1];
          end
          sigma = sqrt(log(coeffOfVar.^2 + 1));
          mu = log(mean) - sigma.^2 ./ 2;
          y = lognrnd(mu, sigma, sizeOut);
        end

        r1 = lognrnd0(2, 0.5);
        mean(r1)             % 2.0013
        std(r1) ./ mean(r1)  % 0.5008

        r2 = lognrnd0(2, 50);
        mean(r2)             % 1.9611
        std(r2) ./ mean(r2)  % 22.61

    Same problem. The question is, why is this happening? Is it just that the standard deviation is not robust when the variation is that wide? Or have I screwed up somewhere?
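
    The same derivation in display form (LaTeX, for readability; identical to the algebra above):

        \[
        c_v^2 \;=\; \frac{\operatorname{Var}}{\mathrm{mean}^2}
              \;=\; \frac{\left(e^{\sigma^2}-1\right)e^{2\mu+\sigma^2}}{\left(e^{\mu+\sigma^2/2}\right)^2}
              \;=\; e^{\sigma^2}-1
        \]
        \[
        \sigma = \sqrt{\log\!\left(c_v^2+1\right)}, \qquad
        \mathrm{mean} = e^{\mu+\sigma^2/2} \;\Rightarrow\; \mu = \log(\mathrm{mean}) - \tfrac{\sigma^2}{2}
        \]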

    Read the article

  • Accumulator doesn't compile

    - by Abruzzo Forte e Gentile
    Hi all, I am using Boost accumulators. These two lines used to work fine with the current version of Boost on Linux:

        accumulator_set< double, stats< tag::covariance<double, tag::covariate1> > > acc_cov;
        accumulator_set< double, stats< tag::variance > > acc_var;

    When I moved to a Sun machine where Boost v1.40 is installed, I get this build error:

        "/opt/boost/boost/accumulators/framework/depends_on.hpp", line 276: Error: <no tag> cannot be initialized in a constructor.
        "/opt/boost/boost/fusion/container/list/cons.hpp", line 85: Where: While instantiating "boost::accumulators::detail::accumulator_wrapper<int, int>::accumulator_wrapper(const boost::accumulators::detail::accumulator_wrapper<int, int>&)".
        "/opt/boost/boost/fusion/container/list/cons.hpp", line 85: Where: Instantiated from non-template code.
        1 Error(s)

    Do you know how I can fix those errors, and why I have this issue? Thanks, AFG

    Read the article

  • When to choose which machine learning classifier?

    - by LM
    Suppose I'm working on some classification problem. (Fraud detection and comment spam are two problems I'm working on right now, but I'm curious about any classification task in general.) How do I know which classifier I should use? (Decision tree, SVM, Bayesian, logistic regression, etc.) In which cases is one of them the "natural" first choice, and what are the principles for choosing that one?

    Examples of the type of answers I'm looking for (from Manning et al.'s "Introduction to Information Retrieval" book: http://nlp.stanford.edu/IR-book/html/htmledition/choosing-what-kind-of-classifier-to-use-1.html):

    a. If your data is labeled, but you only have a limited amount, you should use a classifier with high bias (for example, Naive Bayes). [I'm guessing this is because a higher-bias classifier will have lower variance, which is good because of the small amount of data.]

    b. If you have a ton of data, then the classifier doesn't really matter so much, so you should probably just choose a classifier with good scalability.

    What are other guidelines? Even answers like "if you'll have to explain your model to some upper management person, then maybe you should use a decision tree, since the decision rules are fairly transparent" are good. I care less about implementation/library issues, though.

    Also, for a somewhat separate question: besides standard Bayesian classifiers, are there 'standard state-of-the-art' methods for comment spam detection (as opposed to email spam)?

    [Not sure if stackoverflow is the best place to ask this question, since it's more machine learning than actual programming -- if not, any suggestions for where else?]

    Read the article

  • How can I superimpose modified loess lines on a ggplot2 qplot?

    - by briandk
    Background: Right now, I'm creating a multiple-predictor linear model and generating diagnostic plots to assess regression assumptions. (It's for a multiple regression analysis stats class that I'm loving at the moment :-) My textbook (Cohen, Cohen, West, and Aiken 2003) recommends plotting each predictor against the residuals to make sure that:

    1. The residuals don't systematically covary with the predictor
    2. The residuals are homoscedastic with respect to each predictor in the model

    On point (2), my textbook has this to say:

        "Some statistical packages allow the analyst to plot lowess fit lines at the mean of the residuals (0-line), 1 standard deviation above the mean, and 1 standard deviation below the mean of the residuals....In the present case {their example}, the two lines {mean + 1sd and mean - 1sd} remain roughly parallel to the lowess {0} line, consistent with the interpretation that the variance of the residuals does not change as a function of X." (p. 131)

    How can I modify loess lines? I know how to generate a scatterplot with a "0-line":

        # First, I'll make a simple linear model and get its diagnostic stats
        library(ggplot2)
        data(cars)
        mod <- fortify(lm(speed ~ dist, data = cars))
        attach(mod)
        str(mod)

        # Now I want to make sure the residuals are homoscedastic
        qplot(x = dist, y = .resid, data = mod) +
          geom_smooth(se = FALSE)  # "se = FALSE" removes the standard error bands

    But does anyone know how I can use ggplot2 and qplot to generate plots where the 0-line, "mean + 1sd" AND "mean - 1sd" lines would be superimposed? Is that a weird/complex question to be asking?

    Read the article

  • Robust DateTime parser library for .NET

    - by Frank Krueger
    Hello, I am writing an RSS and Mail reader app in C# (technically MonoTouch). I have run into the issue of parsing DateTimes. I see a lot of variance in how dates are presented in the wild and have begun writing a function like this:

        public static DateTime ParseTime(string timeStr)
        {
            var formats = new string[] {
                "ddd, d MMM yyyy H:mm:ss \"GMT+00:00\"",
                "d MMM yyyy H:mm:ss \"EST\"",
                "yyyy-MM-dd\"T\"HH:mm:ss\"Z\"",
                "ddd MMM d HH:mm:ss \"+0000\" yyyy",
            };
            try {
                return DateTime.Parse(timeStr);
            }
            catch (Exception) {
            }
            foreach (var f in formats) {
                try {
                    var t = DateTime.ParseExact(timeStr, f, CultureInfo.InvariantCulture);
                    return t;
                }
                catch (Exception) {
                }
            }
            return DateTime.MinValue;
        }

    This, well, makes me sick. Three points. (1) It's silly of me to think that I can actually collect a format list that will cover everything out there. (2) It's wrong! Notice that I'm treating an EST date time as UTC (since .NET seems oblivious to time zones). (3) I don't like using exceptions for logic.

    I am looking for an existing library (source only please) that is known to handle a bunch of these formats. Also, I would like to keep using UTC DateTimes throughout my code, so whatever library is suggested should be able to produce DateTimes. Is there anything out there like this?
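
    Not the library recommendation being asked for, but for what it's worth, point (3) at least can be addressed with the Try* variants of the same framework calls - a hedged sketch of the exception-free shape (it still does not solve the EST-as-UTC problem):

        using System;
        using System.Globalization;

        public static class RssDates
        {
            static readonly string[] Formats =
            {
                "ddd, d MMM yyyy H:mm:ss \"GMT+00:00\"",
                "d MMM yyyy H:mm:ss \"EST\"",
                "yyyy-MM-dd\"T\"HH:mm:ss\"Z\"",
                "ddd MMM d HH:mm:ss \"+0000\" yyyy",
            };

            public static bool TryParseTime(string timeStr, out DateTime utc)
            {
                // TryParse/TryParseExact report failure via the return value,
                // so no exceptions are used for control flow.
                const DateTimeStyles styles =
                    DateTimeStyles.AssumeUniversal | DateTimeStyles.AdjustToUniversal;
                return DateTime.TryParse(timeStr, CultureInfo.InvariantCulture, styles, out utc)
                    || DateTime.TryParseExact(timeStr, Formats, CultureInfo.InvariantCulture, styles, out utc);
            }
        }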

    Read the article

  • Weird stuttering issues not related to GC.

    - by Smills
    I am getting some odd stuttering issues with my game even though my FPS never seems to drop below 30. About every 5 seconds my game stutters. I was originally getting stuttering every 1-2 seconds due to my garbage collection issues, but I have sorted those and will often go 15-20 seconds without a garbage collection. Despite this, my game still stutters periodically, even when there is no GC listed in logcat anywhere near the stutter. Even when I take out most of my code and simply make my "physics" code the below code, I get this weird slowdown issue. I feel that I am missing something or overlooking something. Shouldn't that "elapsed" code that I put in stop any variance in the speed of the main character related to changes in FPS? Any input/theories would be awesome.

    Physics:

        private void updatePhysics() {
            // get current time
            long now = System.currentTimeMillis();

            // added this to see if I could speed it up, it made no difference
            Thread myThread = Thread.currentThread();
            myThread.setPriority(Thread.MAX_PRIORITY);

            // work out elapsed time since last frame in seconds
            double elapsed = (now - mLastTime2) / 1000.0;
            mLastTime2 = now;

            // measures FPS and displays in logcat once every 30 frames
            fps += 1 / elapsed;
            fpscount += 1;
            if (fpscount == 30) {
                fps = fps / fpscount;
                Log.i("myActivity", "FPS: " + fps + " Touch: " + touch);
                fpscount = 0;
            }

            // this should make the main character (theoretically) move upwards at a steady pace
            mY -= 100 * elapsed;

            // increase amount I translate the draw to = main character's Y
            // location if the main character goes upwards
            if (mY <= viewY) {
                viewY = mY;
            }
        }

    Read the article

  • How do I get 2-way data binding to work for nested asp.net Repeater controls

    - by jimblanchard
    I have the following (trimmed) markup:

        <asp:Repeater ID="CostCategoryRepeater" runat="server">
          <ItemTemplate>
            <div class="costCategory">
              <asp:Repeater ID="CostRepeater" runat="server" DataSource='<%# Eval("Costs")%>'>
                <ItemTemplate>
                  <tr class="oddCostRows">
                    <td class="costItemTextRight"><span><%# Eval("Variance", "{0:c0}")%></span></td>
                    <td class="costItemTextRight"><input id="SupplementAmount" class="costEntryRight" type="text" value='<%# Bind("SupplementAmount")%>' runat="server" /></td>
                  </tr>
                </ItemTemplate>
              </asp:Repeater>
            </div>
          </ItemTemplate>
        </asp:Repeater>

    The outer repeater's DataSource is set in the code-beside. I've snipped them, but there are Eval statements that wire up to the properties in the outer repeater. Anyway, one of the fields in the inner repeater needs to be a Bind instead of an Eval, as I want to get the values that the user types in. The SupplementAmount input element correctly receives its value when the page loads, but on the other side, when I inspect the contents of the Costs list when the form posts back, the changes the user made aren't present. Thanks.
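
    A hedged sketch of the usual postback shape: Bind on its own won't push values back into a plain list of objects, so inside the postback handler you typically walk the repeater items and read the posted values out yourself (control IDs here match the markup above; the persistence step is left as a comment):

        // In a postback handler on the page (e.g. a Save button's click event):
        foreach (RepeaterItem category in CostCategoryRepeater.Items)
        {
            var costRepeater = (Repeater)category.FindControl("CostRepeater");
            foreach (RepeaterItem row in costRepeater.Items)
            {
                var amountInput = (HtmlInputText)row.FindControl("SupplementAmount");
                string posted = amountInput.Value;  // copy this back into the matching Costs object
            }
        }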

    Read the article

  • R Random Data Sets within loops

    - by jugossery
    Here is what I want to do: I have a time series data frame with, let us say, 100 time series of length 600, each in one column of the data frame. I want to pick 4 of the time series randomly and then assign them random weights that sum up to one (e.g. 0.1, 0.5, 0.3, 0.1). Using those I want to compute the mean of the sum of the 4 weighted time series variables (e.g. a convex combination). I want to do this, let us say, 100k times and store each result in the form

        ts1.name, ts2.name, ts3.name, ts4.name, weight1, weight2, weight3, weight4, mean

    so that I get a 9*100k df. I have tried some things already, but R is very bad with loops and I know vector-oriented solutions are better because of R's design. Thanks. Here is what I did, and I know it is horrible. The df is in the form

        v1,v2,v3.....v100
        1,5,6,.......9
        2,4,6,.......10
        3,5,8,.......6
        2,2,8,.......2
        etc

        e=NULL
        for (x in 1:100000) {
          s=sample(1:100,4)                # pick 4 variables randomly
          a=sample(seq(0,1,0.01),1)
          b=sample(seq(0,1-a,0.01),1)
          c=sample(seq(0,(1-a-b),0.01),1)
          d=1-a-b-c
          e=c(a,b,c,d)                     # 4 random weights
          average=mean(timeseries.df[,s]%*%t(e))
          e=rbind(e,s,average)             # in the end I get the 9*100k df
        }

    The procedure runs way too slow.

    EDIT: Thanks for the help I have had. I am not used to thinking in R, and I am not very used to translating every problem into a matrix algebra equation, which is what you need in R. The problem becomes a little more complex if I want to calculate the standard deviation: I need the covariance matrix, and I am not sure if/how I can pick random elements for each sample from the original timeseries.df covariance matrix and then compute the sample variance (t(sampleweights) %*% sample_cov.mat %*% sampleweights) to get, in the end, the ts.weighted_standard_dev matrix.

    Last question: what is the best way to proceed if I want to bootstrap the original df x times and then apply the same computations, to test the robustness of my data? Thanks.

    Read the article

  • Fill lower matrix with vector by row, not column

    - by mhermans
    I am trying to read in a variance-covariance matrix written out by LISREL in the following format in a plain-text, whitespace-separated file:

         0.23675E+01  0.86752E+00  0.28675E+01 -0.36190E+00 -0.36190E+00  0.25381E+01
        -0.32571E+00 -0.32571E+00  0.84425E+00  0.25598E+01 -0.37680E+00 -0.37680E+00
         0.53136E+00  0.47822E+00  0.21120E+01 -0.37680E+00 -0.37680E+00  0.53136E+00
         0.47822E+00  0.91200E+00  0.21120E+01

    This is actually a lower triangular matrix (including the diagonal):

         0.23675E+01
         0.86752E+00  0.28675E+01
        -0.36190E+00 -0.36190E+00  0.25381E+01
        -0.32571E+00 -0.32571E+00  0.84425E+00  0.25598E+01
        -0.37680E+00 -0.37680E+00  0.53136E+00  0.47822E+00  0.21120E+01
        -0.37680E+00 -0.37680E+00  0.53136E+00  0.47822E+00  0.91200E+00  0.21120E+01

    I can read in the values correctly with scan() or read.table(fill=T). I am however not able to correctly store the read-in vector in a matrix. The following code

        S <- diag(6)
        S[lower.tri(S, diag=T)] <- d

    fills the lower matrix by column, while it should fill it by row. Using matrix() does allow for the option byrow=TRUE, but this will fill in the whole matrix, not just the lower half (with diagonal). Is it possible to have both: only fill the lower matrix (with diagonal), and do it by row?

    (Separate issue I'm having: LISREL uses 'D+01' while R only recognises 'E+01' for scientific notation. Can you change this in R to also accept 'D'?)

    Read the article

  • Domain queries in CQRS

    - by JontyMC
    We are trying out CQRS. We have a validation situation where a CustomerService (domain service) needs to know whether or not a Customer exists. Customers are unique by their email address. Our Customer repository (a generic repository) only has Get(id) and Add(customer). How should the CustomerService find out if the Customer exists?
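
    A common CQRS answer is to let the domain service consult the query (read) side rather than widening the generic repository - a hedged C# sketch, with every name below mine, not from the post:

        using System;

        // Read-side lookup, deliberately separate from the generic write-side repository.
        public interface ICustomerQueries
        {
            bool EmailExists(string email);
        }

        public class CustomerService
        {
            private readonly ICustomerQueries queries;

            public CustomerService(ICustomerQueries queries)
            {
                this.queries = queries;
            }

            public void RegisterCustomer(string email)
            {
                if (queries.EmailExists(email))
                    throw new InvalidOperationException("A customer with this email already exists.");
                // ...create the Customer and persist it via repository.Add(customer)...
            }
        }

    (A unique index on the email column at the storage level is the usual backstop for the race between the check and the Add.)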

    Read the article
