Search Results

Search found 685 results on 28 pages for 'experiment'.

Page 19/28

  • Child objects in MongoDB

    - by Jeremy B.
    I have been following along with Rob Conery's Linq for MongoDB and have come across a question. In the example he shows how you can easily nest a child object. For my current experiment I have the following structure: class Content { ... Profile Profile { get; set; } } class Profile { ... } This works great when looking at content items. The dilemma I'm facing now is that I want to treat the Profile as an atomic object. As it stands, it appears as if I cannot query the Profile object directly; it only comes packaged with Content results. If I want it to be inclusive, but also be able to query on just Profile, my first instinct would be to make Profiles a top-level object and then create a foreign-key-like structure under the Content class to tie the two together. To me it feels like I'm falling back on RDBMS practices, and that feels like I'm most likely going against the spirit of Mongo. How would you treat an object you need to act upon independently yet also want as a child object of another object?
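    A rough sketch of the "top-level collection plus reference" approach described above (shown with pymongo rather than the C# driver from the question; the database, collection, and field names are invented for illustration):

    ```python
    from pymongo import MongoClient

    client = MongoClient("mongodb://localhost:27017")
    db = client["example_db"]

    # Profiles live in their own collection, so they can be queried directly.
    profile_id = db.profiles.insert_one({"display_name": "Jeremy"}).inserted_id

    # Content documents keep only a reference to the profile, RDBMS-style.
    db.content.insert_one({"title": "First post", "profile_id": profile_id})

    # Query profiles on their own...
    profile = db.profiles.find_one({"_id": profile_id})

    # ...or resolve the reference when loading content (an extra round trip,
    # which is the trade-off versus embedding the profile document).
    content = db.content.find_one({"title": "First post"})
    profile_for_content = db.profiles.find_one({"_id": content["profile_id"]})
    ```

    The alternative is to keep Profile embedded and query into it with dot notation (e.g. db.content.find({"profile.name": ...})), which avoids the extra lookup but means a Profile has no identity of its own.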

    Read the article

  • PyQt and unittest - how to handle signals and slots

    - by Einar
    Hello, some small application I'm developing uses a module I have written to check certain web services via a REST API. I've been trying to add unit tests to it so I don't break stuff, and I stumbled upon a problem. I use a lot of signal-slot connections to perform operations asynchronously. For example, a typical test would be (pseudo-Python), with postDataReady as a signal: def testConnection(self): "Test connection and posts retrieved" def length_test(): self.assertEqual(len(self.client.post_data), 5) self.client.postDataReady.connect(length_test) self.client.get_post_list(limit=5) Now, unittest will report this test to be "ok" when running, regardless of the result (as another slot is being called), even if asserts fail (I will get an unhandled AssertionError). Example when deliberately making the test fail: Test connection and posts retrieved ... ok [... more tests...] OK Traceback (most recent call last): [...] AssertionError: 4 != 5 The slot inside the test is merely an experiment: I get the same results if it's outside (instance method). I also have to add that the various methods I'm calling all make HTTP requests, which means they take a bit of time (I need to mock the request - in the meantime I'm using SimpleHTTPServer to fake the connections and give them proper data). Is there a way around this problem?
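    One common workaround is to block the test until the signal fires (or a timeout expires) and then assert on the test method's own call stack, so unittest actually records the failure. A minimal sketch, assuming the client exposes a postDataReady signal as in the question (the helper name and timeout are invented):

    ```python
    import unittest

    from PyQt4.QtCore import QEventLoop, QTimer

    def wait_for_signal(signal, timeout_ms=5000):
        """Spin the Qt event loop until `signal` is emitted or the timeout expires."""
        loop = QEventLoop()
        signal.connect(loop.quit)
        QTimer.singleShot(timeout_ms, loop.quit)  # safety net so a broken test cannot hang forever
        loop.exec_()

    class ClientTest(unittest.TestCase):
        def test_connection(self):
            """Test connection and posts retrieved."""
            self.client.get_post_list(limit=5)
            wait_for_signal(self.client.postDataReady)
            # Asserting here, after the wait, means a failure is reported by
            # unittest instead of dying as an unhandled AssertionError in a slot.
            self.assertEqual(len(self.client.post_data), 5)
    ```

    Newer PyQt versions also ship QSignalSpy in the QtTest module for the same purpose.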

    Read the article

  • Core Data migration of to-one relationship to to-many relationship

    - by westsider
    I have a deployed app that samples measurements from sensors (e.g., Temp °C, Pressure kPa). The user can create Experiments and collect samples. Each sample is stored as a Run, such that there is a one-to-many relationship from Experiment to Run. In the interest of performance, Run has a to-one relationship with a Data entity (which is where the actual raw data is stored); this allows some Run attributes to be loaded without necessarily loading lots of data. Most of our sensors have multiple measurements, so it would be nice to store all the data that is actually being sampled. But this means that the to-one Run-to-Data relationship needs to become a to-many relationship. I am faced with trying to migrate data from the old to-one model to the new to-many model. Can this be done using Mapping Models? If so, does anyone have any pointers to examples? If not, does anyone have any pointers to how else it can be done? Thanks for any pointers or advice.

    Read the article

  • Can I embed a custom font in an iPhone application?

    - by Airsource Ltd
    I would like to have an app include a custom font for rendering text, load it, and then use it with standard UIKit elements like UILabel. Is this possible? I found these links: http://discussions.apple.com/thread.jspa?messageID=8304744 http://forums.macrumors.com/showthread.php?t=569311 but these would require me to render each glyph myself, which is a bit too much like hard work, especially for multi-line text. I've also found posts that say straight out that it's not possible, but without justification, so I'm looking for a definitive answer. EDIT - failed -[UIFont fontWithName:size:] experiment: I downloaded Harrowprint.ttf (downloaded from here) and added it to my Resources directory and to the project. I then tried this code: UIFont* font = [UIFont fontWithName:@"Harrowprint" size:20]; which resulted in an exception being thrown. Looking at the TTF file in Finder confirmed that the font name was Harrowprint. EDIT - there have been a number of replies so far which tell me to read the documentation on X or Y. I've experimented extensively with all of these, and got nowhere. In one case, X turned out to be relevant only on OS X, not on iPhone. Consequently I am setting a bounty for this question, and I will award the bounty to the first person who responds with sufficient information to get this working on the device (using only documented APIs). Working on the simulator too would be a bonus. EDIT - it appears that the bounty auto-awards to the answer with the highest number of votes. Interesting. No one actually provided an answer that solved the question as asked - the solution that involves coding your own UILabel subclass doesn't support word-wrap, which is an essential feature for me - though I guess I could extend it to do so.

    Read the article

  • Starting out NLP - Python + large data set

    - by pencilNero
    Hi, I've been wanting to learn Python and do some NLP, so I have finally gotten round to starting. I downloaded the English Wikipedia mirror for a nice chunky dataset to start on, and have been playing around a bit, at this stage just getting some of it into a SQLite DB (I haven't worked with databases before, unfortunately). But I'm guessing SQLite is not the way to go for a full-blown NLP project (/experiment :) - what sort of things should I look at? HBase (and Hadoop) seem interesting; I guess I could run them in Java, prototype in Python and maybe migrate the really slow bits to Java... or alternatively just run MySQL, but the dataset is 12 GB and I wonder if that will be a problem? I also looked at Lucene, but I'm not sure how (other than breaking the wiki articles into chunks) I'd get that to work. What comes to mind for a really flexible NLP platform (I don't really know at this stage WHAT I want to do - I just want to learn large-scale language analysis, to be honest)? Many thanks.
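    One way to keep memory flat regardless of dump size is to stream the XML with an incremental parser and write extracted articles into SQLite as you go. A rough sketch, assuming a MediaWiki XML export; the file name and table layout are invented:

    ```python
    import sqlite3
    import xml.etree.ElementTree as ET

    def pages(path):
        """Yield (title, body) pairs from a MediaWiki XML dump without loading it whole."""
        for _event, elem in ET.iterparse(path, events=("end",)):
            if elem.tag.rsplit("}", 1)[-1] == "page":  # drop the XML namespace prefix
                title, body = "", ""
                for child in elem.iter():
                    tag = child.tag.rsplit("}", 1)[-1]
                    if tag == "title":
                        title = child.text or ""
                    elif tag == "text":
                        body = child.text or ""
                yield title, body
                elem.clear()  # release the finished subtree so memory stays bounded

    conn = sqlite3.connect("wiki.db")
    conn.execute("CREATE TABLE IF NOT EXISTS articles (title TEXT, body TEXT)")
    with conn:
        conn.executemany("INSERT INTO articles VALUES (?, ?)",
                         pages("enwiki-latest-pages-articles.xml"))
    ```

    The same streaming approach works whether the rows end up in SQLite, MySQL, or HBase; the storage engine matters less once nothing has to fit in memory at once.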

    Read the article

  • Recursively adding threads to a Java thread pool

    - by Leith
    I am working on a tutorial for my Java concurrency course. The objective is to use thread pools to compute prime numbers in parallel. The design is based on the Sieve of Eratosthenes. It has an array of n bools, where n is the largest integer you are checking, and each element in the array represents one integer. True is prime, false is non-prime, and the array is initially all true. A thread pool is used with a fixed number of threads (we are supposed to experiment with the number of threads in the pool and observe the performance). A thread is given an integer multiple to process. The thread then finds the first true element in the array that is not a multiple of the thread's integer. The thread then creates a new thread on the thread pool which is given the found number. After a new thread is formed, the existing thread then continues to set all multiples of its integer in the array to false. The main program thread starts the first thread with the integer '2', and then waits for all spawned threads to finish. It then spits out the prime numbers and the time taken to compute. The issue I have is that the more threads there are in the thread pool, the longer it takes, with 1 thread being the fastest. It should be getting faster, not slower! All the examples on the internet about Java thread pools create n worker threads and then have the main thread wait for all of them to finish. The method I use is recursive, as a worker can spawn more worker threads. I would like to know what is going wrong, and if Java thread pools can be used recursively.

    Read the article

  • How to find out if an object's type implements IEnumerable<X> where X derives from Base using Reflection

    - by Dave Van den Eynde
    Give a base class "Base", I want to write a method Test, like this: private static bool Test(IEnumerable enumerable) { ... } such that Test returns true if the type of o implements any interface of IEnumerable where X derives from Base, so that if I would do this: public static IEnumerable<string> Convert(IEnumerable enumerable) { if (Test(enumerable)) { return enumerable.Cast<Base>().Select(b => b.SomePropertyThatIsString); } return enumerable.Cast<object>().Select(o => o.ToString()); } ...that it would do the right thing, using Reflection. I'm sure that its a matter of walking across all the interfaces of the type to find the first that matches the requirements, but I'm having a hard time finding the generic IEnumerable< among them. Of course, I could consider this: public static IEnumerable<string> Convert(IEnumerable enumerable) { return enumerable.Cast<object>().Select(o => o is Base ? ((Base)o).SomePropertyThatIsString : o.ToString()); } ...but think of it as a thought experiment.

    Read the article

  • Designer serialization persistence problem in .NET, Windows Forms

    - by Jules
    ETA: I have a similar, smaller, problem here which, I suspect, is related to this problem. I have a class which has a readonly property that holds a collection of components (* not quite, see below). At design time, it's possible to select from the components on the design surface to add to the collection. (Think imagelist, but instead of selecting one, you can select as many as you want.) As a test, I inherit from button and attach my class to it as a property. The persistence problem occurs when I add a component,to the collection, from the design surface after I have added my button to the form. The best way to demonstrate this is to show you the designer generated code: Private Sub InitializeComponent() Dim Provider1 As WindowsApplication1.Provider = New WindowsApplication1.Provider Me.MyComponent2 = New WindowsApplication1.MyComponent Me.MyComponent1 = New WindowsApplication1.MyComponent Me.MyButton1 = New WindowsApplication1.MyButton Me.MyComponent3 = New WindowsApplication1.MyComponent Me.SuspendLayout() ' 'MyButton1 ' Me.MyButton1.ProviderCollection.Add(Me.MyButton1.InternalProvider) Me.MyButton1.ProviderCollection.Add(Me.MyComponent1.Provider) Me.MyButton1.ProviderCollection.Add(Me.MyComponent2.Provider) Me.MyButton1.ProviderCollection.Add(Provider1) //Wrong should be Me.MyComponent3.Provider ' 'Form1 ' Me.Controls.Add(Me.MyButton1) End Sub Friend WithEvents MyComponent1 As WindowsApplication1.MyComponent Friend WithEvents MyComponent2 As WindowsApplication1.MyComponent Friend WithEvents MyButton1 As WindowsApplication1.MyButton Friend WithEvents MyComponent3 As WindowsApplication1.MyComponent End Class As you can see from the code, the collection is not actually a collection of the components, but a collection of a property, 'Provider', from the components. It looks like the problem is occurring because MyComponent3 is created after MyButton. However, in my opinion, this should not make any difference - by the time the serializer comes to add the provider property of MyComponent3, it's already created. Note: You may wonder, why I'm not using AddRange to persist the collection. The reason for this is that if I do, the behaviour changes and none of the items will persist correctly. The designer will create local fields - like Provider1 - for each item in the collection. However if I add another collection to the class which holds the actual MyComponents and persist this, then, somehow, the AddRange method persists correctly in ProviderCollection! There seems to be some kind of quantum double slit experiment going down in code dom. How can I solve this problem?

    Read the article

  • Android java.lang.VerifyError for private method with annotated argument.

    - by alex2k8
    I have a very simple project that compiles, but can't be started on the emulator. The problem is with this method: private void bar(@Some String a) {} // java.lang.VerifyError The issue can be avoided if the annotation is removed private void bar(String a) {} // OK or the method visibility changed: void bar(@Some String a) {} // OK public void bar(@Some String a) {} // OK protected void bar(@Some String a) {} // OK Any idea what is wrong with the original method? Is this a Dalvik bug, or something else? If someone would like to experiment with the code, here it is: Test.java: public class Test { private void bar(@Some String a) {} public void foo() { bar(null); } } Some.java: public @interface Some {} MainActivity.java: public class MainActivity extends Activity { @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.main); new Test().foo(); } } Stack trace: ERROR/dalvikvm(1358): Could not find method com.my.Test.bar, referenced from method com.my.Test.foo WARN/dalvikvm(1358): VFY: unable to resolve direct method 11: Lcom/my/Test;.bar (Ljava/lang/String;)V WARN/dalvikvm(1358): VFY: rejecting opcode 0x70 at 0x0001 WARN/dalvikvm(1358): VFY: rejected Lcom/my/Test;.foo ()V WARN/dalvikvm(1358): Verifier rejected class Lcom/my/Test; DEBUG/AndroidRuntime(1358): Shutting down VM WARN/dalvikvm(1358): threadid=3: thread exiting with uncaught exception (group=0x4000fe70) ERROR/AndroidRuntime(1358): Uncaught handler: thread main exiting due to uncaught exception ERROR/AndroidRuntime(1358): java.lang.VerifyError: com.my.Test ERROR/AndroidRuntime(1358): at com.my.MainActivity.onCreate(MainActivity.java:13) ERROR/AndroidRuntime(1358): at android.app.Instrumentation.callActivityOnCreate(Instrumentation.java:1123) ERROR/AndroidRuntime(1358): at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:2231) ERROR/AndroidRuntime(1358): at android.app.ActivityThread.handleLaunchActivity(ActivityThread.java:2284) ERROR/AndroidRuntime(1358): at android.app.ActivityThread.access$1800(ActivityThread.java:112) ERROR/AndroidRuntime(1358): at android.app.ActivityThread$H.handleMessage(ActivityThread.java:1692) ERROR/AndroidRuntime(1358): at android.os.Handler.dispatchMessage(Handler.java:99) ERROR/AndroidRuntime(1358): at android.os.Looper.loop(Looper.java:123) ERROR/AndroidRuntime(1358): at android.app.ActivityThread.main(ActivityThread.java:3948) ERROR/AndroidRuntime(1358): at java.lang.reflect.Method.invokeNative(Native Method) ERROR/AndroidRuntime(1358): at java.lang.reflect.Method.invoke(Method.java:521) ERROR/AndroidRuntime(1358): at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:782) ERROR/AndroidRuntime(1358): at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:540) ERROR/AndroidRuntime(1358): at dalvik.system.NativeStart.main(Native Method)

    Read the article

  • WPF Sizing of Panels

    - by Mystagogue
    I'm looking for an article or overview of WPF Panel types that explains the sizing characteristics of each. For example, here are the panel types: http://msdn.microsoft.com/en-us/library/ms754152.aspx#Panels_derived_elements I've learned (by experiment) that UniformGrid can be given a fixed height, or it can be "auto", where it expands to fit available space. That is great, but what I wanted was for UniformGrid to shrink to fit its internal content (particularly content that is provided dynamically, at run-time). I don't think it has that ability. So I'd like to know either what other Panel I could use for that purpose, or what Panel I should nest the UniformGrid inside of. But I don't just want an answer to that specific question. I want the sizing dynamics and capabilities of all the Panel types, in summary form, so I can make all these choices as needed. Online I find articles that cover only half the Panel types, and don't give as much information about sizing as I'm describing. Anyone know the link (or book) that has the info I'm seeking? P.S. Since I want the UniformGrid to shrink to the dynamic content I'm providing, I could just keep track of the total height of controls placed within, and then set the height of the UniformGrid. But it would be nice if WPF took care of this for me.

    Read the article

  • Can someone look over the curriculum for this major & give me your thoughts? Computing & Security Technology

    - by scottsharpejr
    My goal is to become a good web developer. I'm interested in learning how to build complex websites as well as how to write web applications. I want skills that will enable me to write apps for <--insert hottest web trend here-- (Facebook & iPhone apps, for example). This is one of my goals as far as tech is concerned. I'd also like to have a broad knowledge of different areas of IT. I'm looking into majoring in "Computing & Security Technology". The program is offered by Drexel in conjunction with my CC. It's a 4-year degree. Can someone take a look at the PDF below? It outlines every course I must take. http://www.drexelatbcc.org/academics/PDF/CST_CT.pdf For degree requirements with links to course descriptions, see drexel.edu/catalog/degree/ct.htm With electives I can go up to Web Development 4. Based on my goals of web development & wanting a well-rounded education in information technology, what do you think of the curriculum? How will I fare entering the job market with this degree? My goals here are a little different. I'd like to work for 2 to 3 companies over the course of 6-7 years, working with and learning different areas of IT. I'd like to stay with a company an average of 2-3 years before moving on. My end goal is to go into business for myself (IT-related). I appreciate any and all advice the community here can give me! :) Could someone also explain to me their interpretation of this major? Thanks! P.S. I already know XHTML & CSS. I am just now starting to experiment with PHP.

    Read the article

  • Custom Cocoa Framework and a problem using it

    - by happyCoding25
    Hello, I made a custom Cocoa framework just to experiment and find the best way to make one, but ran into a problem using it. The framework project builds and compiles just fine, but when I use it in an Xcode project I get the error 'LogTest' undeclared. The name of the framework is LogTest. Here's the code for my app that uses the framework: AppDelegate.h: #import <Cocoa/Cocoa.h> #import <LogTest/LogTest.h> @interface TestAppDelegate : NSObject <NSApplicationDelegate> { NSWindow *window; } @property (assign) IBOutlet NSWindow *window; @end AppDelegate.m: #import "TestAppDelegate.h" @implementation TestAppDelegate @synthesize window; - (void)awakeFromNib { [LogTest logStart:@"testing 123":@"testing 1234"]; //This is the line where the error occurs } @end Framework code: LogTest.h: #import <Cocoa/Cocoa.h> #import "Method.h" @protocol LogTest //Not sure if this is needed; I just wanted a blank header @end Method.h: #import <Cocoa/Cocoa.h> @interface Method : NSObject { } + (void)logStart:(NSString *)test:(NSString *)test2; @end Method.m: #import "Method.h" @implementation Method + (void)logStart:(NSString *)test:(NSString *)test2 { NSLog(test); NSLog(test2); } @end If anyone knows why I am getting this error, please reply. Thanks for any help.

    Read the article

  • GD PHP Base64 Picture (png) error

    - by hogofwar
    This is part of my code: $con = mysql_connect("localhost","username","passs"); if (!$con) { die('Could not connect: ' . mysql_error()); } mysql_select_db("database", $con); if(mysql_num_rows(mysql_query("SELECT name FROM xbox_user WHERE name = '$user'"))){ // Code inside if block if userid is already there $result = mysql_query("SELECT name FROM xbox_user WHERE name = '$user'"); while($row = mysql_fetch_array($result)) { if ($row['date'] > $row['date']+100){ $src = imagecreatefrompng($result['XboxInfo']['TileUrl']); $base64= base64_encode(file_get_contents($result['XboxInfo']['TileUrl'])); $date = date("Ymd"); mysql_query("UPDATE xbox_user SET date = '$date' SET avatar = '$base64' WHERE name = '$user'"); }else{ $encode = $row['avatar']; //echo $encode; $rand = rand(1, 1337); file_put_contents('/tmp/'.$rand.'.png', base64_decode($row['avatar'])); //ERROR LINE $src = imagecreatefrompng('/tmp/'.$rand.'.png'); unlink('/tmp/'.$rand.'.png'); } } }else{ $src = imagecreatefrompng($result['XboxInfo']['TileUrl']); $base64= base64_encode(file_get_contents($result['XboxInfo']['TileUrl'])); $date = date("Ymd"); mysql_query("INSERT INTO xbox_user (name, avatar, date) VALUES ('$user', '$base64', '$date')"); } It comes up with multiple errors but I feel this one should be addressed first as the other could just be caused by the first error: Warning: imagecreatefrompng() [function.imagecreatefrompng]: '/tmp/628.png' is not a valid PNG file in /home/nah/public_html/experiment/xbox/draw3.php on line 60 It also does create an entry in my mysql DB

    Read the article

  • advanced Visual Studio kung-fu test -- Calling functions from the Immediate Window during debugging

    - by kizzx2
    I see some related questions have been asked, but they're either too advanced for me to grasp or lacking a step-by-step guide from start to finish (most of them end up being insider talk of their own experiment results). OK, here it is, given this simple program: #include <stdio.h> #include <string.h> int main() { FILE * f; char buffer[100]; memset(buffer, 0, 100); fun(); f = fopen("main.cpp", "r"); fread(buffer, 1, 99, f); printf(buffer); fclose(f); return 0; } What it does is basically print itself (assume the file name is main.cpp). Question How can I have it print another file, say foobar.txt, without modifying the source code? It has something to do with running it through VS's debugger, stepping through the functions and hijacking the FILE pointer right before fread() is called. No need to worry about leaking resources by calling fclose(). I tried the simple f = fopen("foobar.txt", "r") which gave CXX0017: Error: symbol "fopen" not found Any ideas? Edit: I found out the solution accidentally on the Debugging Mozilla on Windows FAQ. The correct command to put into the Immediate Window is f = {,,MSVCR100D}fopen("foo.txt", "r") However, it doesn't really answer this question: I still don't understand what is going on here. How do I systematically find out the {,,MSVCR100D} part for any given method? I know the MSVCR version changes from system to system. How can I find that out? Could anyone explain the curly brace syntax - especially, what are those two commas doing there? Are there more hidden gems using this syntax?

    Read the article

  • Why is execution-time method resolution faster than compile-time resolution?

    - by Felix
    At school, we learned about virtual functions in C++, and how they are resolved (or found, or matched, I don't know what the terminology is -- we're not studying in English) at execution time instead of compile time. The teacher also told us that compile-time resolution is much faster than execution-time (and it would make sense for it to be so). However, a quick experiment would suggest otherwise. I've built this small program: #include <iostream> #include <limits.h> using namespace std; class A { public: void f() { // do nothing } }; class B: public A { public: void f() { // do nothing } }; int main() { unsigned int i; A *a = new B; for (i=0; i < UINT_MAX; i++) a->f(); return 0; } Where I made A::f() once normal, once virtual. Here are my results: [felix@the-machine C]$ time ./normal real 0m25.834s user 0m25.742s sys 0m0.000s [felix@the-machine C]$ time ./virtual real 0m24.630s user 0m24.472s sys 0m0.003s [felix@the-machine C]$ time ./normal real 0m25.860s user 0m25.735s sys 0m0.007s [felix@the-machine C]$ time ./virtual real 0m24.514s user 0m24.475s sys 0m0.000s [felix@the-machine C]$ time ./normal real 0m26.022s user 0m25.795s sys 0m0.013s [felix@the-machine C]$ time ./virtual real 0m24.503s user 0m24.468s sys 0m0.000s There seems to be a steady ~1 second difference in favor of the virtual version. Why is this? Relevant or not: dual-core Pentium @ 2.80GHz, no extra applications running between the two tests. Archlinux with gcc 4.5.0. Compiling normally, like: $ g++ test.cpp -o normal Also, -Wall doesn't spit out any warnings, either.

    Read the article

  • Compressing xls content with apache deflate module

    - by Clinton Bosch
    I am trying to compress an Excel spreadsheet being sent from my application using the Apache deflate module. I have added the following line to my sites-enabled file: AddOutputFilterByType DEFLATE text/html text/plain text/xml text/css text/javascript application/excel But it seems to make the response data bigger??? Using Firebug, without the module I downloaded the xls spreadsheet from the application and it downloaded 100 KB of data; the file size once on the filesystem was also 100 KB, as expected. Once I enabled the deflate module as described above and repeated the process, the amount of data downloaded was 295 KB?? but the file was still only 100 KB once saved on the filesystem. As an experiment I manually gzipped the saved xls file and it compressed to 20 KB. What am I doing wrong here? Using deflate (Firebug output): 200 OK xxxxxxx.co.za 293 KB 4.43s Response Headers Date Tue, 03 Nov 2009 13:01:43 GMT Server Apache/2.2.4 (Ubuntu) mod_jk/1.2.23 PHP/5.2.3-1ubuntu6.4 mod_ssl/2.2.4 OpenSSL/0.9.8e Content-Disposition attachment; filename="Employee List.xls" Vary Accept-Encoding Content-Encoding gzip Content-Type application/excel Without deflate (Firebug output): 200 OK xxxxxxxx.co.za 100 KB 3.46s Response Headers Date Tue, 03 Nov 2009 13:06:00 GMT Server Apache/2.2.4 (Ubuntu) mod_jk/1.2.23 PHP/5.2.3-1ubuntu6.4 mod_ssl/2.2.4 OpenSSL/0.9.8e Content-Disposition attachment; filename="Employee List.xls" Content-Length 102912 Content-Type application/excel

    Read the article

  • Email Collector / Implementation

    - by Tian
    I am implementing a simple RoR webpage that collects emails from visitors and stores them as objects. I'm using it as a mini-project to try RoR and BDD. I can think of 3 features for Cucumber: 1. User submits a valid email address 2. User submits an existing email address 3. User submits an invalid email My question is, for scenarios 2 and 3, is it better to handle this via the controller, or as methods in a class? Perhaps something that throws errors if an instance is instantiated in scenario 2 or 3? The implementation is below; I'd love to hear some code reviews in addition to answers to the questions above. Thanks! MODEL: class Contact < ActiveRecord::Base attr_accessor :email end VIEW: <h1>Welcome To My Experiment</h1> <p>Find me in app/views/welcome/index.html.erb</p> <%= flash[:notice] %> <% form_for @contact, :url => {:action => "index"} do |f| %> <%= f.label :email %><br /> <%= f.text_field :email %> <%= submit_tag 'Submit' %> <% end %> CONTROLLER: class WelcomeController < ApplicationController def index @contact = Contact.new unless params[:contact].nil? @contact = Contact.create!(params[:contact]) flash[:notice] = "Thank you for your interest, please check your mailbox for confirmation" end end end

    Read the article

  • R: How to remove outliers from a smoother in ggplot2?

    - by John
    I have the following data set that I am trying to plot with ggplot2, it is a time series of three experiments A1, B1 and C1 and each experiment had three replicates. I am trying to add a stat which detects and removes outliers before returning a smoother (mean and variance?). I have written my own outlier function (not shown) but I expect there is already a function to do this, I just have not found it. I've looked at stat_sum_df("median_hilow", geom = "smooth") from some examples in the ggplot2 book, but I didn't understand the help doc from Hmisc to see if it removes outliers or not. Is there a function to remove outliers like this in ggplot, or where would I amend my code below to add my own function? library (ggplot2) data = data.frame (day = c(1,3,5,7,1,3,5,7,1,3,5,7,1,3,5,7,1,3,5,7,1,3,5,7,1,3,5,7,1,3,5,7,1,3,5,7), od = c( 0.1,1.0,0.5,0.7 ,0.13,0.33,0.54,0.76 ,0.1,0.35,0.54,0.73 ,1.3,1.5,1.75,1.7 ,1.3,1.3,1.0,1.6 ,1.7,1.6,1.75,1.7 ,2.1,2.3,2.5,2.7 ,2.5,2.6,2.6,2.8 ,2.3,2.5,2.8,3.8), series_id = c( "A1", "A1", "A1","A1", "A1", "A1", "A1","A1", "A1", "A1", "A1","A1", "B1", "B1","B1", "B1", "B1", "B1","B1", "B1", "B1", "B1","B1", "B1", "C1","C1", "C1", "C1", "C1","C1", "C1", "C1", "C1","C1", "C1", "C1"), replicate = c( "A1.1","A1.1","A1.1","A1.1", "A1.2","A1.2","A1.2","A1.2", "A1.3","A1.3","A1.3","A1.3", "B1.1","B1.1","B1.1","B1.1", "B1.2","B1.2","B1.2","B1.2", "B1.3","B1.3","B1.3","B1.3", "C1.1","C1.1","C1.1","C1.1", "C1.2","C1.2","C1.2","C1.2", "C1.3","C1.3","C1.3","C1.3")) > data day od series_id replicate 1 1 0.10 A1 A1.1 2 3 1.00 A1 A1.1 3 5 0.50 A1 A1.1 4 7 0.70 A1 A1.1 5 1 0.13 A1 A1.2 6 3 0.33 A1 A1.2 7 5 0.54 A1 A1.2 8 7 0.76 A1 A1.2 9 1 0.10 A1 A1.3 10 3 0.35 A1 A1.3 11 5 0.54 A1 A1.3 12 7 0.73 A1 A1.3 13 1 1.30 B1 B1.1 This is what I have so far and is working nicely, but outliers are not removed: r <- ggplot(data = data, aes(x = day, y = od)) r + geom_point(aes(group = replicate, color = series_id)) + # add points geom_line(aes(group = replicate, color = series_id)) + # add lines geom_smooth(aes(group = series_id)) # add smoother, average of each replicate

    Read the article

  • How can I apply a PSSM efficiently?

    - by flies
    I am fitting position-specific scoring matrices (PSSMs, aka position-specific weight matrices). The fit I'm using is like simulated annealing, where I perturb the PSSM, compare the prediction to experiment and accept the change if it improves agreement. This means I apply the PSSM millions of times per fit; performance is critical. In my particular problem, I'm applying a PSSM for an object of length L (~8 bp) at every position of a DNA sequence of length M (~30 bp) (so there are M-L+1 valid positions). I need an efficient algorithm to apply a PSSM. Can anyone help improve performance? My best idea is to convert the DNA into some kind of a matrix so that applying the PSSM is matrix multiplication. There are efficient linear algebra libraries out there (e.g. BLAS), but I'm not sure how best to turn an M-length DNA sequence into an M x 4 matrix and then apply the PSSM at each position. The solution needs to work for higher-order/dinucleotide terms in the PSSM - presumably this means representing the sequence matrix separately for mononucleotides and for dinucleotides. My current solution iterates over each position m, then over each letter in the word from m to m+L-1, adding the corresponding term in the matrix. I'm storing the matrix as a multi-dimensional STL vector, and profiling has revealed that a lot of the computation time is just accessing the elements of the PSSM (with similar performance bottlenecks accessing the DNA sequence). If someone has an idea besides matrix multiplication, I'm all ears.
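    As a language-neutral illustration of the access pattern (the question's code is C++; this is only a sketch in Python/NumPy, with the base encoding and motif length invented), the sequence can be encoded as integer codes once, and every window scored by indexing straight into the PSSM rather than looping over letters:

    ```python
    import numpy as np

    CODE = {"A": 0, "C": 1, "G": 2, "T": 3}

    def score_all_positions(pssm, seq):
        """pssm: (L, 4) array of per-position scores; seq: DNA string of length M.
        Returns the M - L + 1 window scores."""
        L = pssm.shape[0]
        codes = np.fromiter((CODE[c] for c in seq), dtype=np.intp, count=len(seq))
        # windows[i, j] holds the code of the base at sequence position i + j
        windows = np.lib.stride_tricks.sliding_window_view(codes, L)
        # Gather pssm[j, windows[i, j]] for every (i, j) and sum across the motif.
        return pssm[np.arange(L), windows].sum(axis=1)

    rng = np.random.default_rng(0)
    pssm = rng.normal(size=(8, 4))  # an 8 bp motif with made-up scores
    scores = score_all_positions(pssm, "ACGTACGTACGTACGTACGTACGTACGTAA")  # 30 bp
    ```

    Dinucleotide terms follow the same pattern with a 16-letter code (4 * code[i] + code[i+1]) and an (L-1) x 16 score table.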

    Read the article

  • Ruby and duck typing: design by contract impossible?

    - by davetron5000
    A method signature in Java: public List<String> getFilesIn(List<File> directories) and a similar one in Ruby: def get_files_in(directories) In the case of Java, the type system gives me information about what the method expects and delivers. In Ruby's case, I have no clue what I'm supposed to pass in, or what I'll expect to receive. In Java, the object must formally implement the interface. In Ruby, the object being passed in must respond to whatever methods are called in the method defined here. This seems highly problematic: Even with 100% accurate, up-to-date documentation, the Ruby code has to essentially expose its implementation, breaking encapsulation. "OO purity" aside, this would seem to be a maintenance nightmare. The Ruby code gives me no clue what's being returned; I would have to essentially experiment, or read the code, to find out what methods the returned object would respond to. Not looking to debate static typing vs duck typing, but looking to understand how you maintain a production system where you have almost no ability to design by contract. Update No one has really addressed the exposure of a method's internal implementation via documentation that this approach requires. Since there are no interfaces, if I'm not expecting a particular type, don't I have to itemize every method I might call so that the caller knows what can be passed in? Or is this just an edge case that doesn't really come up?

    Read the article

  • pass object from JS to PHP and back

    - by Radu
    This is something that I think can't be done, or at least can't be done easily. Think of this: you have a button inside a div in HTML, and when you click it, you call a PHP function via AJAX. I would like to send the element that started the click event (or any element as a parameter) to PHP and BACK to JS again, in a way like serialize() in PHP, to be able to restore the element in JS. Let me give you a simple example: PHP: function ajaxCall(element){ return element; } JS: callbackFunction(el){ el.color='red'; } HTML: <div id="id_div"> <input type="button" value="click Me" onClick="ajaxCall(this, callbackFunction);" /> </div> So I am thinking of 3 methods. Method 1: I can give each element in the page an ID, so the call to Ajax would look like this: ajaxCall(this.id, callbackFunction); and the callback function would be: document.getElementById(el).color='red'; This method I think is hard, because in a big page it is hard to keep track of all IDs. Method 2: I think it could be done using XPath. If I can get the exact path of an element, then in the callback function I can evaluate that path to reach the element. This method needs some googling; it is just an idea. Method 3: Modify my AJAX functions so they retain the element that started the event, and pass it to the callback function as an argument when something returns from PHP, so my AJAX would look like this: eval(callbackFunction(argumentsFromPhp, element)); and the callback function would be: callbackFunction(someArgsFromPhp, el){ el.color='red'; // parse someArgsFromPhp } I think the third option is my choice to start this experiment. Does any of you have a better idea of how I can accomplish this? Thank you.

    Read the article

  • Using a UITableViewController with a small-sized table?

    - by rpj
    When using a UITableViewController, the initWithStyle: method automatically creates the underlying UITableView with - according to the documentation - "the correct dimensions". My problem is that these "correct dimensions" seem 320x460 (the iPhone's screen size), but I'm pushing this TableView/Controller pair into a UINavigationController which is itself contained in a UIView, which itself is about half the height of the screen. No frame or bounds wrangling I can come up with seems to correctly reset the table's size, and as such it's "too long", meaning there are a collection of rows that are pushed off the bottom of the screen and are not visible nor reachable by scrolling. So my question comes down to: what is the proper way to tell a UITableViewController to resize its component UITableView to a specified rectangle? Thanks! Update I've tried all the techniques suggested here to no avail, but I did find one interesting thing: if I eschew the UINavigationController altogether (which I'm not yet willing to do for production, but as an experiment), and add the table view as a direct subview of the enclosing view I mentioned, the frame size given is respected. The very moment I re-introduce the UINavigationController into the mix, no matter if it is added as a subview before or after the table view, and no matter if alloc/init it before or after the table view is added as a subview, the result is the same as it was before. I'm beginning to suspect UINavigationController isn't much of a team player... Update 2 The suggestion to check frame size after the table view on screen was a good one: turns out that the navigation controller is in fact resizing it some time in between load and display. My solution, hacky at best, has been to cache the frame given on load and to reset it if changed at the beginning of tableView:cellForRowAtIndexPath:. Why there you ask? Because it's the one place I found that worked, that's why! I don't consider this a solution as it's obviously improper, but for the benefit of anyone else reading, it does seem to work.

    Read the article

  • Question on boost array initializer

    - by ArunSaha
    I am trying to understand the boost array. The code can be read easily from the author's site. In the design rationale, the author (Nicolai M. Josuttis) mentions that the following two types of initialization are possible: boost::array<int,4> a = { { 1, 2, 3 } }; // Line 1 boost::array<int,4> a = { 1, 2, 3 }; // Line 2 In my experiment with g++ (version 4.1.2), Line 1 is working but Line 2 is not. (Line 2 yields the following: warning: missing braces around initializer for 'int [4]' warning: missing initializer for member 'boost::array<int, 4ul>::elems') Nevertheless, my main question is: how is Line 1 working? I tried to write a class similar to array.hpp and use a statement like Line 1, but that did not work :-(. Can somebody explain this to me? Is there some boost-specific thing happening in Line 1 that I need to be aware of? Thanks in advance. Regards,

    Read the article

  • When I run rake db:create, error: rake aborted! uninitialized constant Cucumber

    - by Big Bang Theory
    Hi, I am trying to experiment on an open-source application. When I run $ rake db:create the following is the stack trace: rake aborted! uninitialized constant Cucumber /usr/lib/ruby/gems/1.8/gems/activesupport-2.3.2/lib/active_support/dependencies.rb:443:in `load_missing_constant' /usr/lib/ruby/gems/1.8/gems/activesupport-2.3.2/lib/active_support/dependencies.rb:80:in `const_missing' /usr/lib/ruby/gems/1.8/gems/activesupport-2.3.2/lib/active_support/dependencies.rb:92:in `const_missing' /home/BigBangTheory/Desktop/spot-us/lib/tasks/cucumber.rake:13 /usr/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:1882:in `in_namespace' /usr/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:910:in `namespace' /home/BigBangTheory/Desktop/spot-us/lib/tasks/cucumber.rake:12 /usr/lib/ruby/gems/1.8/gems/activesupport-2.3.2/lib/active_support/dependencies.rb:145:in `load_without_new_constant_marking' /usr/lib/ruby/gems/1.8/gems/activesupport-2.3.2/lib/active_support/dependencies.rb:145:in `load' /usr/lib/ruby/gems/1.8/gems/activesupport-2.3.2/lib/active_support/dependencies.rb:521:in `new_constants_in' /usr/lib/ruby/gems/1.8/gems/activesupport-2.3.2/lib/active_support/dependencies.rb:145:in `load' /usr/lib/ruby/gems/1.8/gems/rails-2.3.2/lib/tasks/rails.rb:8 /usr/lib/ruby/gems/1.8/gems/rails-2.3.2/lib/tasks/rails.rb:8:in `each' /usr/lib/ruby/gems/1.8/gems/rails-2.3.2/lib/tasks/rails.rb:8 /usr/local/lib/site_ruby/1.8/rubygems/custom_require.rb:31:in `gem_original_require' /usr/local/lib/site_ruby/1.8/rubygems/custom_require.rb:31:in `require' /home/BigBangTheory/Desktop/spot-us/Rakefile:9 /usr/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:2383:in `load' /usr/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:2383:in `raw_load_rakefile' /usr/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:2017:in `load_rakefile' /usr/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:2068:in `standard_exception_handling' /usr/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:2016:in `load_rakefile' /usr/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:2000:in `run' /usr/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:2068:in `standard_exception_handling' /usr/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:1998:in `run' /usr/lib/ruby/gems/1.8/gems/rake-0.8.7/bin/rake:31 /usr/bin/rake:19:in `load' /usr/bin/rake:19 Any help?

    Read the article
