Search Results

Search found 14080 results on 564 pages for 'known types'.


  • Schliemann's method of programming language learning

    - by DVK
    Background: 19th-century German archeologist Heinrich Schliemann was of course famous for his successful quest to find and excavate the city of Troy (an actual archeological site for the Troy of Homer's Iliad). However, he is just as famous for being an astonishing learner of languages - within the space of two years, he taught himself fluent Dutch, English, French, Spanish, Italian and Portuguese, and later went on to learn seven more, including both modern and ancient Greek. One of the methods he famously used was comparison of a known text: take a book in a language you are fluent in, take a good translation of the same book in the language you wish to learn, and go over them in parallel. (Various sources cite the book used by Schliemann as the Bible, or, as the link above states, a novel.) Now, for the actual question. Has anyone used (or heard of) an equivalent of Schliemann's method for learning a new programming language? E.g. instead of basing the learning on references and tutorials, take a somewhat comprehensive set of programs known to have high-quality code in both languages implementing similar/identical algorithms and learn by comparing them? I'm curious about personal experiences of applying such an approach, references to something published, or the existence of codebases which could be used for it. What got me thinking about the idea was Project Euler and some code snippets I saw on SO, in C++, Perl and Lisp.

    Read the article

  • Automatically calling OnDetaching() for Silverlight Behaviors

    - by Dan Auclair
    I am using several Blend behaviors and triggers on a silverlight control. I am wondering if there is any mechanism for automatically detaching or ensuring that OnDetaching() is called for a behavior or trigger when the control is no longer being used (i.e. removed from the visual tree). My problem is that there is a managed memory leak with the control because of one of the behaviors. The behavior subscribes to an event on some long-lived object in the OnAttached() override and should be unsubscribing from that event in the OnDetaching() override so that it can become a candidate for garbage collection. However, OnDetaching() never seems to be getting called when I remove the control from the visual tree... the only way I can get it to happen is by explicitly detaching the behavior BEFORE removing the control, and then it is properly garbage collected. Right now my only solution is to create a public method in the code-behind for the control that can go through and detach any known behaviors that would cause garbage collection problems. It would be up to the client code to know to call this before removing the control from the panel. I don't really like this approach, so I am looking for some automatic way of doing this that I am overlooking, or a better suggestion. public void DetachBehaviors() { foreach (var behavior in Interaction.GetBehaviors(this.LayoutRoot)) { behavior.Detach(); } //... //continue detaching all known problematic behaviors.... }
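
    One workaround sketch (not a built-in mechanism, and assuming the Silverlight version in use exposes the FrameworkElement.Unloaded event): have the behavior detach itself when its associated element leaves the visual tree, so that OnDetaching() runs and the subscription on the long-lived object is released.

        // Hypothetical self-detaching behavior; the long-lived object wiring is elided.
        using System.Windows;
        using System.Windows.Interactivity;

        public class SelfDetachingBehavior : Behavior<FrameworkElement>
        {
            protected override void OnAttached()
            {
                base.OnAttached();
                AssociatedObject.Unloaded += OnUnloaded;
                // subscribe to the long-lived object's event here ...
            }

            protected override void OnDetaching()
            {
                AssociatedObject.Unloaded -= OnUnloaded;
                // unsubscribe from the long-lived object's event here ...
                base.OnDetaching();
            }

            private void OnUnloaded(object sender, RoutedEventArgs e)
            {
                // removing the control from the visual tree now triggers the cleanup
                Detach();
            }
        }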

    Read the article

  • Prevent full table scan for query with multiple where clauses

    - by Dave Jarvis
    A while ago I posted a message about optimizing a query in MySQL. I have since ported the data and query to PostgreSQL, but now PostgreSQL has the same problem. The solution in MySQL was to use STRAIGHT_JOIN to keep the optimizer from reordering the joins; PostgreSQL offers no such option. Here is the explain: Here is the query: SELECT avg(d.amount) AS amount, y.year FROM station s, station_district sd, year_ref y, month_ref m, daily d LEFT JOIN city c ON c.id = 10663 WHERE -- Find all the stations within a specific unit radius ... -- 6371.009 * SQRT( POW(RADIANS(c.latitude_decimal - s.latitude_decimal), 2) + (COS(RADIANS(c.latitude_decimal + s.latitude_decimal) / 2) * POW(RADIANS(c.longitude_decimal - s.longitude_decimal), 2)) ) <= 50 AND -- Ignore stations outside the given elevations -- s.elevation BETWEEN 0 AND 2000 AND sd.id = s.station_district_id AND -- Gather all known years for that station ... -- y.station_district_id = sd.id AND -- The data before 1900 is shaky; insufficient after 2009. -- y.year BETWEEN 1980 AND 2000 AND -- Filtered by all known months ... -- m.year_ref_id = y.id AND m.month = 12 AND -- Whittled down by category ... -- m.category_id = '001' AND -- Into the valid daily climate data. -- m.id = d.month_ref_id AND d.daily_flag_id <> 'M' GROUP BY y.year It appears as though PostgreSQL is looking at the DAILY table first, which is simply not the right way to go about this query as that table has nearly 300 million rows. How do I force PostgreSQL to start at the CITY table? Thank you!

    Read the article

  • Why do properties require explicit typing during compilation?

    - by ctpenrose
    Compilation using property syntax requires the type of the receiver to be known at compile time. I may not understand something, but this seems like a broken or incomplete compiler implementation considering that Objective-C is a dynamic language. The property "comment" is defined with: @property (nonatomic, retain) NSString *comment; and synthesized with: @synthesize comment; "document" is an instance of one of several classes which conform to: @protocol DocumentComment <NSObject> @property (nonatomic, retain) NSString *comment; @end and is simply declared as: id document; When using the following property syntax: stringObject = document.comment; the following error is generated by gcc: error: request for member 'comment' in something not a structure or union However, the following equivalent receiver-method syntax, compiles without warning or error and works fine, as expected, at run-time: stringObject = [document comment]; I don't understand why properties require the type of the receiver to be known at compile time. Is there something I am missing? I simply use the latter syntax to avoid the error in situations where the receiving object has a dynamic type. Properties seem half-baked.

    Read the article

  • resolving overloads in boost.python

    - by swarfrat
    I have a C++ class like this: class ConnectionBase { public: ConnectionBase(); template <class T> void Publish(const T&); private: virtual void OnEvent(const Overload_a&) {} virtual void OnEvent(const Overload_b&) {} }; My templates & overloads are a known fixed set of types at compile time. The application code derives from ConnectionBase and overrides OnEvent for the events it cares about. I can do this because the set of types is known. OnEvent is private because the user never calls it; the class creates a thread that calls it as a callback. The C++ code works. I have wrapped this in boost.python, and I can import it and publish from Python. I want to create the equivalent of the following in Python: class ConnectionDerived : public ConnectionBase { public: ConnectionDerived(); private: virtual void OnEvent(const Overload_b&) { // application code } }; But ... since Python isn't statically typed, and all the boost.python examples I've seen dealing with internals are on the C++ side, I'm a little puzzled as to how to do this. How do I override specific overloads?

    Read the article

  • Encoding / Error Correction Challenge

    - by emi1faber
    Is it mathematically feasible to encode an initial 4-byte message into 8 bytes and, if one of the 8 bytes is completely dropped and another is wrong, to reconstruct the initial 4-byte message? There would be no way to retransmit, nor would the location of the dropped byte be known. If one uses Reed-Solomon error correction with 4 "parity" bytes tacked on to the end of the 4 "data" bytes, such as DDDDPPPP, and you end up with DDDEPPP (where E is an error) and a parity byte has been dropped, I don't believe there's a way to reconstruct the initial message (although correct me if I am wrong)... What about multiplying (or performing another mathematical operation on) the initial 4-byte message by a constant, then utilizing properties of an inverse mathematical operation to determine what byte was dropped? Or, impose some constraints on the structure of the message so every other byte needs to be odd and the others need to be even. Alternatively, instead of bytes, it could also be 4 decimal digits encoded in some fashion into 8 decimal digits where errors could be detected & corrected under the same circumstances mentioned above - no retransmission and the location of the dropped byte is not known. I'm looking for any crazy ideas anyone might have... Any ideas out there?

    Read the article

  • Service reference addition issue in visual studio 2010

    - by user293072
    I am currently working on an application that allows reverse geocoding using silverlight + bing maps. The thing is that I want to add a reference to the reverse geocoding service provided in msdn (http://msdn.microsoft.com/en-us/library/cc879136.aspx), i.e. http://dev.virtualearth.net/webservices/v1/geocodeservice/geocodeservice.svc?wsdl, but when I try to get a reference in vs2010, I get the following error: The document at the url http://dev.virtualearth.net/webservices/v1/metadata/geocodeservice/geocodeservice.wsdl was not recognized as a known document type. The error message from each known type may help you fix the problem: Report from 'XML Schema' is ''', hexadecimal value 0x1F, is an invalid character. Line 1, position 1.'. Report from 'DISCO Document' is ''', hexadecimal value 0x1F, is an invalid character. Line 1, position 1.'. Report from 'WSDL Document' is 'There is an error in XML document (1, 1).'. '', hexadecimal value 0x1F, is an invalid character. Line 1, position 1. Metadata contains a reference that cannot be resolved: 'http://dev.virtualearth.net/webservices/v1/geocodeservice/geocodeservice.svc?wsdl'. Content Type application/soap+xml; charset=utf-8 was not supported by service http://dev.virtualearth.net/webservices/v1/geocodeservice/geocodeservice.svc?wsdl. The client and service bindings may be mismatched. The remote server returned an error: (415) Unsupported Media Type. If the service is defined in the current solution, try building the solution and adding the service reference again. It is good to mention that I can access the service URL from the browser (with a no style information warning). I am aware that there are other reverse geocoding services out there, but I am somewhat forced by certain circumstances to use only Microsoft-related components/services. Please help :)

    Read the article

  • rails server fails to start with mysql2 using rvm & ruby 1.9.2-p0 on OSX 10.6.5

    - by Scott
    I'm getting the following error when I start rails server: $ rails server /Users/ssmith/.rvm/gems/ruby-1.9.2-p0/gems/mysql2-0.2.6/lib/mysql2.rb:7:in `require': dlopen(/Users/ssmith/.rvm/gems/ruby-1.9.2-p0/gems/mysql2-0.2.6/lib/mysql2/mysql2.bundle, 9): Library not loaded: libmysqlclient.16.dylib (LoadError) Referenced from: /Users/ssmith/.rvm/gems/ruby-1.9.2-p0/gems/mysql2- 0.2.6/lib/mysql2/mysql2.bundle Reason: image not found - /Users/ssmith/.rvm/gems/ruby-1.9.2-p0/gems/mysql2- 0.2.6/lib/mysql2/mysql2.bundle I've installed mysql2 with the following command after the rvm use ruby-1.9.2-p0 command: $ gem install mysql2 -- --with-mysql-dir=/usr/local/mysql --with-mysql-config=/usr/local/mysql/bin/mysql_config Building native extensions. This could take a while... Successfully installed mysql2-0.2.6 1 gem installed Installing ri documentation for mysql2-0.2.6... Enclosing class/module 'mMysql2' for class Client not known Installing RDoc documentation for mysql2-0.2.6... Enclosing class/module 'mMysql2' for class Client not known I have mysql2 in my Gemfile as well as in the database.yml file and bundle install completes fine $ bundle show mysql2 /Users/ssmith/.rvm/gems/ruby-1.9.2-p0/gems/mysql2-0.2.6 I understand the rails server error is due to it not knowing the mysql_config location on OSX, however on gem install I specified the correct location. Yet RVM's gem is not respecting that mysql_config location it seems. Anyone have a solution to this? Thanks in advance. Scott

    Read the article

  • even PHP has 'bugs' with IE

    - by silversky
    It's not a real bug BUT for sure it is not what you would expect. I have this sample code to upload images: <?php if($type=="image/jpg" || $type=="image/jpeg" || $type=="image/pjpeg" || $type=="image/tiff" || $type=="image/gif" || $type=="image/png") { /* do the upload */ } else { echo "Incorrect format ...."; } ?> The problem is that if I modify the extension of an image, let's say to .jpgq or even .jpg%, and I try to upload it, FF and Chrome will say that the file's type is "application/octet-stream" and normally the condition will be false, BUT since IE is 'smarter' than other browsers it will say that the file is "image/pjpeg", the condition will be true, and the file will be uploaded - and of course later no browser will be able to read / view the image. It is not a bug because msdn.microsoft.com says: "If the "suggested" (server-provided) MIME type is unknown (not known and not ambiguous), FindMimeFromData immediately returns this MIME type" and "If the server-provided MIME type is either known or ambiguous, the buffer is scanned in an attempt to verify or obtain a MIME type from the actual content." plus other 'innovative solutions from Microsoft'. So my questions are: Why is IE so 'smart' that when I upload the file to the server it knows the real MIME type, BUT then fails to read it from the server? How can I work around this issue (if the file doesn't have the right extension the condition has to be false)? Is it wise to check the extension (and not the MIME type)? Are any of the above extensions not recommended? Should I add others?

    Read the article

  • deploy a sinatra app with passenger gives only 404, page not founds. Yet a simple rack app works.

    - by berkes
    I have correctly (or probably not) installed passenger on apache 2. Rack works, but Sinatra keeps giving 404s. Here is what works: config.ru: app = proc do |env| return [200, { "Content-Type" => "text/html" }, "hello <b>world</b>"] end run app Here is what works too: Running the app.rb (see below) with ruby app.rb and then looking at localhost:4567/about and /, restarting the app, gives me a correct hello world. w00t. But then there is Sinatra entering the building: config.ru require 'rubygems' require 'sinatra' root_dir = File.dirname(__FILE__) set :environment, ENV['RACK_ENV'].to_sym set :root, root_dir set :app_file, File.join(root_dir, 'app.rb') disable :run run Sinatra::Application and an app.rb require 'rubygems' require 'sinatra' get '/' do "Hallo wereld!" end get '/about' do "Hello world, it's #{Time.now} at the server!" end This keeps giving 404s. /var/logs/apache2/error.log lists these correctly as "404" with something that worries me: 83.XXXXXXXXX - - [30/May/2010 16:06:52] "GET /about " 404 18 0.0007 83.XXXXXXXXX - - [30/May/2010 16:06:56] "GET / " 404 18 0.0007 The thing that worries me is the space after the / and the /about. Would Apache or Sinatra go looking for /[space], like /%20? If anyone knows what this problem relates to, maybe a known bug (that I could not find) or a known gotcha? Maybe I am just being stupid and getting it all wrong? Otherwise, any hints on where to get, read or log more developer data on a running Rack, Sinatra or Passenger app would be helpful too: to see what Sinatra is looking for, for example. Some other information: Running Ubuntu 9.04, apache2-mm-prefork (deb), mod_php5, ruby 1.8.7, passenger 2.2.11, sinatra 1.0

    Read the article

  • Deserializing a FileStream on Client using WCF

    - by Grandpappy
    I'm very new to WCF, so I apologize in advance if I misstate something. This is using .NET 4.0 RC1. Using WCF, I am trying to deserialize a response from the server. The base response has a Stream as its only MessageBodyMember. public abstract class StreamedResponse { [MessageBodyMember] public Stream Stream { get; set; } public StreamedResponse() { this.Stream = Stream.Null; } } The derived versions of this class are actually what's serialized, but they don't have a MessageBodyMember attribute (they have other base types such as int, string, etc listed as MessageHeader values). [MessageContract] public class ChildResponse : StreamedResponse { [DataMember] [MessageHeader] public Guid ID { get; set; } [DataMember] [MessageHeader] public string FileName { get; set; } [DataMember] [MessageHeader] public long FileSize { get; set; } public ChildResponse() : base() { } } The Stream is always a FileStream, in my specific case (but may not always be). At first, WCF said FileStream was not a known type, so I added it to the list of known types and now it serializes. It also appears, at first glance, to deserialize it on the client's side (it's the FileStream type). The problem is that it doesn't seem to be usable. All the CanRead, CanWrite, etc are false, and the Length, Position, etc properties throw exceptions when being used. Same with ReadByte(). What am I missing that would keep me from getting a valid FileStream?
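
    For what it's worth, the pattern I have seen for moving file contents over WCF is streamed transfer rather than serializing a FileStream as a known type: with transferMode="Streamed" (or StreamedResponse) on the binding, the Stream body member carries the raw bytes over the wire and the client copies them wherever it needs. A rough client-side sketch - the "client", "request" and "GetFile" names are hypothetical, not from the code above:

        // Assumes the binding is configured for streamed transfer; the Stream the
        // client receives is the wire stream, never a FileStream to be deserialized.
        ChildResponse response = client.GetFile(request);
        using (FileStream local = File.Create(response.FileName))
        {
            response.Stream.CopyTo(local);   // Stream.CopyTo is available in .NET 4.0
        }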

    Read the article

  • How to add XmlInclude attribute dynamically

    - by Anindya Chatterjee
    I have the following classes [XmlRoot] public class AList { public List<B> ListOfBs {get; set;} } public class B { public string BaseProperty {get; set;} } public class C : B { public string SomeProperty {get; set;} } public class Main { public static void Main(string[] args) { var aList = new AList(); aList.ListOfBs = new List<B>(); var c = new C { BaseProperty = "Base", SomeProperty = "Some" }; aList.ListOfBs.Add(c); var type = typeof (AList); var serializer = new XmlSerializer(type); TextWriter w = new StringWriter(); serializer.Serialize(w, aList); } } Now when I try to run the code I get an InvalidOperationException at the last line saying that The type XmlTest.C was not expected. Use the XmlInclude or SoapInclude attribute to specify types that are not known statically. I know that adding an [XmlInclude(typeof(C))] attribute next to [XmlRoot] would solve the problem. But I want to achieve it dynamically, because in my project class C is not known prior to loading. Class C is being loaded as a plugin, so it is not possible for me to add the XmlInclude attribute there. I also tried TypeDescriptor.AddAttributes(typeof(AList), new[] { new XmlIncludeAttribute(c.GetType()) }); before var type = typeof (AList); but to no avail. It is still giving the same exception. Does anyone have any idea on how to achieve it?
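
    One runtime alternative to the attribute, sketched below: hand the plugin types to the XmlSerializer(Type, Type[]) constructor, which accepts "extra types" for exactly this situation.

        // Types discovered from the loaded plugins at runtime (typeof(C) here is just the example).
        Type[] extraTypes = { typeof(C) };
        var serializer = new XmlSerializer(typeof(AList), extraTypes);

        TextWriter w = new StringWriter();
        serializer.Serialize(w, aList);

    One caveat worth noting: unlike the plain XmlSerializer(Type) constructor, this overload generates a new serialization assembly on every call, so the serializer instance should be created once and cached.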

    Read the article

  • C# 4.0: casting dynamic to static

    - by Kevin Won
    This is an offshoot question that's related to another I asked here. I'm splitting it off because it's really a sub-question: I'm having difficulties casting an object of type dynamic to another (known) static type. I have an IronPython script that is doing this: import clr clr.AddReference("System") from System import * def GetBclUri(): return Uri("http://google.com") Note that it's simply newing up a BCL System.Uri type and returning it, so I know the static type of the returned object. Now over in C# land, I'm newing up the script hosting stuff and calling this getter to return the Uri object: dynamic uri = scriptEngine.GetBclUri(); System.Uri u = uri as System.Uri; // casts the dynamic to static fine Works no problem. I can now use the strongly typed Uri object as if it was originally instantiated statically. However... Now I want to define my own C# class that will be newed up in dynamic-land just like I did with the Uri. My simple C# class: namespace Entity { public class TestPy // stupid simple test class of my own { public string DoSomething(string something) { return something; } } } Now in Python, new up an object of this type and return it: sys.path.append(r'C:..path here...') clr.AddReferenceToFile("entity.dll") import Entity.TestPy def GetTest(): return Entity.TestPy(); # the C# class Then in C# call the getter: dynamic test = scriptEngine.GetTest(); Entity.TestPy t = test as Entity.TestPy; // t==null!!! Here, the cast does not work. Note that the 'test' object (dynamic) is valid - I can call DoSomething() - it just won't cast to the known static type: string s = test.DoSomething("asdf"); // dynamic object works fine So I'm perplexed: the BCL type System.Uri will cast from a dynamic type to the correct static one, but my own type won't. There's obviously something I'm not getting about this...
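
    A diagnostic sketch that may help (an assumption on my part, not a confirmed cause): if entity.dll ends up loaded twice - once by the C# host and once via clr.AddReferenceToFile - then the Entity.TestPy seen by Python and the one seen by C# are distinct Type objects, and the "as" cast can never succeed even though dynamic calls still work.

        object test = scriptEngine.GetTest();
        // If these differ, the assembly was loaded into two contexts and the cast always returns null.
        Console.WriteLine(test.GetType().AssemblyQualifiedName);
        Console.WriteLine(test.GetType().Assembly.Location);
        Console.WriteLine(typeof(Entity.TestPy).AssemblyQualifiedName);
        Console.WriteLine(typeof(Entity.TestPy).Assembly.Location);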

    Read the article

  • DataGridViewColumn.DataPropertyName to an array element?

    - by unknown
    Hi there, I'm using a DataGridView, binding its datasource to a List, and specifying the properties for each column. An example would be: DataGridViewTextBoxColumn colConcept = new DataGridViewTextBoxColumn(); DataGridViewCell cell4 = new DataGridViewTextBoxCell(); colConcept.CellTemplate = cell4; colConcept.Name = "concept"; colConcept.HeaderText = "Concept"; colConcept.DataPropertyName = "Concept"; colConcept.Width = 200; this.dataGridViewBills.Columns.Add(colConcept); {... assign other columns...} And finally this.dataGridViewBills.DataSource=billslist; //billslist is List<Bill> Obviously the Bill class has a property called Concept, as well as one property for each column. Well, now my problem is that Bill should have an Array/List/whatever dynamic-size container of strings called Years. Let's assume that every Bill will have the same Years.Count, but this is only known at runtime. Thus, I can't specify properties like Bill.FirstYear to obtain Bill.Years[0], Bill.SecondYear to obtain Bill.Years[1], etc., and bind them to each column. The idea is that now I want to have a grid with a dynamic number of columns (known at runtime), each column filled with a string from the Bill.Years list. I can make a loop to add columns to the grid at runtime depending on Bill.Years.Count, but is it possible to bind them to each of the strings that the Bill.Years list contains? I'm not sure if I'm clear enough. The result ideally would be something like this, for 2 bills on the list and 3 years for each bill: --------------------------------------GRID HEADER------------------------------- NAME CONCEPT YEAR1 YEAR2 YEAR3 --------------------------------------GRID VALUES------------------------------- Bill1 Bill1.Concept Bill1.Years[0] Bill1.Years[1] Bill1.Years[2] Bill2 Bill2.Concept Bill2.Years[0] Bill2.Years[1] Bill2.Years[2] I can always forget the datasource and write each cell manually, like the old MSFlexGrid, but if possible I would like to use the binding capabilities of the DataGridView. Any ideas? Thanks a lot.
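
    One sketch of how this could work without per-index properties (assuming the other columns stay data-bound and "yearCount" is the runtime Bill.Years.Count): add the year columns unbound and supply their values from Bill.Years in the CellFormatting event.

        // Add one unbound column per year; Tag remembers which Years index it displays.
        for (int i = 0; i < yearCount; i++)
        {
            DataGridViewTextBoxColumn colYear = new DataGridViewTextBoxColumn();
            colYear.Name = "year" + i;
            colYear.HeaderText = "YEAR" + (i + 1);
            colYear.Tag = i;
            this.dataGridViewBills.Columns.Add(colYear);
        }

        this.dataGridViewBills.CellFormatting += (sender, e) =>
        {
            object tag = this.dataGridViewBills.Columns[e.ColumnIndex].Tag;
            if (tag is int)
            {
                int yearIndex = (int)tag;
                Bill bill = this.dataGridViewBills.Rows[e.RowIndex].DataBoundItem as Bill;
                if (bill != null && yearIndex < bill.Years.Count)
                    e.Value = bill.Years[yearIndex];   // the unbound cell gets its value here
            }
        };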

    Read the article

  • Efficient way of calculating average difference of array elements from array average value

    - by Saysmaster
    Is there a way to calculate the average distance of array elements from the array's average value, by only "visiting" each array element once? (I am searching for an algorithm.) Example: Array : [ 1 , 5 , 4 , 9 , 6 ] Average : ( 1 + 5 + 4 + 9 + 6 ) / 5 = 5 Distance Array : [|1-5|, |5-5|, |4-5|, |9-5|, |6-5|] = [4 , 0 , 1 , 4 , 1 ] Average Distance : ( 4 + 0 + 1 + 4 + 1 ) / 5 = 2 The simple algorithm needs 2 passes. 1st pass) Reads and accumulates values, then divides the result by the array length to calculate the average value of the array elements. 2nd pass) Reads values, accumulates each one's distance from the previously calculated average value, and then divides the result by the array length to find the average distance of the elements from the average value of the array. The two passes are identical. It is the classic algorithm of calculating the average of a set of values. The first one takes as input the elements of the array, the second one the distances of each element from the array's average value. Calculating the average can be modified to not accumulate the values, but to calculate the average "on the fly" as we sequentially read the elements from the array. The formula is: Compute Running Average of Array's elements ------------------------------------------- RA[i] = A[i] { for i == 1 } RA[i] = RA[i-1] - RA[i-1]/i + A[i]/i { for i > 1 } Where A[x] is the array's element at position x and RA[x] is the average of the array's elements between positions 1 and x (running average). My question is: Is there a similar algorithm to calculate "on the fly" (as we read the array's elements) the average distance of the elements from the array's mean value? The problem is that, as we read the array's elements, the final average value of the array is not known. Only the running average is known. So calculating differences from the running average will not yield the correct result. I suppose that if such an algorithm exists, it probably has to be able to compensate, in some way, on each new element read, for the error accumulated so far.

    Read the article

  • How is a relative JMP (x86) implemented in an Assembler?

    - by Pindatjuh
    While building my assembler for the x86 platform I encountered some problems with encoding the JMP instruction: EB cb JMP rel8 (2 bytes); E9 cw JMP rel16 (4 bytes, because of the 0x66 16-bit prefix); E9 cd JMP rel32 (5 bytes) ... (from my favourite x86 instruction website, http://siyobik.info/index.php?module=x86&id=147). All are relative jumps, where the size of each encoding (opcode + operand) is given in parentheses. Now my original (and therefore flawed) design reserved the maximum space (5 bytes) for each instruction. The operand is not yet known, because it's a jump to a yet unknown location. So I've implemented a "rewrite" mechanism that rewrites the operand at the correct location in memory once the jump target is known, and fills the rest with NOPs. This is a somewhat serious concern in tight loops. Now my problem is with the following situation: b: XXX c: JMP a e: XXX ... XXX d: JMP b a: XXX (where XXX is any instruction, depending on the to-be-assembled program) The problem is that I want the smallest possible encoding for a JMP instruction (and no NOP filling). I have to know the size of the instruction at c before I can calculate the relative distance between a and b for the operand at d. The same applies to the JMP at c: it needs to know the size of d before it can calculate the relative distance between e and a. How do existing assemblers implement this, or how would you implement this? This is what I am thinking of to solve the problem: first encode to opcodes all the instructions between the JMP and its target, and if this region contains a variable-sized opcode, use the maximum size, i.e. 5 for JMP. Then in some cases the JMP is oversized (because it might have fit in a smaller encoding), so another pass will search for oversized JMPs, shrink them, and move all following instructions up; absolute branching instructions (i.e. external CALLs) are set after this pass is completed. I wonder whether this is an over-engineered solution, which is why I ask this question.
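
    The common approach I know of (used, I believe, by assemblers such as GNU as) is iterative "branch relaxation": start by assuming the short form everywhere, then repeatedly widen any jump whose displacement no longer fits and recompute offsets until nothing changes. Since sizes only ever grow, the loop terminates. A self-contained sketch with hypothetical types, not the assembler's actual data structures:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        class Insn
        {
            public int Size;          // current encoded size in bytes
            public int? JumpTarget;   // index of the target instruction, null if not a jump
            public int Offset;        // address assigned on the current pass
        }

        static class Relaxer
        {
            public static void Relax(List<Insn> insns)
            {
                bool changed = true;
                while (changed)
                {
                    changed = false;

                    // 1. assign offsets using the current (optimistic) sizes
                    int offset = 0;
                    foreach (Insn i in insns) { i.Offset = offset; offset += i.Size; }

                    // 2. widen any short jump whose displacement no longer fits in rel8
                    foreach (Insn jmp in insns.Where(x => x.JumpTarget.HasValue && x.Size == 2))
                    {
                        int disp = insns[jmp.JumpTarget.Value].Offset - (jmp.Offset + jmp.Size);
                        if (disp < -128 || disp > 127)
                        {
                            jmp.Size = 5;   // EB rel8 (2 bytes) -> E9 rel32 (5 bytes)
                            changed = true;
                        }
                    }
                }
            }
        }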

    Read the article

  • Optimal two variable linear regression SQL statement (censoring outliers)

    - by Dave Jarvis
    Problem Am looking to apply the y = mx + b equation (where m is SLOPE, b is INTERCEPT) to a data set, which is retrieved as shown in the SQL code. The values from the (MySQL) query are: SLOPE = 0.0276653965651912 INTERCEPT = -57.2338357550468 SQL Code SELECT ((sum(t.YEAR) * sum(t.AMOUNT)) - (count(1) * sum(t.YEAR * t.AMOUNT))) / (power(sum(t.YEAR), 2) - count(1) * sum(power(t.YEAR, 2))) as SLOPE, ((sum( t.YEAR ) * sum( t.YEAR * t.AMOUNT )) - (sum( t.AMOUNT ) * sum(power(t.YEAR, 2)))) / (power(sum(t.YEAR), 2) - count(1) * sum(power(t.YEAR, 2))) as INTERCEPT FROM (SELECT D.AMOUNT, Y.YEAR FROM CITY C, STATION S, YEAR_REF Y, MONTH_REF M, DAILY D WHERE -- For a specific city ... -- C.ID = 8590 AND -- Find all the stations within a 15 unit radius ... -- SQRT( POW( C.LATITUDE - S.LATITUDE, 2 ) + POW( C.LONGITUDE - S.LONGITUDE, 2 ) ) <15 AND -- Gather all known years for that station ... -- S.STATION_DISTRICT_ID = Y.STATION_DISTRICT_ID AND -- The data before 1900 is shaky; insufficient after 2009. -- Y.YEAR BETWEEN 1900 AND 2009 AND -- Filtered by all known months ... -- M.YEAR_REF_ID = Y.ID AND -- Whittled down by category ... -- M.CATEGORY_ID = '001' AND -- Into the valid daily climate data. -- M.ID = D.MONTH_REF_ID AND D.DAILY_FLAG_ID <> 'M' GROUP BY Y.YEAR ORDER BY Y.YEAR ) t Data The data is visualized here (with five outliers highlighted): Questions How do I return the y value against all rows without repeating the same query to collect and collate the data? That is, how do I "reuse" the list of t values? How would you change the query to eliminate outliers (at an 85% confidence interval)? The following results (to calculate the start and end points of the line) appear incorrect. Why are the results off by ~10 degrees (e.g., outliers skewing the data)? (1900 * 0.0276653965651912) + (-57.2338357550468) = -4.66958228 (2009 * 0.0276653965651912) + (-57.2338357550468) = -1.65405406 I would have expected the 1900 result to be around 10 (not -4.67) and the 2009 result to be around 11.50 (not -1.65). Thank you!

    Read the article

  • howto distinguish composition and self-typing use-cases

    - by ayvango
    Scala has two instruments for expressing object composition: the original self-type concept and the well-known trivial composition. I'm curious which I should use in which situations. There are obvious differences in their applicability. Self-types require you to use traits. Object composition allows you to change extensions at run-time with a var declaration. Leaving technical details aside, I can identify two indicators to help with classifying use cases. If some object is used as a combinator for a complex structure such as a tree, or just has several similarly typed parts (a 1-car-to-4-wheels relation), then it should use composition. There is an extreme opposite use case: let's assume one trait becomes too big to observe clearly and gets split. It is quite natural that you should use self-types in this case. These rules are not absolute. You may do extra work to convert code between the two techniques, e.g. you may replace the 4-wheels composition with self-typing over Product4. You may use Cake[T <: MyType] {part : MyType} instead of Cake { this : MyType => } for cake-pattern dependencies. But both cases seem counterintuitive and give you extra work. There are plenty of borderline use cases, though. One-to-one relations are very hard to decide. Is there any simple rule to decide which technique is preferable? Self-typing makes your classes abstract; composition makes your code verbose. Self-typing gives you problems with blending namespaces and also gives you extra typing for free (you get not just a cocktail of two elements but a gasoline-motor-oil cocktail known as a petrol bomb). How can I choose between them? What hints are there? Update: Let us discuss the following example: the Adapter pattern. What benefits does it have with the self-typing and composition approaches respectively?

    Read the article

  • Optimal two variable linear regression SQL statement

    - by Dave Jarvis
    Problem Am looking to apply the y = mx + b equation (where m is SLOPE, b is INTERCEPT) to a data set, which is retrieved as shown in the SQL code. The values from the (MySQL) query are: SLOPE = 0.0276653965651912 INTERCEPT = -57.2338357550468 SQL Code SELECT ((sum(t.YEAR) * sum(t.AMOUNT)) - (count(1) * sum(t.YEAR * t.AMOUNT))) / (power(sum(t.YEAR), 2) - count(1) * sum(power(t.YEAR, 2))) as SLOPE, ((sum( t.YEAR ) * sum( t.YEAR * t.AMOUNT )) - (sum( t.AMOUNT ) * sum(power(t.YEAR, 2)))) / (power(sum(t.YEAR), 2) - count(1) * sum(power(t.YEAR, 2))) as INTERCEPT FROM (SELECT D.AMOUNT, Y.YEAR FROM CITY C, STATION S, YEAR_REF Y, MONTH_REF M, DAILY D WHERE -- For a specific city ... -- C.ID = 8590 AND -- Find all the stations within a 5 unit radius ... -- SQRT( POW( C.LATITUDE - S.LATITUDE, 2 ) + POW( C.LONGITUDE - S.LONGITUDE, 2 ) ) <15 AND -- Gather all known years for that station ... -- S.STATION_DISTRICT_ID = Y.STATION_DISTRICT_ID AND -- The data before 1900 is shaky; and insufficient after 2009. -- Y.YEAR BETWEEN 1900 AND 2009 AND -- Filtered by all known months ... -- M.YEAR_REF_ID = Y.ID AND -- Whittled down by category ... -- M.CATEGORY_ID = '001' AND -- Into the valid daily climate data. -- M.ID = D.MONTH_REF_ID AND D.DAILY_FLAG_ID <> 'M' GROUP BY Y.YEAR ORDER BY Y.YEAR ) t Data The data is visualized here: Questions How do I return the y value against all rows without repeating the same query to collect and collate the data? That is, how do I "reuse" the list of t values? How would you change the query to eliminate outliers (at an 85% confidence interval)? The following results (to calculate the start and end points of the line) appear incorrect. Why are the results off by ~10 degrees (e.g., outliers skewing the data)? (1900 * 0.0276653965651912) + (-57.2338357550468) = -4.66958228 (2009 * 0.0276653965651912) + (-57.2338357550468) = -1.65405406 I would have expected the 1900 result to be around 10 (not -4.67) and the 2009 result to be around 11.50 (not -1.65). Thank you!

    Read the article

  • Optimal two variable linear regression calculation

    - by Dave Jarvis
    Problem Am looking to apply the y = mx + b equation (where m is SLOPE, b is INTERCEPT) to a data set, which is retrieved as shown in the SQL code. The values from the (MySQL) query are: SLOPE = 0.0276653965651912 INTERCEPT = -57.2338357550468 SQL Code SELECT ((sum(t.YEAR) * sum(t.AMOUNT)) - (count(1) * sum(t.YEAR * t.AMOUNT))) / (power(sum(t.YEAR), 2) - count(1) * sum(power(t.YEAR, 2))) as SLOPE, ((sum( t.YEAR ) * sum( t.YEAR * t.AMOUNT )) - (sum( t.AMOUNT ) * sum(power(t.YEAR, 2)))) / (power(sum(t.YEAR), 2) - count(1) * sum(power(t.YEAR, 2))) as INTERCEPT, FROM (SELECT D.AMOUNT, Y.YEAR FROM CITY C, STATION S, YEAR_REF Y, MONTH_REF M, DAILY D WHERE -- For a specific city ... -- C.ID = 8590 AND -- Find all the stations within a 15 unit radius ... -- SQRT( POW( C.LATITUDE - S.LATITUDE, 2 ) + POW( C.LONGITUDE - S.LONGITUDE, 2 ) ) < 15 AND -- Gather all known years for that station ... -- S.STATION_DISTRICT_ID = Y.STATION_DISTRICT_ID AND -- The data before 1900 is shaky; insufficient after 2009. -- Y.YEAR BETWEEN 1900 AND 2009 AND -- Filtered by all known months ... -- M.YEAR_REF_ID = Y.ID AND -- Whittled down by category ... -- M.CATEGORY_ID = '001' AND -- Into the valid daily climate data. -- M.ID = D.MONTH_REF_ID AND D.DAILY_FLAG_ID <> 'M' GROUP BY Y.YEAR ORDER BY Y.YEAR ) t Data The data is visualized here: Question The following results (to calculate the start and end points of the line) appear incorrect. Why are the results off by ~10 degrees (e.g., outliers skewing the data)? (1900 * 0.0276653965651912) + (-57.2338357550468) = -4.66958228 (2009 * 0.0276653965651912) + (-57.2338357550468) = -1.65405406 I would have expected the 1900 result to be around 10 (not -4.67) and the 2009 result to be around 11.50 (not -1.65). Related Sites Least absolute deviations Robust regression Thank you!

    Read the article

  • Dynamic Method Creation

    - by TJMonk15
    So, I have been trying to research this all morning, and have had no luck. I am trying to find a way to dynamically create a method/delegate/lambda that returns a new instance of a certain class (not known until runtime) that inherits from a certain base class. I can guarantee the following about the unknown/dynamic class: It will always inherit from one known class (Row) It will have at least 2 constructors (one accepting a long, and one accepting an IDataRecord) I plan on doing the following: Finding all classes that have a certain attribute on them Creating a delegate/method/lambda/whatever that creates a new instance of the class Storing the delegate/whatever along with some properties in a struct/class Inserting the struct into a hashtable When needed, pulling the info out of the hashtable and calling the delegate/whatever to get a new instance of the class and returning it/adding it to a list/etc. I need help only with #2 above!!! I have no idea where to start. I really just need some reference material to get me started, or some keywords to throw into Google. This is for a compact, simple-to-use ORM for our office here. I understand the above is not simple, but once working, it should make maintaining the code incredibly simple. Please let me know if you need any more info! And thanks in advance! :)
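
    For step 2, one option worth considering is a compiled expression tree: build the delegate "id => (Row)new TFound(id)" once per discovered type and store it in the hashtable. A sketch, assuming Row is the known base class from the list above and rowType is a type found by the attribute scan (an IDataRecord variant would look the same with a different parameter type):

        using System;
        using System.Linq.Expressions;
        using System.Reflection;

        // place this method inside your factory/registry class
        static Func<long, Row> BuildFactory(Type rowType)
        {
            // the constructor taking a long is guaranteed by the assumptions above
            ConstructorInfo ctor = rowType.GetConstructor(new[] { typeof(long) });

            ParameterExpression id = Expression.Parameter(typeof(long), "id");
            // builds: id => (Row)new SomeRowType(id)
            Expression body = Expression.Convert(Expression.New(ctor, id), typeof(Row));
            return Expression.Lambda<Func<long, Row>>(body, id).Compile();
        }

    The compiled delegate runs at roughly the speed of a hand-written constructor call, which is why this is usually preferred over calling Activator.CreateInstance for every row.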

    Read the article

  • How does a client find the port number of a server?

    - by Jonathan
    Hello all, I am currently learning about basic networking in java. I have been playing around with the server and client relationship between two of my computers. However, I cannot figure out how distributed programs (say, a videogame), can not only find the 'host' computer, but also the port number on which the server is running in order to create a Socket between the two computers. The only way I really see to create a Socket is with an already known IP Address, and with a known port number. How do you search a LAN network for another computer (host) searching for clients? How do you determine what port the server is located on without 'pinging' all available ports for a response (which, I understand, is bad form... Something about 'server attack'...)? In such a situation as a video game, there can be any number of computers on the same network, and any number of them might be attempting to host, or otherwise running the application. Any other important information, or perhaps reference to a more detailed tutorial than the one I am using, regarding making connections on so very little information on the client side would be appreciated. Many thanks, Jonathan

    Read the article

  • A commercial software but open and free for personal/edu. How to license?

    - by Ivan
    I am developing software to sell for business use but am willing to make it free and open-source for personal and educational use. Actually, I can see the following requirements I would like the license to set: Personal and educational usage of the program and its source code is to be free. In case of publishing of derivative works, the original work and author (me) must be mentioned (incl. a textual link to my website in a not-too-deeply-hidden place) and the derivative work must have a different name. A derivative work can be closed-source. In every case of commercial usage (when the end-user is a commercial body (such as a company (except non-profit organizations), an individual entrepreneur or a government office)) of my work or any of the derivative works made by anyone, the end-user, service provider or the derivative's author must buy a commercial license from me. I mean no guarantees or responsibilities, whether expressed or implied... (except the case when one explicitly purchases a support service contract from me and the particular contract specifies a responsibility). Is there a known common license for this case? As far as I can see now it cannot be OSI-approved, as it does not comply with §6 of the OSI definition of open source. But there may still be a commonly known reusable license for this case, as it looks quite natural, I think.

    Read the article

  • SSL signed certificates for internal use

    - by rogueprocess
    I have a distributed application consisting of many components that communicate over TCP (for example JMS) and HTTP. All components run on internal hardware, with internal IP addresses, and are not accessible to the public. I want to make the communication secure using SSL. Does it make sense to purchase signed certificates from a well-known certificate authority? Or should I just use self-signed certs? My understanding of the advantage of trusted certs is that the authority is an entity that can be trusted by the general public - but that is only an issue when the general public needs to be sure that the entity at a particular domain is who they say they are. Therefore, in my case, where the same organization is responsible for the components at both ends of the communication, and everything in between, a publicly trusted authority would be pointless. In other words, if I generate and sign a certificate for my own server, I know that it's trustworthy. And no one from outside the organization will ever be asked to trust this certificate. That is my reasoning - am I correct, or is there some potential advantage to using certs from a known authority?

    Read the article

  • How do I display a view as if it's the front page via a module?

    - by Justin
    I have a simple view that feeds a home page. I have a custom module that registers some specific URLs in hook_menu that I pass into my module so I can pass them as arguments into the view. I can get the module to display the view all right, but it doesn't use the teaser/is_front view that outputs when I access the home page. I looked through the APIs but I can't seem to figure out how I can output the view via my module as if it's the front page, meaning $is_front is true and the teasers would appear. The reason I'm not passing in the arguments via the URL bar into the view itself is: My argument list is known and finite The argument order is mixed, meaning I will sometimes have /argument1, /argument1/argument2 or just /argument2. I only want to capture the first level URL as an argument for specific, known strings (e.g. I don't want to pass /admin into my view but I do want to pass in /los-angeles, which I register in the menu system via hook_menu in my module) Here are some examples to make this more clear: /admin - loads the admin page /user - loads the login page /boston - passes into the first argument of the view; shows in front/teaser mode / - shows view with no arguments /bread - passes into argument 2 of the view; shows in front/teaser mode /boston/bread - Passes into argument 1 and 2 of the view; shows in front/teaser mode Maybe I'm going about this the wrong way? Or perhaps there is a way to have a module load a view and somehow set front/teaser mode? Details: Drupal 6, PHP 5, MySQL 5, Views, CCK

    Read the article
