Why did (notably) Java and C# decide to have a static method as their entry point – rather than representing the application by an instance of an Application class, with the entry point being an appropriate constructor? The latter, at least to me, seems more natural.
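For illustration, here is a minimal sketch of the contrast I have in mind. The Application class below is purely hypothetical – neither language actually supports a constructor as an entry point – and all names are my own:

```csharp
using System;

// What C# (and, analogously, Java) actually requires: a static entry point.
class Program
{
    static void Main(string[] args)
    {
        Console.WriteLine("Started via static Main.");
    }
}

// The alternative I have in mind: the runtime would construct one instance of a
// designated application class, and its constructor would serve as the entry
// point. This compiles as an ordinary class, but no runtime treats it as an
// entry point – it is only here to illustrate the idea.
class Application
{
    public Application(string[] args)
    {
        Console.WriteLine("Started via constructor (hypothetical).");
    }
}
```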
I’m interested in a definitive answer from a primary or secondary source, not mere speculation.
This has been asked before. Unfortunately, the existing answers are merely begging the question. In particular, the following answers don’t satisfy me, as I deem them incorrect:
“There would be ambiguity if the constructor were overloaded.” – In fact, C# (as well as C and C++) allows different signatures for Main, so the same potential ambiguity exists, and is dealt with (see the first sketch after this list).
“A static method means no objects can be instantiated before, so the order of initialisation is clear.” – This is simply factually wrong: some objects are instantiated before (e.g. in a static constructor; see the second sketch after this list).
“So they can be invoked by the runtime without having to instantiate a parent object.” – This is no answer at all; it merely restates the design decision.
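To substantiate the first point: all of the following are accepted entry-point signatures in C#, and the compiler has a defined way of resolving the resulting ambiguity. This is only a minimal sketch; the class names and the Program.cs file name are my own.

```csharp
using System;

// Two classes, each with a valid but differently-shaped entry point:
// void vs. int return, with and without the string[] parameter.
class ProgramA
{
    static void Main()
    {
        Console.WriteLine("Entry point without arguments.");
    }
}

class ProgramB
{
    static int Main(string[] args)
    {
        Console.WriteLine($"Entry point with {args.Length} argument(s).");
        return 0;
    }
}

// Compiling both as one program yields error CS0017 ("Program has more than
// one entry point defined") unless the entry point is chosen explicitly,
// e.g. csc -main:ProgramB Program.cs, or via the <StartupObject> MSBuild property.
```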
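And to substantiate the second point, a self-contained sketch (all names invented): the static field initialiser and static constructor of the program class run, and construct an object, before any code in the body of Main executes.

```csharp
using System;

class Config
{
    public Config() => Console.WriteLine("1. Config object constructed.");
}

class Program
{
    // Static field initialisers run as part of type initialisation,
    // before the static constructor body and before Main's body.
    static readonly Config Settings = new Config();

    static Program()
    {
        Console.WriteLine("2. Static constructor of Program runs.");
    }

    static void Main()
    {
        Console.WriteLine("3. Main body starts; Settings is " +
            (Settings != null ? "already constructed." : "null."));
    }
}
```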
Just to justify further why I think this is a valid and interesting question:
Many frameworks do use classes to represent applications, and constructors as entry points. For instance, the VB.NET application framework uses a dedicated main dialog (and its constructor) as the entry point¹; a sketch of this general pattern follows below.
Neither Java nor C# technically needs a main method. Well, C# needs one to compile, but Java doesn’t even require that. And in neither case is it needed for execution. So this doesn’t appear to be a technical restriction. And, as I mentioned in the first paragraph, for a mere convention it seems oddly out of keeping with the general design principles of Java and C#.
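To illustrate that framework pattern in C#: the framework supplies the one static Main and hides it, and the application author only writes a class whose constructor is, in effect, the entry point. This is a toy sketch – ApplicationBase, App and FrameworkBootstrapper are invented names, not any real framework’s API – but it is conceptually close to what the VB.NET application framework (or WPF’s generated App code) does.

```csharp
using System;

// A toy "application framework": the framework owns the static entry point
// and hides it from the user; user code only sees a class whose constructor
// is, in effect, the entry point.
abstract class ApplicationBase
{
    public abstract void Run();
}

class App : ApplicationBase
{
    public App(string[] args)
    {
        // In this style, application start-up logic lives in the constructor.
        Console.WriteLine($"App constructed with {args.Length} argument(s).");
    }

    public override void Run() => Console.WriteLine("Running main loop...");
}

class FrameworkBootstrapper
{
    // The one static Main in the program, supplied (conceptually) by the
    // framework rather than written by the application author.
    static void Main(string[] args)
    {
        var app = new App(args);
        app.Run();
    }
}
```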
To be clear, there isn’t a specific disadvantage to having a static main method; it’s just distinctly odd, which made me wonder whether there was some technical rationale behind it.
Again, I’m interested in a definitive answer from a primary or secondary source, not mere speculation.
¹ Although there is a callback (Startup) which may intercept this.