Search Results

Search found 16218 results on 649 pages for 'compiler errors'.


  • Subterranean IL: Exception handling 1

    - by Simon Cooper
    Today, I'll be starting a look at the Structured Exception Handling mechanism within the CLR. Exception handling is quite a complicated business, and, as a result, the rules governing exception handling clauses in IL are quite strict; you need to be careful when writing exception clauses in IL.

    Exception handlers

    Exception handlers are specified using a .try clause within a method definition:

        .try <TryStartLabel> to <TryEndLabel> <HandlerType> handler <HandlerStartLabel> to <HandlerEndLabel>

    As an example, a basic try/catch block would be specified like so:

        TryBlockStart:
            // ...
            leave.s CatchBlockEnd
        TryBlockEnd:
        CatchBlockStart:
            // at the start of a catch block, the exception thrown is on the stack
            callvirt instance string [mscorlib]System.Object::ToString()
            call void [mscorlib]System.Console::WriteLine(string)
            leave.s CatchBlockEnd
        CatchBlockEnd:
            // method code continues...

        .try TryBlockStart to TryBlockEnd catch [mscorlib]System.Exception handler CatchBlockStart to CatchBlockEnd

    There are four different types of handler that can be specified:

    - catch <TypeToken>: This is the standard exception catch clause; you specify the object type that you want to catch (for example, [mscorlib]System.ArgumentException). Any object can be thrown as an exception, although Microsoft recommends that only classes derived from System.Exception are thrown as exceptions.
    - filter <FilterLabel>: A filter block allows you to provide custom logic to determine if a handler block should be run. This functionality is exposed in VB, but not in C#.
    - finally: A finally block executes when the try block exits, regardless of whether an exception was thrown or not.
    - fault: This is similar to a finally block, but a fault block executes only if an exception was thrown. This is not exposed in VB or C#.

    You can specify multiple catch or filter handling blocks in each .try, but fault and finally handlers must have their own .try clause. We'll look into why this is in later posts.

    Scoped exception handlers

    The .try syntax is quite tricky to use; it requires multiple labels, and you've got to be careful to keep the different exception handling sections separate. However, starting from .NET 2, IL allows you to use scope blocks to specify exception handlers instead. Using this syntax, the example above can be written like so:

        .try {
            // ...
            leave.s EndSEH
        }
        catch [mscorlib]System.Exception {
            callvirt instance string [mscorlib]System.Object::ToString()
            call void [mscorlib]System.Console::WriteLine(string)
            leave.s EndSEH
        }
        EndSEH:
        // method code continues...

    As you can see, this is much easier to write (and read!) than a stand-alone .try clause. Next time, I'll be looking at some of the restrictions imposed by SEH on control flow, and how the C# compiler generates exception handling clauses.


  • What are some reasonable stylistic limits on type inference?

    - by Jon Purdy
    C++0x adds pretty darn comprehensive type inference support. I'm sorely tempted to use it everywhere possible to avoid undue repetition, but I'm wondering if removing explicit type information all over the place is such a good idea. Consider this rather contrived example:

    Foo.h:

        #include <set>

        class Foo {
        private:
            static std::set<Foo*> instances;

        public:
            Foo();
            ~Foo();

            // What does it return? Who cares! Just forward it!
            static decltype(instances.begin()) begin() { return instances.begin(); }
            static decltype(instances.end()) end() { return instances.end(); }
        };

    Foo.cpp:

        #include <Foo.h>
        #include <Bar.h>

        // The type need only be specified in one location!
        // But I do have to open the header to find out what it actually is.
        decltype(Foo::instances) Foo::instances;

        Foo::Foo() {
            // What is the type of x?
            auto x = Bar::get_something();

            // What does do_something() return?
            auto y = x.do_something(*this);

            // Well, it's convertible to bool somehow...
            if (!y) throw "a constant, old school";

            instances.insert(this);
        }

        Foo::~Foo() {
            instances.erase(this);
        }

    Would you say this is reasonable, or is it completely ridiculous? After all, especially if you're used to developing in a dynamic language, you don't really need to care all that much about the types of things, and you can trust that the compiler will catch any egregious abuses of the type system. But for those of you who rely on editor support for method signatures, you're out of luck, so using this style in a library interface is probably really bad practice. I find that writing things with all possible types implicit actually makes my code a lot easier for me to follow, because it removes nearly all of the usual clutter of C++. Your mileage may, of course, vary, and that's what I'm interested in hearing about. What are the specific advantages and disadvantages to radical use of type inference?
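    A common middle ground, for what it's worth, is to use auto only where the initializer already names the type, and to spell types out at interface boundaries. A minimal sketch of that style (all names here are illustrative, not from the question):

        #include <map>
        #include <memory>
        #include <string>
        #include <vector>

        int main() {
            std::map<std::string, std::vector<int>> histogram;

            // Fine: the iterator type is long, and the initializer makes it obvious.
            for (auto it = histogram.begin(); it != histogram.end(); ++it) {
                // ...
            }

            // Fine: the right-hand side names the type explicitly.
            auto p = std::make_shared<std::string>("hello");

            // Questionable: nothing on this line says what 'n' is.
            auto n = p->size();
            return static_cast<int>(n);
        }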


  • cloud programming for OpenStack in C / C++

    - by Basile Starynkevitch
    (Sorry for such a fuzzy question, I am a complete newbie to cloud programming.) I am interested in designing (and developing) a (free software) program in C or C++ (probably, most of it being meta-programmed, i.e. part of the C code being generated). I am still in the thinking/designing phase, and I might perhaps give up.

    For reference, I am the main architect and implementor of GCC MELT, a domain specific language to extend the GCC compiler (the MELT language is translated to C/C++ and is bootstrapped: the MELT to C/C++ translator being written in MELT). And I am dreaming of extending it with some cloud computing abilities. But I am a newbie in cloud computing. (I am only interested in free-software, GPLv3-friendly cloud computing, which probably means OpenStack.)

    I believe that "compiling on the cloud with some enhanced GCC" could make sense (for super-optimizations or static analysis of e.g. an entire Linux distribution, or at least a massive GCC-compiled free software project like Qt, GCC itself, or the Linux kernel). I'm dreaming of a MELT-specific monitoring program which would store, communicate, and enhance GCC compilation (extended by MELT). So the picture would be that each GCC process (actually the cc1 or cc1plus started by the gcc driver, suitably extended by some MELT extension) would communicate with some monitor. That "monitoring/persisting" program would run "on the cloud" (and probably manage some information produced by GCC, e.g. in NoSQL bases).

    So, how should some (yet to be written) C program (some Linux daemon) be designed to be cloud-friendly? So far, I understood that it should provide some web service, probably through a RESTful service, so it should use an HTTP server library like onion. And OpenStack is able to start (e.g. a dozen of) such services. But I don't have a clear picture of what OpenStack brings. So far, I noticed the ability to manage (and distribute) virtual machines (with some Python API). It is less clear how it can distribute some ELF executable, how it can start it, etc.

    Do you have any references or examples of C / C++ programming on the cloud? How should a "cloud-friendly" (actually, OpenStack-friendly) C/C++ server application be designed?
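    To make "provide some web service" concrete: the shape a cloud platform expects is simply a stateless daemon listening on a port, of which it can boot many identical copies. A minimal sketch in C++ using only POSIX sockets (no onion, no OpenStack API, error handling mostly omitted), just to illustrate that shape:

        #include <arpa/inet.h>
        #include <netinet/in.h>
        #include <sys/socket.h>
        #include <unistd.h>
        #include <string>

        int main() {
            int srv = socket(AF_INET, SOCK_STREAM, 0);
            int yes = 1;
            setsockopt(srv, SOL_SOCKET, SO_REUSEADDR, &yes, sizeof yes);

            sockaddr_in addr{};
            addr.sin_family = AF_INET;
            addr.sin_addr.s_addr = htonl(INADDR_ANY);
            addr.sin_port = htons(8080);
            if (bind(srv, (sockaddr*)&addr, sizeof addr) != 0 || listen(srv, 16) != 0)
                return 1;

            for (;;) {
                int c = accept(srv, nullptr, nullptr);
                if (c < 0) continue;
                char buf[4096];
                read(c, buf, sizeof buf);  // request ignored in this sketch
                std::string body = "{\"status\":\"ok\"}";
                std::string resp =
                    "HTTP/1.1 200 OK\r\n"
                    "Content-Type: application/json\r\n"
                    "Content-Length: " + std::to_string(body.size()) + "\r\n\r\n" + body;
                write(c, resp.c_str(), resp.size());
                close(c);
            }
        }

    A real service would use a proper HTTP library and keep all persistent state (the NoSQL bases mentioned above) outside the process, so that instances can be started and killed freely.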


  • Why would I learn C++11, having known C and C++?

    - by Shahbaz
    I am a programmer in C and C++, although I don't stick to either language and write a mixture of the two. Sometimes having code in classes, possibly with operator overloading, or templates and the oh so great STL is obviously a better way. Sometimes use of a simple C function pointer is much more readable and clear. So I find beauty and practicality in both languages. I don't want to get into the discussion of "If you mix them and compile with a C++ compiler, it's not a mix anymore, it's all C++"; I think we all understand what I mean by mixing them. Also, I don't want to talk about C vs. C++; this question is all about C++11.

    C++11 introduces what I think are significant changes to how C++ works, but it has also introduced many special cases that change how different features behave in different circumstances, placing restrictions on multiple inheritance, adding lambda functions, etc. I know that at some point in the future, when you say C++, everyone will assume C++11, much like when you say C nowadays, you most probably mean C99. That makes me consider learning C++11. After all, if I want to continue writing code in C++, I may at some point need to start using those features, simply because my colleagues have.

    Take C for example. After so many years, there are still many people learning and writing code in C. Why? Because the language is good. What "good" means is that it follows many of the rules for creating a good programming language. So besides being powerful (which, easy or hard, almost all programming languages are), C is regular and has few exceptions, if any. C++11, however, I don't think so. I'm not sure that the changes introduced in C++11 are making the language better. So the question is: why would I learn C++11?

    Update: My original question, in short, was: "I like C++, but the new C++11 doesn't look good because of this and this and this. However, deep down something tells me I need to learn it. So I asked this question here so that someone would help convince me to learn it." However, the zealous people here can't tolerate pointing out a flaw in their language and were not at all constructive in this manner. After the moderator edited the question, it became more like "So, how about this new C++11?", which was not at all my question. Therefore, in a day or two I am going to delete this question if no one comes up with an actual convincing argument.

    P.S. If you are interested in knowing what flaws I was talking about, you can edit my question and see the previous edits.
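    For readers who haven't seen the features being debated, here is their flavor in a few lines (lambdas, auto, brace initialization, range-based for); a small sketch, not from the question:

        #include <algorithm>
        #include <iostream>
        #include <vector>

        int main() {
            std::vector<int> v{4, 1, 3, 2};  // C++11 brace initialization

            // A lambda replaces the named comparator function C++98 would need.
            std::sort(v.begin(), v.end(), [](int a, int b) { return a > b; });

            // Range-based for plus auto: no iterator boilerplate.
            for (auto n : v)
                std::cout << n << ' ';
            std::cout << '\n';
        }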


  • Monitoring C++ applications

    - by Scott A
    We're implementing a new centralized monitoring solution (Zenoss). Incorporating servers, networking, and Java programs is straightforward with SNMP and JMX. The question, however, is: what are the best practices for monitoring and managing custom C++ applications in large, heterogeneous (Solaris x86, RHEL Linux, Windows) environments? The possibilities I see are listed below (with a sketch of the application-side hand-off after the list).

    Net-SNMP
    - Advantages: single, central daemon on each server; well-known standard; easy integration into monitoring solutions; we run Net-SNMP daemons on our servers already.
    - Disadvantages: complex implementation (MIBs, Net-SNMP library); new technology to introduce for the C++ developers.

    rsyslog
    - Advantages: single, central daemon on each server; well-known standard; simple implementation. Unknown integration into monitoring solutions (I know they can do alerts based on text, but how well would it work for sending telemetry like memory usage, queue depths, thread capacity, etc.?).
    - Disadvantages: possible integration issues; somewhat new technology for C++ developers; possible porting issues if we switch monitoring vendors; probably involves coming up with an ad-hoc communication protocol (or using RFC 5424 structured data; I don't know if Zenoss supports that without custom ZenPack coding).

    Embedded JMX (embed a JVM and use JNI)
    - Advantages: consistent management interface for both Java and C++; well-known standard; easy integration into monitoring solutions; somewhat simple implementation (we already do this today for other purposes).
    - Disadvantages: complexity (JNI, thunking layer between native C++ and Java, basically writing the management code twice); possible stability problems; requires a JVM in each process, using considerably more memory; JMX is new technology for C++ developers; each process has its own JMX port (we run a lot of processes on each machine).

    Local JMX daemon, processes connect to it
    - Advantages: single, central daemon on each server; consistent management interface for both Java and C++; well-known standard; easy integration into monitoring solutions.
    - Disadvantages: complexity (basically writing the management code twice); need to find or write such a daemon; need a protocol between the JMX daemon and the C++ process; JMX is new technology for C++ developers.

    CodeMesh JunC++ion
    - Advantages: consistent management interface for both Java and C++; well-known standard; easy integration into monitoring solutions; single, central daemon on each server when run in shared JVM mode; somewhat simple implementation (requires code generation).
    - Disadvantages: complexity (code generation, requires a GUI and several rounds of tweaking to produce the proxied code); possible JNI stability problems; requires a JVM in each process, using considerably more memory (in embedded mode); does not support Solaris x86 (deal breaker); even if it did support Solaris x86, there are possible compiler compatibility issues (we use an odd combination of STLport and Forte on Solaris); each process has its own JMX port when run in embedded mode (we run a lot of processes on each machine); possibly precludes a shared JMX server for non-C++ processes (?).

    Is there some reasonably standardized, simple solution I'm missing? Given no other reasonable solutions, which of these is typically used for custom C++ programs? My gut feeling is that Net-SNMP is how people do this, but I'd like others' input and experience before I make a decision.
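    Whichever daemon wins, the C++ side can stay trivial if the process just hands samples to something local and lets the agent do the vendor-specific part. A sketch of that hand-off: newline-delimited JSON over a Unix-domain socket. The socket path, field names, and values are all made up for illustration:

        #include <sys/socket.h>
        #include <sys/un.h>
        #include <unistd.h>
        #include <cstdio>
        #include <cstring>
        #include <string>

        // Push one metrics sample to a local agent; returns false on any failure.
        // A real implementation would keep the connection open and retry.
        bool push_metrics(const std::string& json,
                          const char* path = "/var/run/metrics.sock") {
            int fd = socket(AF_UNIX, SOCK_STREAM, 0);
            if (fd < 0) return false;
            sockaddr_un addr{};
            addr.sun_family = AF_UNIX;
            std::strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);
            bool ok = connect(fd, (sockaddr*)&addr, sizeof addr) == 0 &&
                      write(fd, json.c_str(), json.size()) == (ssize_t)json.size();
            close(fd);
            return ok;
        }

        int main() {
            char line[128];
            std::snprintf(line, sizeof line,
                          "{\"proc\":\"pricing\",\"queue_depth\":%d,\"rss_mb\":%d}\n",
                          42, 310);
            return push_metrics(line) ? 0 : 1;
        }

    This keeps the porting question (SNMP vs. syslog vs. JMX) on the daemon side, where it belongs.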


  • Software Tuned to Humanity

    - by Phil Factor
    I learned a great deal from a cynical old programmer who once told me that the ideal length of time for a compiler to do its work was the same time it took to roll a cigarette. For development work, this is oh so true. After intently looking at the editing window for an hour or so, it was a relief to look up, stretch, focus the eyes on something else, and roll the possibly-metaphorical cigarette. This was software tuned to humanity.

    Likewise, a user's perception of the "ideal" time that an application will take to move from frame to frame, to retrieve information, or to process their input has remained remarkably static for about thirty years, at around 200 ms. Anything else appears, and always has, to be either fast or slow. This could explain why commercial applications, unlike games, simulations and communications, aren't noticeably faster now than they were when I started programming in the Seventies. Sure, they do a great deal more, but the SLAs that I negotiated in the 1980s for application performance are very similar to what they are nowadays.

    To prove to myself that this wasn't just some rose-tinted misperception on my part, I cranked up a Z80-based Jonos CP/M machine (1985) in the roof-space. Within 20 seconds from cold, it had loaded WordStar and I was ready to write. OK, I got it wrong: some things were faster 30 years ago. Sure, I'd now have had all sorts of animations, whizzy graphics, and other comforting features, but it seems a pity that we have used all that extra CPU and memory to increase the scope of what we develop, and the graphical prettiness, but not to speed the processes needed to complete a business procedure. Never mind the weight, the response time's great!

    To achieve 200 ms response times on a Z80, or similar, performance considerations influenced everything one did as a developer. If it meant writing an entire application in assembly code, applying every smart algorithm and shortcut imaginable to get the application to perform to spec, then so be it. As a result, I'm a dyed-in-the-wool performance freak and find it difficult to change my habits. Conversely, many developers now seem to feel quite differently. While all will acknowledge that performance is important, it's no longer the virtue it once was, and other factors such as user experience now take precedence. Am I wrong?

    If not, then perhaps we need a new school of development technique to rival Agile, dedicated once again to producing applications that smoke the rear wheels rather than pootle elegantly to the shops; that forgo skeuomorphism, cute animation, or architectural elegance in favor of the smell of hot rubber. I struggle to name an application I use that is truly notable for its blistering performance, and would dearly love one to do my everyday work – just as long as it doesn't go faster than my brain.


  • Keep coding the wrong way to remain consistent? [closed]

    - by bwalk2895
    Possible Duplicate: Code maintenance: keeping a bad pattern when extending new code for being consistent, or not?

    To keep things simple, let's say I am responsible for maintaining two applications, AwesomeApp and BadApp (I am responsible for more, and no, those are not their actual names). AwesomeApp is a greenfield project I have been working on with other members of my team. It was coded using all the fancy buzzwords: multilayer, SOA, SOLID, TDD, and so on. It represents the direction we want to go as a team. BadApp is an application that has been around for a long time. The architecture suffers from many sins; namely, everything is tightly coupled together, it is not uncommon to get a circular dependency error from the compiler, it is almost impossible to unit test, and it has large classes, duplicate code, and so on. We have a plan to rewrite the application following the standards established by AwesomeApp, but that won't happen for a while.

    I have to go into BadApp and fix a bug, but after spending months coding what I consider correctly, I really don't want to continue perpetuating bad coding practices. However, the way AwesomeApp is coded is vastly different from the way BadApp is coded. I fear implementing the "correct" way would cause confusion for other developers who have to maintain the application.

    Question: Is it better to keep coding the wrong way to remain consistent with the rest of the code in the application (knowing it will be replaced), or is it better to code the right way, with an understanding that it could cause confusion because it is so much different?

    To give you an example: there is a large class (1000+ lines) with several functions. One of the functions is to calculate a date based on an enumerated value. Currently the function handles all the various calculations. The function relies on no other functionality within the class; it is self-contained. I want to break the function into smaller functions (at the very least), and put them into their own classes, hide those classes behind an interface, and use the factory pattern to instantiate the date classes (at the most; a sketch of that shape follows). If I just broke it out into smaller functions within the class, it would follow the existing coding standard. The extra steps are to start following some of the SOLID principles.
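    For concreteness, the "at the most" refactoring has this shape. The question is about C#, but the pattern is language-agnostic; sketched here in C++ with invented rule names:

        #include <memory>
        #include <stdexcept>

        enum class DateRule { EndOfMonth, NetThirty };  // the enumerated value

        // The interface the big class would program against.
        struct DateCalculator {
            virtual ~DateCalculator() = default;
            virtual int daysFromBase(int baseDay) const = 0;  // toy signature
        };

        struct EndOfMonthCalculator : DateCalculator {
            int daysFromBase(int baseDay) const override { return 30 - baseDay; }
        };

        struct NetThirtyCalculator : DateCalculator {
            int daysFromBase(int) const override { return 30; }
        };

        // The factory: the only place that maps enum value to concrete class,
        // so the 1000-line class no longer owns every calculation branch.
        std::unique_ptr<DateCalculator> makeCalculator(DateRule rule) {
            switch (rule) {
                case DateRule::EndOfMonth: return std::make_unique<EndOfMonthCalculator>();
                case DateRule::NetThirty:  return std::make_unique<NetThirtyCalculator>();
            }
            throw std::invalid_argument("unknown DateRule");
        }

        int main() {
            auto calc = makeCalculator(DateRule::NetThirty);
            return calc->daysFromBase(5);  // 30
        }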


  • Frame rate on one of two machines running same code seems to be capped at 60 for no reason

    - by dennmat
    ISSUE

    I recently moved a project from my laptop to my desktop (machine info below). On my laptop, the exact same code displays the fps (and ms/f) correctly. On my desktop it does not. What I mean is that on the laptop it will display 300 fps (for example), whereas on my desktop it will show only up to 60. If I add 100 objects to the game on the laptop, I'll see my frame rate drop accordingly; the same test on the desktop results in no change, and the frames stay at 60. It takes a lot (~300) of entities before I'll see a frame drop on the desktop, and then it will descend. It seems as though its "theoretical" frame rate would be 400 or 500, but it never actually gets there and only does 60 until there's too much to handle at 60. This 60-frame cap is coming from nowhere. I'm not doing any frame limiting myself. It seems like something external is limiting my loop iterations on the desktop, but for the last couple of days I've been scratching my head trying to figure out how to debug this.

    SETUPS

    Desktop: Visual Studio Express 2012, Windows 7 Ultimate 64-bit
    Laptop: Visual Studio Express 2010, Windows 7 Ultimate 64-bit

    The libraries (Allegro, Box2D) are the same versions on both setups.

    CODE

    Main loop:

        while(!abort) {
            frameTime = al_get_time();
            if (frameTime - lastTime >= 1.0) {
                lastFps = fps/(frameTime - lastTime);
                lastTime = frameTime;
                avgMspf = cumMspf/fps;
                cumMspf = 0.0;
                fps = 0;
            }
            /** DRAWING/UPDATE CODE **/
            fps++;
            cumMspf += al_get_time() - frameTime;
        }

    Note: there is no blocking code in the loop at any point.

    Where I'm at

    My understanding of al_get_time() is that it can return different resolutions depending on the system. However, the resolution is never worse than seconds, and the double is represented as [seconds].[finer-resolution]; seeing as I'm only checking for a whole second, al_get_time() shouldn't be responsible. My project settings and compiler options are the same, and I promise it's the same code on both machines. My googling really didn't help me much, and although technically it's not that big of a deal, I'd really like to figure this out, or perhaps have it explained, whichever comes first. Even just an idea of how to go about figuring out possible causes would help, because I'm out of ideas. Any help at all is greatly appreciated.
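    A cap of exactly 60 fps on one machine only is the classic signature of vertical sync: the desktop's GPU driver is likely forcing vsync on, so every buffer flip waits for the monitor's 60 Hz refresh. Assuming Allegro 5 (which the al_get_time() call suggests), you can request vsync off before creating the display, though a driver profile set to "force on" will still win and would need changing in the driver's control panel:

        #include <allegro5/allegro.h>

        int main() {
            al_init();

            // ALLEGRO_VSYNC: 1 suggests vsync on, 2 suggests it off.
            // ALLEGRO_SUGGEST leaves the final say to the driver.
            al_set_new_display_option(ALLEGRO_VSYNC, 2, ALLEGRO_SUGGEST);

            ALLEGRO_DISPLAY* display = al_create_display(640, 480);
            // ... game loop as above ...
            al_destroy_display(display);
            return 0;
        }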


  • "Whole-team" C++ features?

    - by Blaisorblade
    In C++, features like exceptions impact your whole program: you can either disable them in your whole program, or you need to deal with them throughout your code. As a famous article in C++ Report puts it:

        Counter-intuitively, the hard part of coding exceptions is not the explicit throws and catches. The really hard part of using exceptions is to write all the intervening code in such a way that an arbitrary exception can propagate from its throw site to its handler, arriving safely and without damaging other parts of the program along the way.

    Since even new throws exceptions, every function needs to provide basic exception safety — unless it only calls functions which guarantee throwing no exception — unless you disable exceptions altogether in your whole project. Hence, exceptions are a "whole-program" or "whole-team" feature, since they must be understood by everybody in a team using them. (A two-function illustration of that burden appears at the end of this question.)

    But not all C++ features are like that, as far as I know. A possible example: if I don't understand templates but do not use them, I will still be able to write correct C++ — or will I not? I can even call sort on an array of integers and enjoy its amazing speed advantage with respect to C's qsort (because no function pointer is called), without risking bugs — or not? It seems templates are not "whole-team". Are there other C++ features which impact code not directly using them, and are hence "whole-team"? I am especially interested in features not present in C.

    Update: I'm especially looking for features where there's no language-enforced sign that you need to be aware of them. The first answer I got mentioned const-correctness, which is also whole-team, hence everybody needs to learn about it; however, AFAICS it will impact you only if you call a function which is marked const, and the compiler will prevent you from calling it on non-const objects, so you get something to google for. With exceptions, you don't even get that; moreover, they're in use as soon as you use new, hence exceptions are more "insidious". Since I can't phrase this objectively, though, I will appreciate any whole-team feature.

    Appendix: Why this question is objective (if you wonder)

    C++ is a complex language, so many projects or coding guides try to select "simple" C++ features, and many people try to include or exclude some of them according to mostly subjective criteria. Questions about that get rightfully closed regularly here on SO. Above, instead, I defined (as precisely as possible) what a "whole-team" language feature is, provided an example (exceptions), together with extensive supporting evidence in the literature about C++, and asked for whole-team features in C++ beyond exceptions. Whether you should use "whole-team" features, or whether that's a relevant concept, might be subjective — but that only means the importance of this question is subjective, like always.
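    As promised above, a two-function illustration of the "intervening code" burden. Neither function throws or catches anything itself, yet only one of them is correct in the presence of exceptions (a minimal sketch):

        #include <memory>
        #include <stdexcept>

        struct Widget {};

        void process() { throw std::runtime_error("mid-flight failure"); }

        // No throw, no catch in sight, yet not exception-safe: when process()
        // throws, delete is never reached and the Widget leaks.
        void leaky() {
            Widget* w = new Widget;
            process();
            delete w;
        }

        // Basic exception safety via RAII: the unique_ptr destructor runs on
        // every exit path, thrown or not.
        void safe() {
            auto w = std::make_unique<Widget>();
            process();
        }

        int main() {
            try { safe();  } catch (const std::exception&) {}  // nothing leaked
            try { leaky(); } catch (const std::exception&) {}  // one Widget leaked
            return 0;
        }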


  • Best Practice Method for Including Images in a DataGrid using MVVM

    - by Killercam
    All, I have a WPF DataGrid. This DataGrid shows files ready for compilation and should also show the progress of my compiler as it compiles the files. The format of the DataGrid is:

        Image | File Path | State
        ------|-----------|-----------
          *   | C:\AA\BB  | Compiled
          &   | F:\PP\QQ  | Failed
          >   | G:\HH\LL  | Processing

    The problem is the image column (the *, &, and > are for representation only). I have a ResourceDictionary that contains hundreds of vector images as Canvas objects:

        <ResourceDictionary xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
                            xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation">
            <Canvas x:Key="appbar_acorn" Width="48" Height="48" Clip="F1 M 0,0L 48,0L 48,48L 0,48L 0,0">
                <Path Width="22.3248" Height="25.8518" Canvas.Left="13.6757" Canvas.Top="11.4012"
                      Stretch="Fill" Fill="{DynamicResource BlackBrush}"
                      Data="F1 M 16.6309,18.6563C 17.1309,8.15625 29.8809,14.1563 29.8809,14.1563C 30.8809,11.1563 34.1308,11.4063 34.1308,11.4063C 33.5,12 34.6309,13.1563 34.6309,13.1563C 32.1309,13.1562 31.1309,14.9062 31.1309,14.9062C 41.1309,23.9062 32.6309,27.9063 32.6309,27.9062C 24.6309,24.9063 21.1309,22.1562 16.6309,18.6563 Z M 16.6309,19.9063C 21.6309,24.1563 25.1309,26.1562 31.6309,28.6562C 31.6309,28.6562 26.3809,39.1562 18.3809,36.1563C 18.3809,36.1563 18,38 16.3809,36.9063C 15,36 16.3809,34.9063 16.3809,34.9063C 16.3809,34.9063 10.1309,30.9062 16.6309,19.9063 Z "/>
            </Canvas>
        </ResourceDictionary>

    Now, I want to be able to include these in my image column and change them at run-time. I was going to attempt this by setting up a property in my view model that was of type Image and binding it to my view via:

        <DataGrid.Columns>
            <DataGridTemplateColumn Header="" Width="SizeToCells" IsReadOnly="True">
                <DataGridTemplateColumn.CellTemplate>
                    <DataTemplate>
                        <Image Source="{Binding Canvas}"/>
                    </DataTemplate>
                </DataGridTemplateColumn.CellTemplate>
            </DataGridTemplateColumn>
        </DataGrid.Columns>

    where the view model has the corresponding property. Now, I was told this is not 'pure' MVVM. I don't fully accept that, but I want to know if there is a better way of doing this. Say, binding to an enum and using a converter to get the image? Any advice would be appreciated.


  • Why can't Java/C# implement RAII?

    - by mike30
    Question: Why can't Java/C# implement RAII?

    Clarification: I am aware the garbage collector is not deterministic. So with the current language features it is not possible for an object's Dispose() method to be called automatically on scope exit. But could such a deterministic feature be added?

    My understanding: I feel an implementation of RAII must satisfy two requirements:

    1. The lifetime of a resource must be bound to a scope.
    2. Implicit. The freeing of the resource must happen without an explicit statement by the programmer, analogous to a garbage collector freeing memory without an explicit statement. The "implicitness" only needs to occur at the point of use of the class. The class library creator must of course explicitly implement a destructor or Dispose() method.

    Java/C# satisfy point 1. In C#, a resource implementing IDisposable can be bound to a "using" scope:

        void test() {
            using(Resource r = new Resource()) {
                r.foo();
            } // resource released on scope exit
        }

    This does not satisfy point 2. The programmer must explicitly tie the object to a special "using" scope. Programmers can (and do) forget to explicitly tie the resource to a scope, creating a leak. In fact, the "using" blocks are converted to try-finally-dispose() code by the compiler, so it has the same explicit nature as the try-finally-dispose() pattern. Without an implicit release, the hook to a scope is syntactic sugar.

        void test() {
            // Programmer forgot (or was not aware of the need) to explicitly
            // bind Resource to a scope.
            Resource r = new Resource();
            r.foo();
        } // resource leaked!!!

    I think it is worth creating a language feature in Java/C# allowing special objects that are hooked to the stack via a smart pointer. The feature would allow you to flag a class as scope-bound, so that it is always created with a hook to the stack. There could be options for different types of smart pointers.

        class Resource : ScopeBound {
            /* class details */
            void Dispose() {
                // free resource
            }
        }

        void test() {
            // class Resource was flagged as ScopeBound so the tie to the stack is implicit.
            Resource r = new Resource(); // r is a smart pointer
            r.foo();
        } // resource released on scope exit.

    I think implicitness is "worth it", just as the implicitness of garbage collection is "worth it". Explicit using blocks are refreshing on the eyes, but offer no semantic advantage over try-finally-dispose(). Is it impractical to implement such a feature in the Java/C# languages? Could it be introduced without breaking old code?
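    For comparison, the implicit behavior being asked for is exactly what C++ provides by default; a sketch of the same test() with no using, no finally, and no leak:

        #include <cstdio>

        class Resource {
        public:
            Resource()  { std::puts("acquired"); }
            ~Resource() { std::puts("released"); }  // runs on every scope exit
            void foo()  {}
        };

        void test() {
            Resource r;  // lifetime bound to the enclosing scope, implicitly
            r.foo();
        }  // destructor runs here, even if foo() had thrown

        int main() {
            test();
            return 0;
        }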


  • Do functional generics exist and what is the correct name for them if they do?

    - by voroninp
    Consider the following generic class:

        public class EntityChangeInfo<EntityType,TEntityKey>
        {
            ChangeTypeEnum ChangeType {get;}
            TEntityKeyType EntityKey {get;}
        }

    Here EntityType unambiguously defines TEntityKeyType. So it would be nice to have some kind of types' map:

        public class EntityChangeInfo<EntityType,TEntityKey>
            with map < [ EntityType : Person -> TEntityKeyType : int ]
                       [ EntityType : Car    -> TEntityKeyType : CarIdType ] >
        {
            ChangeTypeEnum ChangeType {get;}
            TEntityKeyType EntityKey {get;}
        }

    Another example is:

        public class Foo<TIn>
            with map < [ TIn : Person -> TOut1 : string, TOut2 : int, ..., TOutN : double ]
                       [ TIn : Car    -> TOut1 : int,    TOut2 : int, ..., TOutN : Price  ] >
        {
            TOut1 Prop1 {get;set;}
            TOut2 Prop2 {get;set;}
            ...
            TOutN PropN {get;set;}
        }

    The reasonable question: how can this be interpreted by the compiler? Well, for me it is just a shortcut for two structurally similar classes:

        public sealed class Foo<Person>
        {
            string Prop1 {get;set;}
            int Prop2 {get;set;}
            ...
            double PropN {get;set;}
        }

        public sealed class Foo<Car>
        {
            int Prop1 {get;set;}
            int Prop2 {get;set;}
            ...
            Price PropN {get;set;}
        }

    But besides this, we could imagine some update of Foo<>:

        public class Foo<TIn>
            with map < [ TIn : Person -> TOut1 : string, TOut2 : int, ..., TOutN : double ]
                       [ TIn : Car    -> TOut1 : int,    TOut2 : int, ..., TOutN : Price  ] >
        {
            TOut1 Prop1 {get;set;}
            TOut2 Prop2 {get;set;}
            ...
            TOutN PropN {get;set;}

            public override string ToString()
            {
                return string.Format("prop1={0}, prop2={1}, ..., propN={N-1}", Prop1, Prop2, ..., PropN);
            }
        }

    This can all seem quite superficial, but the idea came when I was designing the messages for our system; see the very first class: many messages with the same structure should be discriminated by the EntityType. So the question is whether such a construct exists in any programming language?
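    Such compile-time type maps do exist: C++ spells them as trait templates (each specialization is one entry of the map), Haskell as type families, and Scala as abstract type members. A minimal C++ rendering of the EntityChangeInfo example, with Person, Car, and CarIdType as stand-in types:

        struct Person {};
        struct Car {};
        struct CarIdType { int value; };

        // The "types' map": each specialization adds one [EntityType -> key type] entry.
        template <typename EntityType> struct EntityKey;  // no default: unmapped types fail to compile
        template <> struct EntityKey<Person> { using type = int; };
        template <> struct EntityKey<Car>    { using type = CarIdType; };

        enum class ChangeType { Added, Removed, Modified };

        template <typename EntityType>
        struct EntityChangeInfo {
            ChangeType change;
            typename EntityKey<EntityType>::type key;  // looked up from the map
        };

        int main() {
            EntityChangeInfo<Person> p{ChangeType::Added, 42};
            EntityChangeInfo<Car>    c{ChangeType::Removed, CarIdType{7}};
            return p.key + c.key.value;  // 49
        }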



  • Fix ACPI DSDT of Amilo Pa 1538

    - by kayahr
    I have a Fujitsu-Siemens Amilo Pa 1538 laptop and installed Ubuntu 10.04 on it (kernel 2.6.32). The main problem is that the fans of the notebook are always on, even when the temperature is low. Another problem is that the display brightness cannot be adjusted. For some years I have used Dell laptops and never experienced any ACPI problems, so I thought it was no longer necessary to fix ACPI tables, but now I have this crap of a laptop and I think I have to do it. Unfortunately, I need some help repairing the DSDT. The dsdt.dat and dsdt.dsl of the laptop can be found here: http://www.ailis.de/~k/permdata/20100420/dsdt/

    Compiling the DSDT gives the following output:

        # iasl -sa dsdt.dsl

        Intel ACPI Component Architecture
        ASL Optimizing Compiler version 20090521 [Jun 30 2009]
        Copyright (C) 2000 - 2009 Intel Corporation
        Supports ACPI Specification Revision 3.0a

        dsdt.dsl    81: Method (\_WAK, 1, NotSerialized)
        Warning  1080 - Reserved method must return a value (_WAK)

        dsdt.dsl   207: Method (_L10, 0, NotSerialized)
        Warning  1087 - Not all control paths return a value (_L10)

        dsdt.dsl  2861: Method (NVIF, 3, NotSerialized)
        Warning  1087 - Not all control paths return a value (NVIF)

        dsdt.dsl  4551: Store (\_SB.PCI0.LPC0.PMRD (0xFA), Local0)
        Warning  1099 - Statement is unreachable

        ASL Input:  dsdt.dsl - 4962 lines, 162828 bytes, 2300 keywords
        AML Output: dsdt.aml - 17627 bytes, 591 named objects, 1709 executable opcodes

        Compilation complete. 0 Errors, 4 Warnings, 0 Remarks, 519 Optimizations

    Is anyone here with DSDT experience able to help me fix the DSDT?


  • playsms sent sms to queue

    - by user512213
    I installed playSMS on my system as described here: https://github.com/antonraharja/playSMS/wiki/Web-User-Interface-Installation

    I use Kannel as a gateway, and Kannel is running fine; I get the following status when I check Kannel's status in the browser:

        Kannel bearerbox version `1.4.3'. Build `Nov 24 2011 08:02:18', compiler `4.6.2'.
        System Linux, release 3.2.0-23-generic-pae, version #36-Ubuntu SMP Tue Apr 10 22:19:09 UTC 2012, machine i686.
        Hostname MSSRF-INCOIS, IP 127.0.1.1.
        Libxml version 2.7.8.
        Using OpenSSL 1.0.0e 6 Sep 2011.
        Compiled with MySQL 5.5.17, using MySQL 5.5.24.
        Using SQLite 3.7.9.
        Using native malloc.

        Status: running, uptime 0d 1h 8m 27s

        WDP: received 0 (0 queued), sent 0 (0 queued)
        SMS: received 0 (0 queued), sent 0 (0 queued), store size -1
        SMS: inbound (0.00,0.00,0.00) msg/sec, outbound (0.00,0.00,0.00) msg/sec
        DLR: 0 queued, using internal storage

        Box connections:
            wapbox, IP 127.0.0.1 (on-line 0d 1h 8m 2s)
            wapbox, IP 127.0.0.1 (on-line 0d 0h 18m 11s)

        SMSC connections:
            unknown AT2[/dev/ttyS0] (online 4106s, rcvd 0, sent 0, failed 0, queued 0 msgs)

    But when I try to send an SMS from playSMS, it shows "Your SMS has been delivered to queue", and the SMS is never received. Is there anything we missed in the configuration? Can anyone advise me on how to proceed?


  • OpenSSL support for Ruby: "Cipher is not a module (TypeError)"

    - by smotchkkiss
    The Problem

    Our systems admin needed to upgrade the packages on our CentOS 5.4 dev server to match the packages on our production server. The upgrade affected ruby and/or openssl. We run a Ruby on Rails issue tracking system called Redmine that is deployed with Passenger on Apache. Everything worked before the server update, but when trying to access the ticket system now, I get the following error:

        Error message:
            Cipher is not a module
        Exception class:
            TypeError
        Application root:
            /home/dev/rails/redmine-0.8.7

    I've been trying so hard to fix this problem but I can't seem to beat it. I have tried following this guide: http://iamclovin.posterous.com/how-to-solve-the-cipher-is-not-a-module-error

    When I try require 'openssl' in IRB, I do see a true return value. However, I'm still seeing the "Cipher is not a module" TypeError when accessing the ticket system.

    Possibly (probably) related: I've tried updating Passenger, but when I try passenger-install-apache2-module I see:

        Checking for required software...
        * GNU C++ compiler... found at /usr/bin/g++
        * Ruby development headers... found
        * OpenSSL support for Ruby... /usr/lib/ruby/1.8/openssl/cipher.rb:22: Cipher is not a module (TypeError)

    Any help?


  • Developer's PC - worth getting more than 8GB RAM?

    - by Borek
    I'm building a developer PC and am wondering whether to get 8GB or 12GB. It's a Core i7-860 system, i.e., an 1156 motherboard with 4 slots for RAM sticks, dual channel, usually up to 16GB (as opposed to 1366 sockets, where 6 banks / triple channel are used).

    8GB would be cheaper to get, especially because the price per GB is lower with 4x2GB compared to 2x4GB. Also, the availability of 4GB DIMMs is worse here where I live; those are the main practical advantages of 8GB. (Edit: I should have stressed the price difference more - in the e-shop I'm buying from, the difference between 12GB and 8GB is so big that I could almost buy a whole new netbook for it.)

    However, I understand that more RAM can never do harm, which is the point of this question - how much of a difference will 12GB make as opposed to 8GB? Honestly, I've always been on 3.2GB systems (4GB but a 32-bit system) and never felt much pain from having too little memory - of course there could be more, but for instance the compiler's performance was usually held back by slow I/O or by not utilizing multiple cores on my CPU. Still, I'm not questioning that 8GB will be useful; however, I'm not sure about the additional 4GB difference between 8 and 12 gig. Does anyone have experience with 8GB / 12GB systems?

    The software I usually run all the time:

    - Visual Studio or Eclipse (both should be fine with ~2GB RAM; after that, I feel their performance is I/O bound)
    - Firefox (it can never have enough RAM, can it? :)
    - Office (~500MB RAM should be enough)
    - ... and then some smaller apps like Skype, other browsers, some background services, etc.


  • GNU screen, how to get the current sessionname programmatically

    - by Jimm Chen
    [This can be considered step 2 of my previous question, "Is it possible to change GNU screen session name after created?"]

    Actually, I'd like to write a script that can display the current screen session name and change the current session name. For example:

        sren armcross

    would change the session name to armcross (ARM gcc cross compiler) and output something like:

        screen session name changed from '25278.pts-15.linux-ic37' to 'armcross'

    So the key question now is how to get the current session name. Not only to display the old session name: according to "Is it possible to change GNU screen session name after created?", I have to know it (to pass to -d -r) before I can change it to something else.

    Can we use $STY for the current session name? No. $STY will not change after you have changed the session name to a user-defined one. However, for the command

        screen -d -r <oldsessname> -X sessionname armcross

    <oldsessname> should be the user-defined name (if one was ever defined) instead of $STY; otherwise, screen spouts the error "No screen session found."

    Maybe there is a verbose way: use screen -list to list all sessions (user-defined names listed), then match the pid part from $STY against those listed sessions, and we will find the current session's user-defined name. It should not be so verbose for such a straightforward question, don't you think?

    The -d -D and -r -R options seem to expose too much implementation detail to screen's users. It seems that to rename a session, you have to detach it, then do the rename, then reattach it. Right?

    My env: openSUSE 11.3, GNU screen 4.00.03 (FAU) 23-Oct-06

    Thank you.


  • QT Creator 64-bit Snow Leopard

    - by quadelirus
    I have a bunch of libraries that I need to link against that I installed via MacPorts. They are 64-bit libraries. I'm working on an application written with Qt Creator, and the .pro is set up. I downloaded the Qt SDK for Mac OS X, but it is 32-bit, and so the compiled code won't link against the 64-bit binaries that I got from MacPorts. OK. So I downloaded the Qt SDK source and built from source using -arch x86_64. Now I have a 64-bit version of the SDK (I think), but it didn't build a Qt Creator app. So I need to know one of 4 things:

    1.) I'm guessing that a simple make command will convince the Qt SDK to build the creator for me. If this is true, then what is the command (make creator?). Barring that, I need to know:

    2.) The easiest way to get MacPorts to redownload the libraries that I installed, with a 32-bit version (I keep seeing a "+universal" mentioned, but I haven't seen it on a command line, and simply calling ports +universal install XYZ doesn't seem to work - perhaps I need to uninstall and reinstall the package?). Also, is this a stupid idea? Or:

    3.) Someone who actually has a prebuilt 64-bit Qt SDK installer, so I don't have to mess with this. It is ridiculous that Qt doesn't already have this available, in my opinion - Snow Leopard has been out since, what, last August?

    4.) (and this would take the cake) I don't understand why I can't simply put a "compile-for-64-bit, stupid" command directly into the Qt .pro file and have it build. There isn't really a reason why a compiler compiled in 32 bits couldn't compile to 64 bits, is there?

    Thanks.


  • Making OpenSSL work on PHP Windows 2008 server with FastCGI

    - by KacieHouser
    I have been researching all day. Here is what I have done:

    - In C:/PHP/php.ini and C:/PHP/php-cgi-fcgi.ini I have set extension_dir = "C:/PHP/ext" and uncommented extension=php_openssl.dll
    - I went to http://windows.php.net/download/ and got the thread-safe version of the PHP 5.4 (5.4.8) DLLs
    - In C:/PHP/ext I replaced the php_openssl.dll with the one I downloaded
    - In System32 and SysWOW64 I added the following DLLs: ssleay.dll, libeay.dll
    - I restarted the IIS server in the Server Manager under Web Server, and stopped and started the World Wide Web Publishing Service

    That didn't work, so I tried the same thing with the unthreaded versions. I still get:

        Fatal error: Call to undefined function ftp_ssl_connect() in C:\inetpub\wwwroot\REMOVED_dev\save_data.php on line 5

    Here are related items from phpinfo():

        System: Windows NT DEV-WEB1 6.1 build 7601 (Windows Server 2008 R2 Standard Edition Service Pack 1) i586
        Compiler: MSVC9 (Visual C++ 2008)
        Architecture: x86
        Configure Command: cscript /nologo configure.js "--enable-snapshot-build" "--enable-debug-pack" "--disable-zts" "--disable-isapi" "--disable-nsapi" "--without-mssql" "--without-pdo-mssql" "--without-pi3web" "--with-pdo-oci=C:\php-sdk\oracle\instantclient10\sdk,shared" "--with-oci8=C:\php-sdk\oracle\instantclient10\sdk,shared" "--with-oci8-11g=C:\php-sdk\oracle\instantclient11\sdk,shared" "--with-enchant=shared" "--enable-object-out-dir=../obj/" "--enable-com-dotnet" "--with-mcrypt=static" "--disable-static-analyze" "--with-pgo"
        Server API: CGI/FastCGI
        Configuration File (php.ini) Path: C:\Windows
        Loaded Configuration File: C:\PHP\php-cgi-fcgi.ini
        Scan this dir for additional .ini files: (none)
        Additional .ini files parsed: (none)
        Registered PHP Streams: php, file, glob, data, http, ftp, zip, compress.zlib, compress.bzip2, https, ftps, sqlsrv, phar
        Registered Stream Socket Transports: tcp, udp, ssl, sslv3, sslv2, tls
        FTP support: enabled
        Protocols: dict, file, ftp, ftps, gopher, http, https, imap, imaps, ldap, pop3, pop3s, rtsp, scp, sftp, smtp, smtps, telnet, tftp
        OpenSSL support: enabled
        OpenSSL Library Version: OpenSSL 0.9.8t 18 Jan 2012
        OpenSSL Header Version: OpenSSL 0.9.8x 10 May 2012

    What am I missing here?


  • Problems with freetype on OSX 10.7.4

    - by eythor
    I'm trying to install mplayer with OSD using Homebrew. I've added both --enable-menu and --with-freetype-config=/usr/local/Cellar/freetype/2.4.10/freetype-config to the brew recipe.

        ==> Downloading http://www.mplayerhq.hu/MPlayer/releases/MPlayer-1.1.tar.xz
        Already downloaded: /Library/Caches/Homebrew/mplayer-1.1.tar.xz
        xz -dc "/Library/Caches/Homebrew/mplayer-1.1.tar.xz" | /usr/bin/tar xf -
        ==> ./configure --prefix=/usr/local/Cellar/mplayer/1.1 --cc=cc --host-cc=cc --disable-cdparanoia --disable-libopenjpeg --enable-menu --disable-x11 --with-freetype-config=/usr/local/Cellar/freetype/2.4.10/freetype-config
        Checking for cc version ... clang 4.2.1 (experimental support only)
        Checking for working compiler ... yes
        Detected operating system: Darwin
        Detected host architecture: x86_64
        Checking for cross compilation ... no
        Checking for host cc ... cc
        Checking for CPU vendor ... GenuineIntel (6:15:10)
        Checking for CPU type ... Intel(R) Core(TM)2 Duo CPU T7700 @ 2.40GHz

    For freetype-config I've tried three separate paths: /usr/X11R6/bin/freetype-config, /usr/X11/bin/freetype-config, and the one in Cellar. Checking for freetype always fails:

        Checking for freetype >= 2.0.9 ... no
        Checking for fontconfig ... no (FreeType support needed)

    Although freetype itself seems to be installed:

        mufasa:bin eythor$ freetype-config --version
        15.0.9
        mufasa:bin eythor$ freetype-config --ftversion
        2.4.10
        mufasa:bin eythor$ freetype-config --libs
        -L/usr/local/Cellar/freetype/2.4.10/lib -lfreetype -lz -lbz2
        mufasa:bin eythor$ freetype-config --cflags
        -I/usr/local/Cellar/freetype/2.4.10/include/freetype2 -I/usr/local/Cellar/freetype/2.4.10/include

    I'm not sure what to try next or how to figure out why freetype isn't recognized. Can anyone point me in a sensible direction?


  • Linux can't find file that exists

    - by Joe
    I'm trying to get Google's Dart language up and running, but it errors when running dart2js. I'm running Arch Linux, and I installed dart-sdk from the AUR. Some relevant terminal output lies below.

        % dart2js main.dart
        /usr/local/bin/dart2js: line 7: /usr/local/bin/dart: No such file or directory

        % cat /usr/local/bin/dart2js
        #!/bin/sh
        # Copyright (c) 2012, the Dart project authors. Please see the AUTHORS file
        # for details. All rights reserved. Use of this source code is governed by a
        # BSD-style license that can be found in the LICENSE file.

        BIN_DIR=`dirname $0`
        exec $BIN_DIR/dart --allow_string_plus=false $BIN_DIR/../lib/dart2js/lib/compiler/implementation/dart2js.dart "$@"

        % file /usr/local/bin/dart
        /usr/local/bin/dart: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.15, BuildID[sha1]=0x27fe166ca015c1adfeaf3a6f9c018e7d7af46d9f, stripped

        % ls -alh /usr/local/bin
        total 4.9M
        drwxr-xr-x  2 root root 4.0K Jun 10 22:51 .
        drwxr-xr-x 12 root root 4.0K Jun 10 22:51 ..
        -rwxr-xr-x  1 root root 422K May 10 22:41 cargo
        -rwxr-xr-x  1 root root 2.7M Jun 10 22:50 dart
        -rwxr-xr-x  1 root root  360 Jun  6 16:20 dart2js
        -rwxr-xr-x  1 root root  176 Jun  6 16:20 pub
        -rwxr-xr-x  1 root root 166K May 10 22:41 rustc
        -rwxr-xr-x  1 root root 1.6M May 10 22:41 rustdoc

        % uname -rm
        3.3.7-1-ARCH x86_64

    Could it be because I'm running a 64-bit OS and the dart binary is 32-bit?


  • Stream video file in debian?

    - by Rob
    I've tried ffserver with ffmpeg, I've tried VLC, and I'm not sure what else to try or what I've done wrong.

        +-[ robert@s10 ]--[ ~ ]
        +[#!]¬ vlc --version
        VLC media player 2.0.0 Twoflower (revision 2.0.0-0-g421a4fc)
        VLC version 2.0.0 Twoflower (2.0.0-0-g421a4fc)
        Compiled by buildd on biber.debian.org (Mar 1 2012 22:21:37)
        Compiler: gcc version 4.6.2 (Debian 4.6.2-14)
        This program comes with NO WARRANTY, to the extent permitted by law.
        You may redistribute it under the terms of the GNU General Public License;
        see the file named COPYING for details.
        Written by the VideoLAN team; see the AUTHORS file.

    With VLC I've gone through and tried everything I could in the streaming section, but I can't get the stream to actually work. Looking around, apparently Debian strips the encoders from the package?

    I want to share some videos I've made with friends on IRC, and it would be easiest if I could just stream them so we can all watch at the same time and critique parts in real time. Has anyone done something similar?

        Linux s10 3.2.0-2-686-pae #1 SMP Tue Mar 20 19:48:26 UTC 2012 i686 GNU/Linux

    I'm on a basic home network, behind a NAT (192.168.1.*), with dynamic DNS set up. That doesn't really matter too much, I can figure that out, but it's not even working locally. I have a file server set up and could just share the files that way, but I'd rather have everyone watching at the same time (or just about). I'm not worried about installing new packages or building something from source; that's not a big issue. I just want to get it working. It's a big plus if I can do it from the command line.


  • Problem with wireless networking

    - by Rodnower
    Hello, I have Atheros wifi hardware, an Intel chipset, a Gigabyte laptop, and CentOS 5 installed. Now I try to use the wireless network and get problems. First of all I want to say that I have 2 OSes on my laptop, and when I load Windows XP I am still able to access the wireless network.

    My first try was to activate the wlan0 interface in System -> Administration -> Network, but I get:

        Determining IP information for wlan0... failed.

    My second try was also unsuccessful:

        [root 1 network-scripts]# ifup-wireless
        Error : unrecognised wireless request "off"

    The relevant output of iwconfig is:

        Warning: Driver for device wlan0 recommend version 21 of Wireless Extension,
        but has been compiled with version 20, therefore some driver features may
        not be available...

        wlan0     IEEE 802.11  ESSID:""
                  Mode:Managed  Frequency:2.462 GHz  Access Point: Not-Associated
                  Tx-Power=27 dBm  Retry min limit:7  RTS thr:off  Fragment thr=2352 B
                  Encryption key:off
                  Link Quality:0  Signal level:0  Noise level:0
                  Rx invalid nwid:0  Rx invalid crypt:0  Rx invalid frag:0
                  Tx excessive retries:0  Invalid misc:0  Missed beacon:0

        {output not in the original format}

    The same things happen even if I do modprobe wlan0 (this does not give an error). It is important to say that modprobe did not succeed in finding ath_pci, so I decided to download the latest version of the madwifi driver from http://madwifi-project.org. I extracted it, but when I run make, this is what I get:

        [root 1 madwifi-0.9.4]# make
        /bin/sh: line 0: cd: /lib/modules/2.6.18-164.el5/build: No such file or directory
        Makefile.inc:66: * /lib/modules/2.6.18-164.el5/build is missing, please set KERNELPATH. Stop.

    I tried to set KERNELPATH, but I think that it was incorrect:

        [root 1 madwifi-0.9.4]# make KERNELPATH=/lib/modules/2.6.18-164.el5/kernel/
        /bin/sh: cc: command not found
        Makefile.inc:81: * Cannot detect kernel version - please check compiler and KERNELPATH. Stop.

    Does anyone have any ideas? Thank you very much in advance.


  • glib2 64-bit compile fails on Solaris 10

    - by Aaron
    I'm encountering a problem building glib-2.26.1 on a Solaris 10 box - 64-bit. Googling doesn't turn anything up, but no matter what I do the build fails in the same way. I've tried using the Sun Studio compiler and gcc (SFW), to no avail. When I compile I get the following error:

        [root@foo glib-2.26.1]$ export CC=/opt/solstudio12.2/bin/cc
        [root@foo glib-2.26.1]$ export CFLAGS="-m64"
        ...configure goes normally...
        [root@foo glib-2.26.1]$ make
        ...snip...
        source='gatomic.c' object='gatomic.lo' libtool=yes \
        DEPDIR=.deps depmode=none /bin/bash ../depcomp \
        /bin/bash ../libtool --tag=CC --mode=compile /opt/solstudio12.2/bin/cc -DHAVE_CONFIG_H -I. -I.. -I.. -I../glib -I../glib -I.. -DG_LOG_DOMAIN=\"GLib\" -DG_DISABLE_CAST_CHECKS -DG_DISABLE_DEPRECATED -DGLIB_COMPILATION -DPCRE_STATIC -DG_DISABLE_SINGLE_INCLUDES -D_REENTRANT -D_PTHREADS -m64 -c -o gatomic.lo gatomic.c
        libtool: compile: /opt/solstudio12.2/bin/cc -DHAVE_CONFIG_H -I. -I.. -I.. -I../glib -I../glib -I.. -DG_LOG_DOMAIN=\"GLib\" -DG_DISABLE_CAST_CHECKS -DG_DISABLE_DEPRECATED -DGLIB_COMPILATION -DPCRE_STATIC -DG_DISABLE_SINGLE_INCLUDES -D_REENTRANT -D_PTHREADS -m64 -c gatomic.c -KPIC -DPIC -o .libs/gatomic.o
        "gatomic.c", line 885: warning: no explicit type given
        "gatomic.c", line 885: syntax error before or at: *
        "gatomic.c", line 885: warning: old-style declaration or incorrect type for: g_atomic_mutex
        "gatomic.c", line 906: warning: implicit function declaration: g_mutex_lock
        "gatomic.c", line 909: warning: implicit function declaration: g_mutex_unlock
        "gatomic.c", line 1155: warning: implicit function declaration: g_mutex_new
        "gatomic.c", line 1155: warning: improper pointer/integer combination: op "="
        cc: acomp failed for gatomic.c
        make[4]: *** [gatomic.lo] Error 1
        make[4]: Leaving directory `/root/glib-2.26.1/glib'
        make[3]: *** [all-recursive] Error 1
        make[3]: Leaving directory `/root/glib-2.26.1/glib'
        make[2]: *** [all] Error 2
        make[2]: Leaving directory `/root/glib-2.26.1/glib'
        make[1]: *** [all-recursive] Error 1
        make[1]: Leaving directory `/root/glib-2.26.1'
        make: *** [all] Error 2

    Does anyone know where the build might be going wrong? I'm not sure where else to look here. Thanks.

