Search Results

Search found 4999 results on 200 pages for 'derived instances'.


  • ArcGIS–Getting the Legend Labels out

    - by Avner Kashtan
    Working with ESRI’s ArcGIS package, especially the WPF API, can be confusing. There’s the REST API, the SOAP APIs, and the WPF classes themselves, which expose some web service calls and information, but not everything. With all that, it can be hard to find specific features among the different options. Some functionality is handed to you on a silver platter, while some is maddeningly hard to implement. Today, for instance, I was working on adding a Legend control to my map-based WPF application, to explain the different symbols that can appear on the map. ESRI’s own map-editing tools show the legend with the fields that make up the symbology (screenshot omitted), but the Legend control supplied out of the box by ESRI does not (screenshot omitted): very pretty, but unfortunately missing the option to display the names of the fields that make up the symbology. Luckily, the WPF controls have a lot of templating/extensibility points that allow you to specify the layout of each field:

        <esri:Legend>
            <esri:Legend.MapLayerTemplate>
                <DataTemplate>
                    <TextBlock Text="{Binding Layer.ID}"/>
                </DataTemplate>
            </esri:Legend.MapLayerTemplate>
        </esri:Legend>

    but that only replicates the same built-in behavior. I could now add any additional fields I liked, but unfortunately, I couldn’t find them as part of the Layer, GraphicsLayer or FeatureLayer definitions. This is the part where ESRI’s lack of organization is noticeable, since I can see this data easily when accessing the ArcGIS Server’s web interface, but I had no idea how to find it as part of the built-in classes. Is it a part of Layer? Of LayerInfo? Of the LayerDefinition class that exists only in the SOAP service? As it turns out, none of these. Since these fields are used by the symbol renderer to determine which symbol to draw, they’re actually a part of the layer’s Renderer. Since I already had a MyFeatureLayer class derived from FeatureLayer that added extra functionality, I could just add this property to it:

        public string LegendFields
        {
            get
            {
                if (this.Renderer is UniqueValueRenderer)
                {
                    return (this.Renderer as UniqueValueRenderer).Field;
                }
                else if (this.Renderer is UniqueValueMultipleFieldsRenderer)
                {
                    var renderer = this.Renderer as UniqueValueMultipleFieldsRenderer;
                    return string.Join(renderer.FieldDelimiter, renderer.Fields);
                }
                else return null;
            }
        }

    For my scenario, all of my layers used symbology derived from a single field or, as in the examples above, from several of them. The renderer even kindly supplied me with the comma to separate the fields with. Now it was a simple matter to get the Legend control in line – assuming that it was bound to a collection of MyFeatureLayer:

        <esri:Legend>
            <esri:Legend.MapLayerTemplate>
                <DataTemplate>
                    <StackPanel>
                        <TextBlock Text="{Binding Layer.ID}"/>
                        <TextBlock Text="{Binding Layer.LegendFields}" Margin="10,0,0,0" FontStyle="Italic"/>
                    </StackPanel>
                </DataTemplate>
            </esri:Legend.MapLayerTemplate>
        </esri:Legend>

    and get the look I wanted – the list of fields below the layer name, indented. (Note: the original snippet used TextStyle="Italic"; the WPF TextBlock property is FontStyle, corrected above.)

    Read the article

  • Handling Configuration Changes in Windows Azure Applications

    - by Your DisplayName here!
    While finalizing StarterSTS 1.5, I had a closer look at lifetime and configuration management in Windows Azure. (This is no new information – just some bits and pieces compiled in one single place – plus a bit of a reality check.)

    When dealing with lifetime management (and especially configuration changes), there are two mechanisms in Windows Azure – a RoleEntryPoint-derived class and a couple of events on the RoleEnvironment class. You can find good documentation about RoleEntryPoint here.

    The RoleEnvironment class features two events that deal with configuration changes – Changing and Changed. Whenever a configuration change gets pushed out by the fabric controller (either changes in the settings section or the instance count of a role), the Changing event gets fired. The event handler receives an instance of the RoleEnvironmentChangingEventArgs type. This contains a collection of RoleEnvironmentChange objects. RoleEnvironmentChange is in turn a base class for two other classes that detail the two types of possible configuration changes I mentioned above: RoleEnvironmentConfigurationSettingChange (configuration settings) and RoleEnvironmentTopologyChange (instance count). The two respective classes contain information about which configuration setting and which role has been changed. Furthermore, the Changing event can trigger a role recycle (aka reboot) by setting EventArgs.Cancel to true. So your typical job in the Changing event handler is to figure out whether your application can handle these configuration changes at runtime, or whether you would rather have a clean restart.

    Prior to SDK 1.3, the VS templates generated the following code to reboot if any configuration settings have changed:

        private void RoleEnvironmentChanging(object sender, RoleEnvironmentChangingEventArgs e)
        {
            // If a configuration setting is changing
            if (e.Changes.Any(change => change is RoleEnvironmentConfigurationSettingChange))
            {
                // Set e.Cancel to true to restart this role instance
                e.Cancel = true;
            }
        }

    This is a little drastic as a default, since most applications will work just fine with changed configuration – maybe that’s the reason this code has gone away in the 1.3 SDK templates (more).

    The Changed event gets fired after the configuration changes have been applied. Again the changes get passed in just like in the Changing event. But from this point on, RoleEnvironment.GetConfigurationSettingValue() will return the new values. You can still decide to recycle if some change was so drastic that you need a restart; you can use RoleEnvironment.RequestRecycle() for that (more). As a rule of thumb: when you always use GetConfigurationSettingValue to read from configuration (and there is no bigger state involved), you typically don’t need to recycle.

    In the case of StarterSTS, I had to abstract away the physical configuration system and read the actual configuration (either from web.config or the Azure service configuration) at startup. I then cache the configuration settings in memory. This means I indeed need to take action when configuration changes – so in my case I simply clear the cache, and the new config values get read on the next access to my internal configuration object. No downtime – nice!

    Gotcha: A very natural place to hook up the RoleEnvironment lifetime events is the RoleEntryPoint-derived class. But with the move to the full IIS model in 1.3, the RoleEntryPoint methods get executed in a different AppDomain (even in a different process) – see here. You might not be able to call into your application code to e.g. clear a cache. Keep that in mind! In this case you need to handle these events from e.g. Global.asax.
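
    A minimal sketch of that Global.asax approach, assuming the in-memory cache is exposed through a hypothetical ConfigurationCache class (both the class and its Clear method are placeholders for your own caching code):

        // Global.asax.cs – runs in the web application's AppDomain,
        // so it can reach application-level state such as an in-memory cache.
        using System;
        using System.Linq;
        using Microsoft.WindowsAzure.ServiceRuntime;

        public class Global : System.Web.HttpApplication
        {
            protected void Application_Start(object sender, EventArgs e)
            {
                // Don't cancel in Changing – we can handle setting changes at runtime.
                RoleEnvironment.Changed += (s, args) =>
                {
                    // Only react to configuration setting changes.
                    if (args.Changes.OfType<RoleEnvironmentConfigurationSettingChange>().Any())
                    {
                        // GetConfigurationSettingValue() returns the new values from
                        // now on – just invalidate the cached copies.
                        ConfigurationCache.Clear(); // placeholder for your own cache
                    }
                };
            }
        }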

    Read the article

  • Differences Between NHibernate and Entity Framework

    - by Ricardo Peres
    Introduction

    NHibernate and Entity Framework are two of the most popular O/RM frameworks in the .NET world. Although they share some functionality, there are some aspects on which they are quite different. This post will describe these differences and will hopefully help you get started with the one you know less well. Mind you, this is a personal selection of features to compare; it is by no means an exhaustive list.

    History

    First, a bit of history. NHibernate is an open-source project that was first ported from Java’s venerable Hibernate framework, one of the first O/RM frameworks, but nowadays it is not tied to it; for example, it has .NET-specific features and has evolved in different ways from those of its Java counterpart. The current version is 3.3, with 3.4 on the horizon. It currently targets .NET 3.5, but can be used as well in .NET 4; it simply makes no use of any .NET 4-specific functionality. You can find its home page at NHForge. Entity Framework 1 came out with .NET 3.5 and is now on its second major version, despite being version 4. Code First sits on top of it but came separately, and will continue to be released out of band with major .NET distributions. It is currently on version 4.3.1, and version 5 will be released together with .NET Framework 4.5. All versions target the current version of .NET at the time of their release. Its home is at MSDN.

    Architecture

    In NHibernate, there is a separation between the Unit of Work and the configuration and model instances. You start off by creating a Configuration object, where you specify all global NHibernate settings such as the database and dialect to use, the batch sizes, the mappings, etc., then you build an ISessionFactory from it. The ISessionFactory holds model and metadata that is tied to a particular database and to the settings that came from the Configuration object; there will typically be only one instance of each in a process. Finally, you create instances of ISession from the ISessionFactory, which is the NHibernate representation of the Unit of Work and Identity Map. This is a lightweight object; it basically opens and closes a database connection as required and keeps track of the entities associated with it. ISession objects are cheap to create and dispose, because all of the model complexity is stored in the ISessionFactory and Configuration objects. As for Entity Framework, the ObjectContext/DbContext holds the configuration and model and acts as the Unit of Work, holding references to all of the known entity instances. This class is therefore not as lightweight as its NHibernate counterpart, and it is not uncommon to see examples where an instance is cached on a field.

    Mappings

    Both NHibernate and Entity Framework (Code First) support the use of POCOs to represent entities; no base classes are required (or even possible, in the case of NHibernate). As for mapping to and from the database, NHibernate supports three types of mappings:

    - XML-based, which have the advantage of not tying the entity classes to a particular O/RM; the XML files can be deployed as files on the file system or as embedded resources in an assembly;
    - Attribute-based, for keeping both the entities and database details in the same place, at the expense of polluting the entity classes with NHibernate-specific attributes;
    - Strongly-typed code-based, which allows dynamic creation of the model and strongly types it, so that if, for example, a property name changes, the mapping will also be updated.

    Entity Framework can use:

    - Attribute-based mappings (although attributes cannot express all of the available possibilities – for example, cascading);
    - Strongly-typed code mappings.

    Database Support

    With NHibernate you can use mostly any database you want, including: SQL Server; SQL Server Compact; SQL Server Azure; Oracle; DB2; PostgreSQL; MySQL; Sybase Adaptive Server/SQL Anywhere; Firebird; SQLite; Informix; any database through OLE DB; any database through ODBC. Out of the box, Entity Framework only supports SQL Server, but a number of providers exist, both free and commercial, for some of the most used databases, such as Oracle and MySQL. See a list here.

    Inheritance Strategies

    Both NHibernate and Entity Framework support the three canonical inheritance strategies: Table Per Hierarchy (Single Table Inheritance), Table Per Type (Class Table Inheritance) and Table Per Concrete Type (Concrete Table Inheritance).

    Associations

    Regarding associations, both support one to one, one to many and many to many. However, NHibernate offers far more collection types:

    - Bags of entities or values: unordered, possibly with duplicates;
    - Lists of entities or values: ordered, indexed by a number column;
    - Maps of entities or values: indexed by either an entity or any value;
    - Sets of entities or values: unordered, no duplicates;
    - Arrays of entities or values: indexed, immutable.

    Querying

    NHibernate exposes several querying APIs:

    - LINQ is probably the most used nowadays, and really does not need to be introduced;
    - Hibernate Query Language (HQL) is a database-agnostic, object-oriented, SQL-like language that has existed since NHibernate’s creation and still offers the most advanced querying possibilities; it is well suited for dynamic queries, even if using string concatenation;
    - The Criteria API is an implementation of the Query Object pattern, where you create a semi-abstract conceptual representation of the query you wish to execute by means of a class model; also a good choice for dynamic querying;
    - Query Over offers a similar API to Criteria, but using strongly-typed LINQ expressions instead of strings; for this reason, although more refactor-friendly than Criteria, it is also less suited for dynamic queries;
    - SQL, including stored procedures, can also be used;
    - Integration with the Lucene.NET indexer is available.

    As for Entity Framework:

    - LINQ to Entities is fully supported, and its implementation is considered very complete; it is the API of choice for most developers;
    - Entity SQL, HQL’s counterpart, is also an object-oriented, database-independent querying language that can be used for dynamic queries;
    - SQL, of course, is also supported.

    Caching

    Both NHibernate and Entity Framework, of course, feature a first-level cache. NHibernate also supports a second-level cache that can be used among multiple ISessionFactory instances, even in different processes/machines: Hashtable (in-memory); SysCache (uses ASP.NET as the cache provider); SysCache2 (same as above but with support for SQL Server SQL Dependencies); Prevalence; SharedCache; Memcached; Redis; NCache; AppFabric Caching. Out of the box, Entity Framework does not have any second-level cache mechanism; however, there are some public samples that show how we can add this.

    ID Generators

    NHibernate supports different ID generation strategies, coming from the database and otherwise:

    - Identity (for SQL Server, MySQL, and other databases that support identity columns);
    - Sequence (for Oracle, PostgreSQL, and others that support sequences);
    - Trigger-based;
    - HiLo;
    - Sequence HiLo (for databases that support sequences);
    - Several GUID flavors, in both GUID and string format;
    - Increment (for single-user uses);
    - Assigned (you must know what you’re doing);
    - Sequence-style (either uses an actual sequence or a single-column table);
    - Table of ids;
    - Pooled (similar to HiLo but stores high values in a table);
    - Native (uses whatever mechanism the current database supports, identity or sequence).

    Entity Framework only supports: Identity generation; GUIDs; Assigned values.

    Properties

    NHibernate supports properties of entity types (one to one or many to one), collections (one to many or many to many) as well as scalars and enumerations. It offers a mechanism for having complex property types generated from the database, which even includes support for querying. It also supports properties originated from SQL formulas. Entity Framework only supports scalars, entity types and collections. Enumeration support will come in the next version.

    Events and Interception

    NHibernate has a very rich event model that exposes more than 20 events, either for synchronous pre-execution or asynchronous post-execution, including: Pre/Post-Load; Pre/Post-Delete; Pre/Post-Insert; Pre/Post-Update; Pre/Post-Flush. It also features interception of class instantiation and SQL generation. As for Entity Framework, only two events exist: ObjectMaterialized (after loading an entity from the database); SavingChanges (before saving changes, which includes deleting, inserting and updating).

    Tracking Changes

    For NHibernate as well as Entity Framework, all changes are tracked by their respective Unit of Work implementation. Entities can be attached to and detached from it. Entity Framework does, however, also support self-tracking entities.

    Optimistic Concurrency Control

    NHibernate supports all of the imaginable scenarios: SQL Server’s ROWVERSION; Oracle’s ORA_ROWSCN; a column containing date and time; a column containing a version number; all/dirty columns comparison. Entity Framework is more focused on SQL Server, so it only supports: SQL Server’s ROWVERSION; comparing all/some columns.

    Batching

    NHibernate has full support for insertion batching, but only if the ID generator in use is not database-based (for example, it cannot be used with Identity), whereas Entity Framework has no batching at all.

    Cascading

    Both support cascading for collections and associations: when an entity is deleted, its conceptual children are also deleted. NHibernate also offers the possibility to set the foreign key column on children to NULL instead of removing them.

    Flushing Changes

    NHibernate’s ISession has a FlushMode property that can have the following values:

    - Auto: changes are sent to the database when necessary, for example, if there are dirty instances of an entity type and a query is performed against this entity type, or if the ISession is being disposed;
    - Commit: changes are sent when committing the current transaction;
    - Never: changes are only sent when explicitly calling Flush().

    As for Entity Framework, changes have to be explicitly sent through a call to AcceptAllChanges()/SaveChanges().
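
    To illustrate the flush modes just described, a minimal NHibernate sketch (assumptions: an ISessionFactory named factory has already been built, and Customer is a placeholder mapped entity):

        using NHibernate;

        using (ISession session = factory.OpenSession())
        {
            // Changes are sent only when the transaction commits.
            session.FlushMode = FlushMode.Commit;

            using (ITransaction tx = session.BeginTransaction())
            {
                var customer = session.Get<Customer>(42);
                customer.Name = "New Name";
                tx.Commit(); // dirty instances are flushed here
            }
        }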

    Lazy Loading

    NHibernate supports lazy loading for: associated entities (one to one, many to one); collections (one to many, many to many); scalar properties (think of BLOBs or CLOBs). Entity Framework only supports lazy loading for: associated entities; collections.

    Generating and Updating the Database

    Both NHibernate and Entity Framework Code First (with the Migrations API) allow creating the database model from the mapping, and updating it if the mapping changes.

    Extensibility

    As you can guess, NHibernate is far more extensible than Entity Framework. Basically, everything can be extended, from ID generation, to LINQ-to-SQL translation, HQL, native SQL support, custom column types, custom association collections, SQL generation, supported databases, etc. With Entity Framework your options are more limited, not least because practically no information exists as to what can be extended/changed. It does feature a provider model that can be extended to support any database.

    Integration With Other Microsoft APIs and Tools

    When it comes to integration with Microsoft technologies, it will come as no surprise that Entity Framework offers the best support. For example, the following technologies are fully supported: ASP.NET (through the EntityDataSource); ASP.NET Dynamic Data; WCF Data Services; WCF RIA Services; Visual Studio (through the integrated designer).

    Documentation

    This is another point where Entity Framework is superior: NHibernate lacks, for starters, an up-to-date API reference synchronized with its current version. It does have a community mailing list, blogs and wikis, although they are not much used. Entity Framework has a number of resources on MSDN and, of course, several forums and discussion groups exist.

    Conclusion

    Like I said, this is a personal list. It may come as a surprise to some that Entity Framework is so far behind NHibernate in so many aspects, but it is true that NHibernate is much older and, due to its open-source nature, is not tied to product-specific timeframes and can thus evolve much more rapidly. I do like both, and I choose whichever is best for the job at hand. I am looking forward to the changes in EF5, which will add significant value to an already interesting product. So, what do you think? Did I forget anything important, or is there anything else worth talking about? Looking forward to your comments!

    Read the article

  • From HttpRuntime.Cache to Windows Azure Caching (Preview)

    - by Jeff
    I don’t know about you, but the announcement of Windows Azure Caching (Preview) (yes, the parentheses are apparently part of the interim name) made me a lot more excited about using Azure. Why? Because one of the great performance tricks of any Web app is to cache frequently used data in memory, so it doesn’t have to hit the database, a service, or whatever.

    When you run your Web app on one box, HttpRuntime.Cache is a sweet and stupid-simple solution. Somewhere in the data fetching pieces of your app, you can see if an object is available in cache, and return that instead of hitting the data store. I did this quite a bit in POP Forums, and it dramatically cuts down on the database chatter. The problem is that it falls apart if you run the app on many servers, in a Web farm, where one server may initiate a change to that data, and the others will have no knowledge of the change, making it stale. Of course, if you have the infrastructure to do so, you can use something like memcached or AppFabric to do a distributed cache, and achieve the caching flavor you desire.

    You could do the same thing in Azure before, but it would cost more because you’d need to pay for another role or VM or something to host the cache. Now, you can use a portion of the memory from each instance of a Web role to act as that cache, with no additional cost. That’s huge. So if you’re using a percentage of memory that comes out to 100 MB, and you have three instances running, that’s 300 MB available for caching. For the uninitiated, a Web role in Azure is essentially a VM that runs a Web app (worker roles are the same idea, only without the IIS part). You can spin up many instances of the role, and traffic is load balanced to the various instances. It’s like adding or removing servers to a Web farm all willy-nilly and at your discretion, and it’s what the cloud is all about. I’d say it’s my favorite thing about Windows Azure.

    The slightly annoying thing about developing for a Web role in Azure is that the local emulator that’s launched by Visual Studio is a little on the slow side. If you’re used to using the built-in Web server, you’re used to building and then alt-tabbing to your browser and refreshing a page. If you’re just changing an MVC view, you’re not even doing the building part. Spinning up the simulated Azure environment is too slow for this, but ideally you want to code your app to use this fantastic distributed cache mechanism.

    So first off, here’s the link to the page showing how to code using the caching feature. If you’re used to using HttpRuntime.Cache, this should be pretty familiar to you. Let’s say that you want to use the Azure cache preview when you’re running in Azure, but HttpRuntime.Cache if you’re running local, or in a regular IIS server environment. Through the magic of dependency injection, we can get there pretty quickly. First, design an interface to handle the cache insertion, fetching and removal.

    Mine looks like this:

        public interface ICacheProvider
        {
            void Add(string key, object item, int duration);
            T Get<T>(string key) where T : class;
            void Remove(string key);
        }

    Now we’ll create two implementations of this interface… one for the Azure cache, one for HttpRuntime:

        public class AzureCacheProvider : ICacheProvider
        {
            public AzureCacheProvider()
            {
                _cache = new DataCache("default"); // in Microsoft.ApplicationServer.Caching, see how-to
            }

            private readonly DataCache _cache;

            public void Add(string key, object item, int duration)
            {
                _cache.Add(key, item, new TimeSpan(0, 0, 0, 0, duration));
            }

            public T Get<T>(string key) where T : class
            {
                return _cache.Get(key) as T;
            }

            public void Remove(string key)
            {
                _cache.Remove(key);
            }
        }

        public class LocalCacheProvider : ICacheProvider
        {
            public LocalCacheProvider()
            {
                _cache = HttpRuntime.Cache;
            }

            private readonly System.Web.Caching.Cache _cache;

            public void Add(string key, object item, int duration)
            {
                _cache.Insert(key, item, null, DateTime.UtcNow.AddMilliseconds(duration),
                    System.Web.Caching.Cache.NoSlidingExpiration);
            }

            public T Get<T>(string key) where T : class
            {
                return _cache[key] as T;
            }

            public void Remove(string key)
            {
                _cache.Remove(key);
            }
        }

    Feel free to expand these to use whatever cache features you want. I’m not going to go over dependency injection here, but I assume that if you’re using ASP.NET MVC, you’re using it. Somewhere in your app, you set up the DI container that resolves interfaces to concrete implementations (Ninject calls its container a “kernel” instead of a container). For this example, I’ll show you how StructureMap does it. It uses a convention-based scheme, where if you need to get an instance of IFoo, it looks for a class named Foo. You can also do this mapping explicitly. The initialization of the container looks something like this:

        ObjectFactory.Initialize(x =>
        {
            x.Scan(scan =>
            {
                scan.AssembliesFromApplicationBaseDirectory();
                scan.WithDefaultConventions();
            });

            if (Microsoft.WindowsAzure.ServiceRuntime.RoleEnvironment.IsAvailable)
                x.For<ICacheProvider>().Use<AzureCacheProvider>();
            else
                x.For<ICacheProvider>().Use<LocalCacheProvider>();
        });

    If you use Ninject or Windsor or something else, that’s OK. Conceptually they’re all about the same. The important part is the conditional statement that checks to see if the app is running in Azure. If it is, it maps ICacheProvider to AzureCacheProvider; otherwise it maps to LocalCacheProvider. Now when a request comes into your MVC app, and the chain of dependency resolution occurs, you can see to it that the right caching code is called. A typical design may have a call stack that goes: Controller –> BusinessLogicClass –> Repository.

    Let’s say your repository class looks like this:

        public class MyRepo : IMyRepo
        {
            public MyRepo(ICacheProvider cacheProvider)
            {
                _context = new MyDataContext();
                _cache = cacheProvider;
            }

            private readonly MyDataContext _context;
            private readonly ICacheProvider _cache;

            public SomeType Get(int someTypeID)
            {
                var key = "somename-" + someTypeID;
                var cachedObject = _cache.Get<SomeType>(key);
                if (cachedObject != null)
                {
                    _context.SomeTypes.Attach(cachedObject);
                    return cachedObject;
                }
                var someType = _context.SomeTypes.SingleOrDefault(p => p.SomeTypeID == someTypeID);
                _cache.Add(key, someType, 60000);
                return someType;
            }

            // ... more stuff to update, delete or whatever, being sure to remove
            // from cache when you do so
        }

    When the DI container gets an instance of the repo, it passes an instance of ICacheProvider to the constructor, which in this case will be whatever implementation was specified when the container was initialized. The Get method first tries to hit the cache, and of course doesn’t care what the underlying implementation is: Azure, HttpRuntime, or otherwise. If it finds the object, it returns it right then. If not, it hits the database (this example is using Entity Framework), and inserts the object into the cache before returning it. The important thing not pictured here is that other methods in the repo class will construct the key for the cached object, in this case “somename-” plus the ID of the object, and then remove it from cache in any method that alters or deletes the object. That way, no matter what instance of the role is processing the request, it won’t find the object if it has been made stale, that is, updated or outright deleted, forcing it to attempt to hit the database.

    So is this a good technique? Well, sort of. It depends on how you use it, and what your testing looks like around it. Because of differences in behavior and execution of the two caching providers, for example, you could see some strange errors. For example, I immediately got an error indicating there was no parameterless constructor for an MVC controller, because the DI resolver failed to create instances for the dependencies it had. In reality, the NuGet-packaged DI resolver for StructureMap was eating an exception thrown by the Azure components that said my configuration, outlined in that how-to article, was wrong. That error wouldn’t occur when using the HttpRuntime. That’s something a lot of people debate when using different components like that, and how you configure them. I kinda hate XML config files, and like the idea of the code-based approach above, but you should be darn sure that your unit and integration testing can account for the differences.

    Read the article

  • Questions regarding ordering of catch statements in catch block - compiler specific or language standard?

    - by Andy
    I am currently using Visual Studio Express C++ 2008, and have some questions about catch block ordering. Unfortunately, I could not find the answer on the internet, so I am posing these questions to the experts. I notice that unless catch (...) is placed at the end of a sequence of catch clauses, the compilation fails with error C2311. For example, the following would compile:

        catch (MyException)
        {
        }
        catch (...)
        {
        }

    while the following would not:

        catch (...)
        {
        }
        catch (MyException)
        {
        }

    a. Could I ask if this is defined in the C++ language standard, or if this is just the Microsoft compiler being strict?

    b. Do C# and Java have the same rules as well?

    c. As an aside, I have also tried making a base class and a derived class, and putting the catch statement for the base class before the catch statement for the derived class. This compiled without problems. Are there no language standards guarding against such practice?
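
    Regarding (b), my understanding (please correct me if wrong) is that C# at least is stricter than C++ here: it rejects a base-class catch placed before a derived one at compile time. A minimal sketch of what I mean:

        using System;

        class MyException : Exception { }

        class Program
        {
            static void Main()
            {
                try
                {
                    throw new MyException();
                }
                catch (Exception)   // base class first...
                {
                }
                catch (MyException) // ...should make this clause unreachable
                {                   // (compile-time error CS0160)
                }
            }
        }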

    Read the article

  • Trouble binding WPF Menu to ItemsSource

    - by chaiguy
    I would like to avoid having to build a menu manually in XAML or code, by binding to a list of ICommand-derived objects. However, I'm experiencing a bit of a problem where the resulting menu has two levels of menu items (i.e. each MenuItem is contained in another MenuItem). My guess is that this is happening because WPF is automatically generating a MenuItem for my binding, but the "viewer" I'm using actually already is a MenuItem (it's derived from MenuItem):

        <ContextMenu x:Name="selectionContextMenu"
                     ItemsSource="{Binding Source={x:Static OrangeNote:Note.MultiCommands}}"
                     ItemContainerStyleSelector="{StaticResource separatorStyleSelector}">
            <ContextMenu.ItemTemplate>
                <DataTemplate>
                    <Viewers:NoteCommandMenuItemViewer
                        CommandParameter="{Binding Source={x:Static OrangeNote:App.Screen}, Path=SelectedNotes}" />
                </DataTemplate>
            </ContextMenu.ItemTemplate>
        </ContextMenu>

    (The ItemContainerStyleSelector is from http://bea.stollnitz.com/blog/?p=23, which allows me to have Separator elements inside my bound source.) So, the menu is bound to a collection of ICommands, and each item's CommandParameter is set to the same global target (which happens to be a collection, but that's not important). My question is: is there any way I can bind this such that WPF doesn't automatically wrap each item in a MenuItem?

    Read the article

  • QT: I've inherited from QTreeView. I've inherited from QStandardItem. How do I style the items?

    - by San Jacinto
    My Google skills must be failing me today. I've inherited from QTreeView to create a tree view that stores a QStandardItemModel instead of a QAbstractItemModel. I have also inherited from QStandardItem to create a class to store my data in an item, as is necessary. I've successfully inserted my derived QStandardItem into my derived QTreeView's QStandardItemModel. Now the trouble is, I can't figure out how to style it. I know that QTreeView has a setStyleSheet(QString) member, but I can't seem to get it working. It may be as simple as me not styling the correct attribute. Any pointers would be appreciated. Thanks. For clarity, here are my class defs:

        class SurveyTreeItem : public QStandardItem
        {
        public:
            SurveyTreeItem();
            SurveyTreeItem( const QString & text );
            ~SurveyTreeItem();
        };

        class StandardItemModelTreeView : public QTreeView
        {
        public:
            StandardItemModelTreeView(QWidget* parent = 0);
            ~StandardItemModelTreeView();
            QStandardItemModel* getStandardItemModel();
        };

    I've tried the following style sheets:

        StandardTreeView::Item { font: 87 12pt 'Arial Black'; }
        StandardTreeView::QStandardItem { font: 87 12pt 'Arial Black'; }
        QTreeView::QStandardItem { font: 87 12pt 'Arial Black'; }
        QTreeView::Item { font: 87 12pt 'Arial Black'; }
        QTreeView::SurveyTreeItem { font: 87 12pt 'Arial Black'; }
        StandardTreeView::SurveyTreeItem { font: 87 12pt 'Arial Black'; }

    Read the article

  • SECURITY Flaws in this design for User authentication.

    - by Shao
    Security flaws in this design for user authentication, from: http://wiki.pylonshq.com/display/pylonscookbook/Simple+Homegrown+Authentication

    Note:

    a. The project follows the MVC pattern.
    b. Only a user with a valid username and password is allowed to submit something.

    Design:

    a. Have a base controller from which all controllers are derived.
    b. Before any of the actions in the derived controllers are called, the system calls a "before" action in the base controller.
    c. In each controller, the user hardcodes the actions that need to be verified in an array.
    d. The "before" action first looks in the array that holds the protected actions and sees whether a user is logged in or not by peeking into the session. If a user is present, the user is allowed to submit; otherwise the user is redirected to the login page.

    What do you think?
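
    For reference, the same design sketched in ASP.NET MVC terms (all names here are hypothetical; the original is a Pylons/Python project):

        using System.Web.Mvc;

        // Base controller from which all controllers are derived.
        public abstract class BaseController : Controller
        {
            // Each derived controller hardcodes the actions that need to be verified.
            protected virtual string[] ProtectedActions
            {
                get { return new string[0]; }
            }

            // Called before any action on a derived controller.
            protected override void OnActionExecuting(ActionExecutingContext filterContext)
            {
                string action = filterContext.ActionDescriptor.ActionName;
                bool isProtected = System.Array.IndexOf(ProtectedActions, action) >= 0;

                // Peek into the session to see if a user is logged in.
                if (isProtected && Session["user"] == null)
                {
                    filterContext.Result = new RedirectResult("~/login");
                    return;
                }
                base.OnActionExecuting(filterContext);
            }
        }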

    Read the article

  • Can protobuf-net serialize this combination of interface and generic collection?

    - by tsupe
    I am trying to serialize an ItemTransaction, and protobuf-net (r282) is having a problem. The types look like this:

        ItemTransaction : IEnumerable<KeyValuePair<Type, IItemCollection>>

    and ItemCollection is like this:

        FooCollection : ItemCollection<Foo>
        ItemCollection<T> : BindingList<T>, IItemCollection
        IItemCollection : IList<Item>

    where T is a derived type of Item. ItemCollection also has a property of type IItemCollection. I am serializing like this:

        IItemCollection itemCol = someService.Blah(...);
        ...
        SerializeWithLengthPrefix<IItemCollection>(stream, itemCol, PrefixStyle.Base128);

    My eventual goal is to serialize ItemTransaction, but I am snagged on IItemCollection. Item and its derived types can be [de]serialized with no issues, see [1], but deserializing an IItemCollection fails (serializing works). ItemCollection has an ItemExpression property, and when deserializing, protobuf can't create an abstract class. This makes sense to me, but I'm not sure how to get through it:

        ItemExpression<T> : ItemExpression, IItemExpression
        ItemExpression : Expression

    ItemExpression is abstract, as is Expression. How do I get this to work properly? Also, I am concerned that ItemTransaction will fail, since the IItemCollections are going to be differing and unknown at compile time (an ItemTransaction will have FooCollection, BarCollection, FlimCollection, FlamCollection, etc.). What am I missing (Marc)?

    [1] http://stackoverflow.com/questions/2276104/protobuf-net-deserializing-across-assembly-boundaries

    Read the article

  • Virtual properties duplicated during serialization when XmlElement attribute used

    - by Laramie
    The Goal: XML-serialize an object that contains a list of objects of that and its derived types. The resulting XML should not use the xsi:type attribute to describe the type; to wit, the serialized XML elements would carry an assigned name specific to the derived type, not always that of the base class, which is the default behavior.

    The Attempt: After exploring IXmlSerializable with its eerie XmlSchemaProvider methods and voodoo reflection to return specialized schemas and an XmlQualifiedName over the course of days, I found I was able to use the simple [XmlElement] attribute to accomplish the goal... almost.

    The Problem: Overridden properties appear twice when serializing. The exception reads "The XML element 'overriddenProperty' from namespace '' is already present in the current scope. Use XML attributes to specify another XML name or namespace for the element." I attempted using a *Specified property (see code), but it didn't work.

    Sample Code: Class declaration:

        using System;
        using System.Collections.Generic;
        using System.Xml.Serialization;

        [XmlInclude(typeof(DerivedClass))]
        public class BaseClass
        {
            public BaseClass() { }

            [XmlAttribute("virt")]
            public virtual string Virtual { get; set; }

            [XmlIgnore]
            public bool VirtualSpecified
            {
                get { return (this is BaseClass); }
                set { }
            }

            [XmlElement(ElementName = "B", Type = typeof(BaseClass), IsNullable = false)]
            [XmlElement(ElementName = "D", Type = typeof(DerivedClass), IsNullable = false)]
            public List<BaseClass> Children { get; set; }
        }

        public class DerivedClass : BaseClass
        {
            public DerivedClass() { }

            [XmlAttribute("virt")]
            public override string Virtual
            {
                get { return "always return spackle"; }
                set { }
            }
        }

    Driver:

        BaseClass baseClass = new BaseClass() { Children = new List<BaseClass>() };
        BaseClass baseClass2 = new BaseClass() { };
        DerivedClass derivedClass1 = new DerivedClass() { Children = new List<BaseClass>() };
        DerivedClass derivedClass2 = new DerivedClass() { Children = new List<BaseClass>() };
        baseClass.Children.Add(derivedClass1);
        baseClass.Children.Add(derivedClass2);
        derivedClass1.Children.Add(baseClass2);

    I've been wrestling with this on and off for weeks and can't find the answer anywhere.

    Read the article

  • How to wrap a thirdparty library COM class for notifications in a C++ project

    - by geocoin
    A third-party vendor has provided a COM interface as an external API to their product. For sending instructions to it, it's fine with the mechanism of generating class wrappers with Visual Studio by importing from the DLL, then creating instances of the created classes, calling their CreateDispatch(), using the access functions, and releasing when done. For return notifications, I'd normally create an EventsListener class derived from IDispatch, using the Invoke function to handle events returning from the interface. This vendor has created an Events class which I have to wrap and expose, then explicitly tell the installation where to look. All the examples given are in C#, where it's really easy, but I'm struggling with how to do it in C++. In the C# example, the interop DLL provided is simply added as a reference and derived into a class like so:

        using System;
        using System.Runtime.InteropServices;
        using System.Windows.Forms;
        using System.Text;
        using <THEIR INTEROP LIB>;

        namespace some.example.namespace
        {
            [ComVisible(true)]
            public class EventViewer : IEvents // where IEvents is their events class
            {
                public void OnEvent(EventID EventID, object pData) // overridden function
                {
                    // event handled here
                }
            }
        }

    In broad terms I assume that I must create a COM interface, since they require a ProgID from me to instantiate, but how do I derive from a class that's been wrapped by the import and then expose the created class to COM? I'm just not sure where to even start, as all the tutorials I've seen so far talk in terms of creating brand-new classes, not wrapping a third-party one.

    Read the article

  • Validation not bubbling up to my other models.

    - by DJTripleThreat
    Ok, I have a relationship between People, Users and Employees such that all Employees are Users and all Users are People. Person is an abstract class that User is derived from, and Employee is derived from that. Now... I have an EmployeesController class, and the create method looks like this:

        def create
          @employee = Employee.new(params[:employee])
          @employee.user = User.new(params[:user])
          @employee.user.person = Person.new(params[:person])

          respond_to do |format|
            if @employee.save
              flash[:notice] = 'Employee was successfully created.'
              format.html { redirect_to(@employee) }
              format.xml  { render :xml => @employee, :status => :created, :location => @employee }
            else
              format.html { render :action => "new" }
              format.xml  { render :xml => @employee.errors, :status => :unprocessable_entity }
            end
          end
        end

    As you can see, when I'm using the :polymorphic => true clause, the way you access the superclass is by doing something like @derived_class_variable.super_class_variable.super_super_etc. The Person class has a validates_presence_of :first_name and, when it is satisfied, everything on my form is OK. However, if I leave out the first name, it won't prevent the employee from being saved. What happens is that the employee record is saved but the person record isn't (because of the validation). How can I get the validation errors to show up in the flash object?

    Read the article

  • Handle cases where Nhibernate subclass does not exist

    - by kaykayman
    I have a scenario where I am using NHibernate to map records from one table to several different derived classes based on a discriminator:

        public class BaseClass { }
        public class DerivedClass0 : BaseClass { }
        public class DerivedClass1 : BaseClass { }
        public class DerivedClass2 : BaseClass { }

    I then use NHibernate's DiscriminateSubClassesOnColumn() method and alter the configuration to include:

        <subclass name="DerivedClass0" extends="BaseClass" discriminator-value="discriminator0" />
        <subclass name="DerivedClass1" extends="BaseClass" discriminator-value="discriminator1" />
        <subclass name="DerivedClass2" extends="BaseClass" discriminator-value="discriminator2" />

    so that when mapped, these classes are cast to their derived classes and not BaseClass. However, there are some records in my database which have a discriminator that does not have a corresponding subclass. In these cases, NHibernate throws an error: "Object with id: 'xxx' was not of the specified subclass..." Is there some way I can handle this, so that any records which do not have a corresponding subclass are cast to BaseClass rather than an error being thrown? I have simplified the above as much as possible; however, it is worth noting that the XML is edited dynamically, which is why I am referencing Fluent NHibernate [DiscriminateSubClassesOnColumn()] and XML at the same time. The following things (which would help) are not an option: I cannot correct the data to remove records which are invalid; I cannot create subclasses for those records which do not have one. I need to handle cases where NHibernate tries to map on a discriminator and finds that one does not exist.

    Read the article

  • git doesn't show where code was removed.

    - by Andrew Myers
    So I was tasked with replacing some dummy code that our project requires for historical compatibility reasons but that has mysteriously dropped out sometime since the last release. Since disappearing code makes me nervous about what else might have gone missing but unnoticed, I've been digging through the logs trying to find in what commit this handful of lines was removed. I've tried a number of things, including "git log -S'add-visit-resource-pcf'", git blame, and even git bisect with a script that simply checks for the existence of the line, but have been unable to pinpoint exactly where these lines were removed. I find this very perplexing, particularly since the last log entry (obtained by the above command) before my re-introduction of this code was someone else adding the code as well.

        commit 0b0556fa87ff80d0ffcc2b451cca1581289bbc3c
        Author: Andrew
        Date:   Thu May 13 10:55:32 2010 -0400

            Re-introduced add-visit-resource-pcf, see PR-65034.

        diff --git a/spike/hst/scheduler/defpackage.lisp b/spike/hst/scheduler/defpackage.lisp
        index f8e692d..a6f8d38 100644
        --- a/spike/hst/scheduler/defpackage.lisp
        +++ b/spike/hst/scheduler/defpackage.lisp
        @@ -115,6 +115,7 @@
             #:add-to-current-resource-pcf
             #:add-user-package-nickname
             #:add-value-criteria
        +    #:add-visit-resource-pcf
             #:add-window-to-gs-params
             #:adjust-derived-resources
             #:adjust-links-candidate-criteria-types

        commit 9fb10e25572c537076284a248be1fbf757c1a6e1
        Author: Bob
        Date:   Sun Jan 17 18:35:16 2010 -0500

            update-defpackage for Spike 33.1 Delivery

        diff --git a/spike/hst/scheduler/defpackage.lisp b/spike/hst/scheduler/defpackage.lisp
        index 983666d..47f1a9a 100644
        --- a/spike/hst/scheduler/defpackage.lisp
        +++ b/spike/hst/scheduler/defpackage.lisp
        @@ -118,6 +118,7 @@
             #:add-user-package-nickname
             #:add-value-criteria
             #:add-vars-from-proposal
        +    #:add-visit-resource-pcf
             #:add-window-to-gs-params
             #:adjust-derived-resources
             #:adjust-links-candidate-criteria-types

    This is for one of our package definition files, but the relevant source file reflects something similar. Does anyone know what could be going on here and how I could find the information I want? It's not really that important, but this kind of thing makes me a bit nervous.

    Read the article

  • Converting currencies via intermediate currencies.

    - by chillitom
    My rate data looks like this:

        class FxRate
        {
            string Base { get; set; }
            string Target { get; set; }
            double Rate { get; set; }
        }

        private IList<FxRate> rates = new List<FxRate>
        {
            new FxRate { Base = "EUR", Target = "USD", Rate = 1.3668 },
            new FxRate { Base = "GBP", Target = "USD", Rate = 1.5039 },
            new FxRate { Base = "USD", Target = "CHF", Rate = 1.0694 },
            new FxRate { Base = "CHF", Target = "SEK", Rate = 8.12 }
            // ...
        };

    Given a large yet incomplete list of exchange rates, where all currencies appear at least once (either as a target or base currency): what algorithm would I use to be able to derive rates for exchanges that aren't directly listed? I'm looking for a general-purpose algorithm of the form:

        public double Rate(string baseCode, string targetCode, double currency)
        {
            return ...
        }

    In the example above, a derived rate would be GBP-CHF or EUR-SEK (which would require using the conversions for EUR-USD, USD-CHF and CHF-SEK). While I know how to do the conversions by hand, I'm looking for a tidy way (perhaps using LINQ) to perform these derived conversions, possibly involving multiple currency hops. What's the nicest way to go about this?
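
    One possible approach (a sketch, not a definitive answer, and assuming the FxRate members are made public): treat the rates as edges of a graph, including the implied inverse rates, and breadth-first search from the base currency to the target while multiplying the rates along the path:

        using System;
        using System.Collections.Generic;

        public static class FxConverter
        {
            // Returns the derived base->target rate, or throws if no
            // chain of listed conversions connects the two currencies.
            public static double DerivedRate(IEnumerable<FxRate> rates, string baseCode, string targetCode)
            {
                // Adjacency list: currency -> (neighbour currency, rate), inverses included.
                var edges = new Dictionary<string, List<KeyValuePair<string, double>>>();
                Action<string, string, double> add = (from, to, rate) =>
                {
                    if (!edges.ContainsKey(from))
                        edges[from] = new List<KeyValuePair<string, double>>();
                    edges[from].Add(new KeyValuePair<string, double>(to, rate));
                };
                foreach (var r in rates)
                {
                    add(r.Base, r.Target, r.Rate);
                    add(r.Target, r.Base, 1.0 / r.Rate); // implied inverse rate
                }

                // Breadth-first search, accumulating the product of rates per path.
                var visited = new HashSet<string> { baseCode };
                var queue = new Queue<KeyValuePair<string, double>>();
                queue.Enqueue(new KeyValuePair<string, double>(baseCode, 1.0));
                while (queue.Count > 0)
                {
                    var current = queue.Dequeue();
                    if (current.Key == targetCode)
                        return current.Value;
                    foreach (var edge in edges[current.Key])
                        if (visited.Add(edge.Key))
                            queue.Enqueue(new KeyValuePair<string, double>(edge.Key, current.Value * edge.Value));
                }
                throw new ArgumentException("No conversion path found.");
            }
        }

    For the sample data, DerivedRate(rates, "EUR", "SEK") would multiply the EUR-USD, USD-CHF and CHF-SEK rates. Breadth-first search finds the path with the fewest hops first, which keeps rounding drift from long chains down.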

    Read the article

  • Linq 2 SQL Store inherited classes - not mapped to tables in the database

    - by user348672
    Hi. I'm trying to introduce the Linq2SQL technique into the project and have encountered the following issue: I've created ORM classes based on the Northwind database. In some other application I create several classes derived from the Linq2SQL classes. I'm able to add such a class to an EntitySet, but the application fails to submit changes. Is there any way around this? Sample code (MyClass is derived from Order):

        DataClasses1DataContext northwind = new DataClasses1DataContext();

        Product chai = northwind.Products.Single(p => p.ProductName == "Chai");
        Product tofu = northwind.Products.Single(p => p.ProductName == "Tofu");

        Order myOrder = new Order();
        myOrder.OrderDate = DateTime.Now;
        myOrder.RequiredDate = DateTime.Now.AddDays(1);
        myOrder.Freight = 34;

        Order_Detail myItem1 = new Order_Detail();
        myItem1.Product = chai;
        myItem1.Quantity = 12345;

        Order_Detail myItem2 = new Order_Detail();
        myItem2.Product = tofu;
        myItem2.Quantity = 3;

        myOrder.Order_Details.Add(myItem1);
        myOrder.Order_Details.Add(myItem2);

        Customer myCustomer = northwind.Customers.Single(c => c.CompanyName == "B's Beverages");

        MyClass newOrder = new MyClass();
        newOrder.OrderDate = DateTime.Now;
        newOrder.RequiredDate = DateTime.Now.AddDays(31);
        newOrder.Freight = 35;

        Order_Detail myItem3 = new Order_Detail();
        myItem3.Product = tofu;
        myItem3.Quantity = 3;
        newOrder.Order_Details.Add(myItem3);

        myCustomer.Orders.Add(myOrder);
        myCustomer.Orders.Add(newOrder);

    As I said, I'm able to add the newOrder object but unable to submit it to the database.

    Read the article

  • Python __subclasses__() not listing subclasses

    - by Mridang Agarwalla
    I can't seem to list all derived classes using the __subclasses__() method. Here's my directory layout:

        import.py
        backends
            __init__.py
            digger
                __init__.py
                base.py
                test.py
                plugins
                    plugina_plugin.py

    From import.py I'm calling test.py. test.py in turn iterates over all the files in the plugins directory and loads all of them. test.py looks like this:

        import os
        import sys
        import re

        sys.path.append(os.path.join(os.path.abspath(os.path.dirname(os.path.abspath(__file__)))))
        sys.path.append(os.path.join(os.path.abspath(os.path.dirname(os.path.abspath(__file__))), 'plugins'))

        from base import BasePlugin

        class TestImport:
            def __init__(self):
                print 'heeeeello'

        PLUGIN_DIRECTORY = os.path.join(os.path.abspath(os.path.dirname(os.path.abspath(__file__))), 'plugins')

        for filename in os.listdir(PLUGIN_DIRECTORY):
            # Ignore subfolders
            if os.path.isdir(os.path.join(PLUGIN_DIRECTORY, filename)):
                continue
            else:
                if re.match(r".*?_plugin\.py$", filename):
                    print ('Initialising plugin : ' + filename)
                    __import__(re.sub(r".py", r"", filename))

        print ('Plugin system initialized')
        print BasePlugin.__subclasses__()

    The problem is that the __subclasses__() method doesn't show any derived classes. All plugins in the plugins directory derive from a base class in the base.py file. base.py looks like this:

        class BasePlugin(object):
            """ Base """
            def __init__(self):
                pass

    plugina_plugin.py looks like this:

        from base import BasePlugin

        class PluginA(BasePlugin):
            """ Plugin A """
            def __init__(self):
                pass

    Could anyone help me out with this? What am I doing wrong? I've racked my brains over this but I can't seem to figure it out. Thanks.

    Read the article

  • NSFetchedResultsController sections localized sorted

    - by Gerd
    How could I use the NSFetchedResultsController with a translated sort key and sectionKeyPath? The problem: I have IDs in the property "type" in the database, like typeA, typeB, typeC, ..., and not the value directly, because it should be localized. In English, typeA=Bird, typeB=Cat, typeC=Dog; in German it would be Vogel, Katze, Hund. With an NSFetchedResultsController with sort key and sectionKeyPath on "type", I receive the order and sections typeA, typeB, typeC. Next I translate for display, and everything is fine in English: Bird, Cat, Dog. Now I switch to German and receive a wrong sort order – Vogel, Katze, Hund – because it still sorts by typeA, typeB, typeC. So I'm looking for a way to localize the sort for the NSFetchedResultsController. I tried the transient-property approach, but this doesn't work for the sort key, because the sort key needs to be in the entity. I have no other idea, but I can't believe it's not possible to use NSFetchedResultsController on a derived attribute required for localization. There are related discussions like http://stackoverflow.com/questions/1384345/using-custom-sections-with-nsfetchedresultscontroller, but the difference is that there the custom section names and the sort key probably have the same order. Not in my case, and this is the main difference. In the end, I would need a sort order for the necessary NSSortDescriptor on a derived attribute, I guess. This sort order would also have to serve for the sectionKeyPath. Thanks for any hint.

    Read the article

  • Qt and variadic functions

    - by Noah Roberts
    OK, before lecturing me on the use of C-style variadic functions in C++... everything else has turned out to require nothing short of rewriting the Qt MOC. What I'd like to know is whether or not you can have a "slot" in a Qt object that takes an arbitrary amount/type of arguments. The thing is that I really want to be able to generate Qt objects that have slots of an arbitrary signature. Since the MOC is incompatible with standard preprocessing and with templates, it's not possible to do so with either direct approach. I just came up with another idea:

        struct funky_base : QObject
        {
            Q_OBJECT

            funky_base(QObject * o = 0);

        public slots:
            virtual void the_slot(...) = 0;
        };

    If this is possible then, because you can make a template that is a subclass of a QObject-derived object so long as you don't declare new Qt stuff in it, I should be able to implement a derived templated type that takes the ... stuff and turns it into the appropriate, expected types. If it is, how would I connect to it? Would this work?

        connect(x, SIGNAL(someSignal(int)), y, SLOT(the_slot(...)));

    If nobody's tried anything this insane and doesn't know offhand, yes, I'll eventually try it myself... but I am hoping someone already has existing knowledge I can tap before possibly wasting my time on it.

    Read the article

  • Does QThread::sleep() require the event loop to be running?

    - by suszterpatt
    I have a simple client-server program written in Qt, where the processes communicate using MPI. The basic design I'm trying to implement is the following: the first process (the "server") launches a GUI (derived from QMainWindow), which listens for messages from the clients (using repeat-fire QTimers and asynchronous MPI receive calls), updates the GUI depending on what messages it receives, and sends a reply to every message. Every other process (the "clients") runs in an infinite loop, and all they are intended to do is send a message to the server process, receive the reply, go to sleep for a while, then wake up and repeat. Every process instantiates a single object derived from QThread and calls its start() method. The run() methods of these classes all look like this (from foo.cpp):

        void Foo::run()
        {
            while (true)
            {
                // Send message to the first process
                // Wait for a reply
                // Do uninteresting stuff with the reply
                sleep(3); // also tried QThread::sleep(3)
            }
        }

    In the clients' code, there is no call to exec() anywhere, so no event loop should start. The problem is that the clients never wake up from sleeping: if I surround the sleep() call with two writes to a log file, only the first one is executed; control never reaches the second. Is this because I didn't start the event loop? And if so, what is the simplest way to achieve the desired functionality?

    Read the article

  • Which ORM to use?

    - by Paja
    I'm developing an application which will have these classes:

        class Shortcut
        {
            public string Name { get; }
            public IList<Trigger> Triggers { get; }
            public IList<Action> Actions { get; }
        }

        class Trigger
        {
            public string Name { get; }
        }

        class Action
        {
            public string Name { get; }
        }

    And I will have 20+ more classes which will derive from Trigger or Action, so in the end I will have one Shortcut class, 15 Action-derived classes and 5 Trigger-derived classes. My question is: which ORM will best suit this application? EF, NH, SubSonic, or maybe something else (Linq2SQL)? I will be periodically releasing new application versions, adding more triggers and actions (or changing current triggers/actions), so I will have to update the database schema as well. I don't know if EF or NH provides any good methods to easily update the schema, or, if they do, whether there is any tutorial on how to do that. I've already found this article about NH schema updating; quoting: "Fortunately NHibernate provides us the possibility to update an existing schema, that is NHibernate creates an update script which can then be applied to the database." I've never found how to actually generate the update script, so I can't tell NH to update the schema. Maybe I've misread something; I just didn't find it. Last note: if you suggest EF, will EF 1.0 be suitable as well? I would rather use an older version of .NET than 4.0.
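
    From what I could gather, NHibernate's hbm2ddl tooling might expose this through the SchemaUpdate class. A sketch of what I mean, assuming a fully configured Configuration object (I haven't verified this end to end):

        using NHibernate.Cfg;
        using NHibernate.Tool.hbm2ddl;

        // 'cfg' is an NHibernate Configuration with all mappings added.
        Configuration cfg = new Configuration().Configure();

        // First argument: echo the generated update script (so it can be captured);
        // second argument: actually apply the changes to the database.
        new SchemaUpdate(cfg).Execute(true, true);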

    Read the article

  • How to implement an interface class using the non-virtual interface idiom in C++?

    - by andreas buykx
    Hi all,

    In C++ an interface can be implemented by a class with all its methods pure virtual. Such a class could be part of a library to describe what methods an object should implement to be able to work with other classes in the library:

        class Lib::IFoo
        {
        public:
            virtual void method() = 0;
        };

        class Lib::Bar
        {
        public:
            void stuff( Lib::IFoo & );
        };

    Now I want to use class Lib::Bar, so I have to implement the IFoo interface. For my purposes I need a whole family of related classes, so I would like to work with a base class that guarantees common behavior using the NVI idiom:

        class FooBase : public IFoo // implement interface IFoo
        {
        public:
            void method(); // calls methodImpl

        private:
            virtual void methodImpl();
        };

    The non-virtual interface (NVI) idiom ought to deny derived classes the possibility of overriding the common behavior implemented in FooBase::method(), but since IFoo made it virtual, it seems that all derived classes still have the opportunity to override FooBase::method(). If I want to use the NVI idiom, what are my options other than the pImpl idiom already suggested (thanks space-c0wb0y)?

    Read the article

  • What OOP pattern to use when only adding new methods, not data?

    - by Jonathon Reinhart
    Hello everyone... In my app, I deal with several different "parameters", which derive from an IParameter interface and also a ParamBase abstract base class. I currently have two different parameter types, call them FooParameter and BarParameter, which both derive from ParamBase. Obviously, I can treat them both as IParameters when I need to deal with them generically, or detect their specific type when I need to handle their specific functionality. My question lies with specific FooParameters. I currently have a few specific ones with their own classes which derive from FooParameter; we'll call them FP12, FP13, FP14, etc. These all have certain characteristics which make me treat them differently in the UI (most have names associated with the individual bits, or ranges of bits). Note that these specific, derived FPs have no additional data associated with them, only properties (which refer to the same data in different ways) or methods. Now, I'd like to keep all of these parameters in a Dictionary<String, IParameter> for easy generic access. The problem is, if I want to refer to a specific one with the special GUI functions, I can't write:

        FP12 fp12 = (FP12)paramList["FP12"];

    because you can't downcast to a derived type (rightfully so). But in my case I didn't add any data, so the cast would theoretically work. What type of programming model should I be using instead? Thanks!

    Read the article

  • Improving performance in this query

    - by Luiz Gustavo F. Gama
    I have 3 tables with user logins:

        sis_login       = administrators
        tb_rb_estrutura = coordinators
        tb_usuario      = clients

    I created a VIEW to unite all these users, separating them by levels, as follows:

        create view `login_names` as
        select `n1`.`cod_login` as `id`, '1' as `level`, `n1`.`nom_user` as `name`
          from `dados`.`sis_login` `n1`
        union all
        select `n2`.`id` as `id`, '2' as `level`, `n2`.`nom_funcionario` as `name`
          from `tb_rb_estrutura` `n2`
        union all
        select `n3`.`cod_usuario` as `id`, '3' as `level`, `n3`.`dsc_nome` as `name`
          from `tb_usuario` `n3`;

    So, up to three of the same ids can occur for different users, which is why I separated them by levels. This VIEW is just to return a user's name, according to his id and level. Considering it has about 500,000 registered users, this view takes about 1 second to load. Too much time, and that is the small part of the problem: it gets much worse when I need to return the latest posts in the forums of my website. The forum tables return the user id and level, then look up the name in this VIEW. I have 18 registered forums. When I run the query, it takes about one second for each forum = 18 seconds. OMG. This page loads every time somebody enters my website. This is my query:

        select `x`.`forum_id`, `x`.`topic_id`, `l`.`nome`
        from (
            select `t`.`forum_id`, `t`.`topic_id`, `t`.`data`, `t`.`user_id`, `t`.`user_level`
              from `tb_forum_topics` `t`
            union all
            select `a`.`forum_id`, `a`.`topic_id`, `a`.`data`, `a`.`user_id`, `a`.`user_level`
              from `tb_forum_answers` `a`
        ) `x`
        left outer join `login_names` `l`
          on `l`.`id` = `x`.`user_id` and `l`.`level` = `x`.`user_level`
        group by `x`.`forum_id` asc

    Using EXPLAIN:

        id    select_type   table       type  possible_keys  key   key_len  ref   rows    Extra
        1     PRIMARY       <derived2>  ALL   NULL           NULL  NULL     NULL  6       Using temporary; Using filesort
        1     PRIMARY       <derived4>  ALL   NULL           NULL  NULL     NULL  530415
        4     DERIVED       n1          ALL   NULL           NULL  NULL     NULL  114
        5     UNION         n2          ALL   NULL           NULL  NULL     NULL  2
        6     UNION         n3          ALL   NULL           NULL  NULL     NULL  530299
        NULL  UNION RESULT              ALL   NULL           NULL  NULL     NULL
        2     DERIVED       t           ALL   NULL           NULL  NULL     NULL  3
        3     UNION         r           ALL   NULL           NULL  NULL     NULL  3
        NULL  UNION RESULT              ALL   NULL           NULL  NULL     NULL

    Can somebody help me or give a suggestion?

    Read the article

  • Creating PowerShell Automatic Variables from C#

    - by Uros Calakovic
    I'm trying to make the automatic variables available to Excel VBA (like ActiveSheet or ActiveCell) also available to PowerShell as 'automatic variables'. The PowerShell engine is hosted in an Excel VSTO add-in, and Excel.Application is available to it as Globals.ThisAddin.Application. I found this thread here on StackOverflow and started creating PSVariable-derived classes like:

        public class ActiveCell : PSVariable
        {
            public ActiveCell(string name) : base(name) { }

            public override object Value
            {
                get { return Globals.ThisAddIn.Application.ActiveCell; }
            }
        }

        public class ActiveSheet : PSVariable
        {
            public ActiveSheet(string name) : base(name) { }

            public override object Value
            {
                get { return Globals.ThisAddIn.Application.ActiveSheet; }
            }
        }

    and adding their instances to the current PowerShell session:

        runspace.SessionStateProxy.PSVariable.Set(new ActiveCell("ActiveCell"));
        runspace.SessionStateProxy.PSVariable.Set(new ActiveSheet("ActiveSheet"));

    This works, and I am able to use those variables from PowerShell as $ActiveCell and $ActiveSheet (their values change as Excel's active sheet or cell changes). Then I read the PSVariable documentation here and saw this: "There is no established scenario for deriving from this class. To programmatically create a shell variable, create an instance of this class and set it by using the PSVariableIntrinsics class." As I was deriving from PSVariable, I tried to use what was suggested:

        PSVariable activeCell = new PSVariable("ActiveCell");
        activeCell.Value = Globals.ThisAddIn.Application.ActiveCell;
        runspace.SessionStateProxy.PSVariable.Set(activeCell);

    Using this, $ActiveCell appears in my PowerShell session, but its value doesn't change as I change the active cell in Excel. Is the above comment from the PSVariable documentation something I should worry about, or can I continue creating PSVariable-derived classes?
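
    For what it's worth, I realize the derived approach can be generalized so a new subclass isn't needed per property – a getter-backed variable along the lines of the classes above (a sketch; DelegateVariable is a made-up name, not a PowerShell type):

        using System;
        using System.Management.Automation;

        // A PSVariable whose value is computed on every read, so it always
        // reflects the current Excel state (like the ActiveCell class above).
        public class DelegateVariable : PSVariable
        {
            private readonly Func<object> _getter;

            public DelegateVariable(string name, Func<object> getter) : base(name)
            {
                _getter = getter;
            }

            public override object Value
            {
                get { return _getter(); }
            }
        }

        // Usage:
        // runspace.SessionStateProxy.PSVariable.Set(
        //     new DelegateVariable("ActiveCell",
        //         () => Globals.ThisAddIn.Application.ActiveCell));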

    Read the article
