Search Results


  • How to install php-devel under CentOS 6.3 x64?

    - by Jeremy Dicaire
    I'm trying to install php-devel on my CentOS 6.3 VPS and get a failed dependencies test. From phpinfo():

    SYSTEM Linux 2.6.32-279.5.2.el6.x86_64 #1 x86_64 NTS

    error: Failed dependencies: php(x86-64) = 5.4.6-1.el6.remi is needed by php-devel-5.4.6-1.el6.remi.x86_64

    I've tried the following RPM packages:

    php54w-devel-5.4.6-1.w6.x86_64.rpm
    php-devel-5.4.6-1.el6.remi.i686.rpm
    php-devel-5.4.6-1.el6.remi.x86_64.rpm

    One of the above packages gave me this:

    root@sv1 [/tmp]# rpm -Uvh php-devel-5.4.6-1.el6.remi.i686.rpm
    warning: php-devel-5.4.6-1.el6.remi.i686.rpm: Header V3 DSA/SHA1 Signature, key ID 00f97f56: NOKEY
    error: Failed dependencies:
        php(x86-32) = 5.4.6-1.el6.remi is needed by php-devel-5.4.6-1.el6.remi.i686
        libbz2.so.1 is needed by php-devel-5.4.6-1.el6.remi.i686
        libcom_err.so.2 is needed by php-devel-5.4.6-1.el6.remi.i686
        libcrypto.so.10 is needed by php-devel-5.4.6-1.el6.remi.i686
        libedit.so.0 is needed by php-devel-5.4.6-1.el6.remi.i686
        libgmp.so.3 is needed by php-devel-5.4.6-1.el6.remi.i686
        libgssapi_krb5.so.2 is needed by php-devel-5.4.6-1.el6.remi.i686
        libk5crypto.so.3 is needed by php-devel-5.4.6-1.el6.remi.i686
        libkrb5.so.3 is needed by php-devel-5.4.6-1.el6.remi.i686
        libncurses.so.5 is needed by php-devel-5.4.6-1.el6.remi.i686
        libssl.so.10 is needed by php-devel-5.4.6-1.el6.remi.i686
        libstdc++.so.6 is needed by php-devel-5.4.6-1.el6.remi.i686
        libxml2.so.2 is needed by php-devel-5.4.6-1.el6.remi.i686
        libxml2.so.2(LIBXML2_2.4.30) is needed by php-devel-5.4.6-1.el6.remi.i686
        libxml2.so.2(LIBXML2_2.5.2) is needed by php-devel-5.4.6-1.el6.remi.i686
        libxml2.so.2(LIBXML2_2.6.0) is needed by php-devel-5.4.6-1.el6.remi.i686
        libxml2.so.2(LIBXML2_2.6.11) is needed by php-devel-5.4.6-1.el6.remi.i686
        libxml2.so.2(LIBXML2_2.6.5) is needed by php-devel-5.4.6-1.el6.remi.i686
        libz.so.1 is needed by php-devel-5.4.6-1.el6.remi.i686

    I don't know how to fix this error and download all the dependencies. Thank you.

    Edit 1 (for quanta): Here is "yum repolist":

    root@sv1 [/tmp]# yum repolist
    Loaded plugins: fastestmirror, presto
    Loading mirror speeds from cached hostfile
     * base: mirror.atlanticmetro.net
     * epel: mirror.cogentco.com
     * extras: mirror.atlanticmetro.net
     * rpmforge: mirror.us.leaseweb.net
     * updates: centos.mirror.choopa.net
    repo id    repo name                                        status
    base       CentOS-6 - Base                                  5,980+366
    epel       Extra Packages for Enterprise Linux 6 - x86_64   6,493+1,272
    extras     CentOS-6 - Extras                                4
    rpmforge   RHEL 6 - RPMforge.net - dag                      2,123+2,310
    updates    CentOS-6 - Updates                               499+29
    repolist: 15,099

    Also, root@sv1 [/tmp]# rpm -qa | grep php didn't return any result. I forgot to mention I'm using cPanel/WHM.

    Edit 2, after adding the Remi repo:

    root@sv1 [/etc/yum.repos.d]# yum clean all
    Loaded plugins: fastestmirror, presto
    Cleaning repos: base epel extras remi remi-test rpmforge updates
    Cleaning up Everything
    Cleaning up list of fastest mirrors
    1 delta-package files removed, by presto

    root@sv1 [/etc/yum.repos.d]# yum repolist
    Loaded plugins: fastestmirror, presto
    Determining fastest mirrors
    epel/metalink | 12 kB 00:00
     * base: centos.mirror.nac.net
     * epel: mirror.symnds.com
     * extras: centos.mirror.choopa.net
     * remi: remi-mirror.dedipower.com
     * remi-test: remi-mirror.dedipower.com
     * rpmforge: mirror.us.leaseweb.net
     * updates: centos.mirror.nac.net
    base | 3.7 kB 00:00
    base/primary_db | 4.5 MB 00:00
    epel | 4.3 kB 00:00
    epel/primary_db | 4.7 MB 00:00
    extras | 3.0 kB 00:00
    extras/primary_db | 6.3 kB 00:00
    remi | 2.9 kB 00:00
    remi/primary_db | 330 kB 00:00
    remi-test | 2.9 kB 00:00
    remi-test/primary_db | 85 kB 00:00
    rpmforge | 1.9 kB 00:00
    rpmforge/primary_db | 2.5 MB 00:00
    updates | 3.5 kB 00:00
    updates/primary_db | 2.3 MB 00:00
    repo id     repo name                                                  status
    base        CentOS-6 - Base                                            5,980+366
    epel        Extra Packages for Enterprise Linux 6 - x86_64             6,493+1,272
    extras      CentOS-6 - Extras                                          4
    remi        Les RPM de remi pour Enterprise Linux 6 - x86_64           96+564
    remi-test   Les RPM de remi en test pour Enterprise Linux 6 - x86_64   25+139
    rpmforge    RHEL 6 - RPMforge.net - dag                                2,123+2,310
    updates     CentOS-6 - Updates                                         499+29
    repolist: 15,220

    root@sv1 [/etc/yum.repos.d]# yum install php-devel
    Loaded plugins: fastestmirror, presto
    Loading mirror speeds from cached hostfile
     * base: centos.mirror.nac.net
     * epel: mirror.symnds.com
     * extras: centos.mirror.choopa.net
     * remi: remi-mirror.dedipower.com
     * remi-test: remi-mirror.dedipower.com
     * rpmforge: mirror.us.leaseweb.net
     * updates: centos.mirror.nac.net
    Setting up Install Process
    No package php-devel available.
    Error: Nothing to do

    Read the article

  • MEB: Taking Incremental Backup using last successful backup

    - by Sagar Jauhari
    Introduction

    In MySQL Enterprise Backup v3.7.0 (MEB 3.7.0) a new option, '--incremental-base', was introduced. Using this option a user can take an incremental backup without specifying the '--start-lsn' option. A description of this option can be found here. Instead of '--start-lsn', the user can provide the location of the last full backup or incremental backup using the 'dir:' prefix. MEB would extract the end LSN of this backup from the mysql.backup_history table as well as the backup_variables.txt file (for verification) to use it as the start LSN of the incremental backup.

    Due to popular demand, in MEB 3.7.1 the '--incremental-base' option has been extended further. The idea is to allow the user to take an incremental backup as easily as possible using the '--incremental-base' option. With the new option, MEB queries the backup_history table for the last successful backup and uses its end LSN as the start LSN for the new incremental backup. It should be noted that the last successful backup is used irrespective of the location of the backup.

    Details

    A new prefix, 'history:', has been introduced for the --incremental-base option, and currently the only permissible value is the string "last_backup". So using the new option, an incremental backup can be taken with the following command:

    $ mysqlbackup --incremental --incremental-backup-dir=/media/mysqlbackup-repo/ --incremental-base=history:last_backup backup

    When MEB attempts to extract the end LSN of the last successful backup from the mysql.backup_history table, it also scans the corresponding backup destination for the old backup and tries to read the meta files at this backup destination. If a valid backup still exists at the backup destination and the meta files can be read, MEB compares the end LSN found in the mysql.backup_history table with the end LSN found in the backup meta files of the old backup. Assuming that the host MySQL server is alive and mysql.backup_history can be accessed by MEB, the behaviour of MEB with respect to verification of the old end LSN can be summarized as follows, where 'BD' is the backup destination of the last successful backup in the mysql.backup_history table and 'BHT' is the mysql.backup_history table:

    if can_read_files_at_BD:
        if end_lsn_found_at_BD == end_lsn_of_last_backup_in_BHT:
            continue_with_backup()
        else:
            return_with_error()
    else:
        continue_with_backup()

    Advantages

    Apart from ease of use, an important advantage of this option is that the user can do repeated incremental backups without changing the command line. This is possible using the '--with-timestamp' option along with this new option. For example, the following command can be used to perform successive incremental backups in the directory /media/mysqlbackup-repo:

    $ mysqlbackup --with-timestamp --incremental --incremental-backup-dir=/media/mysqlbackup-repo/ --incremental-base=history:last_backup backup

    Limitations

    The option '--incremental-base=history:last_backup':

    - should not be used when the user takes different kinds of concurrent backups on the same MySQL server (say, different partial backups at multiple locations);
    - should not be used after any temporary or experimental backups performed on the server (even if they were successful!);
    - needs to be used with caution, since any intermediate successful backup (one taken without the --no-connection option) will be used as the base backup for the next incremental backup;
    - will give an error in case a valid backup exists at the location of the last successful backup whose end LSN is different from that of the last successful backup found in the backup_history table.

    Read the article

  • New features of C# 4.0

    This article covers the new features of C# 4.0. It is divided into the following sections: Introduction; Dynamic Lookup; Named and Optional Arguments; Features for COM interop; Variance; Relationship with Visual Basic; Resources. Other interesting reading: 22 New Features of Visual Studio 2008 for .NET Professionals; 50 New Features of SQL Server 2008; IIS 7.0 New features.

    Introduction

    It is now close to a year since Microsoft Visual C# 3.0 shipped as part of Visual Studio 2008. In the VS Managed Languages team we are hard at work on creating the next version of the language (with the unsurprising working title of C# 4.0), and this document is a first public description of the planned language features as we currently see them. Please be advised that all this is in early stages of production and is subject to change. Part of the reason for sharing our plans in public so early is precisely to get the kind of feedback that will cause us to improve the final product before it rolls out.

    Simultaneously with the publication of this whitepaper, a first public CTP (community technology preview) of Visual Studio 2010 is going out as a Virtual PC image for everyone to try. Please use it to play and experiment with the features, and let us know of any thoughts you have. We ask for your understanding and patience working with very early bits, where especially new or newly implemented features do not have the quality or stability of a final product. The aim of the CTP is not to give you a productive work environment but to give you the best possible impression of what we are working on for the next release. The CTP contains a number of walkthroughs, some of which highlight the new language features of C# 4.0. Those are excellent for getting a hands-on guided tour through the details of some common scenarios for the features. You may consider this whitepaper a companion document to these walkthroughs, complementing them with a focus on the overall language features and how they work, as opposed to the specifics of the concrete scenarios.

    C# 4.0

    The major theme for C# 4.0 is dynamic programming. Increasingly, objects are "dynamic" in the sense that their structure and behavior is not captured by a static type, or at least not one that the compiler knows about when compiling your program. Some examples include:

    a. objects from dynamic programming languages, such as Python or Ruby
    b. COM objects accessed through IDispatch
    c. ordinary .NET types accessed through reflection
    d. objects with changing structure, such as HTML DOM objects

    While C# remains a statically typed language, we aim to vastly improve the interaction with such objects. A secondary theme is co-evolution with Visual Basic. Going forward we will aim to maintain the individual character of each language, but at the same time important new features should be introduced in both languages at the same time. They should be differentiated more by style and feel than by feature set. The new features in C# 4.0 fall into four groups:

    Dynamic lookup

    Dynamic lookup allows you to write method, operator and indexer calls, property and field accesses, and even object invocations which bypass the C# static type checking and instead get resolved at runtime.

    Named and optional parameters

    Parameters in C# can now be specified as optional by providing a default value for them in a member declaration. When the member is invoked, optional arguments can be omitted. Furthermore, any argument can be passed by parameter name instead of position.
    COM specific interop features

    Dynamic lookup as well as named and optional parameters both help make programming against COM less painful than today. On top of that, however, we are adding a number of other small features that further improve the interop experience.

    Variance

    It used to be that an IEnumerable<string> wasn't an IEnumerable<object>. Now it is: C# embraces type-safe "co- and contravariance", and common BCL types are updated to take advantage of that.

    Dynamic Lookup

    Dynamic lookup gives you a unified approach to invoking things dynamically. With dynamic lookup, when you have an object in your hand you do not need to worry about whether it comes from COM, IronPython, the HTML DOM or reflection; you just apply operations to it and leave it to the runtime to figure out what exactly those operations mean for that particular object. This affords you enormous flexibility, and can greatly simplify your code, but it does come with a significant drawback: static typing is not maintained for these operations. A dynamic object is assumed at compile time to support any operation, and only at runtime will you get an error if it wasn't so. Oftentimes this will be no loss, because the object wouldn't have a static type anyway; in other cases it is a tradeoff between brevity and safety. In order to facilitate this tradeoff, it is a design goal of C# to allow you to opt in or opt out of dynamic behavior on every single call.

    The dynamic type

    C# 4.0 introduces a new static type called dynamic. When you have an object of type dynamic you can "do things to it" that are resolved only at runtime:

    dynamic d = GetDynamicObject(…);
    d.M(7);

    The C# compiler allows you to call a method with any name and any arguments on d because it is of type dynamic. At runtime the actual object that d refers to will be examined to determine what it means to "call M with an int" on it. The type dynamic can be thought of as a special version of the type object, which signals that the object can be used dynamically. It is easy to opt in or out of dynamic behavior: any object can be implicitly converted to dynamic, "suspending belief" until runtime. Conversely, there is an "assignment conversion" from dynamic to any other type, which allows implicit conversion in assignment-like constructs:

    dynamic d = 7;  // implicit conversion
    int i = d;      // assignment conversion

    Dynamic operations

    Not only method calls, but also field and property accesses, indexer and operator calls, and even delegate invocations can be dispatched dynamically:

    dynamic d = GetDynamicObject(…);
    d.M(7);              // calling methods
    d.f = d.P;           // getting and setting fields and properties
    d["one"] = d["two"]; // getting and setting through indexers
    int i = d + 3;       // calling operators
    string s = d(5,7);   // invoking as a delegate

    The role of the C# compiler here is simply to package up the necessary information about "what is being done to d", so that the runtime can pick it up and determine what the exact meaning of it is given an actual object d. Think of it as deferring part of the compiler's job to runtime. The result of any dynamic operation is itself of type dynamic.

    Runtime lookup

    At runtime a dynamic operation is dispatched according to the nature of its target object d:

    COM objects

    If d is a COM object, the operation is dispatched dynamically through COM IDispatch. This allows calling COM types that don't have a Primary Interop Assembly (PIA), and relying on COM features that don't have a counterpart in C#, such as indexed properties and default properties.
    Dynamic objects

    If d implements the interface IDynamicObject, d itself is asked to perform the operation. Thus, by implementing IDynamicObject a type can completely redefine the meaning of dynamic operations. This is used intensively by dynamic languages such as IronPython and IronRuby to implement their own dynamic object models. It will also be used by APIs, e.g. by the HTML DOM, to allow direct access to the object's properties using property syntax.

    Plain objects

    Otherwise d is a standard .NET object, and the operation will be dispatched using reflection on its type and a C# "runtime binder" which implements C#'s lookup and overload resolution semantics at runtime. This is essentially a part of the C# compiler running as a runtime component to "finish the work" on dynamic operations that was deferred by the static compiler.

    Example

    Assume the following code:

    dynamic d1 = new Foo();
    dynamic d2 = new Bar();
    string s;
    d1.M(s, d2, 3, null);

    Because the receiver of the call to M is dynamic, the C# compiler does not try to resolve the meaning of the call. Instead it stashes away information for the runtime about the call. This information (often referred to as the "payload") is essentially equivalent to: "Perform an instance method call of M with the following arguments: 1. a string, 2. a dynamic, 3. a literal int 3, 4. a literal object null."

    At runtime, assume that the actual type Foo of d1 is not a COM type and does not implement IDynamicObject. In this case the C# runtime binder steps in to finish the overload resolution job based on runtime type information, proceeding as follows:

    1. Reflection is used to obtain the actual runtime types of the two objects, d1 and d2, that did not have a static type (or rather had the static type dynamic). The result is Foo for d1 and Bar for d2.
    2. Method lookup and overload resolution is performed on the type Foo with the call M(string,Bar,3,null) using ordinary C# semantics.
    3. If the method is found it is invoked; otherwise a runtime exception is thrown.

    Overload resolution with dynamic arguments

    Even if the receiver of a method call is of a static type, overload resolution can still happen at runtime. This can happen if one or more of the arguments have the type dynamic:

    Foo foo = new Foo();
    dynamic d = new Bar();
    var result = foo.M(d);

    The C# runtime binder will choose between the statically known overloads of M on Foo, based on the runtime type of d, namely Bar. The result is again of type dynamic.

    The Dynamic Language Runtime

    An important component in the underlying implementation of dynamic lookup is the Dynamic Language Runtime (DLR), which is a new API in .NET 4.0. The DLR provides most of the infrastructure behind not only C# dynamic lookup but also the implementation of several dynamic programming languages on .NET, such as IronPython and IronRuby. Through this common infrastructure a high degree of interoperability is ensured, but just as importantly the DLR provides excellent caching mechanisms which serve to greatly enhance the efficiency of runtime dispatch. To the user of dynamic lookup in C#, the DLR is invisible except for the improved efficiency. However, if you want to implement your own dynamically dispatched objects, the IDynamicObject interface allows you to interoperate with the DLR and plug in your own behavior. This is a rather advanced task, which requires you to understand a good deal more about the inner workings of the DLR. For API writers, however, it can definitely be worth the trouble in order to vastly improve the usability of e.g. a library representing an inherently dynamic domain.
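    For a taste of what plugging in your own dynamic behavior looks like, here is a minimal sketch of a dynamic property bag. Note that in the released .NET 4.0 the CTP-era IDynamicObject surface became IDynamicMetaObjectProvider, with System.Dynamic.DynamicObject as a convenience base class; the class and member names below are invented for illustration.

    using System;
    using System.Collections.Generic;
    using System.Dynamic;

    // A dynamic property bag: member reads and writes are routed through
    // TryGetMember/TrySetMember instead of being bound at compile time.
    class Bag : DynamicObject
    {
        private readonly Dictionary<string, object> values =
            new Dictionary<string, object>();

        public override bool TryGetMember(GetMemberBinder binder, out object result)
        {
            // Invoked at runtime for reads such as bag.Title.
            return values.TryGetValue(binder.Name, out result);
        }

        public override bool TrySetMember(SetMemberBinder binder, object value)
        {
            // Invoked at runtime for writes such as bag.Title = "...".
            values[binder.Name] = value;
            return true;
        }
    }

    class BagDemo
    {
        static void Main()
        {
            dynamic bag = new Bag();
            bag.Title = "C# 4.0";          // dispatched to TrySetMember
            Console.WriteLine(bag.Title);  // dispatched to TryGetMember
        }
    }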
    Open issues

    There are a few limitations and things that might work differently than you would expect.

    · The DLR allows objects to be created from objects that represent classes. However, the current implementation of C# doesn't have syntax to support this.
    · Dynamic lookup will not be able to find extension methods. Whether extension methods apply or not depends on the static context of the call (i.e. which using clauses occur), and this context information is not currently kept as part of the payload.
    · Anonymous functions (i.e. lambda expressions) cannot appear as arguments to a dynamic method call. The compiler cannot bind (i.e. "understand") an anonymous function without knowing what type it is converted to.

    One consequence of these limitations is that you cannot easily use LINQ queries over dynamic objects:

    dynamic collection = …;
    var result = collection.Select(e => e + 5);

    If the Select method is an extension method, dynamic lookup will not find it. Even if it is an instance method, the above does not compile, because a lambda expression cannot be passed as an argument to a dynamic operation. There are no plans to address these limitations in C# 4.0.

    Named and Optional Arguments

    Named and optional parameters are really two distinct features, but are often useful together. Optional parameters allow you to omit arguments to member invocations, whereas named arguments are a way to provide an argument using the name of the corresponding parameter instead of relying on its position in the parameter list. Some APIs, most notably COM interfaces such as the Office automation APIs, are written specifically with named and optional parameters in mind. Up until now it has been very painful to call into these APIs from C#, with sometimes as many as thirty arguments having to be explicitly passed, most of which have reasonable default values and could be omitted. Even in APIs for .NET, however, you sometimes find yourself compelled to write many overloads of a method with different combinations of parameters, in order to provide maximum usability to the callers. Optional parameters are a useful alternative for these situations.

    Optional parameters

    A parameter is declared optional simply by providing a default value for it:

    public void M(int x, int y = 5, int z = 7);

    Here y and z are optional parameters and can be omitted in calls:

    M(1, 2, 3); // ordinary call of M
    M(1, 2);    // omitting z - equivalent to M(1, 2, 7)
    M(1);       // omitting both y and z - equivalent to M(1, 5, 7)

    Named and optional arguments

    C# 4.0 does not permit you to omit arguments between commas as in M(1,,3). This could lead to highly unreadable comma-counting code. Instead any argument can be passed by name. Thus if you want to omit only y from a call of M you can write:

    M(1, z: 3); // passing z by name

    or

    M(x: 1, z: 3); // passing both x and z by name

    or even

    M(z: 3, x: 1); // reversing the order of arguments

    All forms are equivalent, except that arguments are always evaluated in the order they appear, so in the last example the 3 is evaluated before the 1. Optional and named arguments can be used not only with methods but also with indexers and constructors.
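    The evaluation-order rule just mentioned is easy to observe with side-effecting argument expressions. A small runnable sketch (the Trace helper is invented for illustration):

    using System;

    class NamedArgsDemo
    {
        // Hypothetical helper that logs when an argument expression runs.
        static int Trace(int value)
        {
            Console.WriteLine("evaluating " + value);
            return value;
        }

        static void M(int x, int y = 5, int z = 7)
        {
            Console.WriteLine("M(x: {0}, y: {1}, z: {2})", x, y, z);
        }

        static void Main()
        {
            // Argument expressions run left to right as written,
            // even though z comes after x in the parameter list.
            M(z: Trace(3), x: Trace(1));
            // Output:
            //   evaluating 3
            //   evaluating 1
            //   M(x: 1, y: 5, z: 3)
        }
    }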
    Overload resolution

    Named and optional arguments affect overload resolution, but the changes are relatively simple: A signature is applicable if all its parameters are either optional or have exactly one corresponding argument (by name or position) in the call which is convertible to the parameter type. Betterness rules on conversions are only applied for arguments that are explicitly given; omitted optional arguments are ignored for betterness purposes. If two signatures are equally good, one that does not omit optional parameters is preferred.

    M(string s, int i = 1);
    M(object o);
    M(int i, string s = "Hello");
    M(int i);

    M(5);

    Given these overloads, we can see the working of the rules above. M(string,int) is not applicable because 5 doesn't convert to string. M(int,string) is applicable because its second parameter is optional, and so, obviously, are M(object) and M(int). M(int,string) and M(int) are both better than M(object) because the conversion from 5 to int is better than the conversion from 5 to object. Finally, M(int) is better than M(int,string) because no optional arguments are omitted. Thus the method that gets called is M(int).

    Features for COM interop

    Dynamic lookup as well as named and optional parameters greatly improve the experience of interoperating with COM APIs such as the Office Automation APIs. In order to remove even more of the speed bumps, a couple of small COM-specific features are also added to C# 4.0.

    Dynamic import

    Many COM methods accept and return variant types, which are represented in the PIAs as object. In the vast majority of cases, a programmer calling these methods already knows the static type of a returned object from context, but explicitly has to perform a cast on the returned value to make use of that knowledge. These casts are so common that they constitute a major nuisance. In order to facilitate a smoother experience, you can now choose to import these COM APIs in such a way that variants are instead represented using the type dynamic. In other words, from your point of view, COM signatures now have occurrences of dynamic instead of object in them. This means that you can easily access members directly off a returned object, or you can assign it to a strongly typed local variable without having to cast. To illustrate, you can now say

    excel.Cells[1, 1].Value = "Hello";

    instead of

    ((Excel.Range)excel.Cells[1, 1]).Value2 = "Hello";

    and

    Excel.Range range = excel.Cells[1, 1];

    instead of

    Excel.Range range = (Excel.Range)excel.Cells[1, 1];

    Compiling without PIAs

    Primary Interop Assemblies are large .NET assemblies generated from COM interfaces to facilitate strongly typed interoperability. They provide great support at design time, where your experience of the interop is as good as if the types were really defined in .NET. However, at runtime these large assemblies can easily bloat your program, and also cause versioning issues because they are distributed independently of your application. The no-PIA feature allows you to continue to use PIAs at design time without having them around at runtime. Instead, the C# compiler will bake the small part of the PIA that a program actually uses directly into its assembly. At runtime the PIA does not have to be loaded.

    Omitting ref

    Because of a different programming model, many COM APIs contain a lot of reference parameters. Contrary to refs in C#, these are typically not meant to mutate a passed-in argument for the subsequent benefit of the caller, but are simply another way of passing value parameters.
    It therefore seems unreasonable that a C# programmer should have to create temporary variables for all such ref parameters and pass these by reference. Instead, specifically for COM methods, the C# compiler will allow you to pass arguments by value to such a method, and will automatically generate temporary variables to hold the passed-in values, subsequently discarding these when the call returns. In this way the caller sees value semantics, and will not experience any side effects, but the called method still gets a reference.

    Open issues

    A few COM interface features still are not surfaced in C#. Most notably these include indexed properties and default properties. As mentioned above, these will be respected if you access COM dynamically, but statically typed C# code will still not recognize them. There are currently no plans to address these remaining speed bumps in C# 4.0.

    Variance

    An aspect of generics that often comes across as surprising is that the following is illegal:

    IList<string> strings = new List<string>();
    IList<object> objects = strings;

    The second assignment is disallowed because strings does not have the same element type as objects. There is a perfectly good reason for this. If it were allowed you could write:

    objects[0] = 5;
    string s = strings[0];

    allowing an int to be inserted into a list of strings and subsequently extracted as a string. This would be a breach of type safety. However, there are certain interfaces where the above cannot occur, notably where there is no way to insert an object into the collection. Such an interface is IEnumerable<T>. If instead you say:

    IEnumerable<object> objects = strings;

    there is no way we can put the wrong kind of thing into strings through objects, because objects doesn't have a method that takes an element in. Variance is about allowing assignments such as this in cases where it is safe. The result is that a lot of situations that were previously surprising now just work.

    Covariance

    In .NET 4.0 the IEnumerable<T> interface will be declared in the following way:

    public interface IEnumerable<out T> : IEnumerable
    {
        IEnumerator<T> GetEnumerator();
    }

    public interface IEnumerator<out T> : IEnumerator
    {
        bool MoveNext();
        T Current { get; }
    }

    The "out" in these declarations signifies that the T can only occur in output position in the interface; the compiler will complain otherwise. In return for this restriction, the interface becomes "covariant" in T, which means that an IEnumerable<A> is considered an IEnumerable<B> if A has a reference conversion to B. As a result, any sequence of strings is also e.g. a sequence of objects. This is useful e.g. in many LINQ methods. Using the declarations above:

    var result = strings.Union(objects); // succeeds with an IEnumerable<object>

    This would previously have been disallowed, and you would have had to do some cumbersome wrapping to get the two sequences to have the same element type.

    Contravariance

    Type parameters can also have an "in" modifier, restricting them to occur only in input positions. An example is IComparer<T>:

    public interface IComparer<in T>
    {
        int Compare(T left, T right);
    }

    The somewhat baffling result is that an IComparer<object> can in fact be considered an IComparer<string>! It makes sense when you think about it: if a comparer can compare any two objects, it can certainly also compare two strings. This property is referred to as contravariance.
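    Since the CTP does not ship the retrofitted BCL types (see the Limitations below), trying variance out means declaring your own variant interfaces. A minimal sketch; the interface and class names here are invented:

    using System;

    // Covariant: T occurs only in output position.
    interface ISource<out T>
    {
        T Next();
    }

    // Contravariant: T occurs only in input position.
    interface ISink<in T>
    {
        void Put(T item);
    }

    class StringSource : ISource<string>
    {
        public string Next() { return "hello"; }
    }

    class ObjectSink : ISink<object>
    {
        public void Put(object item) { Console.WriteLine(item); }
    }

    class VarianceDemo
    {
        static void Main()
        {
            // Covariance: a source of strings may stand in for a source of objects.
            ISource<object> objects = new StringSource();

            // Contravariance: a sink accepting any object may stand in
            // for a sink of strings.
            ISink<string> strings = new ObjectSink();

            strings.Put((string)objects.Next());
        }
    }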
    A generic type can have both in and out modifiers on its type parameters, as is the case with the Func<…> delegate types:

    public delegate TResult Func<in TArg, out TResult>(TArg arg);

    Obviously the argument only ever comes in, and the result only ever comes out. Therefore a Func<object,string> can in fact be used as a Func<string,object>.

    Limitations

    Variant type parameters can only be declared on interfaces and delegate types, due to a restriction in the CLR. Variance only applies when there is a reference conversion between the type arguments. For instance, an IEnumerable<int> is not an IEnumerable<object> because the conversion from int to object is a boxing conversion, not a reference conversion. Also please note that the CTP does not contain the new versions of the .NET types mentioned above. In order to experiment with variance you have to declare your own variant interfaces and delegate types.

    COM Example

    Here is a larger Office automation example that shows many of the new C# features in action.

    using System;
    using System.Diagnostics;
    using System.Linq;
    using Excel = Microsoft.Office.Interop.Excel;
    using Word = Microsoft.Office.Interop.Word;

    class Program
    {
        static void Main(string[] args)
        {
            var excel = new Excel.Application();
            excel.Visible = true;
            excel.Workbooks.Add();                    // optional arguments omitted
            excel.Cells[1, 1].Value = "Process Name"; // no casts; Value dynamically
            excel.Cells[1, 2].Value = "Memory Usage"; // accessed
            var processes = Process.GetProcesses()
                .OrderByDescending(p => p.WorkingSet)
                .Take(10);
            int i = 2;
            foreach (var p in processes)
            {
                excel.Cells[i, 1].Value = p.ProcessName; // no casts
                excel.Cells[i, 2].Value = p.WorkingSet;  // no casts
                i++;
            }
            Excel.Range range = excel.Cells[1, 1];       // no casts
            Excel.Chart chart = excel.ActiveWorkbook.Charts.
                Add(After: excel.ActiveSheet);           // named and optional arguments
            chart.ChartWizard(
                Source: range.CurrentRegion,
                Title: "Memory Usage in " + Environment.MachineName); // named+optional
            chart.ChartStyle = 45;
            chart.CopyPicture(Excel.XlPictureAppearance.xlScreen,
                Excel.XlCopyPictureFormat.xlBitmap,
                Excel.XlPictureAppearance.xlScreen);
            var word = new Word.Application();
            word.Visible = true;
            word.Documents.Add(); // optional arguments
            word.Selection.Paste();
        }
    }

    The code is much more terse and readable than the C# 3.0 counterpart. Note especially how the Value property is accessed dynamically. This is actually an indexed property, i.e. a property that takes an argument; something which C# does not understand. However, the argument is optional. Since the access is dynamic, it goes through the runtime COM binder, which knows to substitute the default value and call the indexed property. Thus, dynamic COM allows you to avoid accesses to the puzzling Value2 property of Excel ranges.

    Relationship with Visual Basic

    A number of the features introduced to C# 4.0 already exist or will be introduced in some form or other in Visual Basic:

    · Late binding in VB is similar in many ways to dynamic lookup in C#, and can be expected to make more use of the DLR in the future, leading to further parity with C#.
    · Named and optional arguments have been part of Visual Basic for a long time, and the C# version of the feature is explicitly engineered with maximal VB interoperability in mind.
    · No-PIA and variance are both being introduced to VB and C# at the same time.

    VB in turn is adding a number of features that have hitherto been a mainstay of C#.
    As a result, future versions of C# and VB will have much better feature parity, for the benefit of everyone.

    Resources

    All available resources concerning C# 4.0 can be accessed through the C# Dev Center. Specifically, this white paper and other resources can be found at the Code Gallery site. Enjoy!

    Read the article

  • Updating PHP on Linux - "No Packages marked for Update"?

    - by Aristotle
    I'm very new to server administration, but I was thinking the task of updating PHP to 5.2+ should be relatively simple. Online I found that the following was allegedly sufficient to do this:

    yum update php

    But when I run this, the following is output:

    [root@ip-XXX-XXX-XXX-XXX /]# php -v
    PHP 5.1.6 (cli) (built: Jan 13 2010 17:13:05)
    Copyright (c) 1997-2006 The PHP Group
    Zend Engine v2.1.0, Copyright (c) 1998-2006 Zend Technologies
    [root@ip-XXX-XXX-XXX-XXX /]# yum update php
    Loaded plugins: fastestmirror
    Determining fastest mirrors
     * addons: p3plmirror02.prod.phx3.secureserver.net
     * base: p3plmirror02.prod.phx3.secureserver.net
     * extras: p3plmirror02.prod.phx3.secureserver.net
     * turbopanel-base: p3plmirror02.prod.phx3.secureserver.net
     * turbopanel-centos5: p3plmirror02.prod.phx3.secureserver.net
     * update: p3plmirror02.prod.phx3.secureserver.net
    addons | 951 B 00:00
    addons/primary | 201 B 00:00
    base | 2.1 kB 00:00
    base/primary_db | 1.6 MB 00:00
    extras | 1.1 kB 00:00
    extras/primary | 107 kB 00:00
    extras 325/325
    turbopanel-base | 951 B 00:00
    turbopanel-base/primary | 72 kB 00:00
    turbopanel-base 494/494
    turbopanel-centos5 | 951 B 00:00
    turbopanel-centos5/primary | 2.1 kB 00:00
    turbopanel-centos5 8/8
    update | 1.9 kB 00:00
    update/primary_db | 463 kB 00:00
    Setting up Update Process
    No Packages marked for Update
    [root@ip-XXX-XXX-XXX-XXX /]# php -v
    PHP 5.1.6 (cli) (built: Jan 13 2010 17:13:05)
    Copyright (c) 1997-2006 The PHP Group
    Zend Engine v2.1.0, Copyright (c) 1998-2006 Zend Technologies
    [root@ip-XXX-XXX-XXX-XXX /]# No Packages marked for Update
    [root@ip-XXX-XXX-XXX-XXX /]# php -v
    bash: No: command not found
    [root@ip-XXX-XXX-XXX-XXX /]#
    [root@ip-XXX-XXX-XXX-XXX /]# php -v
    bash: [root@ip-XXX-XXX-XXX-XXX: command not found
    [root@ip-XXX-XXX-XXX-XXX /]# PHP 5.1.6 (cli) (built: Jan 13 2010 17:13:05)
    bash: syntax error near unexpected token `('
    [root@ip-XXX-XXX-XXX-XXX /]# Copyright (c) 1997-2006 The PHP Group
    bash: syntax error near unexpected token `c'
    [root@ip-XXX-XXX-XXX-XXX /]# Zend Engine v2.1.0, Copyright (c) 1998-2006 Zend Technologies
    bash: syntax error near unexpected token `('
    [root@ip-XXX-XXX-XXX-XXX /]#

    My PHP version is 5.1.6 before and after running the command. Am I being too naive here with this update process? Is there a more verbose route that is necessary for me to take?

    Read the article

  • CentOS 6.3 can't install Git

    - by drivard
    I am trying to install git on an OpenVZ container using the CentOS 6.3 precreated template. When I try the command line yum install git I get the message:

    Loaded plugins: fastestmirror
    Loading mirror speeds from cached hostfile
     * base: www.cubiculestudio.com
     * epel: mirror.csclub.uwaterloo.ca
     * extras: www.cubiculestudio.com
     * rpmforge: mirror.us.leaseweb.net
     * updates: www.cubiculestudio.com
    Setting up Install Process
    No package git available.
    Error: Nothing to do

    As I understand, the git package should be in the CentOS 6 base repository: http://pkgs.org/centos-6-rhel-6/centos-rhel-x86_64/git-1.7.1-2.el6_0.1.x86_64.rpm.html But it doesn't find it. I even have the EPEL and RPMForge repos enabled, but it still can't find the git package.

    yum repolist
    Loaded plugins: fastestmirror
    Loading mirror speeds from cached hostfile
     * base: www.cubiculestudio.com
     * epel: mirror.csclub.uwaterloo.ca
     * extras: www.cubiculestudio.com
     * rpmforge: mirror.us.leaseweb.net
     * updates: www.cubiculestudio.com
    repo id      repo name                                      status
    base         CentOS-6 - Base                                4,776
    epel         Extra Packages for Enterprise Linux 6 - i386   6,523
    extras       CentOS-6 - Extras                              4
    rpmforge     RHEL 6 - RPMforge.net - dag                    4,501
    updates      CentOS-6 - Updates                             596
    vz-base      vz-base                                        3
    vz-updates   vz-updates                                     0
    repolist: 16,403

    The weirdest thing is that my OpenVZ server is running on CentOS 6.3 and I was able to install git without any issue. Can you help me understand why it doesn't find the package? Thank you in advance.

    Read the article

  • Error in requiring Sinatra gem

    - by Dan
    I'm having a hard time getting Sinatra running on my local setup, Ubuntu Karmic 9.10. The error thrown when I require 'sinatra' is:

    NoMethodError: undefined method `[]' for nil:NilClass
        from /usr/local/lib/ruby/gems/1.8/gems/sinatra-1.0/lib/sinatra/base.rb:891:in `compile'
        from /usr/local/lib/ruby/gems/1.8/gems/sinatra-1.0/lib/sinatra/base.rb:883:in `gsub'
        from /usr/local/lib/ruby/gems/1.8/gems/sinatra-1.0/lib/sinatra/base.rb:883:in `compile'
        from /usr/local/lib/ruby/gems/1.8/gems/sinatra-1.0/lib/sinatra/base.rb:856:in `route'
        from /usr/local/lib/ruby/gems/1.8/gems/sinatra-1.0/lib/sinatra/base.rb:838:in `get'
        from /usr/local/lib/ruby/gems/1.8/gems/sinatra-1.0/lib/sinatra/base.rb:1077
        from /usr/local/lib/ruby/gems/1.8/gems/sinatra-1.0/lib/sinatra/base.rb:929:in `configure'
        from /usr/local/lib/ruby/gems/1.8/gems/sinatra-1.0/lib/sinatra/base.rb:1076
        from /usr/local/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:31:in `gem_original_require'
        from /usr/local/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:31:in `require'
        from /usr/local/lib/ruby/gems/1.8/gems/sinatra-1.0/lib/sinatra.rb:4
        from /usr/local/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:36:in `gem_original_require'
        from /usr/local/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:36:in `require'
        from (irb):2
        from :0

    I've tried:

    - Uninstalling/reinstalling Sinatra
    - Updating all gems
    - Ensuring all dependencies exist (rack)

    Any ideas? Your time and help is greatly appreciated!

    Read the article

  • raw_id_fields for modelforms

    - by nbv4
    I have a modelform which has one field that is a ForeignKey value to a model which has 40,000 rows. The default modelform tries to create a select box with 40,000 options, which, to say the least, is not ideal. Even more so when this modelform is used in a formset factory! In the admin, this is easily avoidable by using "raw_id_fields", but there doesn't seem to be a modelform equivalent. How can I do this? Here is my modelform:

    class OpBaseForm(ModelForm):
        base = forms.CharField()

        class Meta:
            model = OpBase
            exclude = ['operation', 'routes']
            extra = 0
            raw_id_fields = ('base', )  # does nothing

    The base = forms.CharField() line works by not creating the huge, unwieldy select box, but when I try to save a formset of this form, I get the error: "OpBase.base" must be a "Base" instance. In order for the modelform to be saved, 'base' needs to be a Base instance. Apparently, a string representation of a Base primary key isn't enough (at least not automatically). I need some kind of mechanism to change the string that is given by the form into a Base instance. And this mechanism has to work in a formset. Any ideas? If only raw_id_fields would work, this would be easy as cake. But as far as I can tell, it is only available in the admin.

    Read the article

  • Emulating Dynamic Dispatch in C++ based on Template Parameters

    - by Jon Purdy
    This is heavily simplified for the sake of the question. Say I have a hierarchy:

    struct Base {
        virtual int precision() const = 0;
    };

    template<int Precision>
    struct Derived : public Base {
        typedef typename Traits<Precision>::Type Type;
        Derived(Type data) : value(data) {}
        virtual int precision() const { return Precision; }
        Type value;
    };

    I want a function like:

    Base* function(const Base& a, const Base& b);

    where the specific type of the result of the function is the same type as whichever of a and b has the greater Precision; something like the following pseudocode:

    template<class T>
    T* operation(const T& a, const T& b) {
        return new T(a.value + b.value);
    }

    Base* function(const Base& a, const Base& b) {
        if (a.precision() > b.precision())
            return operation((A&)a, A(b.value));
        else if (a.precision() < b.precision())
            return operation(B(a.value), (B&)b);
        else
            return operation((A&)a, (A&)b);
    }

    where A and B are the specific types of a and b, respectively. I want function to operate independently of how many instantiations of Derived there are. I'd like to avoid a massive table of typeid() comparisons, though RTTI is fine in answers. Any ideas?

    Read the article

  • WM_NOTIFY and superclass chaining issue in Win32

    - by DasMonkeyman
    For reference, I'm using the window superclassing method outlined in this article. The specific issue: to handle WM_NOTIFY messages (e.g. for custom drawing) from the base control in the superclass, I either need to reflect them back from the parent window, or set my own window as the parent (passed inside CREATESTRUCT for WM_(NC)CREATE to the base class). This method works fine if I have a single superclass. If I superclass my superclass, then I run into a problem. Now 3 WindowProcs are operating in the same HWND, and when I reflect WM_NOTIFY messages (or have them sent to myself with the parent trick above) they always go to the outermost (most derived) WindowProc. I have no way to tell if they're messages intended for the inner superclass (base messages are supposed to go to the first superclass) or messages intended for the outer superclass (messages from the inner superclass are intended for the outer superclass). These messages are indistinguishable because they all come from the same HWND with the same control ID. Is there any way to resolve this without creating a new window to encapsulate each level of inheritance?

    Sorry about the wall of text. It's a difficult concept to explain. Here's a diagram.

    single superclass:

    SuperA::WindowProc() - Base::WindowProc()---\
       ^--------WM_NOTIFY(Base)--------/

    superclass of a superclass:

    SuperB::WindowProc() - SuperA::WindowProc() - Base::WindowProc()---\
       ^--------WM_NOTIFY(Base)--------+-----------------------/
       ^--------WM_NOTIFY(A)-----------/

    The WM_NOTIFY messages in the second case all come from the same HWND and control ID, so I cannot distinguish between the messages intended for SuperA (from Base) and messages intended for SuperB (from SuperA). Any ideas?

    Read the article

  • C++ polymorphism and slicing

    - by Draco Ater
    The following code prints out:

    Derived
    Base
    Base

    But I need every Derived object put into User::items to call its own print function, not the base class one. Can I achieve that without using pointers? If it is not possible, how should I write the function that deletes User::items one by one and frees memory, so that there are no memory leaks?

    #include <iostream>
    #include <vector>
    #include <algorithm>
    using namespace std;

    class Base {
    public:
        virtual void print() { cout << "Base" << endl; }
    };

    class Derived : public Base {
    public:
        void print() { cout << "Derived" << endl; }
    };

    class User {
    public:
        vector<Base> items;
        void add_item(Base& item) {
            item.print();
            items.push_back(item);
            items.back().print();
        }
    };

    void fill_items(User& u) {
        Derived d;
        u.add_item(d);
    }

    int main() {
        User u;
        fill_items(u);
        u.items[0].print();
    }

    Read the article

  • implementing a read/write field for an interface that only defines read

    - by PaulH
    I have a C# 2.0 application where a base interface allows read-only access to a value in a concrete class. But, within the concrete class, I'd like to have read/write access to that value. So, I have an implementation like this:

    public abstract class Base
    {
        public abstract DateTime StartTime { get; }
    }

    public class Foo : Base
    {
        DateTime start_time_;

        public override DateTime StartTime
        {
            get { return start_time_; }
            internal set { start_time_ = value; }
        }
    }

    But this gives me the error:

    Foo.cs(200,22): error CS0546: 'Foo.StartTime.set': cannot override because 'Base.StartTime' does not have an overridable set accessor

    I don't want the base class to have write access. But I do want the concrete class to provide read/write access. Is there a way to make this work?

    Thanks, PaulH

    Unfortunately, Base can't be changed to an interface as it contains non-abstract functionality also. Something I should have thought to put in the original problem description:

    public abstract class Base
    {
        public abstract DateTime StartTime { get; }

        public void Buzz()
        {
            // do something interesting...
        }
    }

    My solution is to do this:

    public class Foo : Base
    {
        DateTime start_time_;

        public override DateTime StartTime
        {
            get { return start_time_; }
        }

        internal void SetStartTime(DateTime value)
        {
            start_time_ = value;
        }
    }

    It's not as nice as I'd like, but it works.

    Read the article

  • How to add a has_many association on all models

    - by joshsz
    Right now I have an initializer that does this:

    ActiveRecord::Base.send :has_many, :notes, :as => :notable
    ActiveRecord::Base.send :accepts_nested_attributes_for, :notes

    It builds the association just fine, except when I load a view that uses it, the second load gives me:

    can't dup NilClass
    from:
    /usr/lib/ruby/gems/1.8/gems/activerecord-2.3.5/lib/active_record/base.rb:2184:in `dup'
    /usr/lib/ruby/gems/1.8/gems/activerecord-2.3.5/lib/active_record/base.rb:2184:in `scoped_methods'
    /usr/lib/ruby/gems/1.8/gems/activerecord-2.3.5/lib/active_record/base.rb:2188:in `current_scoped_methods'
    /usr/lib/ruby/gems/1.8/gems/activerecord-2.3.5/lib/active_record/base.rb:2171:in `scoped?'
    /usr/lib/ruby/gems/1.8/gems/activerecord-2.3.5/lib/active_record/base.rb:2439:in `send'
    /usr/lib/ruby/gems/1.8/gems/activerecord-2.3.5/lib/active_record/base.rb:2439:in `initialize'
    /usr/lib/ruby/gems/1.8/gems/activerecord-2.3.5/lib/active_record/reflection.rb:162:in `new'
    /usr/lib/ruby/gems/1.8/gems/activerecord-2.3.5/lib/active_record/reflection.rb:162:in `build_association'
    /usr/lib/ruby/gems/1.8/gems/activerecord-2.3.5/lib/active_record/associations/association_collection.rb:423:in `build_record'
    /usr/lib/ruby/gems/1.8/gems/activerecord-2.3.5/lib/active_record/associations/association_collection.rb:102:in `build'
    (my app)/controllers/manifests_controller.rb:21:in `show'

    Any ideas? Am I doing this the wrong way? Interestingly if I move the association onto just the model I'm working with at the moment, I don't get this error. I figure I must be building the global association incorrectly.

    Read the article

  • The best way to write a jQuery plugin - If there is such a way?

    - by Nick Lowman
    There are quite a few ways to write plugins (here's a nice example). What I've seen quite a lot of lately is the following code pattern, used by Doug Neiner here:

    (function($){
        $.formatLink = function(el, options){
            var base = this;
            base.$el = $(el);
            base.el = el;
            base.$el.data("formatLink", base);
            base.init = function(){
                base.options = $.extend({}, $.formatLink.defaultOptions, options);
                // code here
            };
            base.init();
        };
        $.formatLink.defaultOptions = { };
        $.fn.formatLink = function(options){
            return this.each(function(){
                (new $.formatLink(this, options));
            });
        };
    })(jQuery);

    So, can anyone tell me the benefits of using the pattern above rather than the one below? I can't see the point in calling the $.extend function for every element in the jQuery stack (above), when the example below only does this once and then works on the stack. To test it I created two plugins, using both patterns, which applied styles to about 5000 li elements; the code below took about 1 second whereas the pattern above took about 1.3 seconds.

    (function($){
        $.fn.formatLink = function(options){
            var options = $.extend({}, $.fn.formatLink.defaultOptions, options || {});
            return this.each(function(){
                // code here
            });
        };
        $.fn.formatLink.defaultOptions = {};
    })(jQuery);

    Read the article

  • Will this cause a problem with different runtimes with DLL?

    - by Milo
    My GUI application supports polymorphic timed events, so the user calls new and the GUI calls delete. This can create a problem if the runtimes are incompatible. So I was told a proposed solution would be this:

    class base;

    class Deallocator {
    public:
        void operator()(base* ptr) {
            delete ptr;
        }
    };

    class base {
    public:
        base(Deallocator dealloc) {
            m_deleteFunc = dealloc;
        }
        ~base() {
            m_deleteFunc(this);
        }
    private:
        Deallocator m_deleteFunc;
    };

    int main() {
        Deallocator deletefunc;
        base baseObj(deletefunc);
    }

    While this is a good solution, it does demand that the user create a Deallocator object, which I do not want. I was however wondering if I provided a Deallocator to each derived class, e.g.:

    class derived : public base {
        Deallocator dealloc;
    public:
        derived() : base(dealloc) {}
    };

    I think this still does not work, though. The constraints are that the addTimedEvent() function is part of the Widget class, which is also in the DLL but is instanced by the user, and that some classes which derive from Widget call this function with their own timed event classes. Given that "he who called new must call delete", what could work given these constraints? Thanks

    Read the article

  • dynamic inheritance without touching classes

    - by Jasper
    I feel like the answer to this question is really simple, but I really am having trouble finding it. So here goes. Suppose you have the following classes:

    class Base;
    class Child : public Base;

    class Displayer {
    public:
        Displayer(Base* element);
        Displayer(Child* element);
    };

    Additionally, I have a Base* object which might point to either an instance of the class Base or an instance of the class Child. Now I want to create a Displayer based on the element pointed to by object; however, I want to pick the right version of the constructor. As I currently have it, this would accomplish just that (I am being a bit fuzzy with my C++ here, but I think this is the clearest way):

    object->createDisplayer();

    virtual void Base::createDisplayer() {
        new Displayer(this);
    }

    virtual void Child::createDisplayer() {
        new Displayer(this);
    }

    This works; however, there is a problem with this: Base and Child are part of the application system, while Displayer is part of the GUI system. I want to build the GUI system independently of the application system, so that it is easy to replace the GUI. This means that Base and Child should not know about Displayer. However, I do not know how I can achieve this without letting the application classes know about the GUI. Am I missing something very obvious, or am I trying something that is not possible?

    Read the article

  • Behind ASP.NET MVC Mock Objects

    - by imran_ku07
    Introduction:

    I think this sentence has now become very familiar to ASP.NET MVC developers: "ASP.NET MVC is designed with testability in mind". But what did the ASP.NET MVC team do to make applications built with ASP.NET MVC easily testable? Understanding this is very important, because it also helps when designing custom classes. So in this article I will discuss some abstract classes provided by the ASP.NET MVC team for the various ASP.NET intrinsic objects, including HttpContext, HttpRequest, and HttpResponse, which make these objects testable. I will also discuss why it is difficult to test ASP.NET Web Forms.

    Description:

    From Classic ASP to ASP.NET MVC, the ASP.NET intrinsic objects are extensively used in all forms of web application. They provide information about the Request, Response, Server, Application and so on. But ASP.NET MVC uses these intrinsic objects in an abstract manner. The reason for this abstraction is to make your application testable. So let's see the abstraction.

    As we know, ASP.NET MVC uses the same runtime engine as ASP.NET Web Forms, so the first receiver of the request after IIS and aspnet_filter.dll is aspnet_isapi.dll. This will start the application domain. With the application domain up and running, ASP.NET does some initialization, and after that it will call Application_Start if it is defined. Then the normal HTTP pipeline event handlers will be executed, including both HTTP Modules and global.asax event handlers. One of the HTTP Modules registered by ASP.NET MVC is UrlRoutingModule. The purpose of this module is to match a route defined in global.asax. Every matched route must have an IRouteHandler. In the default case this is MvcRouteHandler, which is responsible for determining the HTTP handler and returns MvcHandler (which is derived from IHttpHandler). In simple words, a route has an MvcRouteHandler, which returns the MvcHandler that is the IHttpHandler of the current request. In between the HTTP pipeline events, the handler of ASP.NET MVC, MvcHandler.ProcessRequest, will be executed, as shown below:

    void IHttpHandler.ProcessRequest(HttpContext context)
    {
        this.ProcessRequest(context);
    }

    protected virtual void ProcessRequest(HttpContext context)
    {
        // HttpContextWrapper inherits from HttpContextBase
        HttpContextBase ctxBase = new HttpContextWrapper(context);
        this.ProcessRequest(ctxBase);
    }

    protected internal virtual void ProcessRequest(HttpContextBase ctxBase)
    {
        . . .
    }

    HttpContextBase is the base class, and HttpContextWrapper inherits from it; it is the parent class that includes information about a single HTTP request. This is what the ASP.NET MVC team did: they simply wrap the old intrinsic HttpContext in an HttpContextWrapper object, and give other frameworks the opportunity to provide their own implementation of HttpContextBase. For example:

    public class MockHttpContext : HttpContextBase
    {
        . . .
    }

    As you can see, it is very easy to create your own HttpContext. That is what third-party mock frameworks like TypeMock, Moq, RhinoMocks, or NMock2 do to provide their own implementations of the ASP.NET intrinsic object classes.

    The key point to note here is the type of the ASP.NET intrinsic objects in ASP.NET Web Forms and in ASP.NET MVC. For example, in ASP.NET Web Forms the type of the Request object is HttpRequest (which is sealed), while in ASP.NET MVC it is HttpRequestBase. This is one of the reasons that make testing difficult in ASP.NET Web Forms: there is no base class, and the HttpRequest class is sealed, so it cannot act as a base class for others. ASP.NET MVC, on the other hand, always uses a base class, giving third parties and unit test frameworks a chance to create their own implementations of the ASP.NET intrinsic objects.

    Therefore we can say that in ASP.NET MVC the intrinsic objects are typed as base classes (for example, HttpContextBase). These base classes have their own implementation of the same interface as the intrinsic objects they abstract; they include only virtual members, which simply throw an exception. ASP.NET MVC also provides corresponding wrapper classes (for example, HttpRequestWrapper) which provide a concrete implementation of the base classes in the form of the ASP.NET intrinsic object. Other wrapper classes may be defined by third parties in the form of a mock object for testing purposes.

    So a Request object in ASP.NET MVC may be an HttpRequestWrapper, or it may be a MockRequestWrapper (assuming the MockRequestWrapper class is used for testing purposes). Here is a list of the ASP.NET intrinsics and their implementations in ASP.NET MVC in the form of base and wrapper classes:

    - HttpApplicationStateBase (wrapper: HttpApplicationStateWrapper): abstracts the intrinsic Application object
    - HttpBrowserCapabilitiesBase (wrapper: HttpBrowserCapabilitiesWrapper): abstracts the HttpBrowserCapabilities class
    - HttpCachePolicyBase (wrapper: HttpCachePolicyWrapper): abstracts the HttpCachePolicy class
    - HttpContextBase (wrapper: HttpContextWrapper): abstracts the intrinsic HttpContext object
    - HttpFileCollectionBase (wrapper: HttpFileCollectionWrapper): abstracts the HttpFileCollection class
    - HttpPostedFileBase (wrapper: HttpPostedFileWrapper): abstracts the HttpPostedFile class
    - HttpRequestBase (wrapper: HttpRequestWrapper): abstracts the intrinsic Request object
    - HttpResponseBase (wrapper: HttpResponseWrapper): abstracts the intrinsic Response object
    - HttpServerUtilityBase (wrapper: HttpServerUtilityWrapper): abstracts the intrinsic Server object
    - HttpSessionStateBase (wrapper: HttpSessionStateWrapper): abstracts the intrinsic Session object
    - HttpStaticObjectsCollectionBase (wrapper: HttpStaticObjectsCollectionWrapper): abstracts the HttpStaticObjectsCollection class

    Summary:

    ASP.NET MVC provides a set of abstract classes for the ASP.NET intrinsic objects in the form of base classes, allowing anyone to create their own implementation. In addition, ASP.NET MVC provides a set of concrete classes in the form of wrapper classes. This design really makes applications easier to test, and an application may even replace a concrete implementation with its own, which makes ASP.NET MVC very flexible.
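    To make the abstraction concrete, here is what a hand-rolled test double built on these base classes might look like; no mocking framework involved, and the Fake* names are invented for illustration. Only the members the code under test touches are overridden; everything else keeps the base behavior of throwing.

    using System;
    using System.Collections.Specialized;
    using System.Web;

    public class FakeHttpRequest : HttpRequestBase
    {
        private readonly NameValueCollection queryString = new NameValueCollection();

        // Override only what the code under test needs.
        public override NameValueCollection QueryString
        {
            get { return queryString; }
        }

        public override string HttpMethod
        {
            get { return "GET"; }
        }
    }

    public class FakeHttpContext : HttpContextBase
    {
        private readonly HttpRequestBase request = new FakeHttpRequest();

        public override HttpRequestBase Request
        {
            get { return request; }
        }
    }

    // Code written against HttpContextBase now runs in a plain unit test,
    // with no IIS and no real HTTP request:
    //
    //   HttpContextBase ctx = new FakeHttpContext();
    //   ctx.Request.QueryString["id"] = "42";
    //   // pass ctx to the code under test...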

    Read the article

  • Perform Unit Conversions with the Windows 7 Calculator

    - by Matthew Guay
    Want to easily convert area, volume, temperature, and many other units?  With the Calculator in Windows 7, it’s easy to convert almost any unit into another. The New Calculator in Windows 7 Calculator received a visual overhaul in Windows 7, but at first glance it doesn’t seem to have any new functionality.  Here’s Windows 7’s Calculator on the left, with Vista’s calculator on the right.   But looks can be deceiving: Windows 7’s Calculator has lots of exciting new features.  Let’s try them out.  Simply type Calculator in the Start menu search. To uncover the new features, click the View menu.  Here you can select many different modes, including the Unit Conversion mode, which we will look at. When you select the Unit Conversion mode, the Calculator expands with a form on the left side. This conversion pane has three drop-down menus.  From the top one, select the type of unit you want to convert. In the next two menus, select the values you wish to convert from and to.  For instance, here we selected Temperature in the first menu, Degrees Fahrenheit in the second menu, and Degrees Celsius in the third menu. Enter the value you wish to convert in the From box, and the conversion automatically appears in the bottom box. The Calculator contains dozens of conversion values, including some uncommon ones.  So if you’ve ever wanted to know how many US gallons are in a UK gallon, or how many knots a supersonic jet travels in an hour, this is a great tool for you!   Conclusion Windows 7 is filled with little changes that give you an all-around better experience in Windows and help you work more efficiently and productively.  With the new features in the Calculator, you just might feel a little smarter, too!
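    If you are curious about the arithmetic behind the temperature example above, it is ordinary math rather than anything Windows-specific; a quick sketch in C#:

        // Fahrenheit to Celsius: C = (F - 32) * 5 / 9
        double fahrenheit = 98.6;
        double celsius = (fahrenheit - 32) * 5.0 / 9.0; // 37.0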

    Read the article

  • ASP.NET Web Forms Extensibility: Providers

    - by Ricardo Peres
    Introduction This will be the first of a number of posts on ASP.NET extensibility. At this moment I don’t know exactly how many there will be, and I only know a couple of subjects that I want to talk about, so more will come in the next days. I have the sensation that the providers offered by ASP.NET are not widely known; although everyone uses, for example, sessions, they may not be aware of the extensibility points that Microsoft included. This post won’t go into the details of how to configure and extend each of the providers, but will hopefully give some pointers in that direction. Canonical These are the most widely known and used providers. They date from ASP.NET 1, so chances are you have used them already. They have good support for being invoked client side, either from a .NET application or from JavaScript, and lots of server-side controls use them, such as the Login control. Membership The Membership provider is responsible for managing registered users, including creating new ones, authenticating them, changing passwords, etc. ASP.NET comes with two implementations, one that uses a SQL Server database and another that uses Active Directory. The base class is MembershipProvider, and new providers are registered in the membership section of the Web.config file, along with parameters for specifying minimum password lengths, complexities, maximum age, etc. One reason for creating a custom provider would be, for example, storing membership information in a different database engine.

    <membership defaultProvider="MyProvider">
      <providers>
        <add name="MyProvider" type="MyClass, MyAssembly"/>
      </providers>
    </membership>

    Role The Role provider assigns roles to authenticated users. The base class is RoleProvider, and there are three out-of-the-box implementations: XML-based, SQL Server, and Windows-based. It is also registered in Web.config through the roleManager section, where you can also say whether your roles should be cached in a cookie. If you want your roles to come from a different place, implement a custom provider.

    <roleManager defaultProvider="MyProvider">
      <providers>
        <add name="MyProvider" type="MyClass, MyAssembly" />
      </providers>
    </roleManager>

    Profile The Profile provider allows defining a set of properties that will be tied to users and made available to authenticated users, or even anonymous ones, in which case they must be tracked through anonymous identification. The base class is ProfileProvider, and the only included implementation stores these settings in a SQL Server database. It is configured through the profile section, where you also specify the properties to make available; a custom provider would allow storing these properties in different locations.

    <profile defaultProvider="MyProvider">
      <providers>
        <add name="MyProvider" type="MyClass, MyAssembly"/>
      </providers>
    </profile>

    Basic OK, I didn’t know what to call these, so Basic is probably as good a name as anything else. They are not supported client-side (it doesn’t even make sense). Session The Session provider allows storing data tied to the current “session”, which is normally created when a user first accesses the site, even before being authenticated, and remains for the whole visit. 
The base class is SessionStateStoreProviderBase, and the included implementations are capable of storing data in one of three locations: in the process memory (the default, not suitable for web farms or increased reliability); a SQL Server database (best for reliability and clustering); or the ASP.NET State Service, which is a Windows Service installed with the .NET Framework (fine for clustering). The configuration is made through the sessionState section. By adding a custom Session provider, you can store the data in different locations; think, for example, of a distributed cache.

    <sessionState customProvider="MyProvider">
      <providers>
        <add name="MyProvider" type="MyClass, MyAssembly" />
      </providers>
    </sessionState>

    Resource A not so well-known provider, this one allows you to change the origin of localized resource elements. By default, these come from RESX files and are used whenever you use the Resources expression builder or the GetGlobalResourceObject and GetLocalResourceObject methods, but if you implement a custom provider, you can have these elements come from somewhere else, such as a database. The base class is ResourceProviderFactory, and there’s only one internal implementation, which uses the RESX files. Configuration is through the globalization section.

    <globalization resourceProviderFactoryType="MyClass, MyAssembly" />

    Health Monitoring Health Monitoring is also probably not so well known, and it is actually not a good name for what it does. First, in order to understand it, you have to know that ASP.NET fires “events” at specific times and when specific things happen, such as a user logging in or an exception being raised. These are not user interface events; you can create your own and fire them, and nothing will happen, but the Health Monitoring provider will detect them. You can configure it to do things when certain conditions are met, such as a number of events being fired in a certain amount of time. You define these rules and route them to a specific provider, which must inherit from WebEventProvider. Out-of-the-box implementations include sending mails, logging to a SQL Server database, writing to the Windows Event Log, Windows Management Instrumentation, the IIS 7 Trace infrastructure, and the debugger Trace. Its configuration is achieved through the healthMonitoring section, and a reason for implementing a custom provider would be, for example, locking down a web application when a significant number of failed login attempts occur in a small period of time.

    <healthMonitoring>
      <providers>
        <add name="MyProvider" type="MyClass, MyAssembly"/>
      </providers>
    </healthMonitoring>

    Sitemap The Sitemap provider allows defining the site’s navigation structure, and the permissions required for each node, in a tree-like fashion. Usually this is statically defined, and the included provider supports that by reading the structure from a Web.sitemap XML file. The base class is SiteMapProvider, and you can extend it in order to supply your own source for the site’s structure, which may even be dynamic. Its configuration must be done through the siteMap section.

    <siteMap defaultProvider="MyProvider">
      <providers>
        <add name="MyProvider" type="MyClass, MyAssembly" />
      </providers>
    </siteMap>

    Web Part Personalization Web Parts are better known to SharePoint users, but since ASP.NET 2.0 they have been included in the core Framework. 
Web Parts are server-side controls that offer visiting clients certain configuration possibilities on the page where they are located. The infrastructure handles this configuration per user, or globally for all users, and this provider is responsible for just that. The base class is PersonalizationProvider, and the only included implementation stores the settings in SQL Server. Add new providers through the personalization section.

    <webParts>
      <personalization defaultProvider="MyProvider">
        <providers>
          <add name="MyProvider" type="MyClass, MyAssembly"/>
        </providers>
      </personalization>
    </webParts>

    Build The Build provider is responsible for compiling whatever files are present in your web folder. There’s a base class, BuildProvider, and, as can be expected, internal implementations for building pages (ASPX), master pages (Master), user web controls (ASCX), handlers (ASHX), themes (Skin), XML schemas (XSD), web services (ASMX, SVC), resources (RESX), browser capabilities files (Browser), and so on. You would write a build provider if you wanted to generate code from any kind of non-code file, so that you have strong typing at development time. Configuration goes in the buildProviders section, and it is per extension.

    <buildProviders>
      <add extension=".ext" type="MyClass, MyAssembly" />
    </buildProviders>

    New in ASP.NET 4 These are not exactly new, since they have existed since 2010, but in ASP.NET terms they are still new. Output Cache The Output Cache for ASPX pages and ASCX user controls is now extensible, through the Output Cache provider, which means you can implement a custom mechanism for storing and retrieving cached data, for example, in a distributed fashion. The base class is OutputCacheProvider, and the only included implementation is private. Configuration goes in the outputCache section, and on each page and web user control you can choose the provider you want to use.

    <caching>
      <outputCache defaultProvider="MyProvider">
        <providers>
          <add name="MyProvider" type="MyClass, MyAssembly"/>
        </providers>
      </outputCache>
    </caching>

    Request Validation A big change introduced in ASP.NET 4 (and refined in 4.5, by the way) is extensible request validation, by means of a Request Validation provider. This means we are no longer limited to either enabling or disabling request validation for all pages or for a specific page; we now have fine control over each element of the request, including cookies, headers, query string, and form values. The base provider class is RequestValidator, and the configuration goes in the httpRuntime section.

    <httpRuntime requestValidationType="MyClass, MyAssembly" />

    Browser Capabilities The Browser Capabilities provider is new in ASP.NET 4, although the concept has existed since ASP.NET 2. The idea is to map a browser brand and version to its supported capabilities, such as JavaScript version, Flash support, ActiveX support, and so on. Previously, this was all hardcoded in .Browser files located in %WINDIR%\Microsoft.NET\Framework(64)\vXXXXX\Config\Browsers, but now you can have a class inherit from HttpCapabilitiesProvider and implement your own mechanism. Register it in the browserCaps section.

    <browserCaps provider="MyClass, MyAssembly" />

    Encoder The Encoder provider is responsible for encoding every string that is sent to the browser in a page or header. This includes, for example, converting special characters to their standard codes, and it is implemented by the base class HttpEncoder. 
Another included implementation takes care of protecting against cross-site scripting (XSS) attacks. Build your own by inheriting from one of these classes if you want to add some additional processing to these strings. The configuration goes in the httpRuntime section.

    <httpRuntime encoderType="MyClass, MyAssembly" />

    Conclusion That’s about it for ASP.NET providers. It was by no means a thorough description, but I hope I managed to raise your interest in this subject. There are lots of pointers on the Internet, so I only included direct references to the Framework classes and configuration sections. Stay tuned for more extensibility!
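    To make the recipe concrete, here is a minimal sketch of a custom Output Cache provider; the class name matches the MyProvider registrations above purely for illustration, and it stores entries in process memory, which a real custom provider would replace with an out-of-process store:

    using System;
    using System.Collections.Concurrent;
    using System.Web.Caching;

    public class MyProvider : OutputCacheProvider
    {
        private class Entry { public object Value; public DateTime UtcExpiry; }

        private readonly ConcurrentDictionary<string, Entry> _cache =
            new ConcurrentDictionary<string, Entry>();

        public override object Get(string key)
        {
            Entry e;
            if (_cache.TryGetValue(key, out e) && e.UtcExpiry > DateTime.UtcNow)
                return e.Value;
            _cache.TryRemove(key, out e); // drop expired entries lazily
            return null;
        }

        public override object Add(string key, object entry, DateTime utcExpiry)
        {
            // Contract: if an entry already exists, return it unchanged.
            object existing = Get(key);
            if (existing != null)
                return existing;
            Set(key, entry, utcExpiry);
            return entry;
        }

        public override void Set(string key, object entry, DateTime utcExpiry)
        {
            _cache[key] = new Entry { Value = entry, UtcExpiry = utcExpiry };
        }

        public override void Remove(string key)
        {
            Entry e;
            _cache.TryRemove(key, out e);
        }
    }

    Every provider family described above follows the same pattern: inherit the base class, override its members, and register the type in the matching configuration section.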

    Read the article

  • Puppet inheritance of parametrized classes

    - by paweloque
    I have a situation in Puppet where I want to inherit from a parametrized class: class base ($basepath) { ... } class extends_base ($ext_param) inherits base { ... } Now, when I try to instantiate the extends_base class, I get the following error message: Must pass basepath to Class[Base] However, I don't see a way to pass the basepath parameter to the Base class. I tried passing the param in the Class[Extends_base] definition, but Puppet doesn't like that either.
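    Not an authoritative answer, but a common workaround in Puppet is composition instead of inheritance: accept the parameter in the derived class and declare the base class resource-style, along these lines:

    class extends_base ($ext_param, $basepath) {
      # Declare (rather than inherit) the base class so its parameter gets a value.
      class { 'base':
        basepath => $basepath,
      }
      # ... extension-specific resources ...
    }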

    Read the article

  • Windows App. Thread Aborting Issue

    - by Patrick
    I'm working on an application that has to make specific decisions based on files that are placed into a folder watched by a FileSystemWatcher. Part of this decision-making process involves renaming files before moving them off to another folder to be processed. Since I'm working with files of all different sizes, I created an object that checks the file in a separate thread to verify that it is "available", and when it is, it fires an event. When I run the rename code from inside this available event, it works.

    public void RenameFile_Test()
    {
        string psFilePath = @"C:\File1.xlsx";
        FileObject target = new FileObject(psFilePath);
        target.FileAvailable += new FileEventHandler(OnFileAvailable);
        target.FileUnAvailable += new FileEventHandler(OnFileUnavailable);
    }

    private void OnFileAvailable(object source, FileEventArgs e)
    {
        ((FileObject)source).RenameFile(@"C:\File2.xlsx");
    }

    The problem I'm running into is that when the extension of the rename-to file differs from the source file's, I make a call to a conversion factory that returns a factory object based on the type of conversion, and that object converts the file accordingly before doing the rename. When I run that particular piece of code in a unit test it works: the factory object is returned, and the conversion happens correctly. But when I run it within the process I get up to the

    moExcelApp = new Application();

    part of converting an .xls or .xlsx to a .csv, and I get a "Thread was being aborted" error. Any thoughts? Update: Here is a bit more information and a map of how the application works currently. The client application runs the FileSystemWatcher. The OnCreated event creates a FileObject, passing in the path of the file. On construction the file is validated: if the file exists then,

    Thread toAvailableCheck = new Thread(new ThreadStart(AvailableCheck));
    toAvailableCheck.Start();

    The AvailableCheck method repeatedly tries to open a StreamReader on the file until the reader is either created or the number of attempts times out. If the reader is opened, it fires the FileAvailable event; if not, it fires the FileUnAvailable event, passing itself back in the event. The client application is wired to catch those events from inside the OnCreated event of the FileSystemWatcher. The OnFileAvailable method then calls the rename functionality, which contains the Excel interop call. If the file is being renamed (not converted; the extensions stay the same) it does a move to change the name from the old file name to the new, and if it's a conversion it runs a conversion factory object which returns the correct type of conversion based on the extensions of the source file and the destination file name. If it is a simple rename it works without a problem. If it's a conversion (the XLS-to-CSV object returned by the factory), the very first thing it does is create a new Excel Application object. That is where the application bombs. When I test the factory and the conversion/rename process outside of the thread, in its own unit test, the process works without a problem. Update: I tested the Excel interop inside a thread by doing this:

    [TestMethod()]
    public void ExcelInteropTest()
    {
        Thread toExcelInteropThreadTest = new Thread(new ThreadStart(Instantiate_App));
        toExcelInteropThreadTest.Start();
    }

    private void Instantiate_App()
    {
        Application moExcelApp = new Application();
        moExcelApp.Quit();
    }

    And on the line where the Application is instantiated I got the 'A first chance exception of type System.Threading.ThreadAbortException' error. 
So I added toExcelInteropThreadTest.SetApartmentState(ApartmentState.MTA); after the thread instantiation and before the thread start call, and still got the same error. I'm getting the notion that I'm going to have to reconsider the design.
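    One avenue worth checking before a redesign, sketched under the assumption that the abort comes from creating the Excel COM server on an unsuitable thread: Office COM servers expect single-threaded-apartment callers, so run the interop on a dedicated STA thread (not MTA) and keep that thread alive until Excel has quit. The paths and conversion shown here are illustrative:

    using System.Threading;
    using Excel = Microsoft.Office.Interop.Excel;

    static void ConvertOnStaThread(string sourcePath, string targetPath)
    {
        var worker = new Thread(() =>
        {
            Excel.Application app = null;
            try
            {
                app = new Excel.Application();
                Excel.Workbook book = app.Workbooks.Open(sourcePath);
                book.SaveAs(targetPath, Excel.XlFileFormat.xlCSV);
                book.Close(false);
            }
            finally
            {
                // A production version would also release the COM objects.
                if (app != null) { app.Quit(); }
            }
        });
        worker.SetApartmentState(ApartmentState.STA); // STA, not MTA
        worker.Start();
        worker.Join(); // keep the creating thread alive until the conversion finishes
    }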

    Read the article

  • SQL SERVER – Microsoft SQL Server Migration Assistant V6.0 Released

    - by Pinal Dave
    Every company makes a different decision about its database when it starts out, but as it moves forward it matures and makes decisions based on experience and the best interest of the organization. Quite a few organizations make different database choices, such as Sybase, MySQL, Oracle, or Access, and as time passes they learn that they now want to move to a different platform. Microsoft makes this easy for SQL Server professionals by releasing various Migration Assistant tools. Last week, Microsoft released Microsoft SQL Server Migration Assistant v6.0. Here are the different tools released last week to migrate various products to SQL Server. Microsoft SQL Server Migration Assistant v6.0 for Sybase SQL Server Migration Assistant (SSMA) is a free supported tool from Microsoft that simplifies the database migration process from Sybase Adaptive Server Enterprise (ASE) to SQL Server and Azure SQL DB. SSMA automates all aspects of migration, including migration assessment analysis, schema and SQL statement conversion, and data migration, as well as migration testing. Microsoft SQL Server Migration Assistant v6.0 for MySQL SQL Server Migration Assistant (SSMA) is a free supported tool from Microsoft that simplifies the database migration process from MySQL to SQL Server and Azure SQL DB. SSMA automates all aspects of migration, including migration assessment analysis, schema and SQL statement conversion, and data migration, as well as migration testing. Microsoft SQL Server Migration Assistant v6.0 for Oracle SQL Server Migration Assistant (SSMA) is a free supported tool from Microsoft that simplifies the database migration process from Oracle to SQL Server and Azure SQL DB. SSMA automates all aspects of migration, including migration assessment analysis, schema and SQL statement conversion, and data migration, as well as migration testing. Microsoft SQL Server Migration Assistant v6.0 for Access SQL Server Migration Assistant (SSMA) is a free supported tool from Microsoft that simplifies the database migration process from Access to SQL Server. SSMA for Access automates the conversion of Microsoft Access database objects to SQL Server database objects, loads the objects into SQL Server and Azure SQL DB, and then migrates data from Microsoft Access to SQL Server and Azure SQL DB. Reference: Pinal Dave (http://blog.sqlauthority.com)

    Read the article

  • Execution plan warnings–All that glitters is not gold

    - by Dave Ballantyne
    In a previous post, I showed you the new execution plan warnings related to implicit and explicit conversions.  Pretty much as soon as I hit ’post’, I noticed something rather odd happening. This statement:

    select top(10) SalesOrderHeader.SalesOrderID, SalesOrderNumber
    from Sales.SalesOrderHeader
    join Sales.SalesOrderDetail on SalesOrderHeader.SalesOrderID = SalesOrderDetail.SalesOrderID

    throws the “Type conversion may affect cardinality estimation” warning.     I’ve done no such conversion in my statement, so why would that be?  Well, SalesOrderNumber is a computed column, “(isnull(N'SO'+CONVERT([nvarchar](23),[SalesOrderID],0),N'*** ERROR ***'))”, so that’s where the conversion is.   Wait!!! Am I saying that every type conversion will throw the warning?  Thankfully, no.  It only appears for columns that are used in predicates, even if the predicate / join condition is fine, and only when the column is indexed (and/or, presumably, has statistics).    Hopefully, this won’t lead to too many wild goose chases, but it is definitely something to bear in mind.  If you want to see this fixed then upvote my Connect item here.
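    A minimal way to reproduce the predicate case (dbo.T is a hypothetical table; only the data types matter) is to compare an indexed varchar column with an nvarchar value, so that data type precedence converts the column itself:

    CREATE TABLE dbo.T (Code varchar(10) NOT NULL);
    CREATE INDEX IX_T_Code ON dbo.T (Code);

    -- nvarchar outranks varchar, so Code is wrapped in CONVERT_IMPLICIT
    -- and the plan shows the cardinality estimation warning.
    SELECT Code FROM dbo.T WHERE Code = N'ABC';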

    Read the article

  • Delphi 2009 MS Build headaches

    - by X-Ray
    Does anyone know of any good description of Delphi's build system? (I know it's using MSBuild.) I'm using Delphi 2009. I wanted to set up a variation of the Debug build configuration that (often) has different defines (D2009 seems to call them "preprocessor symbols"). The problem I'm having is that, even though I turned off "inherit" for "Base" and "Debug", I have only very limited control. For example, I can't get rid of FastMM_.

    <PropertyGroup>
      <ProjectGuid>{D7FE7347-8E2C-438C-A275-38B8DA9244B0}</ProjectGuid>
      <ProjectVersion>12.0</ProjectVersion>
      <MainSource>oca.dpr</MainSource>
      <Config Condition="'$(Config)'==''">Debug</Config>
      <DCC_DCCCompiler>DCC32</DCC_DCCCompiler>
    </PropertyGroup>
    <PropertyGroup Condition="'$(Config)'=='Base' or '$(Base)'!=''">
      <Base>true</Base>
    </PropertyGroup>
    <PropertyGroup Condition="'$(Config)'=='Release' or '$(Cfg_1)'!=''">
      <Cfg_1>true</Cfg_1>
      <CfgParent>Base</CfgParent>
      <Base>true</Base>
    </PropertyGroup>
    <PropertyGroup Condition="'$(Config)'=='Debug' or '$(Cfg_2)'!=''">
      <Cfg_2>true</Cfg_2>
      <CfgParent>Base</CfgParent>
      <Base>true</Base>
    </PropertyGroup>
    <PropertyGroup Condition="'$(Base)'!=''">
      <DCC_StringChecks>off</DCC_StringChecks>
      <DCC_MinimumEnumSize>4</DCC_MinimumEnumSize>
      <DCC_RangeChecking>true</DCC_RangeChecking>
      <DCC_IntegerOverflowCheck>true</DCC_IntegerOverflowCheck>
      <DCC_UNIT_PLATFORM>false</DCC_UNIT_PLATFORM>
      <DCC_SYMBOL_PLATFORM>false</DCC_SYMBOL_PLATFORM>
      <DCC_DcuOutput>.\dcu</DCC_DcuOutput>
      <DCC_UnitSearchPath>C:\Prj\Lib\AutoQADocking\Delphi2009.Win32\Lib;$(BDS)\Source\DUnit\src;$(DCC_UnitSearchPath)</DCC_UnitSearchPath>
      <DCC_Optimize>false</DCC_Optimize>
      <DCC_DependencyCheckOutputName>oca.exe</DCC_DependencyCheckOutputName>
      <DCC_ImageBase>00400000</DCC_ImageBase>
      <DCC_UnitAlias>WinTypes=Windows;WinProcs=Windows;DbiTypes=BDE;DbiProcs=BDE;DbiErrs=BDE;$(DCC_UnitAlias)</DCC_UnitAlias>
      <DCC_Platform>x86</DCC_Platform>
      <DCC_E>false</DCC_E>
      <DCC_N>false</DCC_N>
      <DCC_S>false</DCC_S>
      <DCC_F>false</DCC_F>
      <DCC_K>false</DCC_K>
    </PropertyGroup>
    <PropertyGroup Condition="'$(Cfg_1)'!=''">
      <DCC_PentiumSafeDivide>true</DCC_PentiumSafeDivide>
      <DCC_Optimize>true</DCC_Optimize>
      <DCC_IntegerOverflowCheck>false</DCC_IntegerOverflowCheck>
      <BRCC_Defines>MadExcept;FastMM;$(BRCC_Defines)</BRCC_Defines>
      <DCC_AssertionsAtRuntime>false</DCC_AssertionsAtRuntime>
      <DCC_LocalDebugSymbols>false</DCC_LocalDebugSymbols>
      <DCC_Define>RELEASE;$(DCC_Define)</DCC_Define>
      <DCC_SymbolReferenceInfo>0</DCC_SymbolReferenceInfo>
      <DCC_DebugInformation>false</DCC_DebugInformation>
    </PropertyGroup>
    <PropertyGroup Condition="'$(Cfg_2)'!=''">
      <DCC_DebugInfoInExe>true</DCC_DebugInfoInExe>
      <BRCC_Defines>FastMM</BRCC_Defines>
      <DCC_DebugDCUs>true</DCC_DebugDCUs>
      <DCC_MapFile>3</DCC_MapFile>
      <DCC_Define>DEBUG;FastMM_;madExcept;$(DCC_Define)</DCC_Define>
    </PropertyGroup>

    I even had to edit it today with Notepad to get rid of a DCC define that the Delphi UI doesn't seem to give access to. (It said "From Delphi Compiler" for the item I couldn't remove.) Does anyone know a good primer on the use of this feature in Delphi? Thank you!
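    One observation that may help, offered as a sketch rather than a verified fix: in MSBuild, a property whose value ends with a reference to itself, such as the $(DCC_Define) above, re-appends everything assigned to it earlier, which is how the parent configuration's symbols leak into Debug even with "inherit" turned off. Hand-editing the Debug PropertyGroup to drop both the unwanted symbol and the self-reference takes full control of the list:

    <PropertyGroup Condition="'$(Cfg_2)'!=''">
      <!-- No trailing $(DCC_Define): nothing is inherited, only these symbols apply. -->
      <DCC_Define>DEBUG;madExcept</DCC_Define>
    </PropertyGroup>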

    Read the article

< Previous Page | 74 75 76 77 78 79 80 81 82 83 84 85  | Next Page >