Search Results

Search found 7239 results on 290 pages for 'john michael zorko'.


  • Delayed Durability–I start to like it!

    - by Michael Zilberstein
    In my previous post on the subject I complained that, according to BOL, this feature is enabled for Hekaton only. Panagiotis Antonopoulos from Microsoft commented that BOL is actually wrong – delayed durability can be used with all sorts of transactions, not just In-Memory ones. There is a database-level setting for delayed durability: the default value is "Disabled", and the other two options are "Allowed" and "Forced". We'll switch between "Disabled" and "Forced" and measure the IO generated by a simple...(read more)
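    As a hedged illustration (mine, not from the post), the database-level switch is plain T-SQL, and a test harness can flip it between measured runs. The connection string is an assumption; DISABLED, ALLOWED and FORCED are the documented values in SQL Server 2014.

        using System.Data.SqlClient;

        static class DelayedDurability
        {
            // Sets the database-level delayed durability option. 'mode' must be one of
            // the fixed documented values (DISABLED, ALLOWED or FORCED), so string
            // concatenation is acceptable here for a test harness.
            public static void Set(string connectionString, string mode)
            {
                using (var conn = new SqlConnection(connectionString))
                {
                    conn.Open();
                    using (var cmd = new SqlCommand(
                        "ALTER DATABASE CURRENT SET DELAYED_DURABILITY = " + mode, conn))
                    {
                        cmd.ExecuteNonQuery();
                    }
                }
            }
        }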

    Read the article

  • DataContractSerializer: type is not serializable because it is not public?

    - by Michael B. McLaughlin
    I recently ran into an odd and annoying error when working with the DataContractSerializer class for a WP7 project. I thought I'd share it to save others who might encounter it the same annoyance I had. I had an instance of ObservableCollection<T> that I was trying to serialize (with T being a class I wrote for the project), and whenever it hit the code to save it, it would give me:

        The data contract type 'ProjectName.MyMagicItemsClass' is not serializable because it is not public. Making the type public will fix this error. Alternatively, you can make it internal, and use the InternalsVisibleToAttribute attribute on your assembly in order to enable serialization of internal members - see documentation for more details. Be aware that doing so has certain security implications.

    This, of course, was malarkey. I was trying to write an instance of MyAwesomeClass that looked like this:

        [DataContract]
        public class MyAwesomeClass
        {
            [DataMember]
            public ObservableCollection<MyMagicItemsClass> GreatItems { get; set; }

            [DataMember]
            public ObservableCollection<MyMagicItemsClass> SuperbItems { get; set; }

            public MyAwesomeClass()
            {
                GreatItems = new ObservableCollection<MyMagicItemsClass>();
                SuperbItems = new ObservableCollection<MyMagicItemsClass>();
            }
        }

    That's all well and fine. And MyMagicItemsClass was also public with a parameterless public constructor. It too had DataContractAttribute applied to it, and it had DataMemberAttribute applied to all the properties and fields I wanted to serialize. Everything should have been cool, but it wasn't, because I kept getting that "not public" exception. I could tell you about all the things I tried (generating a List<T> on the fly to make sure it wasn't ObservableCollection<T>, trying to serialize the collections directly, moving it all to a separate library project, etc.), but I want to keep this short.

    In the end, I remembered the "Debug->Exceptions…" VS menu option that brings up the list of exception-related circumstances under which the Visual Studio debugger will break. I checked the "Thrown" checkbox for "Common Language Runtime Exceptions", started the project under the debugger, and voilà: the true problem revealed itself. Some of my properties had fairly elaborate setters whose logic I wanted to ignore. So for some of them, I applied an IgnoreDataMember attribute to the properties and applied the DataMember attribute to the underlying fields instead. All of which, in line with good programming practices, were private.

    Well, it just so happens that WP7 apps run in a "partial trust" environment, and outside of "full trust"-land, DataContractSerializer refuses to serialize or deserialize non-public members. Of course that exception was swallowed up internally by .NET, so all I ever saw was that bizarre message about things that I knew for certain were public being "not public". I changed all the private fields I was serializing to public and everything worked just fine.

    In hindsight it all makes perfect sense. The serializer uses reflection to build up its graph of the object in order to write it out. In partial trust, you don't want people using reflection to get at non-public members of an object, since there are potential security problems with allowing that (you could break out of the sandbox pretty quickly by reflecting and calling the appropriate methods, or cause some havoc by reflecting and setting the appropriate fields, in certain circumstances). The fact that you cannot reflect over even your own assembly seems a bit heavy-handed, but then again I'm not a compiler writer or a framework designer, and I have no idea what sorts of difficulties would go into allowing that from a compilation standpoint or what sorts of security problems it could present (if any).

    So, lesson learned. If you get an incomprehensible exception message, turn on break-on-all-thrown-exceptions and try running it again (it might take a couple of tries, depending) and see what pops out. Chances are you'll find the buried exception that actually explains what was going on. And if you're getting a weird exception when trying to use DataContractSerializer complaining about public types not being public, chances are you're trying to serialize a private or protected field/property.
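    For readers hitting the same wall, here is a minimal sketch (my own, with illustrative names) of the pattern that triggers the misleading message under partial trust, and of the fix:

        using System.Runtime.Serialization;

        [DataContract]
        public class MyMagicItemsClass
        {
            // Serializing a PRIVATE field: fine in full trust, but under WP7's partial
            // trust DataContractSerializer cannot reflect over it, and the swallowed
            // failure surfaces as "type ... is not serializable because it is not public".
            [DataMember]
            private string _name;

            // The elaborate setter we wanted the serializer to bypass.
            [IgnoreDataMember]
            public string Name
            {
                get { return _name; }
                set { _name = value; /* ... elaborate setter logic ... */ }
            }
        }

        [DataContract]
        public class MyMagicItemsClassFixed
        {
            // The fix under partial trust: make the serialized backing field public.
            [DataMember]
            public string NameData;

            [IgnoreDataMember]
            public string Name
            {
                get { return NameData; }
                set { NameData = value; /* ... elaborate setter logic ... */ }
            }
        }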

    Read the article

  • Announcing Upcoming SOA and JMS Introductory Blog Posts

    - by John-Brown.Evans
    Beginning next week, SOA Proactive Support will begin posting a series of introductory blogs here on working with JMS in a SOA context. The posts will begin with how to set up JMS in WebLogic Server, lead you through reading and writing to a JMS queue from the WLS Java samples, continue with how to access it from a SOA composite and, finally, describe how to set up and access AQ JMS (Advanced Queuing JMS) from a SOA/BPEL process. The posts will be of a tutorial nature and include step-by-step examples. Your questions and feedback are encouraged. The following topics are planned:
    • How to Create a Simple JMS Queue in WebLogic Server 11g
    • Using the QueueSend.java Sample Program to Send a Message to a JMS Queue
    • Using the QueueReceive.java Sample Program to Read a Message from a JMS Queue
    • How to Create an 11g BPEL Process Which Writes a Message Based on an XML Schema to a JMS Queue
    • How to Create an 11g BPEL Process Which Reads a Message Based on an XML Schema from a JMS Queue
    • How to Set Up AQ JMS (Advanced Queuing JMS) for SOA Purposes
    • How to Write to an AQ JMS Queue from a BPEL Process
    • How to Read from an AQ JMS Queue from a BPEL Process

    Read the article

  • Batch file to Delete Old Virtual Directories.

    - by Michael Freidgeim
    On some servers we have many old virtual directories created for previous versions of our application. The IIS user interface allows you to delete only one at a time. Fortunately we can use IIS scripts as described in How to manage Web sites and Web virtual directories by using command-line scripts in IIS 6.0. I've created a batch file, DeleteOldVDirs.cmd:

        rem http://support.microsoft.com/kb/816568
        rem syntax: iisvdir /delete WebSite [/Virtual Path]Name [/s Computer [/u [Domain\]User /p Password]]
        REM list all directories and create batch of deletes
        iisvdir /query "Default Web Site"
        echo "Enter Ctrl-C if you want to stop deleting"
        Pause
        iisvdir /delete "Default Web Site/VDirName1"
        iisvdir /delete "Default Web Site/VDirName2"
        ...

    If the name of the web site or virtual directory contains spaces (e.g. "Default Web Site"), don't forget to use double quotes. Note that the batch doesn't delete the physical directories from the file system. You need to delete them using Windows Explorer, but it does support multiple selection!
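    If you have a lot of these, a scripted alternative (a sketch of my own, not from the post) is to enumerate and delete virtual directories through the IIS 6.0 metabase with System.DirectoryServices. The site ID "1" and the "VDirName" prefix are assumptions you would adjust:

        using System;
        using System.Collections.Generic;
        using System.DirectoryServices;

        class DeleteOldVDirs
        {
            static void Main()
            {
                // "W3SVC/1" is the metabase path of "Default Web Site" on a default install.
                using (var root = new DirectoryEntry("IIS://localhost/W3SVC/1/Root"))
                {
                    var toDelete = new List<DirectoryEntry>();
                    foreach (DirectoryEntry child in root.Children)
                    {
                        // Collect old virtual directories matching our naming pattern.
                        if (child.SchemaClassName == "IIsWebVirtualDir" &&
                            child.Name.StartsWith("VDirName", StringComparison.OrdinalIgnoreCase))
                        {
                            toDelete.Add(child);
                        }
                    }

                    // Delete outside the enumeration to avoid modifying the collection mid-loop.
                    foreach (var entry in toDelete)
                    {
                        Console.WriteLine("Deleting " + entry.Name);
                        root.Children.Remove(entry);
                    }
                }
            }
        }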

    Read the article

  • C#/.NET Little Wonders: Constraining Generics with Where Clause

    - by James Michael Hare
    Back when I was primarily a C++ developer, I loved C++ templates. The power of writing very reusable generic classes brought the art of programming to a brand new level. Unfortunately, when .NET 1.0 came about, it didn't have a template equivalent. With .NET 2.0, however, we finally got generics, which once again let us spread our wings and program more generically in the world of .NET. However, C# generics behave in some ways very differently from their C++ template cousins. There is a handy clause, however, that helps you navigate these waters to make your generics more powerful.

    The Problem – C# Assumes the Lowest Common Denominator

    In C++, you can create a template and do nearly anything syntactically possible on the template parameter, and C++ will not check whether the methods/fields/operations invoked are valid until you declare a realization of the type. Let me illustrate with a C++ example:

        // compiles fine, C++ makes no assumptions as to T
        template <typename T>
        class ReverseComparer
        {
        public:
            int Compare(const T& lhs, const T& rhs)
            {
                return rhs.CompareTo(lhs);
            }
        };

    Notice that we are invoking a method CompareTo() off of template type T. Because we don't know at this point what type T is, C++ makes no assumptions and there are no errors. C++ tends to take the path of not checking the template type usage until the method is actually invoked with a specific type, which differs from the behavior of C#:

        // this will NOT compile! C# assumes the lowest common denominator.
        public class ReverseComparer<T>
        {
            public int Compare(T lhs, T rhs)
            {
                return lhs.CompareTo(rhs);
            }
        }

    So why does C# give us a compiler error even when we don't yet know what type T is? This is because C# took a different path in how generics were designed. Unless you specify otherwise, for the purposes of the code inside the generic method, T is basically treated like an object (notice I didn't say T is an object). That means that any operations, fields, methods, properties, etc. that you attempt to use on type T must be available on the lowest common denominator type: object. Now, while object has the broadest applicability, it also has the fewest specifics. So how do we allow our generic type placeholder to do more than just what object can do?

    Solution: Constrain the Type With the Where Clause

    So how do we get around this in C#? The answer is to constrain the generic type placeholder with the where clause. Basically, the where clause allows you to specify additional constraints on what the actual type used to fill the generic type placeholder must support.

    You might think that narrowing the scope of a generic means a weaker generic. In reality, though it limits the number of types that can be used with the generic, it also gives the generic more power to deal with those types. In effect, these constraints say that if the type meets the given constraint, you can perform the activities that pertain to that constraint with the generic placeholders.

    Constraining the Generic Type to an Interface or Superclass

    One of the handiest where clause constraints is the ability to specify that the generic type must implement a certain interface or be inherited from a certain base class.
    For example, you can't call CompareTo() in our first C# generic without constraints, but if we constrain T to IComparable<T>, we can:

        public class ReverseComparer<T>
            where T : IComparable<T>
        {
            public int Compare(T lhs, T rhs)
            {
                return lhs.CompareTo(rhs);
            }
        }

    Now that we've constrained T to an implementation of IComparable<T>, our variables of generic type T may call any members specified in IComparable<T> as well. This means that the call to CompareTo() is now legal. Also, if you constrain your type, you will get compiler errors if you attempt to use a type that doesn't meet the constraint. This is much better than the syntax error you would get deep within the C++ template code itself when you used a type not supported by a C++ template.

    Constraining the Generic Type to Only Reference Types

    Sometimes you want to assign an instance of a generic type to null, but you can't do this without constraints, because you have no guarantee that the type used to realize the generic is not a value type, where null is meaningless. We can fix this by specifying the class constraint in the where clause. By declaring that a generic type must be a class, we are saying that it is a reference type, and this allows us to assign null to instances of that type:

        public static class ObjectExtensions
        {
            public static TOut Maybe<TIn, TOut>(this TIn value, Func<TIn, TOut> accessor)
                where TOut : class
                where TIn : class
            {
                return (value != null) ? accessor(value) : null;
            }
        }

    In the example above, we want to be able to access a property off of a reference and, if that reference is null, pass the null on down the line. To do this, both the input type and the output type must be reference types (yes, nullable value types could also be considered applicable at a logical level, but there's not a direct constraint for those).

    Constraining the Generic Type to Only Value Types

    Similarly to constraining a generic type to be a reference type, you can also constrain a generic type to be a value type. To do this you use the struct constraint, which specifies that the generic type must be a value type (primitive, struct, enum, etc.). Consider the following method, which will convert anything that is IConvertible (int, double, string, etc.) to the value type you specify, or null if the instance is null:

        public static T? ConvertToNullable<T>(IConvertible value)
            where T : struct
        {
            T? result = null;

            if (value != null)
            {
                result = (T)Convert.ChangeType(value, typeof(T));
            }

            return result;
        }

    Because T was constrained to be a value type, we can use T? (System.Nullable<T>), which we could not do if T were a reference type.

    Constraining the Generic Type to Require a Default Constructor

    You can also constrain a type to require the existence of a default constructor. Because by default C# doesn't know what constructors a generic type placeholder does or does not have available, it can't typically allow you to call one. That said, if you give it the new() constraint, the type used to realize the generic type must have a default (no-argument) constructor. Let's assume you have a generic adapter class that, given some mappings, will adapt an item from type TFrom to type TTo.
    Because it must create a new instance of type TTo in the process, we need to specify that TTo has a default constructor:

        // Given a set of Action<TFrom, TTo> mappings, will map TFrom to TTo
        public class Adapter<TFrom, TTo> : IEnumerable<Action<TFrom, TTo>>
            where TTo : class, new()
        {
            // The list of translations from TFrom to TTo
            public List<Action<TFrom, TTo>> Translations { get; private set; }

            // Construct with an empty translation set.
            public Adapter()
            {
                // did this instead of auto-properties to allow simple use of initializers
                Translations = new List<Action<TFrom, TTo>>();
            }

            // Add a translator to the collection, useful for initializer list
            public void Add(Action<TFrom, TTo> translation)
            {
                Translations.Add(translation);
            }

            // Add a translator that first checks a predicate to determine if the translation
            // should be performed, then translates if the predicate returns true
            public void Add(Predicate<TFrom> conditional, Action<TFrom, TTo> translation)
            {
                Translations.Add((from, to) =>
                {
                    if (conditional(from))
                    {
                        translation(from, to);
                    }
                });
            }

            // Translates an object forward from a TFrom object to a TTo object.
            public TTo Adapt(TFrom sourceObject)
            {
                var resultObject = new TTo();

                // Process each translation
                Translations.ForEach(t => t(sourceObject, resultObject));

                return resultObject;
            }

            // Returns an enumerator that iterates through the collection.
            public IEnumerator<Action<TFrom, TTo>> GetEnumerator()
            {
                return Translations.GetEnumerator();
            }

            // Returns an enumerator that iterates through a collection.
            IEnumerator IEnumerable.GetEnumerator()
            {
                return GetEnumerator();
            }
        }

    Notice, however, that you can't specify any other constructor; you can only specify that the type has a default (no-argument) constructor.

    Summary

    The where clause is an excellent tool that gives your .NET generics even more power to perform tasks beyond just the base "object level" behavior. There are a few things you cannot (currently) specify with constraints, though:
    • You cannot specify that the generic type must be an enum.
    • You cannot specify that the generic type must have a certain property or method without specifying a base class or interface – that is, you can't say that the generic must have a Start() method.
    • You cannot specify that the generic type allows arithmetic operations.
    • You cannot specify that the generic type requires a specific non-default constructor.

    In addition, you cannot overload a generic definition with different, opposing constraints. For example, you can't define an Adapter<T> where T : struct and an Adapter<T> where T : class. Hopefully, in the future we will get some of these things to make the where clause even more useful, but until then what we have is extremely valuable in making our generics more user-friendly and more powerful!

    Technorati Tags: C#,.NET,Little Wonders,BlackRabbitCoder,where,generics
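    As a quick postscript: since Adapter<TFrom, TTo> implements IEnumerable<Action<TFrom, TTo>> and exposes Add() overloads, collection initializers work nicely with it. A usage sketch of my own (Person and PersonDto are made-up types; PersonDto has the parameterless constructor the new() constraint requires):

        // Hypothetical types for illustration only.
        public class Person { public string Name; public int Age; }
        public class PersonDto { public string Name; public int Age; public bool IsAdult; }

        // Collection-initializer usage, enabled by IEnumerable + the Add() overloads:
        var adapter = new Adapter<Person, PersonDto>
        {
            (from, to) => to.Name = from.Name,
            (from, to) => to.Age = from.Age,
            // Conditional translation via the two-argument Add() overload:
            { p => p.Age >= 18, (from, to) => to.IsAdult = true }
        };

        PersonDto dto = adapter.Adapt(new Person { Name = "Ada", Age = 36 });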

    Read the article

  • C#/.NET Little Wonders: Static Char Methods

    - by James Michael Hare
    Once again, in this series of posts I look at the parts of the .NET Framework that may seem trivial, but can help improve your code by making it easier to write and maintain. The index of all my past little wonders posts can be found here.

    Often we deal with the bigger classes and types in the BCL and occasionally forget that there are some nice methods on the primitive types as well. Today we will discuss some of the handy static methods that exist on the char (the C# alias of System.Char) type.

    The Background

    I was examining a piece of code this week where I saw the following:

        // need to get the 5th (offset 4) character in upper case
        var type = symbol.Substring(4, 1).ToUpper();

        // test to see if the type is P
        if (type == "P")
        {
            // ... do something with P type...
        }

    Is there really any error in this code? No, but it still struck me wrong because it is allocating two very short-lived throw-away strings, just to store and manipulate a single char:
    • The call to Substring() generates a new string of length 1.
    • The call to ToUpper() generates a new upper-case version of the string from Step 1.

    In my mind this is similar to using ToUpper() to do a case-insensitive compare: it isn't wrong, it's just much heavier than it needs to be (for more info on case-insensitive compares, see #2 in 5 More Little Wonders).

    One of my favorite books is C++ Coding Standards: 101 Rules, Guidelines, and Best Practices by Sutter and Alexandrescu. True, it's about C++ standards, but there's also some great general programming advice in there, including two rules I love:

        8. Don't optimize prematurely.
        9. Don't pessimize prematurely.

    We all know what #8 means: don't optimize when there is no immediate need, especially at the expense of readability and maintainability. I firmly believe this and in the axiom: it's easier to make correct code fast than to make fast code correct. Optimizing code to the point that it becomes difficult to maintain often gains little and gives you little bang for the buck.

    But what about #9? Well, for that they state: "All other things being equal, notably code complexity and readability, certain efficient design patterns and coding idioms should just flow naturally from your fingertips and are no harder to write than the pessimized alternatives. This is not premature optimization; it is avoiding gratuitous pessimization." Or, if I may paraphrase: "where it doesn't increase the code complexity and readability, prefer the more efficient option."

    The example code above was one of those times where I feel we are violating a tacit C# coding idiom: avoid creating unnecessary temporary strings. The code creates temporary strings to hold one char, which is just unnecessary. I think the original coder thought he had to do this because ToUpper() is an instance method on string but not on char. What he didn't know, however, is that ToUpper() does exist on char, it's just a static method instead (though you could write an extension method to make it look instance-ish).

    This leads me (in a long-winded way) to my Little Wonders for the day…

    Static Methods of System.Char

    So let's look at some of these handy, and often overlooked, static methods on the char type:
    • IsDigit(), IsLetter(), IsLetterOrDigit(), IsPunctuation(), IsWhiteSpace() – methods that tell you whether a char (or position in a string) belongs to a category of characters.
    • IsLower(), IsUpper() – methods that check whether a char (or position in a string) is lower or upper case.
    • ToLower(), ToUpper() – methods that convert a single char to its lower or upper equivalent.

    For example, if you wanted to see if a string contained any lower case characters, you could do the following:

        if (symbol.Any(c => char.IsLower(c)))
        {
            // ...
        }

    Which, incidentally, we could shorten with a method group:

        if (symbol.Any(char.IsLower))
        {
            // ...
        }

    Or, if you wanted to verify that all of the characters in a string are digits:

        if (symbol.All(char.IsDigit))
        {
            // ...
        }

    Also, the IsXxx() methods have overloads that take either a char, or a string and an index; this means that these two calls are logically identical:

        // check given a character
        if (char.IsUpper(symbol[0])) { ... }

        // check given a string and index
        if (char.IsUpper(symbol, 0)) { ... }

    Obviously, if you just have a char, you'd use the first form. But if you have a string, you can use either form equally well.

    As a side note, care should be taken when examining all the available static methods on the System.Char type, as some seem redundant but actually have very different purposes. For example, there are IsDigit() and IsNumber() methods, which sound the same on the surface but give you different results: IsDigit() returns true only for a base-10 digit character ('0', '1', … '9'), while IsNumber() returns true for any numeric character, including the characters for ½, ¼, etc.

    Summary

    To come full circle back to our opening example, I would have preferred the code be written like this:

        // grab 5th char and take upper case version of it
        var type = char.ToUpper(symbol[4]);

        if (type == 'P')
        {
            // ... do something with P type...
        }

    Not only is it just as readable (if not more so), but it performs over 3x faster on my machine:

        1,000,000 iterations of char method took: 30 ms, 0.000050 ms/item.
        1,000,000 iterations of string method took: 101 ms, 0.000101 ms/item.

    It's not only faster because we don't allocate temporary strings, but as an added bonus there's less garbage to collect later as well. To me this qualifies as a case where we are using a common C# performance idiom (don't create unnecessary temporary strings) to make our code better.

    Technorati Tags: C#,CSharp,.NET,Little Wonders,char,string
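    As an aside, the "extension method to make it look instance-ish" idea mentioned above is a one-liner; here is a sketch with a name of my own choosing:

        public static class CharExtensions
        {
            // Lets you write symbol[4].ToUpperChar() instead of char.ToUpper(symbol[4]).
            public static char ToUpperChar(this char c)
            {
                return char.ToUpper(c);
            }
        }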

    Read the article

  • Opensource showcase for MVC in Java Swing

    - by Regular John
    I've already created small desktop CRUD applications using Java/Swing. In hindsight I'm not quite sure if the overall design of these applications is good. I've also done some reading on MVC and looked at different Swing tutorials. My problem is that I've got very theoretical knowledge of MVC and, on the other hand, most Swing resources don't implement the MVC pattern. Now I would like to get my hands dirty and see how MVC is implemented in Swing in a real-world application. Are there any open-source projects you could recommend? It would also be interesting to have more than one project, to see different approaches. The best fit would be software that uses a relational database in the backend, so I can see an overall design that I can compare to my former applications.

    Read the article

  • Versioning and Continuous Integration with project settings files

    - by Michael Stephenson
    I came across something which was a bit of a pain in the bottom the other week. Our scenario was that we had implemented a helper-style assembly which had some custom configuration implemented through the project settings. I'm sure most of you are familiar with this: you end up with a settings file which is viewable through the C# project file, and you can configure some basic settings. The settings are embedded in the assembly during compilation as part of a DefaultValue attribute. You have the ability to override the settings by adding information to your app.config, and if the app.config doesn't override the settings then the embedded default is used. All normal C# stuff so far…

    Where our pain started was when we implemented Continuous Integration and wanted to version all of this from our build. What I found was that the assembly was versioned fine, but the embedded default value was maintaining the non-CI build version number. I ended up getting this to work by using a build task to change the version numbers in the following files (a sketch of such a task follows below):
    • App.config
    • Settings.settings
    • Settings.Designer.cs

    I think I probably could have got away with just the Settings.Designer.cs, but I wanted to keep them all consistent in case we had to look at the code on the build server for some reason. I think the reason this was painful is that the Settings.Designer.cs is only updated through Visual Studio, which writes out the code to this file (including the DefaultValue attribute) when the project is saved, rather than as part of the compilation process. The compile just compiles the already existing C# file. As I said, we got it working, but it was a bit of a pain. If anyone has a better solution for this I'd love to hear it.
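    For anyone facing the same issue, here is a rough sketch (my own, not from the post) of what such a build task can do: rewrite any four-part version string in the three files before compilation. The file list and version value are assumptions the build would supply.

        using System.IO;
        using System.Text.RegularExpressions;

        static class VersionStamper
        {
            // Replaces anything shaped like a four-part version (e.g. 1.0.0.0) with the
            // CI build's version. Crude by design: a real task might target only the
            // DefaultValue attributes and version elements.
            public static void Stamp(string file, string buildVersion)
            {
                string text = File.ReadAllText(file);
                text = Regex.Replace(text, @"\b\d+\.\d+\.\d+\.\d+\b", buildVersion);
                File.WriteAllText(file, text);
            }
        }

        // Example pre-build step:
        // foreach (var f in new[] { "App.config", "Settings.settings", "Settings.Designer.cs" })
        //     VersionStamper.Stamp(f, "1.2.0.345");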

    Read the article

  • ArchBeat Link-o-Rama for November 8, 2012

    - by Bob Rhubart
    Webcast: Meeting Customer Expectations in the New Age of Retail. Keep your eye on this live webcast as Sanjeev Sharma (Principal Product Director, Oracle Exalogic), Kelly Goetsch (Senior Principal Product Manager, Oracle Commerce), and Dan Conway (Senior Product Manager, Oracle Retail) offer real-world examples of business value derived by running customer-facing applications on Oracle Engineered Systems. Live, Thursday Nov 8, 10am PT / 1pm ET.
    Solving Big Problems in Our 21st Century Information Society | Irving Wladawsky-Berger: "I believe that the kind of extensive collaboration between the private sector, academia and government represented by the Internet revolution will be the way we will generally tackle big problems in the 21st century. Just as with the Internet, governments have a major role to play as the catalyst for many of the big projects that the private sector will then take forward and exploit. The need for high bandwidth, robust national broadband infrastructures is but one such example." — Irving Wladawsky-Berger
    SOA Still Not Dead: Ratification of Governance Standard Highlights SOA's Continued Relevance. So just about the time I dig into Google Trends to learn that the conversation about governance peaked in 2004, along comes this InfoQ article by Richard Seroter. And of course you've already listened to the OTN ArchBeat Podcast about governance, right? Right?
    Implications of Java 6 End of Public Updates for Oracle E-Business Suite Users | Steven Chan: The short version is: "Nothing will change for EBS users after February 2013." According to Steven Chan, "EBS users will continue to receive critical bug fixes and security fixes as well as general maintenance for Java SE 6." You'll find additional information on Steven's blog.
    ADF Mobile Custom Javascript – iFrame Injection | John Brunswick: The ADF Mobile Framework provides a range of out-of-the-box components to add within your AMX pages, according to John Brunswick. But what happens when "an out of the box component does not directly fulfill your development need? What options are available to extend your application interface?" John has an answer.
    How Data and BPM are married to get the right information to the right people at the right time | Leon Smiers: "Business Process Management…supports a large group of stakeholders within an organization, all with different needs," says Oracle ACE Leon Smiers. "End-to-end processes typically run across departments, stakeholders and applications, and can often have a long life-span. So how do organizations provide all stakeholders with the information they need?" Leon provides answers in this post.
    Thought for the Day: "(When) asking skilled architects…what they do when confronted with highly complex problems…(they) would most likely answer, 'Just use Common Sense.' (A) better expression than 'common sense' is 'contextual sense' — a knowledge of what is reasonable within a given context. Practicing architects, through education, experience and examples, accumulate a considerable body of contextual sense by the time they're entrusted with solving a system-level problem…" — Eberhardt Rechtin (January 16, 1926 – April 14, 2006). Source: SoftwareQuotes.com

    Read the article

  • rails fake data, considering switch from faker to forgery, any advantages or pitfalls?

    - by Michael Durrant
    With Ruby on Rails I've usually used Forgery for generating dummy data for testing. I've noticed recently that several clients and tutorials are using Faker. They both seem fairly similar in use and popularity:
    • Faker: 128 forks, 418 watchers.
    • Forgery: 59 forks, 399 watchers.
    They both seem similar in how current they are:
    • Faker: most updates are from 6 and 9 months ago.
    • Forgery: most updates are from 4 and 9 months ago.
    The one distinguishing factor I've found so far is that Forgery seems to have better instructions. Are there any particular benefits or disadvantages to using one over the other? Have you ever needed to switch from one to the other for a particular reason?

    Read the article

  • Issues with ILMerge, Lambda Expressions and VS2010 merging?

    - by John Blumenauer
    A Little Background

    For quite some time now, it's been possible to merge multiple .NET assemblies into a single assembly using ILMerge in Visual Studio 2008. This is especially helpful when writing wrapper assemblies for 3rd-party libraries where it's desirable to minimize the number of assemblies for distribution. During the merge process, ILMerge will take a set of assemblies and merge them into a single assembly. The resulting assembly can be either an executable or a DLL and is identified as the primary assembly.

    Issue

    During a recent project, I discovered that using ILMerge to merge assemblies containing lambda expressions in Visual Studio 2010 results in invalid primary assemblies. The code below is not where the initial issue was identified; I will merely use it to illustrate the problem at hand. In order to describe the issue, I created a console application and a class library for calculating a few math functions utilizing lambda expressions. The code is available for download at the bottom of this blog entry.

    MathLib.cs

        using System;

        namespace MathLib
        {
            public static class MathHelpers
            {
                public static Func<double, double, double> Hypotenuse =
                    (x, y) => Math.Sqrt(x * x + y * y);

                static readonly Func<int, int, bool> divisibleBy = (int a, int b) => a % b == 0;

                public static bool IsPrimeNumber(int x)
                {
                    for (int i = 2; i <= x / 2; i++)
                        if (divisibleBy(x, i))
                            return false;
                    return true;
                }
            }
        }

    Program.cs

        using System;
        using MathLib;

        namespace ILMergeLambdasConsole
        {
            class Program
            {
                static void Main(string[] args)
                {
                    int n = 19;
                    if (MathHelpers.IsPrimeNumber(n))
                    {
                        Console.WriteLine(n + " is prime");
                    }
                    else
                    {
                        Console.WriteLine(n + " is not prime");
                    }
                    Console.ReadLine();
                }
            }
        }

    Not surprisingly, the preceding code compiles, builds and executes without error prior to running the ILMerge tool.

    ILMerge Setup

    In order to utilize ILMerge, the following changes were made to the project:

    1. The MathLib.dll assembly was built in release configuration and copied to the MathLib folder. [The original post includes a screenshot of the folder hierarchy used for this example.]
    2. The ILMergeLambdasConsole project file was edited to add the ILMerge post-build configuration. The following lines were added near the bottom of the project file:

        <Target Name="AfterBuild" Condition="'$(Configuration)' == 'Release'">
          <Exec Command="&quot;..\..\lib\ILMerge\Ilmerge.exe&quot; /ndebug /out:@(MainAssembly) &quot;@(IntermediateAssembly)&quot; @(ReferenceCopyLocalPaths->'&quot;%(FullPath)&quot;', ' ')" />
          <Delete Files="@(ReferenceCopyLocalPaths->'$(OutDir)%(DestinationSubDirectory)%(Filename)%(Extension)')" />
        </Target>

    3. The ILMergeLambdasConsole project was modified to reference the MathLib.dll located in the MathLib folder above.
    4. ILMerge and ILMerge.exe.config were copied into the ILMerge folder shown above. The contents of ILMerge.exe.config are:

        <?xml version="1.0" encoding="utf-8" ?>
        <configuration>
          <startup useLegacyV2RuntimeActivationPolicy="true">
            <requiredRuntime safemode="true" imageVersion="v4.0.30319" version="v4.0.30319"/>
          </startup>
        </configuration>

    Post-ILMerge

    After compiling and building, the MathLib.dll assembly will be merged into the ILMergeLambdasConsole executable. Unfortunately, executing ILMergeLambdasConsole.exe now results in a crash. The ILMerge documentation recommends using PEVerify.exe to validate assemblies after merging.
    Executing PEVerify.exe against the ILMergeLambdasConsole.exe assembly results in a verification error. [The original post shows the PEVerify output as a screenshot here.] Further investigation using Reflector reveals that the divisibleBy method in the MathHelpers class looks a bit questionable after the merge, whereas prior to using ILMerge the same divisibleBy method appeared normal in Reflector. [Reflector screenshots in the original post.]

    It's pretty obvious something has gone awry during the merge process. However, this is only occurring when building within the Visual Studio 2010 environment. The same code and configuration built within Visual Studio 2008 executes fine. I'm still investigating the issue. If anyone has already experienced this situation and solved it, I would love to hear from you. However, as of right now, it looks like something has gone terribly wrong when executing ILMerge against assemblies containing lambdas in Visual Studio 2010.

    Solution Files

    ILMergeLambdaExpression

    Read the article

  • 10 Innovations in PeopleSoft 9.2 - #2 Lower TCO With The PeopleSoft Update Manager

    - by John Webb
    With the new PeopleSoft Update Manager in PeopleSoft 9.2, the way you manage updates to your PeopleSoft systems puts you in control of all changes, on your schedule. You can selectively apply patches with reduced time, effort, and cost. Bundles and Maintenance Packs are no longer used. Instead, a tailored custom package is automatically generated based on the parameters you select from the latest PeopleSoft source image. You have access to all updates from Oracle on a cumulative basis and can select and search for specific updates, such as new features, legal and regulatory changes, or a patch related to a specific issue, process or object. Any prerequisites are automatically identified. The process of generating a change package is enabled through a new wizard with easy-to-follow steps and options. As changes are introduced to your test environment, the PeopleSoft Test Framework provides a closed-loop process to run regression test scripts against your changes. For a quick overview of the PeopleSoft Update Manager, check out the Video Feature Overview here: PeopleSoft Update Manager Video Feature Overview

    Read the article

  • Develop website locally and push updates on Remote Server using Git

    - by John
    Together with a friend we are looking to develop a website (using Symfony2). We are on shared hosting with SSH access. Below is the environment we'd like to set up:
    • Use Git as version control (we are new to Git)
    • Share the tasks and develop on our local machines
    • Push the updates onto the remote server
    Here are our initial thoughts on how to do it (assuming Git is already running both locally and remotely):
    • Install Symfony on the remote server (basic setup)
    • Get a clone (using Git) of the project locally
    • Develop the project locally and push updates (using Git) to the remote server
    Does this approach make sense? If not, any recommendations? Thanks

    Read the article

  • Allow user to SUDO a script without password.

    - by John Isaacks
    I have a PHP script with this:

        <?php
        #echo exec('whoami');
        $output = shell_exec('bash /usr/local/svn/bash_repo/make-live');
        echo "$output";
        ?>

    The make-live script contains this:

        #!/bin/bash
        cd /var/www-cake
        sudo svn checkout file:///usr/local/svn/bash_repo/repo/
        echo "Head revision has been pushed to live server"

    So the PHP user, www-data, needs to have NOPASSWD for that script. I am told I need to add:

        www-data ALL=NOPASSWD: /usr/local/svn/bash_repo/make-live

    to sudoers to allow this. First I run sudo visudo, but I have no experience with vi, so I try to open it in gedit with export EDITOR=gedit && sudo -E visudo, which just opens an empty sudoers.tmp file. I add the line and save it, but it doesn't save. So I just try sudo visudo and add the line right beneath this part:

        # User privilege specification
        root    ALL=(ALL) ALL
        www-data ALL=NOPASSWD: /usr/local/svn/bash_repo/make-live

    I closed out sudoers and reopened it to verify that it has saved. I even restarted Apache. I run the PHP file and it still doesn't work. What am I missing?

    Read the article

  • User Productivity Kit - Powerful Packages (Part 1)

    - by [email protected]
    User Productivity Kit provides the ability to create a variety of content types, including robust topics on system processes and web pages with formatted text and graphics. There are times when you want to enhance content with media types not natively created by User Productivity Kit: media types such as video, custom animations, forms, and more. One method of doing this is to maintain these media files on a web server, separate from the User Productivity Kit player content, and link to the files using absolute URLs such as http://myserver/overview.html. While this will get you going, you won't benefit from the content management capabilities of the UPK Developer. Features such as check-in/check-out, history, document properties, folder permissions and more are not available to this external content. Further, if you ever need to move that content to a server with a different name or domain, you'd need to update all your links.

    UPK version 3.1 introduced a new document type: the package. A package is a group of folders and files that you manage in the Developer library as a single document. These package documents work in the same manner as any other document in the library, and you can use all of the collaborative content development features you see with other document types. Packages can be used for anything from single Word documents, PDF files, and graphics to more intricate sets of inter-related files commonly seen with HTML files and their graphics, style sheets, and JavaScript files. The structure of the files and folders within a package will always be preserved, which means that any relative links between files in the package will work. For example, an HTML file containing an image tag with a relative link to a graphic elsewhere in the same package will continue to function properly, both when viewed in the Developer and when published to outputs such as the UPK Player.

    Once you start to use packages, you'll soon discover that there is a lot of existing content that can be re-purposed by placing it into UPK packages. Packages are easily created by selecting File...New...Package. Files can be added in a number of ways, including the "Add Files" button, copy & paste from Windows Explorer, and drag & drop. To use one of the files in the package, just create a link to the file in the package you want to target. This is supported throughout the Developer in places such as section & topic concepts, frame links, and hyperlinks in web pages.

    A little more challenging is determining how to structure packages in your library. As I mentioned earlier, a package can contain anything from a single file to dozens of files and folders. So what should you do? You could create a package for each file. You could create one package for all your files. But which one is right? Well, there's not a right and wrong answer to this question. There are advantages and disadvantages to each. The right decision will be influenced by the package files themselves, the structure of the content in the library, the size and working style of the development team, how content is shared between different outlines, and more.

    The first consideration can be assessed the quickest. If the content to be placed in the package is composed of multiple files and those files reference each other, they should be in the same package. There are loads of examples of this type of content: HTML files with graphics and style sheets, HTML files with embedded Flash movies, and Word documents saved as HTML are all examples where the content is composed of multiple files that reference each other in some way. Content like this should always be placed in a single package so that these relative links between the files are preserved and play properly in the UPK Player. In upcoming posts, I'll explain additional considerations.

    Read the article

  • DataCash @ Hackathon

    - by John Breakwell
    Originally posted on: http://geekswithblogs.net/Plumbersmate/archive/2013/06/28/datacash--hackathon.aspx
    Back in May, DataCash was a sponsor for one of the biggest networking events for payments developers – Trans-hacktion. The 3-day Hackathon, organised by Birdback, was focused on the latest innovations in payments and financial technology and held at the London Google Campus. The event included demos from DataCash and other payments companies, followed by hacking sessions. Teams had to hack a product that used partner APIs and present the hack in 3 minutes on the final day. The prizes up for grabs were:
    • King Hacker: 3D Printer & Champagne
    • 1st: Pebble Watch & 1 year of GitHub Silver plan
    • 2nd: AIAIAI Headphones & 1 year of GitHub Bronze plan
    • 3rd: Raspberry Pi & 6 months of GitHub Bronze plan
    • API: Up Bracelet, Nintendo NES + Super Mario Game AND Berg Cloud Little Printer & 100$ AWS credit & more...

    Read the article

  • PostSharp deployment to build machine – use Setup installation, not NuGet package

    - by Michael Freidgeim
    PostSharp has well-documented several methods of installation. I chose installing NuGet packages because, according to Deploying PostSharp into a Source Repository, NuGet is the easiest way to add PostSharp to a project without installing the product on every machine. However, it didn't work well for me:
    • I added the PostSharp NuGet package to one project in the solution. When I wanted to use PostSharp in another project, the Visual Studio tab showed that PostSharp was not enabled for that project.
    • I added the NuGet package to the new project, which installed a new version of the package in a new Packages subfolder.
    • When I wanted to reference PostSharp from a third project, I ended up with yet another version of PostSharp installed. Additionally, multiple versions of Diagnostics were created. This definitely causes confusion and errors.
    We experienced more problems on the build server. According to Using PostSharp on a Build Server, "If you chose to deploy PostSharp in the source repository, it does not need to be installed specifically on the build server." That didn't work on our build server: I kept getting the errors "The "AddIns" parameter is not supported by the "PostSharp21" task." and "The "DisableSystemBindingPolicies" parameter is not supported by the "PostSharp21" task."
    From my experience, the only way to have the latest version of PostSharp working on the build server is to install it using Setup, as described in Deploying PostSharp with the Setup Program.
    Gael acknowledged the issues with possible version conflicts; see http://support.sharpcrafters.com/discussions/problems/388-the-postsharp21-task-failed-unexpectedly

    Read the article

  • Azure Service Bus - Authorization failure

    - by Michael Stephenson
    I fell into this trap earlier in the week with a mistake I made when configuring a service to send and listen on the Azure Service Bus, and I thought it would be worth a little note for future reference as I didn't find anything online about it. After configuring everything, when I ran my code sample I was getting the below error:

        WebHost failed to process a request.
        Sender Information: System.ServiceModel.ServiceHostingEnvironment+HostingManager/28316044
        Exception: System.ServiceModel.ServiceActivationException: The service '/-------/BrokeredMessageService.svc' cannot be activated due to an exception during compilation. The exception message is: Generic: There was an authorization failure. Make sure you have specified the correct SharedSecret, SimpleWebToken or Saml transport client credentials.. ---> Microsoft.ServiceBus.AuthorizationFailedException: Generic: There was an authorization failure. Make sure you have specified the correct SharedSecret, SimpleWebToken or Saml transport client credentials.
           at Microsoft.ServiceBus.RelayedOnewayTcpClient.ConnectRequestReplyContext.Send(Message message, TimeSpan timeout, IDuplexChannel& channel)
           at Microsoft.ServiceBus.RelayedOnewayTcpListener.RelayedOnewayTcpListenerClient.Connect(TimeSpan timeout)
           at Microsoft.ServiceBus.RelayedOnewayTcpClient.EnsureConnected(TimeSpan timeout)
           at Microsoft.ServiceBus.Channels.CommunicationObject.Open(TimeSpan timeout)
           at Microsoft.ServiceBus.Channels.RefcountedCommunicationObject.Open(TimeSpan timeout)
           at Microsoft.ServiceBus.RelayedOnewayChannelListener.OnOpen(TimeSpan timeout)
           at Microsoft.ServiceBus.Channels.CommunicationObject.Open(TimeSpan timeout)
           at System.ServiceModel.Dispatcher.ChannelDispatcher.OnOpen(TimeSpan timeout)
           at System.ServiceModel.Channels.CommunicationObject.Open(TimeSpan timeout)
           at System.ServiceModel.ServiceHostBase.OnOpen(TimeSpan timeout)
           at System.ServiceModel.Channels.CommunicationObject.Open(TimeSpan timeout)
           at Microsoft.ServiceBus.SocketConnectionTransportManager.OnOpen(TimeSpan timeout)
           at Microsoft.ServiceBus.Channels.TransportManager.Open(TimeSpan timeout, TransportChannelListener channelListener)
           at Microsoft.ServiceBus.Channels.TransportManagerContainer.Open(TimeSpan timeout, SelectTransportManagersCallback selectTransportManagerCallback)
           at Microsoft.ServiceBus.SocketConnectionChannelListener`2.OnOpen(TimeSpan timeout)
           at Microsoft.ServiceBus.Channels.CommunicationObject.Open(TimeSpan timeout)
           at Microsoft.ServiceBus.Channels.CommunicationObject.Open(TimeSpan timeout)
           at System.ServiceModel.Dispatcher.ChannelDispatcher.OnOpen(TimeSpan timeout)
           at System.ServiceModel.Channels.CommunicationObject.Open(TimeSpan timeout)
           at System.ServiceModel.ServiceHostBase.OnOpen(TimeSpan timeout)
           at System.ServiceModel.Channels.CommunicationObject.Open(TimeSpan timeout)
           at System.ServiceModel.ServiceHostingEnvironment.HostingManager.ActivateService(String normalizedVirtualPath)
           at System.ServiceModel.ServiceHostingEnvironment.HostingManager.EnsureServiceAvailable(String normalizedVirtualPath)
           --- End of inner exception stack trace ---
           at System.ServiceModel.ServiceHostingEnvironment.HostingManager.EnsureServiceAvailable(String normalizedVirtualPath)
           at System.ServiceModel.ServiceHostingEnvironment.EnsureServiceAvailableFast(String relativeVirtualPath)
        Process Name: w3wp
        Process ID: 8056

    As recommended by the error message, I checked everything about the application configuration and also the keys, and eventually I found the problem. When I set the permissions in the ACS rule group, I had copied and pasted the claim name for net.windows.servicebus.action from the Azure portal and hadn't spotted the <space> character on the end of it, the kind you sometimes pick up when copying text in the browser. This meant that the listen and send permissions were not set up correctly, which is why (as you would expect) my two applications could not connect to the service bus.

    So, lesson learnt here: if you do copy and paste into the ACS rules, just be careful you don't leave a space on the end of anything, otherwise it will be difficult to spot that it's configured incorrectly.

    Read the article

  • Compiz Fusion And Unity Tool

    - by David Michael Rice
    I tried to install Compiz Fusion on Ubuntu Studio 13 and all I got was this:

        Some packages could not be installed. This may mean that you have requested an impossible situation or if you are using the unstable distribution that some required packages have not yet been created or been moved out of Incoming. The following information may help to resolve the situation:
        The following packages have unmet dependencies:
         compiz-fusion-plugins-extra:i386 : Depends: compiz-core:i386 but it is not going to be installed
                                            Depends: compiz-fusion-plugins-main:i386 but it is not going to be installed
                                            Recommends: compizconfig-settings-manager:i386 but it is not going to be installed
         compiz-gnome : Depends: libcompizconfig0 but it is not going to be installed
         compizconfig-settings-manager : Depends: python-compizconfig (>= 1:0.9.9~daily13.04.18.1~13.04-0ubuntu1) but it is not going to be installed
         libcompizconfig-backend-gconf:i386 : Depends: libcompizconfig0:i386 but it is not going to be installed
        E: Unable to correct problems, you have held broken packages.

    Also, I installed Unity Tweak Tool through the Software Center, and now I cannot find it at all, not even in the app search. I feel like every time I try to install something I have to jump through flaming hoops...

    Read the article

  • IT Optimization Plan Pays Off For UK Retailer

    - by [email protected]
    I caught this article in ComputerworldUK yesterday. The headline talks about how UK-based supermarket chain Morrisons is increasing its IT spend... OK, sounds good. Even nicer that Oracle is a big part of that. But what caught my eye were three things:
    1) Morrisons truly has a long-term strategy for IT. In this case, modernizing and optimizing how they use IT for business advantage.
    2) Even in a tough economic climate, Morrisons views IT investments as contributing to and improving the bottom line. Specifically, "The investment in IT contributed to a 21 percent increase in Morrison's underlying profit."
    3) The phased, three-year "Optimization Plan" took a holistic approach to their business, from CRM and Supply Chain systems to the underlying application infrastructure. On the infrastructure front, adopting a more flexible Service-Oriented Architecture enabled them to be more agile and adapt their business, and Identity Management helped with sometimes mundane (but costly) issues like lost passwords and being able to document who has access to what.
    Things don't always turn out so rosy. And I know it was a long and difficult process... but it's nice to see a happy ending every once in a while.

    Read the article

  • Three Global Telecoms Soar With Siebel

    - by michael.seback
    Deutsche Telekom Group Selects Oracle's Siebel CRM to Underpin Next-Generation CRM Strategy: The Deutsche Telekom Group (DTAG), one of the world's leading telecommunications companies and a customer of Oracle since 2001, has invested in Oracle's Siebel CRM as the standard platform for its Next-Generation CRM strategy, a move to lower the cost of managing its 120 million customers across its European businesses. Oracle's Siebel CRM is planned to be deployed in Germany and all of the company's European businesses within five years. "...Our Next-Generation strategy is a significant move to lower our operating costs and enhance customer service for all our European customers. Not only is Oracle underpinning this strategy, but it is also shaping the way our company operates and sells to customers. We look forward to working with Oracle over the coming years as the technology is extended across Europe," said Dr. Steffen Roehn, CIO Deutsche Telekom AG. "The telecommunications industry is currently undergoing some major changes. As a result, companies like Deutsche Telekom are needing to be more intelligent about the way they use technology, particularly when it comes to customer service. Deutsche Telekom is a great example of how organisations can use CRM to not just improve services, but also drive more commercial opportunities through the ability to offer highly tailored offers while the customer is engaged online or on the phone," said Steve Fearon, vice president CRM, EMEA. Read more.
    Telecom Argentina S.A. Accelerates Time-to-Market for New Communications Products and Services: Telecom Argentina S.A. offers basic telephone, urban landline, and national and international long-distance services. "With Oracle's Siebel CRM and Oracle Communications Billing and Revenue Management, we started a technological transformation that allows us to satisfy our critical business needs, such as improving customer service and quickly launching new phone and internet products and services." - Saba Gooley, Chief Information Officer, Wire Line and Internet Services, Telecom Argentina S.A. Read more.
    Türk Telekom Develops Benefits-Driven CRM Roadmap: Türk Telekom Group provides integrated telecommunication services, from public switched telephone network (PSTN) and global system for mobile communications (GSM) technology to broadband internet. "Oracle Insight provided us with a structured deployment approach that makes sense for our business. It quantified the benefits of the CRM solution, allowing us to engage with the relevant business owners; essential for a successful transformation program." - Paul Taylor, VP Commercial Transformation, Türk Telekom. Read more.

    Read the article

  • How to account for speed of the vehicle when shooting shells from it?

    - by John Murdoch
    I'm developing a simple 3D ship game using libgdx and bullet. When a user taps the mouse I create a new shell object and send it in the direction of the mouse click. However, if the user has tapped the mouse in the direction the ship is currently moving, the ship catches up to the shells very quickly and can sometimes even get hit by them, simply because the speeds of the shells and the ship are quite comparable. I think I need to account for ship speed when generating the initial impulse for the shells, and I tried doing that (see "new line added"), but I cannot figure out if what I'm doing is the proper way and, if yes, how to calculate the correct coefficient.

        public void createShell(Vector3 origin, Vector3 direction, Vector3 platformVelocity, float velocity) {
            long shellId = System.currentTimeMillis(); // hack
            ShellState state = getState().createShellState(shellId, origin.x, origin.y, origin.z);
            ShellEntity entity = EntityFactory.getInstance().createShellEntity(shellId, state);
            add(entity);
            // new line added, to compensate for the moving platform, no idea how to calculate proper coefficient
            entity.getBody().applyCentralImpulse(platformVelocity.mul(velocity * 0.02f));
            entity.getBody().applyCentralImpulse(direction.nor().mul(velocity));
        }

        private final Vector3 v3 = new Vector3();

        public void shootGun(Vector3 direction) {
            Vector3 shipVelocity = world.getShipEntities().get(id).getBody().getLinearVelocity();
            world.getState().getShipStates().get(id).transform.getTranslation(v3); // current location of our ship
            v3.add(direction.nor().mul(10.0f)); // hack; this is to avoid the shell immediately impacting the ship it got shot out from
            world.createShell(v3, direction, shipVelocity, 500);
        }
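    The usual answer here is plain velocity addition rather than a tuned coefficient: give the shell the ship's full velocity plus its own muzzle velocity along the firing direction. A small C# sketch of the math (the post uses libgdx/Java; System.Numerics stands in for its Vector3):

        using System.Numerics;

        static class Ballistics
        {
            // Galilean velocity addition: the shell's world-space velocity is the
            // platform's velocity plus the muzzle velocity relative to the platform.
            public static Vector3 InitialShellVelocity(
                Vector3 shipVelocity, Vector3 direction, float muzzleSpeed)
            {
                Vector3 relative = Vector3.Normalize(direction) * muzzleSpeed;
                return shipVelocity + relative; // no per-game coefficient to tune
            }
        }

    With bullet, you would then set this as the body's initial linear velocity (or apply the equivalent impulse, i.e. mass times that velocity) instead of applying a scaled platformVelocity impulse.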

    Read the article

  • Silverlight TV 23: MVP Q&A with WWW (Wildermuth, Wahlin and Ward)

    John interviews a panel of 3 well-known Silverlight leaders including Shawn Wildermuth, Dan Wahlin, and Ward Bell at the Silverlight 4 launch event. The guest panel answers questions sent in from Twitter about the features in Silverlight 4, thoughts on MVVM, and the panel members' experiences developing Silverlight. This is a great chance to hear from some of the leading Silverlight minds. These guys are all experts at building business applications with Silverlight. Relevant links: John's Blog...

    Read the article

  • How to Configure Name Servers using Webmin in Unmanaged VPS on Centos

    - by John
    I want to configure my site's name servers and all related stuff. I'm not able to find any good documentation with steps to do it straightforwardly, without having to understand all the nitty-gritty details. I wish I could afford a managed VPS. I feel that I'm the odd one out looking for this documentation. I've followed the docs at these places: http://www.webtop.com.au/blog/how-to-setup-dns-using-webmin-2009052848 , http://www.beer.org.uk/bsacdns and https://www.virtacoresupport.com/index.php?_m=knowledgebase&_a=viewarticle&kbarticleid=134

    Read the article

  • New Source Database Added for EBS 12 + 11gR2 Transportable Tablespaces

    - by John Abraham
    The Transportable Tablespaces (TTS) process was originally certified for the migration of E-Business Suite R12 databases going from a source database of 11gR1 or 11gR2 to a target of 11gR2. This certification has now been expanded to include a source database of 10gR2 (10.2.0.5). This will potentially save time for existing 10gR2 customers, as they can skip a crucial upgrade step prior to performing the platform migration.

    The migration process requires an updated Controlled patch delivered by the Oracle E-Business Suite Platform Engineering team, i.e. it requires a password obtainable from Oracle Support. We released the patch in this manner to gauge uptake, and to help identify and monitor any customer issues due to the nature of this technology. This patch has been updated to now support 10gR2 as a source database.

    Does it meet your requirements?

    Note that for migration across platforms of the same "endian" format, users are advised to use the Transportable Database (TDB) migration process instead for large databases. The endian format of target platforms can be verified by querying the view V$DB_TRANSPORTABLE_PLATFORM using SQL*Plus (connected as sysdba) on the source platform:

        SQL> select platform_name from v$db_transportable_platform;

    If the intended target platform does not appear in the output, it means that it is of a different endian format from the source. Consequently, database migration will need to be performed via Transportable Tablespaces (for large databases) or export/import.

    The use of Transportable Tablespaces can greatly speed up the migration of the data portion of the database. However, it does not affect metadata, which must still be migrated using export/import. We recommend that users initially perform a test migration on their database, using export/import with the 'metrics=y' parameter. This will help identify the relative amounts of data and metadata, and provide a basis for assessing likely gains in timing. In general, the larger the amount of data (compared to metadata), the greater the reduction in downtime that can be expected from using TTS as a migration process. For smaller databases, or for those that have relatively little data compared to metadata, TTS will not be as beneficial for cross-endian migration, and the use of export/import (Data Pump) for the whole database is recommended.

    Where can I find more information?
    • Using Transportable Tablespaces to Migrate Oracle E-Business Suite Release 12 Using Oracle Database 11g Release 2 Enterprise Edition (My Oracle Support Document 1311487.1)
    • Oracle Database Administrator's Guide 11g Release 2 (11.2)

    Related Articles
    • Database Migration using 11gR2 Transportable Tablespaces Now Certified for EBS 12
    • New Source Databases Added for Transportable Tablespaces + EBS 11i
    • 10gR2 Transportable Tablespaces Certified for EBS 11i
    • Migrating E-Business Suite Release 11i Databases Between Platforms
    • Migrating E-Business Suite Release 12 Databases Between Platforms

    Read the article
