Search Results

Search found 4116 results on 165 pages for 'baron throw'.


  • About extension methods

    - by Srinivas Reddy Thatiparthy
    Do I always need to throw ArgumentNullException (the extension methods in Enumerable throw ArgumentNullException) when an extension method is called on a null reference? I would like some clarification on this. If the answer is both yes and no, please present both cases.
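
    Since extension methods compile to plain static method calls, invoking one on a null reference does not itself raise NullReferenceException; any null check has to be written by hand. A minimal sketch of the usual guard (the WordCount method is illustrative, not from the question):

        using System;

        public static class StringExtensions
        {
            // Mirrors what the Enumerable extension methods do: fail fast with
            // the parameter name instead of surfacing a confusing error later.
            public static int WordCount(this string text)
            {
                if (text == null) throw new ArgumentNullException("text");
                return text.Split(new[] { ' ' }, StringSplitOptions.RemoveEmptyEntries).Length;
            }
        }

        // string s = null;
        // s.WordCount(); // compiles to StringExtensions.WordCount(s) and throws
        //                // ArgumentNullException rather than NullReferenceException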

    Read the article

  • try .. catch blocks - when to use

    - by Konrad
    I have always been of the belief that if a method can throw an exception then it is reckless not to protect this call with a meaningful try block. I just posted 'You should ALWAYS wrap calls that can throw in try, catch blocks.' to this question and was told that it was 'remarkably bad advice' - I'd like to understand why. Thanks!
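
    To make the trade-off concrete, here is a hedged sketch (the method and fallback are hypothetical): catching a specific exception you can recover from is defensible, while a blanket catch would also swallow genuine bugs.

        using System;
        using System.IO;

        static string LoadConfig(string path, string fallback)
        {
            try
            {
                return File.ReadAllText(path);
            }
            catch (FileNotFoundException)
            {
                // A specific, anticipated failure we can meaningfully handle.
                return fallback;
            }
            // By contrast, wrapping every call in "try { ... } catch (Exception) { }"
            // would also hide NullReferenceException and friends -- real bugs --
            // and let the program limp on in a corrupt state.
        }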

    Read the article

  • Accessing property of object vs variable in javascript

    - by Samuel
    Why, when I try to access a variable that doesn't exist, does JavaScript throw an exception, but when I try to access a property that doesn't exist on an object, JavaScript returns undefined? For example, this case logs undefined: function Foo(){ console.log(this.bar); } Foo(); But in this other example, JavaScript throws an exception: function Foo(){ console.log(bar); } Foo(); ReferenceError: bar is not defined

    Read the article

  • Can't insert a number into a C++ custom streambuf/ostream

    - by 0xbe5077ed
    I have written a custom std::basic_streambuf and std::basic_ostream because I want an output stream that I can get a JNI string from, in a manner similar to how you can call std::ostringstream::str(). These classes are quite simple. namespace myns { class jni_utf16_streambuf : public std::basic_streambuf<char16_t> { JNIEnv * d_env; std::vector<char16_t> d_buf; virtual int_type overflow(int_type); public: jni_utf16_streambuf(JNIEnv *); jstring jstr() const; }; typedef std::basic_ostream<char16_t, std::char_traits<char16_t>> utf16_ostream; class jni_utf16_ostream : public utf16_ostream { jni_utf16_streambuf d_buf; public: jni_utf16_ostream(JNIEnv *); jstring jstr() const; }; // ... } // namespace myns In addition, I have made four overloads of operator<<, all in the same namespace: namespace myns { // ... utf16_ostream& operator<<(utf16_ostream&, jstring) throw(std::bad_cast); utf16_ostream& operator<<(utf16_ostream&, const char *); utf16_ostream& operator<<(utf16_ostream&, const jni_utf16_string_region&); jni_utf16_ostream& operator<<(jni_utf16_ostream&, jstring); // ... } // namespace myns The implementation of jni_utf16_streambuf::overflow(int_type) is trivial. It just doubles the buffer width, puts the requested character, and sets the base, put, and end pointers correctly. It is tested and I am quite sure it works. The jni_utf16_ostream works fine inserting Unicode characters. For example, this works fine and results in the stream containing "hello, world": myns::jni_utf16_ostream o(env); o << u"hello, wor" << u'l' << u'd'; My problem is that as soon as I try to insert an integer value, the stream's bad bit gets set, for example: myns::jni_utf16_ostream o(env); if (o.badbit()) throw "bad bit before"; // does not throw int32_t x(5); o << x; if (o.badbit()) throw "bad bit after"; // throws :( I don't understand why this is happening! Is there some other method on std::basic_streambuf I need to be implementing?

    Read the article

  • Proper exceptions to use for nulls

    - by user200295
    In the following example we have two different exceptions we want to communicate. //constructor public Main(string arg){ if(arg==null) throw new ArgumentNullException("arg"); Thing foo=GetFoo(arg); if(foo==null) throw new NullReferenceException("foo is null"); } Is this the proper approach for both exception types?
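
    A common convention (e.g. the Framework Design Guidelines) is that ArgumentNullException fits the caller-supplied argument, while NullReferenceException is reserved for the runtime; a failed internal lookup is usually reported as InvalidOperationException instead. A sketch of that convention, reusing the question's Thing and GetFoo:

        //constructor
        public Main(string arg)
        {
            if (arg == null)
                throw new ArgumentNullException("arg"); // the caller broke the contract

            Thing foo = GetFoo(arg);
            if (foo == null)
                // Not a null dereference by the runtime, so report a broken
                // invariant rather than throwing NullReferenceException ourselves:
                throw new InvalidOperationException("GetFoo returned null for '" + arg + "'.");
        }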

    Read the article

  • How could I know if an object is derived from a specific generic class?

    - by Edison Chuang
    Suppose that I have an object; how can I tell whether the object is derived from a specific generic class? For example: public class GenericClass<T> { } public bool IsDeriveFrom(object o) { return o.GetType().IsSubclassOf(typeof(GenericClass)); //will throw exception here } Please note that the code above will throw an exception. The type of the generic class cannot be retrieved directly, because there is no type for a generic class without a type parameter provided.
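
    One approach that does work is to compare each type in the inheritance chain against the open generic type typeof(GenericClass<>), which does exist; a sketch building on the question's GenericClass<T>:

        using System;

        static bool IsDerivedFromGenericClass(object o)
        {
            for (Type t = o.GetType(); t != null; t = t.BaseType)
            {
                // GetGenericTypeDefinition strips the type arguments, so
                // GenericClass<int> and GenericClass<string> both match.
                if (t.IsGenericType && t.GetGenericTypeDefinition() == typeof(GenericClass<>))
                    return true;
            }
            return false;
        }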

    Read the article

  • Where to define exception classes, inside classes or on a higher level?

    - by rve
    Should exception classes be part of the class which may throw them or should they exist on a higher level? For example : class Test { public: class FooException: public ExceptionBase { }; void functionThrowingFooException(); }; or class FooException: public ExceptionBase { }; class Test { public: void functionThrowingFooException(); }; (functionThrowingFooException() is the only function to ever throw a FooException)

    Read the article

  • In the Enterprise Library, how does the abstract Validator.cs have a method definition?

    - by Soham
    Consider this piece of code: public abstract class Validator { protected Validator() { } protected abstract void ValidateCore(object instance, string value, IList<ValidationResult> results); public void Validate(object instance, string value, IList<ValidationResult> results) { if (null == instance) throw new ArgumentNullException("instance"); if (null == results) throw new ArgumentNullException("results"); ValidateCore(instance, value, results); } } Look at the Validate() method: how can an abstract class have method definitions like this?
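
    This is the Template Method pattern: an abstract class may freely mix concrete members (Validate) with abstract ones (ValidateCore); it only cannot be instantiated directly. A sketch with a hypothetical subclass:

        using System.Collections.Generic;

        public class NotEmptyValidator : Validator
        {
            protected override void ValidateCore(object instance, string value,
                                                 IList<ValidationResult> results)
            {
                if (string.IsNullOrEmpty(value))
                {
                    // record a failure into results; the exact ValidationResult
                    // constructor is Enterprise Library-specific, so it is elided
                }
            }
        }

        // Validator v = new NotEmptyValidator(); // new Validator() would not compile
        // v.Validate(someObject, "", results);   // inherited Validate runs its null
        //                                        // guards, then calls our ValidateCore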

    Read the article

  • exceptions in C++

    - by helloWorld
    I have a technical question. In this function: string report() const { if(list.begin() == list.end()){ throw "not good"; } //do something } if I throw an exception, what happens to the program? Will my function terminate or will it run further? If it terminates, what value will it return?

    Read the article

  • Difference between methods with and without generics

    - by isakavis
    Can someone help me understand the advantages and disadvantages (if any) of the following methods, both of which perform the same function of storing the entity away to Azure (in my case)? public bool Save<T>(string tableName, T entity) where T : TableEntityBase, new() { throw new NotImplementedException(); } vs public bool Save(string tableName, TableEntityBase entity) { throw new NotImplementedException(); }
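
    One concrete difference worth weighing: the generic version keeps the entity's compile-time type, and the new() constraint lets it construct fresh instances, which the non-generic version cannot do without reflection. A sketch (the Store call is hypothetical):

        public bool Save<T>(string tableName, T entity) where T : TableEntityBase, new()
        {
            // Possible only because of the new() constraint; handy when
            // hydrating query results or cloning template entities.
            T fresh = new T();

            return Store(tableName, entity); // hypothetical storage call
        }

        // The non-generic overload sees only TableEntityBase: saving still works,
        // but recovering the concrete type means entity.GetType() plus
        // Activator.CreateInstance instead of new T().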

    Read the article

  • .NET Code Evolution

    - by Alois Kraus
    Originally posted on: http://geekswithblogs.net/akraus1/archive/2013/07/24/153504.aspx

    At my day job I look at a lot of code written by other people. Most of the code is quite good and some is even a masterpiece. And there is also code which makes you think WTF… oh, it was written by me. Hm, not so bad after all. There are many excuses, er, reasons for bad code. Most often it is time pressure, followed by not enough ambition (who cares) or insufficient training. Normally I care about code quality quite a lot, which makes me a (perceived) slow worker who writes many tests and refines the code quite a lot to fix design deficiencies. Most of the deficiencies I find by putting my design under stress while checking for invariants. It also helps a lot to step into the code with a debugger (sometimes also Windbg). I do this much more often when my tests are red. That way I get a much better understanding of what my code really does, and not what I think it should be doing. This time I want to show you how code can evolve over the years with different .NET Framework versions.

    Once there was a time when .NET 1.1 was new and many C++ programmers switched over to get rid of uninitialized pointers and memory leaks. There were also nice new data structures available, such as the Hashtable, which is a fast lookup table with O(1) time complexity. All was good and much code was written since then. In 2005 a new version of the .NET Framework arrived which brought many new things like generics and new data structures. The "old-fashioned" Hashtable was coming to an end and everyone used the new Dictionary<xx,xx> type instead, which was type safe and faster because the conversion between object and the value type (aka boxing) was no longer necessary. I think 95% of all Hashtables and dictionaries use string as the key. Often it is convenient to ignore casing to make it easy to look up values which the user entered. An often-followed route is to convert the string to upper case before putting it into the Hashtable.

    Hashtable Table = new Hashtable(); void Add(string key, string value) { Table.Add(key.ToUpper(), value); }

    This is valid and working code, but it has problems. First, we can pass the Hashtable a custom IEqualityComparer to do the string matching case-insensitively. Second, we can switch over to the now also old Dictionary type to become a little faster, and we can keep the original keys (not upper-cased) in the dictionary.

    Dictionary<string, string> DictTable = new Dictionary<string, string>(StringComparer.OrdinalIgnoreCase); void AddDict(string key, string value) { DictTable.Add(key, value); }

    Many people do not use the other ctors of Dictionary because they shy away from the overhead of writing their own comparer. They do not know that .NET already has predefined comparers for strings at hand which you can use directly. Today, in the many-core era, we use threads all over the place. Sometimes things break in subtle ways, but most of the time it is sufficient to place a lock around the offender. Threading has become so mainstream that it may sound weird that in the year 2000 some guy got a huge incentive for the idea to reduce the time to process calibration data from 12 hours to 6 hours by using two threads on a dual-core machine. Threading makes it easy to become faster at the expense of correctness. Correct and scalable multithreading can be arbitrarily hard to achieve depending on the problem you are trying to solve.
Let's suppose we want to process millions of items with two threads and count the items processed by all threads. A typical beginner's code might look like this:

int Counter; void IJustLearnedToUseThreads() { var t1 = new Thread(ThreadWorkMethod); t1.Start(); var t2 = new Thread(ThreadWorkMethod); t2.Start(); t1.Join(); t2.Join(); if (Counter != 2 * Increments) throw new Exception("Hmm " + Counter + " != " + 2 * Increments); } const int Increments = 10 * 1000 * 1000; void ThreadWorkMethod() { for (int i = 0; i < Increments; i++) { Counter++; } }

It throws an exception with a message like "Hmm 10.222.287 != 20.000.000" and never succeeds. The code fails because the assumption that Counter++ is an atomic operation is wrong. The ++ operator is just a shortcut for Counter = Counter + 1, which involves reading the counter from a memory location into the CPU, incrementing the value on the CPU, and writing the new value back to the memory location. When we look at the generated assembly code we will see only inc dword ptr [ecx+10h], which is only one instruction. Yes, it is one instruction, but it is not atomic. All modern CPUs have several layers of caches (L1, L2, L3) which try to hide how slow actual main memory accesses are. Since cache is just another word for redundant copy, it can happen that one CPU reads a value from main memory into the cache, modifies it and writes it back to main memory. The problem is that at least the L1 cache is not shared between CPUs, so one CPU can make changes to a value which has in the meantime been changed in main memory. From the exception you can see we did increment the value 20 million times, but half of the changes were lost because we overwrote the value already changed by the other thread. This is a very common case, and people learn to protect their data with proper locking.

void Intermediate() { var time = Stopwatch.StartNew(); Action acc = ThreadWorkMethod_Intermediate; var ar1 = acc.BeginInvoke(null, null); var ar2 = acc.BeginInvoke(null, null); ar1.AsyncWaitHandle.WaitOne(); ar2.AsyncWaitHandle.WaitOne(); if (Counter != 2 * Increments) throw new Exception(String.Format("Hmm {0:N0} != {1:N0}", Counter, 2 * Increments)); Console.WriteLine("Intermediate did take: {0:F1}s", time.Elapsed.TotalSeconds); } void ThreadWorkMethod_Intermediate() { for (int i = 0; i < Increments; i++) { lock (this) { Counter++; } } }

This is better and uses the .NET thread pool to get rid of manual thread management. It gives the expected result, but it can lead to deadlocks because you lock on this. That is in general a bad idea, since other threads may use your class instance as a lock object. It is therefore recommended to create a private object as the lock object, to ensure that nobody else can lock your lock object. When you read more about threading you will read about lock-free algorithms. They are nice and can improve performance quite a lot, but you need to pay close attention to the CLR memory model. It makes quite weak guarantees in general, but your code can still work because your CPU architecture gives you more invariants than the CLR memory model does. For a simple counter there is an easy lock-free alternative in the .NET Interlocked class. As a general rule, though, you should not try to write lock-free algorithms, since you will most likely fail to get them right on all CPU architectures.
void Experienced() { var time = Stopwatch.StartNew(); Task t1 = Task.Factory.StartNew(ThreadWorkMethod_Experienced); Task t2 = Task.Factory.StartNew(ThreadWorkMethod_Experienced); t1.Wait(); t2.Wait(); if (Counter != 2 * Increments) throw new Exception(String.Format("Hmm {0:N0} != {1:N0}", Counter, 2 * Increments)); Console.WriteLine("Experienced did take: {0:F1}s", time.Elapsed.TotalSeconds); } void ThreadWorkMethod_Experienced() { for (int i = 0; i < Increments; i++) { Interlocked.Increment(ref Counter); } }

Since time moves forward, we no longer use threads explicitly but the much nicer Task abstraction, which was introduced with .NET 4 in 2010. It is educational to look at the generated assembly code. The Interlocked.Increment method must be called, which does wondrous things, right? Let's see: lock inc dword ptr [eax] The first thing to note is that there is no method call at all. Why? Because the JIT compiler knows very well about CPU intrinsic functions. Atomic operations, which lock the memory bus to prevent other processors from reading stale values, are such things. Second: this is the same increment call prefixed with a lock instruction. The only reason for the existence of the Interlocked class is that the JIT compiler can compile it to the matching CPU intrinsic functions, which can not only increment by one but can also do an add, an exchange, and a combined compare-and-exchange operation. But be warned that the correct usage of its methods can be tricky. If you try to be clever, look at the generated IL code and try to reason about its efficiency, you will fail. Only the generated machine code counts. Is this the best code we can write? Perhaps. It is nice and clean. But can we make it any faster? Let's see how well we are doing currently.

Level                      Time in s
IJustLearnedToUseThreads   flawed code
Intermediate               1,5 (lock)
Experienced                0,3 (Interlocked.Increment)
Master                     0,1 (1,0 for int[2])

That lock-free thing is really nice. But if you read more about CPU caches, cache coherency and false sharing, you can do even better.

int[] Counters = new int[12]; // cache line size is 64 bytes on my machine with an 8-way associative cache; try for yourself, e.g. 64 on more modern CPUs void Master() { var time = Stopwatch.StartNew(); Task t1 = Task.Factory.StartNew(ThreadWorkMethod_Master, 0); Task t2 = Task.Factory.StartNew(ThreadWorkMethod_Master, Counters.Length - 1); t1.Wait(); t2.Wait(); Counter = Counters[0] + Counters[Counters.Length - 1]; if (Counter != 2 * Increments) throw new Exception(String.Format("Hmm {0:N0} != {1:N0}", Counter, 2 * Increments)); Console.WriteLine("Master did take: {0:F1}s", time.Elapsed.TotalSeconds); } void ThreadWorkMethod_Master(object number) { int index = (int) number; for (int i = 0; i < Increments; i++) { Counters[index]++; } }

The key insight here is to give each core its own value. But if you simply use an integer array of two items, one for each core, and add the items at the end, you will be much slower than the lock-free version (by a factor of 3). Each CPU has a cache line size somewhere in the range of 16-256 bytes. When you access a value at one location, the CPU does not fetch only that one value from main memory but a complete cache line (e.g. 16 bytes). This means that you do not pay for the next 15 bytes when you access them. This can lead to dramatic performance improvements and to non-obvious code which is faster although it has many more memory reads than another algorithm. So what have we done here?
We started with correct code that was nonetheless lacking knowledge of how to use the .NET Base Class Libraries optimally. Then we tried to get fancy, used threads for the first time, and failed. Our next try was better, but it still had non-obvious issues (the lock object was exposed to the outside). Knowledge increased further and we found a lock-free version of our counter, which is nice, clean and a perfectly valid solution. The last example is only here to show you how you can get the most out of threading by paying close attention to your data structures and CPU cache coherency. Although we are working in a virtual execution environment, in a high-level language with automatic memory management, it pays off to know the details down to the assembly level. Only if you continue to learn and dig deeper can you come up with solutions no one else was even considering. I have studied particle physics, which helps with the digging-deeper part. Have you ever tried to solve Quantum Chromodynamics equations? Compared to that, the rest must be easy ;-). Although I am no longer working in the science field, I take pride in discovering non-obvious things. This can be a very hard to find bug or a new way to restructure data to make something 10 times faster. Now I need to get some sleep ….

    Read the article

  • Enterprise Library Logging / Exception handling and Postsharp

    - by subodhnpushpak
    One of my colleagues came up with a unique situation where it was required to create log files based on the input file which is uploaded. For example, if A.xml is uploaded, the corresponding log file should be A_log.txt. I am a strong believer that logging / EH / caching are cross-cutting architecture aspects and should be least invasive to the business logic written in an enterprise application. I have been using Enterprise Library for logging / EH (I used to work with Avanade, so I have an affection for the library!! :D ). I have also been using an excellent library called PostSharp for cross-cutting aspects. Here I present a solution with and without PostSharp, all in a unit test. Please see the full source code at the end of this blog post. But first, we need to tweak the Enterprise Library so that the log files are created at runtime based on the input given. Below is a custom trace listener which writes the log to a file whose path is extracted from the LogEntry ExtendedProperties property. using Microsoft.Practices.EnterpriseLibrary.Common.Configuration; using Microsoft.Practices.EnterpriseLibrary.Logging.Configuration; using Microsoft.Practices.EnterpriseLibrary.Logging.TraceListeners; using Microsoft.Practices.EnterpriseLibrary.Logging; using System.IO; using System.Text; using System; using System.Diagnostics; namespace Subodh.Framework.Logging { [ConfigurationElementType(typeof(CustomTraceListenerData))] public class LogToFileTraceListener : CustomTraceListener { private static object syncRoot = new object(); public override void TraceData(TraceEventCache eventCache, string source, TraceEventType eventType, int id, object data) { if ((data is LogEntry) & this.Formatter != null) { WriteOutToLog(this.Formatter.Format((LogEntry)data), (LogEntry)data); } else { WriteOutToLog(data.ToString(), (LogEntry)data); } } public override void Write(string message) { Debug.Print(message.ToString()); } public override void WriteLine(string message) { Debug.Print(message.ToString()); } private void WriteOutToLog(string BodyText, LogEntry logentry) { try { //Get the filelocation from the extended properties if (logentry.ExtendedProperties.ContainsKey("filelocation")) { string fullPath = Path.GetFullPath(logentry.ExtendedProperties["filelocation"].ToString()); //Create the directory where the log file is written to if it does not exist. DirectoryInfo directoryInfo = new DirectoryInfo(Path.GetDirectoryName(fullPath)); if (directoryInfo.Exists == false) { directoryInfo.Create(); } //Lock the file to prevent another process from using this file //as data is being written to it.
lock (syncRoot) { using (FileStream fs = new FileStream(fullPath, FileMode.Append, FileAccess.Write, FileShare.Write, 4096, true)) { using (StreamWriter sw = new StreamWriter(fs, Encoding.UTF8)) { Log(BodyText, sw); sw.Close(); } fs.Close(); } } } } catch (Exception ex) { throw new LoggingException(ex.Message, ex); } }   /// <summary> /// Write message to named file /// </summary> public static void Log(string logMessage, TextWriter w) { w.WriteLine("{0}", logMessage); } } }   The above can be “plugged into” the code using below configuration <loggingConfiguration name="Logging Application Block" tracingEnabled="true" defaultCategory="Trace" logWarningsWhenNoCategoriesMatch="true"> <listeners> <add listenerDataType="Microsoft.Practices.EnterpriseLibrary.Logging.Configuration.CustomTraceListenerData, Microsoft.Practices.EnterpriseLibrary.Logging, Version=4.1.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" traceOutputOptions="None" filter="All" type="Subodh.Framework.Logging.LogToFileTraceListener, Subodh.Framework.Logging, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null" name="Subodh Custom Trace Listener" initializeData="" formatter="Text Formatter" /> </listeners> Similarly we can use PostSharp to expose the above as cross cutting aspects as below using System; using System.Collections.Generic; using System.Linq; using System.Text; using System.Reflection; using PostSharp.Laos; using System.Diagnostics; using GC.FrameworkServices.ExceptionHandler; using Subodh.Framework.Logging;   namespace Subodh.Framework.ExceptionHandling { [Serializable] public sealed class LogExceptionAttribute : OnExceptionAspect { private string prefix; private MethodFormatStrings formatStrings;   // This field is not serialized. It is used only at compile time. [NonSerialized] private readonly Type exceptionType; private string fileName;   /// <summary> /// Declares a <see cref="XTraceExceptionAttribute"/> custom attribute /// that logs every exception flowing out of the methods to which /// the custom attribute is applied. /// </summary> public LogExceptionAttribute() { }   /// <summary> /// Declares a <see cref="XTraceExceptionAttribute"/> custom attribute /// that logs every exception derived from a given <see cref="Type"/> /// flowing out of the methods to which /// the custom attribute is applied. /// </summary> /// <param name="exceptionType"></param> public LogExceptionAttribute( Type exceptionType ) { this.exceptionType = exceptionType; }   public LogExceptionAttribute(Type exceptionType, string fileName) { this.exceptionType = exceptionType; this.fileName = fileName; }   /// <summary> /// Gets or sets the prefix string, printed before every trace message. /// </summary> /// <value> /// For instance <c>[Exception]</c>. /// </value> public string Prefix { get { return this.prefix; } set { this.prefix = value; } }   /// <summary> /// Initializes the current object. Called at compile time by PostSharp. /// </summary> /// <param name="method">Method to which the current instance is /// associated.</param> public override void CompileTimeInitialize( MethodBase method ) { // We just initialize our fields. They will be serialized at compile-time // and deserialized at runtime. 
this.formatStrings = Formatter.GetMethodFormatStrings( method ); this.prefix = Formatter.NormalizePrefix( this.prefix ); } public override Type GetExceptionType( MethodBase method ) { return this.exceptionType; } /// <summary> /// Method executed when an exception occurs in the methods to which the current /// custom attribute has been applied. We just write a record to the tracing /// subsystem. /// </summary> /// <param name="context">Event arguments specifying which method /// is being called and with which parameters.</param> public override void OnException( MethodExecutionEventArgs context ) { string message = String.Format("{0}Exception {1} {{{2}}} in {{{3}}}. \r\n\r\nStack Trace {4}", this.prefix, context.Exception.GetType().Name, context.Exception.Message, this.formatStrings.Format(context.Instance, context.Method, context.GetReadOnlyArgumentArray()), context.Exception.StackTrace); if(!string.IsNullOrEmpty(fileName)) { ApplicationLogger.LogException(message, fileName); } else { ApplicationLogger.LogException(message, Source.UtilityService); } } } } To use the above, here is the unit test: [TestMethod] [ExpectedException(typeof(NotImplementedException))] public void TestMethod1() { MethodThrowingExceptionForLog(); try { MethodThrowingExceptionForLogWithPostSharp(); } catch (NotImplementedException ex) { throw ex; } } private void MethodThrowingExceptionForLog() { try { throw new NotImplementedException(); } catch (NotImplementedException ex) { // create file and then write log ApplicationLogger.TraceMessage("this is a trace message which will be logged in Test1MyFile", @"D:\EL\Test1Myfile.txt"); ApplicationLogger.TraceMessage("this is a trace message which will be logged in YetAnotherTest1Myfile", @"D:\EL\YetAnotherTest1Myfile.txt"); } } // Automatically log details using attributes // Log exception using attributes .... a la the WCF [FaultContract(typeof(FaultMessage))] style [Log(@"D:\EL\Test1MyfileLogPostsharp.txt")] [LogException(typeof(NotImplementedException), @"D:\EL\Test1MyfileExceptionPostsharp.txt")] private void MethodThrowingExceptionForLogWithPostSharp() { throw new NotImplementedException(); } The good thing about the approach is that all the logging and EH is done at a centralized location controlled by PostSharp. Of course, if some other library has to be used instead of EL, it can easily be plugged in. Also, coders are only involved in writing business code in their methods, which makes the code cleaner. Here is the full source code. The third-party assemblies provided are from EL and PostSharp, and I presume you will find these useful. Do let me know your thoughts / ideas on the same. Technorati Tags: PostSharp, Enterprise Library, C#, Logging, Exception handling

    Read the article

  • Custom Memory Allocator for STL map

    - by Prasoon Tiwari
    This question is about construction of instances of custom allocator during insertion into a std::map. Here is a custom allocator for std::map<int,int> along with a small program that uses it: #include <stddef.h> #include <stdio.h> #include <map> #include <typeinfo> class MyPool { public: void * GetNext() { return malloc(24); } void Free(void *ptr) { free(ptr); } }; template<typename T> class MyPoolAlloc { public: static MyPool *pMyPool; typedef size_t size_type; typedef ptrdiff_t difference_type; typedef T* pointer; typedef const T* const_pointer; typedef T& reference; typedef const T& const_reference; typedef T value_type; template<typename X> struct rebind { typedef MyPoolAlloc<X> other; }; MyPoolAlloc() throw() { printf("-------Alloc--CONSTRUCTOR--------%08x %32s\n", this, typeid(T).name()); } MyPoolAlloc(const MyPoolAlloc&) throw() { printf(" Copy Constructor ---------------%08x %32s\n", this, typeid(T).name()); } template<typename X> MyPoolAlloc(const MyPoolAlloc<X>&) throw() { printf(" Construct T Alloc from X Alloc--%08x %32s %32s\n", this, typeid(T).name(), typeid(X).name()); } ~MyPoolAlloc() throw() { printf(" Destructor ---------------------%08x %32s\n", this, typeid(T).name()); }; pointer address(reference __x) const { return &__x; } const_pointer address(const_reference __x) const { return &__x; } pointer allocate(size_type __n, const void * hint = 0) { if (__n != 1) perror("MyPoolAlloc::allocate: __n is not 1.\n"); if (NULL == pMyPool) { pMyPool = new MyPool(); printf("======>Creating a new pool object.\n"); } return reinterpret_cast<T*>(pMyPool->GetNext()); } //__p is not permitted to be a null pointer void deallocate(pointer __p, size_type __n) { pMyPool->Free(reinterpret_cast<void *>(__p)); } size_type max_size() const throw() { return size_t(-1) / sizeof(T); } void construct(pointer __p, const T& __val) { printf("+++++++ %08x %s.\n", __p, typeid(T).name()); ::new(__p) T(__val); } void destroy(pointer __p) { printf("-+-+-+- %08x.\n", __p); __p->~T(); } }; template<typename T> inline bool operator==(const MyPoolAlloc<T>&, const MyPoolAlloc<T>&) { return true; } template<typename T> inline bool operator!=(const MyPoolAlloc<T>&, const MyPoolAlloc<T>&) { return false; } template<typename T> MyPool* MyPoolAlloc<T>::pMyPool = NULL; int main(int argc, char *argv[]) { std::map<int, int, std::less<int>, MyPoolAlloc<std::pair<const int,int> > > m; //random insertions in the map m.insert(std::pair<int,int>(1,2)); m[5] = 7; m[8] = 11; printf("======>End of map insertions.\n"); return 0; } Here is the output of this program: -------Alloc--CONSTRUCTOR--------bffcdaa6 St4pairIKiiE Construct T Alloc from X Alloc--bffcda77 St13_Rb_tree_nodeISt4pairIKiiEE St4pairIKiiE Copy Constructor ---------------bffcdad8 St13_Rb_tree_nodeISt4pairIKiiEE Destructor ---------------------bffcda77 St13_Rb_tree_nodeISt4pairIKiiEE Destructor ---------------------bffcdaa6 St4pairIKiiE ======Creating a new pool object. Construct T Alloc from X Alloc--bffcd9df St4pairIKiiE St13_Rb_tree_nodeISt4pairIKiiEE +++++++ 0985d028 St4pairIKiiE. Destructor ---------------------bffcd9df St4pairIKiiE Construct T Alloc from X Alloc--bffcd95f St4pairIKiiE St13_Rb_tree_nodeISt4pairIKiiEE +++++++ 0985d048 St4pairIKiiE. Destructor ---------------------bffcd95f St4pairIKiiE Construct T Alloc from X Alloc--bffcd95f St4pairIKiiE St13_Rb_tree_nodeISt4pairIKiiEE +++++++ 0985d068 St4pairIKiiE. Destructor ---------------------bffcd95f St4pairIKiiE ======End of map insertions. 
Construct T Alloc from X Alloc--bffcda23 St4pairIKiiE St13_Rb_tree_nodeISt4pairIKiiEE -+-+-+- 0985d068. Destructor ---------------------bffcda23 St4pairIKiiE Construct T Alloc from X Alloc--bffcda43 St4pairIKiiE St13_Rb_tree_nodeISt4pairIKiiEE -+-+-+- 0985d048. Destructor ---------------------bffcda43 St4pairIKiiE Construct T Alloc from X Alloc--bffcda43 St4pairIKiiE St13_Rb_tree_nodeISt4pairIKiiEE -+-+-+- 0985d028. Destructor ---------------------bffcda43 St4pairIKiiE Destructor ---------------------bffcdad8 St13_Rb_tree_nodeISt4pairIKiiEE The last two columns of the output show that an allocator for std::pair<const int, int> is constructed every time there is an insertion into the map. Why is this necessary? Is there a way to suppress this? Thanks! Edit: This code was tested on an x86 machine with g++ version 4.1.2. If you wish to run it on a 64-bit machine, you'll have to change at least the line return malloc(24). Changing it to return malloc(48) should work.

    Read the article

  • What are developer's problems with helpful error messages?

    - by Moo-Juice
    It continues to astound me that, in this day and age, products that have years of use under their belt, built by teams of professionals, still fail to provide helpful error messages to the user. In some cases, the addition of just a little piece of extra information could save a user hours of trouble. A program that generates an error generates it for a reason. It has everything at its disposal to inform the user, as much as it can, about why something failed. And yet it seems that providing information to aid the user is a low priority. I think this is a huge failing.

    One example is from SQL Server. When you try and restore a database that is in use, it quite rightly won't let you. SQL Server knows what processes and applications are accessing it. Why can't it include information about the process(es) that are using the database? I know not everyone passes an Application Name attribute on their connection string, but even a hint about the machine in question could be helpful.

    Another candidate, also SQL Server (and MySQL), is the lovely "string or binary data would be truncated" error message and its equivalents. A lot of the time, a simple perusal of the SQL statement that was generated and the table shows which column is the culprit. This isn't always the case, and if the database engine picked up on the error, why can't it save us that time and just tell us which damned column it was? On this example, you could argue that there may be a performance hit to checking it and that this would impede the writer. Fine, I'll buy that. How about this: once the database engine knows there is an error, it does a quick after-the-fact comparison between the values that were going to be stored and the column lengths, then displays that to the user.

    ASP.NET's horrid Table Adapters are also guilty. Queries can be executed and one can be given an error message saying that a constraint somewhere is being violated. Thanks for that. Time to compare my data model against the database, because the developers are too lazy to provide even a row number, or example data. (For the record, I'd never use this data-access method by choice; it's just a project I have inherited!)

    Whenever I throw an exception from my C# or C++ code, I provide everything I have at hand to the user. The decision has been made to throw it, so the more information I can give, the better. Why did my function throw an exception? What was passed in, and what was expected? It takes me just a little longer to put something meaningful in the body of an exception message. Hell, it does nothing but help me whilst I develop, because I know my code throws things that are meaningful.

    One could argue that complicated exception messages should not be displayed to the user. Whilst I disagree with that, it is an argument that can easily be appeased by having a different level of verbosity depending on your build. Even then, the users of ASP.NET and SQL Server are not your typical users, and would prefer something full of verbosity and yummy information because they can track down their problems faster.

    Why do developers think it is okay, in this day and age, to provide the bare minimum amount of information when an error occurs? It's 2011, guys; come on.
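
    To make the point concrete, a small sketch of the difference (the column names and limits are hypothetical):

        using System;

        static void CheckLength(string columnName, int maxLength, string value)
        {
            if (value.Length <= maxLength) return;

            // Unhelpful: "String or binary data would be truncated."
            // Helpful: everything needed to pinpoint the problem is already at hand.
            throw new InvalidOperationException(string.Format(
                "Value for column '{0}' is {1} characters; the column allows {2}.",
                columnName, value.Length, maxLength));
        }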

    Read the article

  • Stuck due to "knowing too much"

    - by Ran Biron
    Note: more discussion at http://news.ycombinator.com/item?id=4037794

    Welcome Hacker News Visitors! While HN is a fine forum for discussion and debate, Programmers - Stack Exchange is not. From the FAQ: If your motivation for asking the question is "I would like to participate in a discussion about ____", then you should not be asking here. However, if your motivation is "I would like others to explain ____ to me", then you are probably OK. (Discussions are of course welcome in our real time web chat.) Currently, this question is viewed by the membership of Programmers.SE as more likely to provoke unproductive discussion than constructive answers; while debates on its form and future are conducted, it will be locked to prevent arguments and vandalism. -- Shog9

    I have a relatively simple development task, but every time I try to attack it, I end up spiraling into deep thoughts - how it could extend in the future, what the 2nd-generation clients are going to need, how it affects "non-functional" aspects (e.g. performance, authorization...), how it would best be architected to allow change... I remember myself a while ago, younger and, perhaps, more eager. The "me" I was then wouldn't have given a thought to all that - he would've gone ahead and written something, then rewritten it, then rewritten it again (and again...). The "me" today is more hesitant, more careful. I find it much easier today to sit and plan and instruct other people on how to do things than to actually go ahead and do them myself - not because I don't like to code - the opposite, I love to! - but because every time I sit at the keyboard, I end up in that same annoying place. Is this wrong? Is this a natural evolution, or did I drive myself into a rut? Fair disclosure - in the past I was a developer; today my job title is "system architect". Good luck figuring out what that means - but that's the title.

    Wow. I honestly didn't expect this question to generate that many responses. I'll try to sum it up.

    Reasons:
    - Analysis paralysis / over-engineering / gold plating / (any other "too much thinking up-front can hurt you").
    - Too much experience for the given task.
    - Not focusing on what's important.
    - Not enough experience (and realizing that).

    Solutions (not matched to reasons):
    - Testing first.
    - Start coding (+ for fun).
    - One to throw away (+ one API to throw away).
    - Set time constraints.
    - Strip away the fluff, stay with the stuff.
    - Make flexible code (kinda the opposite of "one to throw away", no?).

    Thanks to everyone - I think the major benefit here was realizing that I'm not alone in this experience. I have, actually, already started coding, and some of the too-big things have fallen off naturally. Since this question is closed, I'll accept the answer with the most votes as of today. When/if it changes - I'll try to follow.

    Read the article

  • SQL SERVER – A Puzzle Part 4 – Fun with SEQUENCE in SQL Server 2012 – Guess the Next Value

    - by pinaldave
    It seems like every weekend I get a new puzzle in my mind. Before continuing I suggest you read my previous posts here, where I have shared earlier puzzles: A Puzzle – Fun with SEQUENCE in SQL Server 2012 – Guess the Next Value; A Puzzle Part 2 – Fun with SEQUENCE in SQL Server 2012 – Guess the Next Value; A Puzzle Part 3 – Fun with SEQUENCE in SQL Server 2012 – Guess the Next Value. After reading the above three posts, I am very confident that you all will be ready for the next set of puzzles. First execute the script which I have written here, and guess what the next value will be, as requested in the query.

    USE TempDB GO -- Create sequence CREATE SEQUENCE dbo.SequenceID AS DECIMAL(3,0) START WITH 1 INCREMENT BY -1 MINVALUE 1 MAXVALUE 3 CYCLE NO CACHE; GO SELECT next value FOR dbo.SequenceID; -- Guess the number SELECT next value FOR dbo.SequenceID; -- Clean up DROP SEQUENCE dbo.SequenceID; GO

    Please note that the starting value is 1, the increment is the negative value -1, the minimum value is 1, and the maximum value is 3. Now let us first think through how this will work out. In our example the sequence's starting value is 1 and the increment is -1; this means the value should decrement from 1 to 0. However, the minimum value is 1. This means the value cannot decrement any further. What will happen here? The natural assumption is that it should throw an error. How many of you assumed the query would throw an ERROR? Well, you are WRONG! Do not blame yourself; it is my fault, as I have told you only half of the story. Now, if you voted for an error, let us continue by running the above code in SQL Server Management Studio. The script returns the value 3. Isn't it interesting that instead of erroring out it gives us the result value 3? To understand the answer, carefully observe the original syntax of creating the SEQUENCE – there is a keyword CYCLE. This keyword cycles the values between the minimum and maximum value, and when one end of the range is exhausted it cycles the values from the other end. As we have a negative increment, when the query reaches the minimum value, the lower end, it cycles around from the maximum value. Here the maximum value is 3, so the next logical value is 3. If your business requirement is such that the sequence should throw an error when it reaches the maximum or minimum value, you should not use the keyword CYCLE, and it will behave as discussed. I hope you are enjoying the puzzles as much as I am. If you have an interesting puzzle to share, please do share it with me and I will post it on the blog with due credit to you. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Puzzle, SQL Query, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology

    Read the article

  • Things I've noticed with DVCS

    - by Wes McClure
    Things I encourage:

    Frequent local commits. This way you don't have to be bothered by changes others are making to the central repository while working on a handful of related tasks. It's a good idea to try to work on one task at a time and commit all changes at partitioned stopping points. A local commit doesn't have to build, just FYI, so a stopping point doesn't mean a build point nor a point that you can push centrally. There should be several of these in any given day. 2 hours is a good indicator that you might not be leveraging the power of frequent local commits. Once you have verified a set of changes works, save them away; otherwise you run the risk of introducing bugs into it when working on the next task.

    The notion of a task. By task I mean a related set of changes that can be completed in a few hours or less. By the same token, don't make your tasks so small that critically related changes aren't grouped together. Use your intuition and the rest of these principles and I think you will find what is comfortable for you.

    Partial commits. Sometimes one task explodes or unknowingly encompasses other tasks; at this point, try to get to a stopping point on part of the work you are doing and commit it, so you can get that out of the way to focus on the remainder. This will often entail committing part of the work and continuing on the rest.

    Outstanding changes as a guide. If you don't commit often it might mean you are not leveraging your version control history to help guide your work. It's a great way to see what has changed and might be causing problems. The longer you wait, the more that has changed and the harder it is to test/debug what your changes are doing! This is a reason why I am so picky about my VCS tools on the client side and why I talk a lot about the quality of a diff tool and the ability to integrate that with a simple view of everything that has changed. This is why I love using TortoiseHg and SmartGit: they show changed files, a diff (or two-way diff with SmartGit) of the current selected file and a commit message all in one window that I keep maximized on one monitor at all times.

    Throw away / stash commits. There is extreme value in being able to throw away a commit (or stash it) that is getting out of hand. If you do not commit often you will have to isolate the work you want to commit from the work you want to throw away, which is wasted productivity and highly prone to errors. I find myself doing this about once a week, especially when doing exploratory re-factoring. It's much easier if I can just revert all outstanding changes.

    Sync with the central repository daily. The rest of us depend on your changes. Don't let them sit on your computer longer than they have to. Waiting increases the chances of merge conflict, which just decreases productivity. It also prohibits us from doing deploys when people say they are done but have not merged centrally. This should be done daily! Find a way to partition the work you are doing so that you can sync at least once daily.

    Things I discourage:

    Lots of partial commits right at the end of a series of changes. If you notice lots of partial commits at the end of a set of changes, it's likely because you weren't frequently committing, nor were you watching for the size of the task expanding beyond a single commit. Chances are this cost you productivity if you use your outstanding changes as a guide, since you would have an ever-growing list of changes.
Committing single files. Committing single files means you waited too long and no longer understand all the changes involved. It may mean there were overlapping changes in single files that cannot be isolated. In either case, go back to the suggestions above to avoid this. Committing frequently does not mean committing frequently right at the end of a day's work. It should be spaced out over the course of several tasks, not all at the end in a 5 minute window.

    Read the article

  • Subterranean IL: Exception handling 2

    - by Simon Cooper
    Control flow in and around exception handlers is tightly controlled, due to the various ways the handler blocks can be executed. To start off with, I'll describe what SEH does when an exception is thrown.

    Handling exceptions. When an exception is thrown, the CLR stops program execution at the throw statement and searches up the call stack looking for an appropriate handler; catch clauses are analyzed, and filter blocks are executed (I'll be looking at filter blocks in a later post). Then, when an appropriate catch or filter handler is found, the stack is unwound to that handler, executing successive finally and fault handlers in their own stack contexts along the way, and program execution continues at the start of the catch handler. Because catch, fault, finally and filter blocks can be executed essentially out of the blue by the SEH mechanism, without any reference to preceding instructions, you can't use arbitrary branches in and out of exception handler blocks. Instead, you need to use specific instructions for control flow out of handler blocks: leave, endfinally/endfault, and endfilter.

    Exception handler control flow

    try blocks. You cannot branch into or out of a try block or its handler using normal control flow instructions. The only way of entering a try block is by either falling through from preceding instructions, or by branching to the first instruction in the block. Once you are inside a try block, you can only leave it by throwing an exception or using the leave <label> instruction to jump to somewhere outside the block and its handler. The leave instruction signals the CLR to execute any finally handlers around the block. Most importantly, you cannot fall out of the block, and you cannot use a ret to return from the containing method (unlike in C#); you have to use leave to branch to a ret elsewhere in the method. As a side effect, leave empties the stack.

    catch blocks. The only way of entering a catch block is if it is run by the SEH. At the start of the block execution, the thrown exception will be the only thing on the stack. The only way of leaving a catch block is to use throw, rethrow, or leave, in a similar way to try blocks. However, one thing you can do is use a leave to branch back to an arbitrary place in the handler's try block! In other words, you can do this: .try { // ... newobj instance void [mscorlib]System.Exception::.ctor() throw MidTry: // ... leave.s RestOfMethod } catch [mscorlib]System.Exception { // ... leave.s MidTry } RestOfMethod: // ... As far as I know, this mechanism is not exposed in C# or VB.

    finally/fault blocks. The only way of entering a finally or fault block is via the SEH, either as the result of a leave instruction in the corresponding try block, or as part of handling an exception. The only way to leave a finally or fault block is to use endfinally or endfault (both compile to the same binary representation), which continues execution after the finally/fault block or, if the block was executed as part of handling an exception, signals that the SEH can continue walking the stack.

    filter blocks. I'll be covering filters in a separate blog post. They're quite different to the others, and have their own special semantics.

    Phew! Complicated stuff, but it's important to know if you're writing or outputting exception handlers in IL. Dealing with the C# compiler is probably best saved for the next post.

    Read the article

  • BlockingCollection having issues with byte arrays

    - by MJLaukala
    I am having an issue where an object with a byte[20] is being passed into a BlockingCollection on one thread and another thread returning the object with a byte[0] using BlockingCollection.Take(). I think this is a threading issue but I do not know where or why this is happening considering that BlockingCollection is a concurrent collection. Sometimes on thread2, myclass2.mybytes equals byte[0]. Any information on how to fix this is greatly appreciated. MessageBuffer.cs public class MessageBuffer : BlockingCollection<Message> { } In the class that has Listener() and ReceivedMessageHandler(object messageProcessor) private MessageBuffer RecievedMessageBuffer; On Thread1 private void Listener() { while (this.IsListening) { try { Message message = Message.ReadMessage(this.Stream, this); if (message != null) { this.RecievedMessageBuffer.Add(message); } } catch (IOException ex) { if (!this.Client.Connected) { this.OnDisconnected(); } else { Logger.LogException(ex.ToString()); this.OnDisconnected(); } } catch (Exception ex) { Logger.LogException(ex.ToString()); this.OnDisconnected(); } } } Message.ReadMessage(NetworkStream stream, iTcpConnectClient client) public static Message ReadMessage(NetworkStream stream, iTcpConnectClient client) { int ClassType = -1; Message message = null; try { ClassType = stream.ReadByte(); if (ClassType == -1) { return null; } if (!Message.IDTOCLASS.ContainsKey((byte)ClassType)) { throw new IOException("Class type not found"); } message = Message.GetNewMessage((byte)ClassType); message.Client = client; message.ReadData(stream); if (message.Buffer.Length < message.MessageSize + Message.HeaderSize) { return null; } } catch (IOException ex) { Logger.LogException(ex.ToString()); throw ex; } catch (Exception ex) { Logger.LogException(ex.ToString()); //throw ex; } return message; } On Thread2 private void ReceivedMessageHandler(object messageProcessor) { if (messageProcessor != null) { while (this.IsListening) { Message message = this.RecievedMessageBuffer.Take(); message.Reconstruct(); message.HandleMessage(messageProcessor); } } else { while (this.IsListening) { Message message = this.RecievedMessageBuffer.Take(); message.Reconstruct(); message.HandleMessage(); } } } PlayerStateMessage.cs public class PlayerStateMessage : Message { public GameObject PlayerState; public override int MessageSize { get { return 12; } } public PlayerStateMessage() : base() { this.PlayerState = new GameObject(); } public PlayerStateMessage(GameObject playerState) { this.PlayerState = playerState; } public override void Reconstruct() { this.PlayerState.Poisiton = this.GetVector2FromBuffer(0); this.PlayerState.Rotation = this.GetFloatFromBuffer(8); base.Reconstruct(); } public override void Deconstruct() { this.CreateBuffer(); this.AddToBuffer(this.PlayerState.Poisiton, 0); this.AddToBuffer(this.PlayerState.Rotation, 8); base.Deconstruct(); } public override void HandleMessage(object messageProcessor) { ((MessageProcessor)messageProcessor).ProcessPlayerStateMessage(this); } } Message.GetVector2FromBuffer(int bufferlocation) This is where the exception is thrown because this.Buffer is byte[0] when it should be byte[20]. public Vector2 GetVector2FromBuffer(int bufferlocation) { return new Vector2( BitConverter.ToSingle(this.Buffer, Message.HeaderSize + bufferlocation), BitConverter.ToSingle(this.Buffer, Message.HeaderSize + bufferlocation + 4)); }
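
    One thing worth ruling out: BlockingCollection is safe for concurrent Add/Take, but it hands across object references, not snapshots. If the producer (or any other code holding the reference) replaces or clears the byte array after Add, the consumer sees that change. A minimal sketch of the effect, with a hypothetical Holder class:

        using System;
        using System.Collections.Concurrent;

        class Holder { public byte[] Bytes; }

        class Demo
        {
            static void Main()
            {
                var queue = new BlockingCollection<Holder>();
                var h = new Holder { Bytes = new byte[20] };

                queue.Add(h);            // enqueues the reference, not a copy
                h.Bytes = new byte[0];   // mutation after Add is visible to the consumer

                Holder taken = queue.Take();
                Console.WriteLine(taken.Bytes.Length); // prints 0, not 20
            }
        }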

    Read the article

  • Using NSpec at various architectural layers

    - by nono
    Having read the quick start at nspec.org, I realized that NSpec might be a useful tool in a scenario which was becoming a bit cumbersome with NUnit alone. I'm adding OAuth (or rather, DotNetOpenAuth) to a website and quickly made a mess of writing test methods such as [Test] public void UserIsLoggedInLocallyPriorToInvokingExternalLoginAndExternalLoginSucceedsAndExternalProviderIdIsNotAlreadyAssociatedWithUserAccount() { ... } ... and I wound up with maybe a dozen permutations of this theme, for the user already being logged in locally and not locally, the external login succeeding or failing, etc. Not only were the method names unwieldy, but every test needed a setup that contained parts in common with a different set of other tests. I realized that NSpec's incremental setup capabilities would work great for this, and for a while I was trucking along wonderfully, with code like act = () => { actionResult = controller.ExternalLoginCallback(returnUrl); }; context["The user is already logged in"] = () => { before = () => identity.Setup(x => x.IsAuthenticated).Returns(true); context["The external login succeeds"] = () => { before = () => oauth.Setup(x => x.VerifyAuthentication(It.IsAny<string>())).Returns(new AuthenticationResult(true, providerName, "provideruserid", "username", new Dictionary<string, string>())); context["External login already exists for current user"] = () => { before = () => authService.Setup(x => x.ExternalLoginExistsForUser(It.IsAny<string>(), It.IsAny<string>(), It.IsAny<string>())).Returns(true); it["Should add 'login sucessful' alert"] = () => { var alerts = (IList<Alert>)controller.TempData[TempDataKeys.AlertCollection]; alerts[0].Message.should_be_same("Login successful"); alerts[0].AlertType.should_be(AlertType.Success); }; it["Should return a redirect result"] = () => actionResult.should_cast_to<RedirectToRouteResult>(); }; context["External login already exists for another user"] = () => { before = () => authService.Setup(x => x.ExternalLoginExistsForAnyOtherUser(It.IsAny<string>(), It.IsAny<string>(), It.IsAny<string>())).Returns(true); it["Adds an error alert"] = () => { var alerts = (IList<Alert>)controller.TempData[TempDataKeys.AlertCollection]; alerts[0].Message.should_be_same("The external login you requested is already associated with a different user account"); alerts[0].AlertType.should_be(AlertType.Error); }; it["Should return a redirect result"] = () => actionResult.should_cast_to<RedirectToRouteResult>(); }; This approach seemed to work magnificently until I prepared to write test code for my ApplicationServices layer, to which I delegate viewmodel manipulation from my MVC controllers, and which coordinates the operations of the lower data repository layer: public void CreateUserAccountFromExternalLogin(RegisterExternalLoginModel model) { throw new NotImplementedException(); } public void AssociateExternalLoginWithUser(string userName, string provider, string providerUserId) { throw new NotImplementedException(); } public string GetLocalUserName(string provider, string providerUserId) { throw new NotImplementedException(); } I have no idea what in the world to name the test class, the test methods, or even if I should perhaps include the testing for this layer into the test class from my large code snippet above, so that a single feature or user action could be tested without regard to architectural layering.
I can't find any tutorials or blog posts which cover more than simple examples, so I would appreciate any recommendations or a pointer in the right direction. I would even welcome "your question is invalid"-type answers, as long as some explanation is provided.

    Read the article

  • .NET Remoting Exception not handled Client-Side

    - by DanJo519
    I checked the rest of the remoting questions, and this specific case did not seem to be addressed. I have a .NET Remoting server/client set up. On the server side I have an object with a method that can throw an exception, and a client which will try to call that method. Server: public bool MyEquals(Guid myGuid, string a, string b) { if (CheckAuthentication(myGuid)) { logger.Debug("Request for \"" + a + "\".Equals(\"" + b + "\")"); return a.Equals(b); } else { throw new AuthenticationException(UserRegistryService.USER_NOT_REGISTERED_EXCEPTION_TEXT); } } Client: try { bool result = RemotedObject.MyEquals(myGuid, "cat", "dog"); } catch (Services.Exceptions.AuthenticationException e) { Console.WriteLine("You do not have permission to execute that action"); } When I call MyEquals with a Guid which causes CheckAuthentication to return false, .NET tries to throw the exception and says the AuthenticationException was unhandled. This happens server side. The exception is never marshaled over to the client-side, and I cannot figure out why. All of the questions I have looked at address the issue of an exception being handled client-side, but it isn't the custom exception but a base type. In my case, I can't even get any exception to cross the remoting boundary to the client. Here is a copy of AuthenticationException. It is in the shared library between both server and client. [Serializable] public class AuthenticationException : ApplicationException, ISerializable { public AuthenticationException(string message) : base(message) { } public AuthenticationException(SerializationInfo info, StreamingContext context) : base(info, context) { } #region ISerializable Members void ISerializable.GetObjectData(SerializationInfo info, StreamingContext context) { base.GetObjectData(info, context); } #endregion }
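
    For reference, whether or not it is the root cause here, the conventional pattern for a cross-boundary custom exception is simpler than the code above: Exception already implements ISerializable, and since this subclass adds no fields of its own, re-declaring ISerializable and GetObjectData is unnecessary. A hedged sketch of that pattern:

        using System;
        using System.Runtime.Serialization;

        [Serializable]
        public class AuthenticationException : ApplicationException
        {
            public AuthenticationException(string message) : base(message) { }

            // Deserialization constructor: remoting uses this to rebuild
            // the exception on the client side.
            protected AuthenticationException(SerializationInfo info, StreamingContext context)
                : base(info, context) { }
        }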

    Read the article

  • SQL Server Express 2005 Merge Replication using RMO causes Null Reference exception

    - by Craig Shearer
    I'm trying to use RMO to programmatically perform merge synchronization. I've basically copied the SQL Server example code, as follows: // Create a connection to the Subscriber. ServerConnection conn = new ServerConnection(subscriberName); MergePullSubscription subscription; try { // Connect to the Subscriber. conn.Connect(); // Define the pull subscription. subscription = new MergePullSubscription(subscriptionDbName, publisherName, publicationDbName, publicationName, conn, false); // If the pull subscription exists, then start the synchronization. if (subscription.LoadProperties()) { // Check that we have enough metadata to start the agent. if (subscription.PublisherSecurity != null || subscription.DistributorSecurity != null) { subscription.SynchronizationAgent.Synchronize(); } else { throw new ApplicationException("There is insufficent metadata to " + "synchronize the subscription. Recreate the subscription with " + "the agent job or supply the required agent properties at run time."); } } else { // Do something here if the pull subscription does not exist. throw new ApplicationException(String.Format( "A subscription to '{0}' does not exist on {1}", publicationName, subscriberName)); } } catch (Exception ex) { // Implement appropriate error handling here. throw new ApplicationException("The subscription could not be " + "synchronized. Verify that the subscription has " + "been defined correctly.", ex); } finally { conn.Disconnect(); } I've got the server merge publication defined correctly, but when I run the above code, I get a null reference exception on the call to: subscription.SynchronizationAgent.Synchronize(); The stack trace is as follows: at Microsoft.SqlServer.Replication.MergeSynchronizationAgent.StatusEventSinkMethod(String message, Int32 percent, Int32* returnValue) at Test.ConsoleTest.Program.SynchronizePullSubscription() in F:\Visual Studio Projects\Test\source\Test.ConsoleTest\Program.cs:line 124 It seems, from the stack trace, like something to do with the Status event, but I don't have a handler defined, and defining one makes no difference.

    Read the article

  • Fix common library functions, or abandon then?

    - by Ian Boyd
    Imagine I have a function with a bug in it: String MakeLocation(String City, String State) { //Given "Springfield", "MO" //return "Springfield, MO" return City+", "+State; } So the call: MakeLocation("Springfield", "MO"); would return "Springfield, MO" Now there's a slight problem: what if the user called: MakeLocation("Springfield, MO", "OH"); They called it wrong, obviously. But the function would return "Springfield, MO, OH". The system was functioning like this for many years, until I noticed the function being used wrong, and I corrected it. And I also updated the original function to catch such an obvious mistake - in case it's happening elsewhere: String MakeLocation(String City, String State) { //Given "Springfield", "MO" //return "Springfield, MO" if (City.Contains(",")) throw new EMakeLocationException("City name contains a comma. You probably didn't mean that"); return City+", "+State; } And testing showed the problem fixed. Except we missed an edge case, and the customer found it. So now the moral dilemma. Do you ever add new sanity checks, safety checks, assertions to existing code? Or do you call the old function abandoned, and have a new one: String MakeLocation(String City, String State) { //Given "Springfield", "MO" //return "Springfield, MO" return City+", "+State; } String MakeLocation2(String City, String State) { //Given "Springfield", "MO" //return "Springfield, MO" if (City.Contains(",")) throw new EMakeLocationException("City name contains a comma. You probably didn't mean that"); return City+", "+State; } The same can apply for anything: Question FetchQuestion(Int id) { if (id == 0) throw new EFetchQuestionException("No question ID specified"); ... } Do you risk breaking existing code, at the expense of existing code being wrong?

    Read the article

  • How to clean-up an Entity Framework object context?

    - by Daniel Brückner
    I am adding several entities to an object context. try { foreach (var document in documents) { this.Validate(document); // May throw a ValidationException. this.objectContext.AddToDocuments(document); } this.objectContext.SaveChanges(); } catch { // How do we clean up the object context here? throw; } If some of the documents pass the validation and one fails, all documents that passed the validation remain added to the object context. I have to clean up the object context because it may be reused, and the following can happen. var documentA = new Document { Id = 1, Data = "ValidData" }; var documentB = new Document { Id = 2, Data = "InvalidData" }; var documentC = new Document { Id = 3, Data = "ValidData" }; try { // Adding document B will cause a ValidationException but only // after document A is added to the object context. this.DocumentStore.AddDocuments(new[] { documentA, documentB, documentC }); } catch (ValidationException) { } // Try again without the invalid document B. this.DocumentStore.AddDocuments(new[] { documentA, documentC }); This will again add document A to the object context, and in consequence SaveChanges() will throw an exception because of a duplicate primary key. So I have to remove all already-added documents in the case of a validation error. I could of course perform the validation first and only add the documents after they have all been successfully validated. But sadly this does not solve the whole problem - if SaveChanges() fails, all documents still remain added but unsaved. I tried to detach all objects returned by this.objectContext.ObjectStateManager.GetObjectStateEntries(EntityState.Added) but I am getting an exception stating that the object is not attached. So how do I get rid of all added but unsaved objects?
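
    One common way to sidestep the clean-up entirely is to give every batch its own short-lived context, so a failed batch is discarded wholesale when the context is disposed. A sketch, assuming a hypothetical CreateObjectContext factory:

        public void AddDocuments(IEnumerable<Document> documents)
        {
            using (var objectContext = this.CreateObjectContext()) // hypothetical factory
            {
                foreach (var document in documents)
                {
                    this.Validate(document); // may throw a ValidationException
                    objectContext.AddToDocuments(document);
                }
                objectContext.SaveChanges();
            } // disposing the context discards all added-but-unsaved state
        }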

    Read the article
