Search Results

Search found 640 results on 26 pages for 'apophenia overload'.

Page 23/26 | < Previous Page | 19 20 21 22 23 24 25 26  | Next Page >

  • how to organise my abstraction?

    - by DaveM
    I have a problem that I can't decide how to best handle, so I'm asking for advice. Please bear with me if my question isn't clear; it is possibly because I'm not sure exactly how to solve it! I have a set of functions in a class. These functions are a set of lowest commonality. To be able to run them I need to generate certain info, but this info can arrive with my class from one of 2 routes. I'll try to summarise my situation... Let's say that I have a class as follows. public class onHoliday(){ private Object modeOfTravel; private Object location; public onHoliday(Object vehicle, Location GPScoords) { private boolean haveFun(){//function to have fun, needs 4 people }//end haveFun() } } Let's imagine I can get to my holiday either by car or by bike. My haveFun() function is dependent on my type of vehicle, but only loosely. I have another function that determines my vehicle type and extracts the required values. For example, if I send a car I may get 4 people in one go, but if I send a bike I need at least 2 to get the required 4. I currently have 2 options: Overload my constructor, so that I can send either 2 bikes or a single car into it, then call one of 2 intermediate functions (to get the names of my 4 people, for instance) before I haveFun() - this is what I am currently doing. Split the 2 constructors into 2 separate classes, and repeat my haveFun() in a third class that becomes an object of my 2 other classes. My problem with this is that my intermediate functions are all but a few lines of code, and I don't want to have them in a separate file! (I always put classes in separate files!) Please note, my haveFun() isn't something that I'm going to need outside of these 2 classes, or even outside being onHoliday (i.e. there is no chance of me doing some haveFun() over a weekend or of an evening!). I have thought about putting haveFun() into an interface, but it seems a bit worthless having an interface with only a single method! Even then I would have to have the method in both of the classes - one for the bike and another for the car! I have thought about having my onHoliday class accept any object type, but then I don't want someone accidentally sending a boat into my onHoliday class (imagine I can't swim, so it's not about to happen). It may be important to note that my onHoliday class is package private, and final. It is in fact only accessed via other 'private methods' in other classes, and has only private methods itself. Thanks in advance. David

    Read the article

  • Capture Highlighted Text from any window using C#

    - by dineshrekula
    How do I read the highlighted/selected text from any window using C#? I tried 2 approaches. Send "^c" whenever the user selects something. But in this case my clipboard is flooded with lots of unnecessary data; sometimes it copied passwords as well. So I switched to the 2nd approach, the send message method. See this sample code: [DllImport("user32.dll")] static extern int GetFocus(); [DllImport("user32.dll")] static extern bool AttachThreadInput(uint idAttach, uint idAttachTo, bool fAttach); [DllImport("kernel32.dll")] static extern uint GetCurrentThreadId(); [DllImport("user32.dll")] static extern uint GetWindowThreadProcessId(int hWnd, int ProcessId); [DllImport("user32.dll") ] static extern int GetForegroundWindow(); [DllImport("user32.dll", CharSet = CharSet.Auto, SetLastError = false)] static extern int SendMessage(int hWnd, int Msg, int wParam, StringBuilder lParam); // second overload of SendMessage [DllImport("user32.dll")] private static extern int SendMessage(IntPtr hWnd, uint Msg, out int wParam, out int lParam); const int WM_SETTEXT = 12; const int WM_GETTEXT = 13; private string PerformCopy() { try { //Wait 5 seconds to give us a chance to give focus to some edit window, //notepad for example System.Threading.Thread.Sleep(5000); StringBuilder builder = new StringBuilder(500); int foregroundWindowHandle = GetForegroundWindow(); uint remoteThreadId = GetWindowThreadProcessId(foregroundWindowHandle, 0); uint currentThreadId = GetCurrentThreadId(); //AttachTrheadInput is needed so we can get the handle of a focused window in another app AttachThreadInput(remoteThreadId, currentThreadId, true); //Get the handle of a focused window int focused = GetFocus(); //Now detach since we got the focused handle AttachThreadInput(remoteThreadId, currentThreadId, false); //Get the text from the active window into the stringbuilder SendMessage(focused, WM_GETTEXT, builder.Capacity, builder); return builder.ToString(); } catch (System.Exception oException) { throw oException; } } This code works fine in Notepad. But if I try to capture from other applications like Mozilla Firefox or the Visual Studio IDE, it's not returning the text. Can anybody please help me with where I am going wrong? First of all, have I chosen the right approach?

    Read the article

  • Use a subdirectory as root with htaccess in Apache 1.3

    - by Andrew
    I'm trying to deploy a site generated with Jekyll and would like to keep the site in its own subfolder on my server to keep everything more organized. Essentially, I'd like to use the contents of /jekyll as the root unless a file similarly named exists in the actual web root. So something like /jekyll/sample-page/ would show as http://www.example.com/sample-page/, while something like /other-folder/ would display as http://www.example.com/other-folder. My test server runs Apache 2.2 and the following .htaccess (adapted from http://gist.github.com/97822) works flawlessly: RewriteEngine On # Map http://www.example.com to /jekyll. RewriteRule ^$ /jekyll/ [L] # Map http://www.example.com/x to /jekyll/x unless there is a x in the web root. RewriteCond %{REQUEST_FILENAME} !-f RewriteCond %{REQUEST_FILENAME} !-d RewriteCond %{REQUEST_URI} !^/jekyll/ RewriteRule ^(.*)$ /jekyll/$1 # Add trailing slash to directories without them so DirectoryIndex works. # This does not expose the internal URL. RewriteCond %{REQUEST_FILENAME} -d RewriteCond %{REQUEST_FILENAME} !/$ RewriteRule ^(.*)$ $1/ # Disable auto-adding slashes to directories without them, since this happens # after mod_rewrite and exposes the rewritten internal URL, e.g. turning # http://www.example.com/about into http://www.example.com/jekyll/about. DirectorySlash off However, my production server runs Apache 1.3, which doesn't allow DirectorySlash. If I disable it, the server gives a 500 error because of internal redirect overload. If I comment out the last section of ReWriteConds and rules: RewriteCond %{REQUEST_FILENAME} -d RewriteCond %{REQUEST_FILENAME} !/$ RewriteRule ^(.*)$ $1/ …everything mostly works: http://www.example.com/sample-page/ displays the correct content. However, if I omit the trailing slash, the URL in the address bar exposes the real internal URL structure: http://www.example.com/jekyll/sample-page/ What is the best way to account for directory slashes in Apache 1.3, where useful tools like DirectorySlash don't exist? How can I use the /jekyll/ directory as the site root without revealing the actual URL structure? Edit: After a ton of research into Apache 1.3, I've found that this problem is essentially a combination of two different issues listed at the Apache 1.3 URL Rewriting Guide. I have a (partially) moved DocumentRoot, which in theory would be taken care of with something like this: RewriteRule ^/$ /e/www/ [R] I also have the infamous "Trailing Slash Problem," which is solved by setting the RewriteBase (as was suggested in one of the responses below): RewriteBase /~quux/ RewriteRule ^foo$ foo/ [R] The problem is combining the two. Moving the document root doesn't (can't?) use RewriteBase—fixing trailing slashes requires(?) it… Hmm…

    Read the article

  • Overloading *(iterator + n) and *(n + iterator) in a C++ iterator class?

    - by exscape
    (Note: I'm writing this project for learning only; comments about it being redundant are... uh, redundant. ;) I'm trying to implement a random access iterator, but I've found very little literature on the subject, so I'm going by trial and error combined with Wikpedias list of operator overload prototypes. It's worked well enough so far, but I've hit a snag. Code such as exscape::string::iterator i = string_instance.begin(); std::cout << *i << std::endl; works, and prints the first character of the string. However, *(i + 1) doesn't work, and neither does *(1 + i). My full implementation would obviously be a bit too much, but here's the gist of it: namespace exscape { class string { friend class iterator; ... public: class iterator : public std::iterator<std::random_access_iterator_tag, char> { ... char &operator*(void) { return *p; // After some bounds checking } char *operator->(void) { return p; } char &operator[](const int offset) { return *(p + offset); // After some bounds checking } iterator &operator+=(const int offset) { p += offset; return *this; } const iterator operator+(const int offset) { iterator out (*this); out += offset; return out; } }; }; } int main() { exscape::string s = "ABCDEF"; exscape::string::iterator i = s.begin(); std::cout << *(i + 2) << std::endl; } The above fails with (line 632 is, of course, the *(i + 2) line): string.cpp: In function ‘int main()’: string.cpp:632: error: no match for ‘operator*’ in ‘*exscape::string::iterator::operator+(int)(2)’ string.cpp:105: note: candidates are: char& exscape::string::iterator::operator*() *(2 + i) fails with: string.cpp: In function ‘int main()’: string.cpp:632: error: no match for ‘operator+’ in ‘2 + i’ string.cpp:434: note: candidates are: exscape::string exscape::operator+(const char*, const exscape::string&) My guess is that I need to do some more overloading, but I'm not sure what operator I'm missing.
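    A minimal standalone sketch of the two pieces the errors point at (this is not the asker's exscape::string class; the char_iterator name is invented for illustration): operator* should be const-qualified so the temporary returned by operator+ can be dereferenced, and int + iterator can only be a free function, since its left-hand operand is not a class type:

        #include <iostream>

        class char_iterator {
            char* p;
        public:
            explicit char_iterator(char* ptr) : p(ptr) {}

            // const-qualified, so *(i + 2) works even though operator+ returns a const temporary
            char& operator*() const { return *p; }

            char_iterator& operator+=(int offset) { p += offset; return *this; }

            // iterator + n
            char_iterator operator+(int offset) const {
                char_iterator out(*this);
                out += offset;
                return out;
            }
        };

        // n + iterator must be a non-member function
        char_iterator operator+(int offset, const char_iterator& it) {
            return it + offset;
        }

        int main() {
            char buf[] = "ABCDEF";
            char_iterator i(buf);
            std::cout << *(i + 2) << '\n';  // prints C
            std::cout << *(2 + i) << '\n';  // prints C
        }

    The same pattern (const-qualified accessors plus free functions for mixed-type forms) applies to the other arithmetic operators of a random access iterator.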

    Read the article

  • PHP mysqli wrapper: passing by reference with __call() and call_user_func_array()

    - by Dave
    Hi everyone. I'm a long running fan of stackoverflow, first time poster. I'd love to see if someone can help me with this. Let me dig in with a little code, then I'll explain my problem. I have the following wrapper classes: class mysqli_wrapper { private static $mysqli_obj; function __construct() // Recycles the mysqli object { if (!isset(self::$mysqli_obj)) { self::$mysqli_obj = new mysqli(MYSQL_SERVER, MYSQL_USER, MYSQL_PASS, MYSQL_DBNAME); } } function __call($method, $args) { return call_user_func_array(array(self::$mysqli_obj, $method), $args); } function __get($para) { return self::$mysqli_obj->$para; } function prepare($query) // Overloaded, returns statement wrapper { return new mysqli_stmt_wrapper(self::$mysqli_obj, $query); } } class mysqli_stmt_wrapper { private $stmt_obj; function __construct($link, $query) { $this->stmt_obj = mysqli_prepare($link, $query); } function __call($method, $args) { return call_user_func_array(array($this->stmt_obj, $method), $args); } function __get($para) { return $this->stmt_obj->$para; } // Other methods will be added here } My problem is that when I call bind_result() on the mysqli_stmt_wrapper class, my variables don't seem to be passed by reference and nothing gets returned. To illustrate, if I run this section of code, I only get NULL's: $mysqli = new mysqli_wrapper; $stmt = $mysqli->prepare("SELECT cfg_key, cfg_value FROM config"); $stmt->execute(); $stmt->bind_result($cfg_key, $cfg_value); while ($stmt->fetch()) { var_dump($cfg_key); var_dump($cfg_value); } $stmt->close(); I also get a nice error from PHP which tells me: PHP Warning: Parameter 1 to mysqli_stmt::bind_result() expected to be a reference, value given in test.php on line 48 I've tried to overload the bind_param() function, but I can't figure out how to get a variable number of arguments by reference. func_get_args() doesn't seem to be able to help either. If I pass the variables by reference as in $stmt->bind_result(&$cfg_key, &$cfg_value) it should work, but this is deprecated behaviour and throws more errors. Does anyone have some ideas around this? Thanks so much for your time.

    Read the article

  • Should Application_End fire on an automatic App Pool Recycle?

    - by Laramie
    I have read this, this, this and this plus a dozen other posts/blogs. I have an ASP.Net app in shared hosting that is frequently recycling. We use NLog and have the following code in global.asax void Application_Start(object sender, EventArgs e) { NLog.Logger logger = NLog.LogManager.GetCurrentClassLogger(); logger.Debug("\r\n\r\nAPPLICATION STARTING\r\n\r\n"); } protected void Application_OnEnd(Object sender, EventArgs e) { NLog.Logger logger = NLog.LogManager.GetCurrentClassLogger(); logger.Debug("\r\n\r\nAPPLICATION_OnEnd\r\n\r\n"); } void Application_End(object sender, EventArgs e) { HttpRuntime runtime = (HttpRuntime)typeof(System.Web.HttpRuntime).InvokeMember("_theRuntime", BindingFlags.NonPublic | BindingFlags.Static | BindingFlags.GetField, null, null, null); if (runtime == null) return; string shutDownMessage = (string)runtime.GetType().InvokeMember("_shutDownMessage", BindingFlags.NonPublic | BindingFlags.Instance | BindingFlags.GetField, null, runtime, null); string shutDownStack = (string)runtime.GetType().InvokeMember("_shutDownStack", BindingFlags.NonPublic | BindingFlags.Instance | BindingFlags.GetField, null, runtime, null); ApplicationShutdownReason shutdownReason = System.Web.Hosting.HostingEnvironment.ShutdownReason; NLog.Logger logger = NLog.LogManager.GetCurrentClassLogger(); logger.Debug(String.Format("\r\n\r\nAPPLICATION END\r\n\r\n_shutDownReason = {2}\r\n\r\n _shutDownMessage = {0}\r\n\r\n_shutDownStack = {1}\r\n\r\n", shutDownMessage, shutDownStack, shutdownReason)); } void Application_Error(object sender, EventArgs e) { NLog.Logger logger = NLog.LogManager.GetCurrentClassLogger(); logger.Debug("\r\n\r\nApplication_Error\r\n\r\n"); } Our log file is littered with "APPLICATION STARTING" entries, but none of Application_OnEnd, Application_End, or Application_Error is ever fired during these spontaneous restarts. I know they are working because there are entries for touching the web.config or /bin files. We also ran a memory overload test and can trigger an OutOfMemoryException which is caught in Application_Error. We are trying to determine whether the virtual memory limit is causing the recycling. We have added GC.GetTotalMemory(false) throughout the code, but this is for all of .Net, not just our App's pool, correct? We've also tried var oPerfCounter = new PerformanceCounter(); oPerfCounter.CategoryName = "Process"; oPerfCounter.CounterName = "Virtual Bytes"; oPerfCounter.InstanceName = "iisExpress"; logger.Debug("Virtual Bytes: " + oPerfCounter.RawValue + " bytes"); but don't have permission in shared hosting. I've monitored the app on a dev server with the same requests that caused the recycles in production with ANTS Memory Profiler attached and can't seem to find a culprit. We have also run it with a debugger attached in dev to check for uncaught exceptions in spawned threads that might cause the app to abort. My questions are these: How can I effectively monitor memory usage in shared hosting to tell how much my application is consuming prior to an application recycle? Why are the Application_[End/OnEnd/Error] handlers in global.asax not being called? How else can I determine what is causing these recycles? Thanks.

    Read the article

  • Binding functions of derived class with luabind

    - by Anamon
    I am currently developing a plugin-based system in C++ which provides a Lua scripting interface, for which I chose to use luabind. I'm using Lua 5 and luabind 0.9, both statically linked and compiled with MSVC++ 8. I am now having trouble binding functions with luabind when they are defined in a derived class, but not its parent class. More specifically, I have an abstract base class called 'IPlugin' from which all plugin classes inherit. When the plugin manager initialises, it registers that class and its functions like this: luabind::open(L); luabind::module(L) [ luabind::class_<IPlugin>("IPlugin") .def("start", (void(IPlugin::*)())&IPlugin::start) ]; As it is only known at runtime what effective plugin classes are available, I had to solve loading plugins in a kind of roundabout way. The plugin manager exposes a factory function to Lua, which takes the name of a plugin class and a desired object name. The factory then creates the object, registers the plugin's class as inheriting from the 'IPlugin' base class, and immediately calls a function on the created object that registers itself as a global with the Lua state, like this: void PluginExample::registerLuaObject(lua_State *L, string a_name) { luabind::globals(L)[a_name] = (PluginExample*)this; } I initially did this because I had problems with Lua determining the most derived class of the object, as if I register it from the StreamManager it is only known as a subtype of 'IPlugin' and not the specific subtype. I'm not sure anymore if this is even necessary though, but it works and the created object is subsequently accessible from Lua under 'a_name'. The problem I have, though, is that functions defined in the derived class, which were not declared at all in the parent class, cannot be used. Virtual functions defined in the base class, such as 'start' above, work fine, and calling them from Lua on the new object runs the respective redefined code from the 'PluginExample' class. But if I add a new function to 'PluginExample', here for example a function taking no arguments and returning void, and register it like this: luabind::module(L) [ luabind::class_<PluginExample, IPlugin>("PluginExample") .def(luabind::constructor<>()) .def("func", &PluginExample::func) ]; calling 'func' on the new object yields the following Lua runtime error: No matching overload found, candidates: void func(PluginExample&) I am correctly using the ':' syntax so the 'self' argument is not needed and it seems suddenly Lua cannot determine the derived type of the object anymore. I am sure I am doing something wrong, probably having to do with the two-step binding required by my system architecture, but I can't figure out where. I'd much appreciate some help =)
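    A minimal isolated sketch (assuming luabind 0.9 with Lua 5.1's lua.hpp header) that registers a base and a derived class in one step, outside of any plugin manager. Reducing the problem to something like this can help show whether the two-step registration is where things diverge. The class names mirror the question; the bodies here are stand-ins, and object ownership is ignored:

        #include <lua.hpp>
        #include <luabind/luabind.hpp>

        struct IPlugin {
            virtual ~IPlugin() {}
            virtual void start() {}
        };

        struct PluginExample : IPlugin {
            void start() { /* ... */ }
            void func()  { /* ... */ }
        };

        int main()
        {
            lua_State* L = luaL_newstate();
            luaL_openlibs(L);
            luabind::open(L);

            luabind::module(L)
            [
                luabind::class_<IPlugin>("IPlugin")
                    .def("start", &IPlugin::start),
                // the second template argument tells luabind about the inheritance
                luabind::class_<PluginExample, IPlugin>("PluginExample")
                    .def(luabind::constructor<>())
                    .def("func", &PluginExample::func)
            ];

            // expose an instance as a global, as registerLuaObject() does in the question
            PluginExample* p = new PluginExample;
            luabind::globals(L)["plugin"] = p;

            luaL_dostring(L, "plugin:func()");   // should resolve to PluginExample::func
            lua_close(L);
        }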

    Read the article

  • Why does SFINAE not apply to this?

    - by Simon Buchan
    I'm writing some simple point code while trying out Visual Studio 10 (Beta 2), and I've hit this code where I would expect SFINAE to kick in, but it seems not to: template<typename T> struct point { T x, y; point(T x, T y) : x(x), y(y) {} }; template<typename T, typename U> struct op_div { typedef decltype(T() / U()) type; }; template<typename T, typename U> point<typename op_div<T, U>::type> operator/(point<T> const& l, point<U> const& r) { return point<typename op_div<T, U>::type>(l.x / r.x, l.y / r.y); } template<typename T, typename U> point<typename op_div<T, U>::type> operator/(point<T> const& l, U const& r) { return point<typename op_div<T, U>::type>(l.x / r, l.y / r); } int main() { point<int>(0, 1) / point<float>(2, 3); } This gives error C2512: 'point<T>::point' : no appropriate default constructor available Given that it is a beta, I did a quick sanity check with the online comeau compiler, and it agrees with an identical error, so it seems this behavior is correct, but I can't see why. In this case some workarounds are to simply inline the decltype(T() / U()), to give the point class a default constructor, or to use decltype on the full result expression, but I got this error while trying to simplify an error I was getting with a version of op_div that did not require a default constructor*, so I would rather fix my understanding of C++ rather than to just do what works. Thanks! *: the original: template<typename T, typename U> struct op_div { static T t(); static U u(); typedef decltype(t() / u()) type; }; Which gives error C2784: 'point<op_div<T,U>::type> operator /(const point<T> &,const U &)' : could not deduce template argument for 'const point<T> &' from 'int', and also for the point<T> / point<U> overload.
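    As a side note on the helper itself: if a C++11 standard library is available, std::declval expresses the same idea as the static t()/u() functions without requiring T or U to be default-constructible. A small sketch, independent of the point class in the question:

        #include <type_traits>
        #include <utility>

        template<typename T, typename U>
        struct op_div {
            // no T() or U() construction involved, so default constructors are not needed
            typedef decltype(std::declval<T>() / std::declval<U>()) type;
        };

        // example: dividing int by float yields float
        static_assert(std::is_same<op_div<int, float>::type, float>::value,
                      "int / float should yield float");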

    Read the article

  • subclassing QList and operator+ overloading

    - by Milen
    I would like to be able to add two QList objects. For example: QList<int> b; b.append(10); b.append(20); b.append(30); QList<int> c; c.append(1); c.append(2); c.append(3); QList<int> d; d = b + c; For this reason, I decided to subclass the QList and to overload the operator+. Here is my code: class List : public QList<int> { public: List() : QList<int>() {} // Add QList + QList friend List operator+(const List& a1, const List& a2); }; List operator+(const List& a1, const List& a2) { List myList; myList.append(a1[0] + a2[0]); myList.append(a1[1] + a2[1]); myList.append(a1[2] + a2[2]); return myList; } int main(int argc, char *argv[]) { QCoreApplication a(argc, argv); List b; b.append(10); b.append(20); b.append(30); List c; c.append(1); c.append(2); c.append(3); List d; d = b + c; List::iterator i; for(i = d.begin(); i != d.end(); ++i) qDebug() << *i; return a.exec(); } The result is correct, but I am not sure whether this is a good approach. I would like to ask whether there is a better solution?
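    One alternative, sketched below, avoids subclassing entirely: Qt's containers are not designed to be used as base classes, so a free helper function (named addElementwise here purely for illustration) keeps the element-wise behaviour without giving QList a surprising operator+, and also copes with lists of any length:

        #include <QList>

        QList<int> addElementwise(const QList<int>& a, const QList<int>& b)
        {
            QList<int> result;
            const int n = qMin(a.size(), b.size());   // handles lists of different lengths
            for (int i = 0; i < n; ++i)
                result.append(a[i] + b[i]);
            return result;
        }

        // usage: QList<int> d = addElementwise(b, c);   // 11, 22, 33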

    Read the article

  • What to Return? Error String, Bool with Error String Out, or Void with Exception

    - by Ranger Pretzel
    I spend most of my time in C# and am trying to figure out which is the best practice for handling an exception and cleanly return an error message from a called method back to the calling method. For example, here is some ActiveDirectory authentication code. Please imagine this Method as part of a Class (and not just a standalone function.) bool IsUserAuthenticated(string domain, string user, string pass, out errStr) { bool authentic = false; try { // Instantiate Directory Entry object DirectoryEntry entry = new DirectoryEntry("LDAP://" + domain, user, pass); // Force connection over network to authenticate object nativeObject = entry.NativeObject; // No exception thrown? We must be good, then. authentic = true; } catch (Exception e) { errStr = e.Message().ToString(); } return authentic; } The advantages of doing it this way are a clear YES or NO that you can embed right in your If-Then-Else statement. The downside is that it also requires the person using the method to supply a string to get the Error back (if any.) I guess I could overload this method with the same parameters minus the "out errStr", but ignoring the error seems like a bad idea since there can be many reasons for such a failure... Alternatively, I could write a method that returns an Error String (instead of using "out errStr") in which a returned empty string means that the user authenticated fine. string AuthenticateUser(string domain, string user, string pass) { string errStr = ""; try { // Instantiate Directory Entry object DirectoryEntry entry = new DirectoryEntry("LDAP://" + domain, user, pass); // Force connection over network to authenticate object nativeObject = entry.NativeObject; } catch (Exception e) { errStr = e.Message().ToString(); } return errStr; } But this seems like a "weak" way of doing things. Or should I just make my method "void" and just not handle the exception so that it gets passed back to the calling function? void AuthenticateUser(string domain, string user, string pass) { // Instantiate Directory Entry object DirectoryEntry entry = new DirectoryEntry("LDAP://" + domain, user, pass); // Force connection over network to authenticate object nativeObject = entry.NativeObject; } This seems the most sane to me (for some reason). Yet at the same time, the only real advantage of wrapping those 2 lines over just typing those 2 lines everywhere I need to authenticate is that I don't need to include the "LDAP://" string. The downside with this way of doing it is that the user has to put this method in a try-catch block. Thoughts? Is there another way of doing this that I'm not thinking of?

    Read the article

  • Double Buffering for Game objects, what's a nice clean generic C++ way?

    - by Gary
    This is in C++. So, I'm starting from scratch writing a game engine for fun and learning from the ground up. One of the ideas I want to implement is to have game object state (a struct) be double-buffered. For instance, I can have subsystems updating the new game object data while a render thread is rendering from the old data by guaranteeing there is a consistent state stored within the game object (the data from last time). After rendering of old and updating of new is finished, I can swap buffers and do it again. Question is, what's a good forward-looking and generic OOP way to expose this to my classes while trying to hide implementation details as much as possible? Would like to know your thoughts and considerations. I was thinking operator overloading could be used, but how do I overload assign for a templated class's member within my buffer class? for instance, I think this is an example of what I want: doublebuffer<Vector3> data; data.x=5; //would write to the member x within the new buffer int a=data.x; //would read from the old buffer's x member data.x+=1; //I guess this shouldn't be allowed If this is possible, I could choose to enable or disable double-buffering structs without changing much code. This is what I was considering: template <class T> class doublebuffer{ T T1; T T2; T * current=T1; T * old=T2; public: doublebuffer(); ~doublebuffer(); void swap(); operator=()?... }; and a game object would be like this: struct MyObjectData{ int x; float afloat; } class MyObject: public Node { doublebuffer<MyObjectData> data; functions... } What I have right now is functions that return pointers to the old and new buffer, and I guess any classes that use them have to be aware of this. Is there a better way?
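    A minimal sketch of one way to express the idea (all names here are invented): rather than routing individual field accesses through overloaded operators, the buffer hands out the old state read-only and the new state writable, and swap() flips them. This trades the data.x syntax in the question for an explicit read()/write() call, which keeps the template simple and generic; T only needs to be default-constructible and copy-free to use:

        template <class T>
        class doublebuffer {
            T buffers[2];
            int front;                   // index of the "old" (readable) buffer
        public:
            doublebuffer() : front(0) {}
            const T& read() const { return buffers[front]; }      // consistent state from last frame
            T&       write()      { return buffers[1 - front]; }  // state being built this frame
            void     swap()       { front = 1 - front; }          // publish the new state
        };

        // usage with the question's MyObjectData:
        //   obj.data.write().x = 5;     // subsystems update the new buffer
        //   int a = obj.data.read().x;  // the render thread reads the old buffer
        //   obj.data.swap();            // once both sides have finished the frame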

    Read the article

  • Instance caching in Objective C

    - by zoul
    Hello! I want to cache the instances of a certain class. The class keeps a dictionary of all its instances and when somebody requests a new instance, the class tries to satisfy the request from the cache first. There is a small problem with memory management though: The dictionary cache retains the inserted objects, so that they never get deallocated. I do want them to get deallocated, so that I had to overload the release method and when the retain count drops to one, I can remove the instance from cache and let it get deallocated. This works, but I am not comfortable mucking around the release method and find the solution overly complicated. I thought I could use some hashing class that does not retain the objects it stores. Is there such? The idea is that when the last user of a certain instance releases it, the instance would automatically disappear from the cache. NSHashTable seems to be what I am looking for, but the documentation talks about “supporting weak relationships in a garbage-collected environment.” Does it also work without garbage collection? Clarification: I cannot afford to keep the instances in memory unless somebody really needs them, that is why I want to purge the instance from the cache when the last “real” user releases it. Better solution: This was on the iPhone, I wanted to cache some textures and on the other hand I wanted to free them from memory as soon as the last real holder released them. The easier way to code this is through another class (let’s call it TextureManager). This class manages the texture instances and caches them, so that subsequent calls for texture with the same name are served from the cache. There is no need to purge the cache immediately as the last user releases the texture. We can simply keep the texture cached in memory and when the device gets short on memory, we receive the low memory warning and can purge the cache. This is a better solution, because the caching stuff does not pollute the Texture class, we do not have to mess with release and there is even a higher chance for cache hits. The TextureManager can be abstracted into a ResourceManager, so that it can cache other data, not only textures.

    Read the article

  • dynamic module creation

    - by intuited
    I'd like to dynamically create a module from a dictionary, and I'm wondering if adding an element to sys.modules is really the best way to do this. EG context = { a: 1, b: 2 } import types test_context_module = types.ModuleType('TestContext', 'Module created to provide a context for tests') test_context_module.__dict__.update(context) import sys sys.modules['TestContext'] = test_context_module My immediate goal in this regard is to be able to provide a context for timing test execution: import timeit timeit.Timer('a + b', 'from TestContext import *') It seems that there are other ways to do this, since the Timer constructor takes objects as well as strings. I'm still interested in learning how to do this though, since a) it has other potential applications; and b) I'm not sure exactly how to use objects with the Timer constructor; doing so may prove to be less appropriate than this approach in some circumstances. EDITS/REVELATIONS/PHOOEYS/EUREKAE: I've realized that the example code relating to running timing tests won't actually work, because import * only works at the module level, and the context in which that statement is executed is that of a function in the testit module. In other words, the globals dictionary used when executing that code is that of main, since that's where I was when I wrote the code in the interactive shell. So that rationale for figuring this out is a bit botched, but it's still a valid question. I've discovered that the code run in the first set of examples has the undesirable effect that the namespace in which the newly created module's code executes is that of the module in which it was declared, not its own module. This is like way weird, and could lead to all sorts of unexpected rattlesnakeic sketchiness. So I'm pretty sure that this is not how this sort of thing is meant to be done, if it is in fact something that the Guido doth shine upon. The similar-but-subtly-different case of dynamically loading a module from a file that is not in python's include path is quite easily accomplished using imp.load_source('NewModuleName', 'path/to/module/module_to_load.py'). This does load the module into sys.modules. However this doesn't really answer my question, because really, what if you're running python on an embedded platform with no filesystem? I'm battling a considerable case of information overload at the moment, so I could be mistaken, but there doesn't seem to be anything in the imp module that's capable of this. But the question, essentially, at this point is how to set the global (ie module) context for an object. Maybe I should ask that more specifically? And at a larger scope, how to get Python to do this while shoehorning objects into a given module?

    Read the article

  • What is an overloaded operator in c++?

    - by Jeff
    I realize this is a basic question but I have searched online, been to cplusplus.com, read through my book, and I can't seem to grasp the concept of overloaded operators. A specific example from cplusplus.com is: // vectors: overloading operators example #include <iostream> using namespace std; class CVector { public: int x,y; CVector () {}; CVector (int,int); CVector operator + (CVector); }; CVector::CVector (int a, int b) { x = a; y = b; } CVector CVector::operator+ (CVector param) { CVector temp; temp.x = x + param.x; temp.y = y + param.y; return (temp); } int main () { CVector a (3,1); CVector b (1,2); CVector c; c = a + b; cout << c.x << "," << c.y; return 0; } from http://www.cplusplus.com/doc/tutorial/classes2/ but reading through it I'm still not understanding them at all. I just need a basic example of the point of the overloaded operator (which I assume is the "CVector CVector::operator+ (CVector param)"). There's also this example from wikipedia: Time operator+(const Time& lhs, const Time& rhs) { Time temp = lhs; temp.seconds += rhs.seconds; if (temp.seconds >= 60) { temp.seconds -= 60; temp.minutes++; } temp.minutes += rhs.minutes; if (temp.minutes >= 60) { temp.minutes -= 60; temp.hours++; } temp.hours += rhs.hours; return temp; } from "http://en.wikipedia.org/wiki/Operator_overloading" The current assignment I'm working on I need to overload a ++ and a -- operator. Thanks in advance for the information and sorry about the somewhat vague question, unfortunately I'm just not sure on it at all.
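    Since the assignment mentioned is to overload ++ and --, here is a small self-contained sketch (the Counter class is invented for illustration) showing both the prefix and postfix forms; the dummy int parameter is what marks the postfix version:

        #include <iostream>

        class Counter {
            int value;
        public:
            explicit Counter(int v = 0) : value(v) {}
            int get() const { return value; }

            Counter& operator++()    { ++value; return *this; }                     // prefix:  ++c
            Counter  operator++(int) { Counter old(*this); ++value; return old; }   // postfix: c++
            Counter& operator--()    { --value; return *this; }                     // prefix:  --c
            Counter  operator--(int) { Counter old(*this); --value; return old; }   // postfix: c--
        };

        int main() {
            Counter c(5);
            ++c;                                                   // c is now 6
            Counter before = c++;                                  // before holds 6, c holds 7
            std::cout << before.get() << " " << c.get() << "\n";   // prints "6 7"
            --c;
            std::cout << c.get() << "\n";                          // prints "6"
        }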

    Read the article

  • overloading new/delete problem

    - by hidayat
    This is my scenario: I'm trying to overload new and delete globally. I have written my allocator class in a file called allocator.h, and what I am trying to achieve is that if a file includes this header file, my version of new and delete should be used. So in the header file "allocator.h" I have declared the two functions extern void* operator new(std::size_t size); extern void operator delete(void *p, std::size_t size); In the same header file I have a class that does all the allocator stuff, class SmallObjAllocator { ... }; I want to call this class from the new and delete functions and I would like the class to be static, so I have done this: template<unsigned dummy> struct My_SmallObjectAllocatorImpl { static SmallObjAllocator myAlloc; }; template<unsigned dummy> SmallObjAllocator My_SmallObjectAllocatorImpl<dummy>::myAlloc(DEFAULT_CHUNK_SIZE, MAX_OBJ_SIZE); typedef My_SmallObjectAllocatorImpl<0> My_SmallObjectAllocator; and in the cpp file it looks like this: allocator.cc void* operator new(std::size_t size) { std::cout << "using my new" << std::endl; if(size > MAX_OBJ_SIZE) return malloc(size); else return My_SmallObjectAllocator::myAlloc.allocate(size); } void operator delete(void *p, std::size_t size) { if(size > MAX_OBJ_SIZE) free(p); else My_SmallObjectAllocator::myAlloc.deallocate(p, size); } The problem is when I try to call the constructor for the class SmallObjAllocator, which is a static object. For some reason the compiler is calling my overloaded function new when initializing it. It then tries to use My_SmallObjectAllocator::myAlloc.deallocate(p, size); which is not defined, so the program crashes. So why is the compiler calling new when I define a static object? And how can I solve it?
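    One commonly used way around this kind of static initialization order problem is the construct-on-first-use idiom, sketched here: the allocator becomes a function-local static, so it is built the first time operator new actually needs it rather than as a namespace-scope static that may be initialized after (or by way of) the first allocation. SmallObjAllocator, MAX_OBJ_SIZE and DEFAULT_CHUNK_SIZE are the question's own names; the operator signatures mirror the question rather than the full set of replaceable forms:

        #include <cstddef>
        #include <cstdlib>

        static SmallObjAllocator& get_allocator()
        {
            // built on first call; note that pre-C++11 this is not thread-safe,
            // and the constructor itself must not rely on the overloaded operator new
            static SmallObjAllocator alloc(DEFAULT_CHUNK_SIZE, MAX_OBJ_SIZE);
            return alloc;
        }

        void* operator new(std::size_t size)
        {
            if (size > MAX_OBJ_SIZE)
                return std::malloc(size);
            return get_allocator().allocate(size);
        }

        void operator delete(void* p, std::size_t size)
        {
            if (size > MAX_OBJ_SIZE)
                std::free(p);
            else
                get_allocator().deallocate(p, size);
        }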

    Read the article

  • Passing custom info to mongrel_rails start

    - by whaka
    One thing I really don't understand is how I can pass custom start-up options to a mongrel instance. I see that a common approach is the use environment variables, but in my environment this is not going to work because my rails application serves many different clients. Much code is shared between clients, but there are also many differences which I implement by subclassing controllers and views to overload or extend existing features or introduce new ones. To make this all work, I simply add the paths to client specific modules the module load path ($:). In order to start the application for a particular client, I could now use an environment variable like say, TARGET=AMAZONE. Unfortunately, on some systems I'm running multiple mongrel clusters, each cluster serving a different client. Some of these systems run under Windows and to start mongrel I installed mongrel_services. Clearly, this makes my environment variable unsuitable. Passing this extra bit of data to the application is proving to be a real challenge. For a start, mongrel_rails service_install will reject any [custom] command line parameters that aren't documented. I'm not too concerned as installing the services using the install program is trivial. However, even if I manage to install mongrel_services such that when run it passes the custom command line option --target to mongrel_rails start, I get an error because mongrel_rails doesn't recognize the switch. So here were the things I looked at: Pass an extra parameter: mongrel_rails start --target XYZ ... use a config file and add target:XYZ, then do: mongrel_rails start -C x:\myapp\myconfig.yml modify the file: Ruby\lib\ruby\gems\1.8\gems\mongrel-1.1.5-x86-mswin32-60\lib\mongrel\command.rb Perhaps I can use the --script option, but all docs that I found on it were for Unix 1 and 2 simply don't work. I played with 4 but never managed it to do anything. So I had no choice but to go with 3. While it is relatively simple, I hate changing ruby library code. Particularly disappointing is that 2 doesn't work. I mean what is so unreasonable about adding other [custom] options in the config file? Actually I think this is a fundamental piece that is missing in rails. Somehow, the application should be able to register and access command line arguments it expects. If anybody has a good idea how to do this more elegantly using the current infrastructure, I have a chocolate fish to give away!!!

    Read the article

  • C++ stream as a parameter when overloading operator<<

    - by TheOm3ga
    I'm trying to write my own logging class and use it as a stream: logger L; L << "whatever" << std::endl; This is the code I started with: #include <iostream> using namespace std; class logger{ public: template <typename T> friend logger& operator <<(logger& log, const T& value); }; template <typename T> logger& operator <<(logger& log, T const & value) { // Here I'd output the values to a file and stdout, etc. cout << value; return log; } int main(int argc, char *argv[]) { logger L; L << "hello" << '\n' ; // This works L << "bye" << "alo" << endl; // This doesn't work return 0; } But I was getting an error when trying to compile, saying that there was no definition for operator<<: pruebaLog.cpp:31: error: no match for ‘operator<<’ in ‘operator<< [with T = char [4]](((logger&)((logger*)operator<< [with T = char [4]](((logger&)(& L)), ((const char (&)[4])"bye")))), ((const char (&)[4])"alo")) << std::endl’ So, I've been trying to overload operator<< to accept this kind of stream, but it's driving me mad. I don't know how to do it. I've been looking at, for instance, the definition of std::endl in the ostream header file and have written a function with this header: logger& operator <<(logger& log, const basic_ostream<char,char_traits<char> >& (*s)(basic_ostream<char,char_traits<char> >&)) But no luck. I've tried the same using templates instead of directly using char, and also tried simply using "const ostream& os", and nothing. Another thing that bugs me is that, in the error output, the first argument for operator<< changes: sometimes it's a reference to a pointer, sometimes it looks like a double reference...
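    For reference, the reason '\n' works but std::endl does not is that std::endl is a function template, so the generic operator<< cannot deduce it. A small sketch of a logger that also accepts manipulators by adding one non-template overload taking a plain std::ostream manipulator function pointer (member operators are used here just to keep the example short, and the file output is left out):

        #include <iostream>

        class logger {
        public:
            template <typename T>
            logger& operator<<(const T& value) {
                std::cout << value;          // here: also write to a file, etc.
                return *this;
            }

            // overload for manipulators such as std::endl and std::flush
            logger& operator<<(std::ostream& (*manip)(std::ostream&)) {
                manip(std::cout);
                return *this;
            }
        };

        int main() {
            logger L;
            L << "bye" << " " << "alo" << std::endl;   // now compiles
        }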

    Read the article

  • IList<T> and IReadOnlyList<T>

    - by Safak Gür
    My problem is that I have a method that can take a collection as parameter that, Has a Count property Has an integer indexer (get-only) And I don't know what type should this parameter be. I would choose IList<T> before .NET 4.5 since there is no other indexable collection interface for this and arrays implement it, which is a big plus. But .NET 4.5 introduces the new IReadOnlyList<T> interface and I want my method to support that, too. How can I write this method to support both IList<T> and IReadOnlyList<T> without violating the basic principles like DRY? Can I convert IList<T> to IReadOnlyList<T> somehow in an overload? What is the way to go here? Edit: Daniel's answer gave me some pretty ideas, I guess I'll go with this: public void Do<T>(IList<T> collection) { DoInternal(collection, collection.Count, i => collection[i]); } public void Do<T>(IReadOnlyList<T> collection) { DoInternal(collection, collection.Count, i => collection[i]); } private void DoInternal<T>(IEnumerable<T> collection, int count, Func<int, T> indexer) { // Stuff } Or I could just accept a ReadOnlyList<T> and provide an helper like this: public static class CollectionEx { public static IReadOnlyList<T> AsReadOnly<T>(this IList<T> collection) { if (collection == null) throw new ArgumentNullException("collection"); return new ReadOnlyWrapper<T>(collection); } private sealed class ReadOnlyWrapper<T> : IReadOnlyList<T> { private readonly IList<T> _Source; public int Count { get { return _Source.Count; } } public T this[int index] { get { return _Source[index]; } } public ReadOnlyWrapper(IList<T> source) { _Source = source; } public IEnumerator<T> GetEnumerator() { return _Source.GetEnumerator(); } IEnumerator IEnumerable.GetEnumerator() { return GetEnumerator(); } } } Then I could call Do(array.AsReadOnly())

    Read the article

  • Detect if class has overloaded function fails on Comeau compiler

    - by Frank
    Hi Everyone, I'm trying to use SFINAE to detect if a class has an overloaded member function that takes a certain type. The code I have seems to work correctly in Visual Studio and GCC, but does not compile using the Comeau online compiler. Here is the code I'm using: #include <stdio.h> //Comeau doesnt' have boost, so define our own enable_if_c template<bool value> struct enable_if_c { typedef void type; }; template<> struct enable_if_c< false > {}; //Class that has the overloaded member function class TestClass { public: void Func(float value) { printf( "%f\n", value ); } void Func(int value) { printf( "%i\n", value ); } }; //Struct to detect if TestClass has an overloaded member function for type T template<typename T> struct HasFunc { template<typename U, void (TestClass::*)( U )> struct SFINAE {}; template<typename U> static char Test(SFINAE<U, &TestClass::Func>*); template<typename U> static int Test(...); static const bool Has = sizeof(Test<T>(0)) == sizeof(char); }; //Use enable_if_c to only allow the function call if TestClass has a valid overload for T template<typename T> typename enable_if_c<HasFunc<T>::Has>::type CallFunc(TestClass &test, T value) { test.Func( value ); } int main() { float value1 = 0.0f; int value2 = 0; TestClass testClass; CallFunc( testClass, value1 ); //Should call TestClass::Func( float ) CallFunc( testClass, value2 ); //Should call TestClass::Func( int ) } The error message is: no instance of function template "CallFunc" matches the argument list. It seems that HasFunc::Has is false for int and float when it should be true. Is this a bug in the Comeau compiler? Am I doing something that's not standard? And if so, what do I need to do to fix it?
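    If moving to a compiler with C++11 support is an option, here is a sketch of an alternative detection that sidesteps taking the address of the overload set entirely: it tests whether the call expression itself is well-formed. TestClass is the question's class; everything else in the sketch is illustrative:

        #include <utility>

        template <typename T>
        struct HasFuncFor {
            template <typename C>
            static char probe(decltype(std::declval<C&>().Func(std::declval<T>()))*);
            template <typename C>
            static int probe(...);

            // true when TestClass::Func(T) resolves via ordinary overload resolution
            static const bool value = sizeof(probe<TestClass>(0)) == sizeof(char);
        };

        // HasFuncFor<float>::value and HasFuncFor<int>::value are true;
        // HasFuncFor<const char*>::value is false.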

    Read the article

  • C++ Regarding cin.ignore()

    - by user1578897
    I hope someone can help me fix my code, as it's so buggy: sometimes it works, sometimes it doesn't, so let me explain. The text file data is as below: Line3D, [70, -120, -3], [-29, 1, 268] Line3D, [25, -69, -33], [-2, -41, 58] To read the above lines I use the following: char buffer[30]; cout << "Please enter filename: "; cin.ignore(); getline(cin,filename); readFile.open(filename.c_str()); //if successfully open if(readFile.is_open()) { //record counter set to 0 numberOfRecords = 0; while(readFile.good()) { //input stream get line by line readFile.getline(buffer,20,','); if(strstr(buffer,"Point3D")) { Point3D point3d_tmp; readFile>>point3d_tmp; // and so on... Then I did an overload on the ifstream for Line3D: ifstream& operator>>(ifstream &input,Line3D &line3d) { int x1,y1,z1,x2,y2,z2; //get x1 input.ignore(2); input>>x1; //get y1 input.ignore(); input>>y1; //get z1 input.ignore(); input>>z1; //get x2 input.ignore(4); input>>x2; //get y2 input.ignore(); input>>y2; //get z2 input.ignore(); input>>z2; input.ignore(2); Point3D pt1(x1,y1,z1); Point3D pt2(x2,y2,z2); line3d.setPt1(pt1); line3d.setPt2(pt2); line3d.setLength(); } But the issue is that reading the records works sometimes and sometimes it doesn't. What I mean is, if at this point I add a cout: cout << x1 << y1 << z1; cout << x2 << y2 << z2; //its works! Point3D pt1(x1,y1,z1); Point3D pt2(x2,y2,z2); line3d.setPt1(pt1); line3d.setPt2(pt2); line3d.setLength(); it works, but if I take away the cout it doesn't work. How do I change my cin.ignore() so the data can be handled properly, considering the number range is -999 to 999?
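    A sketch of a less position-dependent extractor (using std::istream rather than std::ifstream, with a small helper invented here): instead of counting characters with ignore(2)/ignore(4), skip anything that cannot start a number before each read, which copes with values anywhere in the -999 to 999 range regardless of how many digits or signs they have. It assumes the caller has already consumed the leading "Line3D," token, as the question's getline(buffer, 20, ',') does:

        #include <cctype>
        #include <istream>
        #include <limits>

        static std::istream& skip_to_number(std::istream& in)
        {
            int c;
            while ((c = in.peek()) != EOF && !std::isdigit(c) && c != '-' && c != '+')
                in.get();
            return in;
        }

        std::istream& operator>>(std::istream& input, Line3D& line3d)
        {
            int v[6];
            for (int i = 0; i < 6; ++i) {
                skip_to_number(input);
                if (!(input >> v[i]))        // leaves the stream in a fail state on bad data
                    return input;
            }
            // discard the rest of the record (the closing bracket and newline)
            input.ignore(std::numeric_limits<std::streamsize>::max(), '\n');

            line3d.setPt1(Point3D(v[0], v[1], v[2]));
            line3d.setPt2(Point3D(v[3], v[4], v[5]));
            line3d.setLength();
            return input;
        }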

    Read the article

  • Linux not buffering block I/O when the device is not "in use" (i.e. mounted)

    - by Radek Hladík
    I am installing new server and I've found an interesting issue. The server is running Fedora 19 (3.11.7-200.fc19.x86_64 kernel) and is supposed to host a few KVM/Qemu virtual servers (mail server, file server, etc..). The HW is Intel(R) Xeon(R) CPU 5160 @ 3.00GHz with 16GB RAM. One of the most important features will be Samba server and we have decided to make it as virtual machine with almost direct access to the disks. So the real HDD is cached on SSD (via bcache) then raided with md and the final device is exported into the virtual machine via virtio. The virtual machine is again Fedora 19 with the same kernel. One important topic to find out is whether the virtualization layer will not introduce high overload into disk I/Os. So far I've been able to get up to 180MB/s in VM and up to 220MB/s on real HW (on the SSD disk). I am still not sure why the overhead is so big but it is more than the network can handle so I do not care so much. The interesting thing is that I've found that the disk reads are not buffered in the VM unless I create and mount FS on the disk or I use the disks somehow. Simply put: Lets do dd to read disk for the first time (the /dev/vdd is an old Raptor disk 70MB/s is its real speed): [root@localhost ~]# dd if=/dev/vdd of=/dev/null bs=256k count=10000 ; cat /proc/meminfo | grep Buffers 2621440000 bytes (2.6 GB) copied, 36.8038 s, 71.2 MB/s Buffers: 14444 kB Rereading the data shows that they are cached somewhere but not in buffers of the VM. Also the speed increased to "only" 500MB/s. The VM has 4GB of RAM (more that the test file) [root@localhost ~]# dd if=/dev/vdd of=/dev/null bs=256k count=10000 ; cat /proc/meminfo | grep Buffers 2621440000 bytes (2.6 GB) copied, 5.16016 s, 508 MB/s Buffers: 14444 kB [root@localhost ~]# dd if=/dev/vdd of=/dev/null bs=256k count=10000 ; cat /proc/meminfo | grep Buffers 2621440000 bytes (2.6 GB) copied, 5.05727 s, 518 MB/s Buffers: 14444 kB Now lets mount the FS on /dev/vdd and try the dd again: [root@localhost ~]# mount /dev/vdd /mnt/tmp [root@localhost ~]# dd if=/dev/vdd of=/dev/null bs=256k count=10000 ; cat /proc/meminfo | grep Buffers 2621440000 bytes (2.6 GB) copied, 4.68578 s, 559 MB/s Buffers: 2574592 kB [root@localhost ~]# dd if=/dev/vdd of=/dev/null bs=256k count=10000 ; cat /proc/meminfo | grep Buffers 2621440000 bytes (2.6 GB) copied, 1.50504 s, 1.7 GB/s Buffers: 2574592 kB While the first read was the same, all 2.6GB got buffered and the next read was at 1.7GB/s. And when I unmount the device: [root@localhost ~]# umount /mnt/tmp [root@localhost ~]# cat /proc/meminfo | grep Buffers Buffers: 14452 kB [root@localhost ~]# dd if=/dev/vdd of=/dev/null bs=256k count=10000 ; cat /proc/meminfo | grep Buffers 2621440000 bytes (2.6 GB) copied, 5.10499 s, 514 MB/s Buffers: 14468 kB The bcache was disabled while testing and the results are same on faster (newer) HDDs and on SSD (except for the initial read speed of course). To sum it up. When I read from the device via dd first time, it gets read from the disk. Next time I reread it gets cached in the host but not in the guest (thats actually the same issue, more on that later). When I mount the filesystem but try to read the device directly it gets cached in VM (via buffers). As soon as I stop "using" it, buffers are discarded and the device is not cached anymore in the VM. When I looked into buffers value on the host I realized that the situation is the same. The block I/O gets buffered only when the disk is in use, in this case it means "exported to a VM". 
    On the host, after all the measurements were done: 3165552 buffers. On the host, after the VM shutdown: 119176 buffers. I know it is not important as the disks will be mounted all the time, but I am curious and I would like to know why it works like this.

    Read the article

  • Configuring Wireless on Cisco 851W

    - by Aequitarum Custos
    Either a powersurge or something caused our router's configuration to get wiped, and our last backup was before the wireless network was setup. We have not been able to reconfigure the wireless since then, so was curious if anyone here would be able to determine what configuration is needed. We are using a Cisco 851W running 12.4(15)T9 We would like to use WPA encryption, and have it on the same network as the rest of the office network. Config file is below: User Access Verification Building configuration... Current configuration : 3857 bytes ! version 12.4 no service pad service timestamps debug datetime msec service timestamps log datetime msec service password-encryption no service dhcp ! hostname BOB ! boot-start-marker boot-end-marker ! enable secret 5 ********************* ! no aaa new-model ! ! dot11 syslog no ip source-route ! ! ip cef no ip bootp server ip domain name BOB.com ip name-server 61.11.1.1 ip name-server 61.11.1.2 ! ! ! username BOBB privilege 15 password 7 ************************* ! ! archive log config hidekeys ! ! ip tcp synwait-time 10 ! ! ! interface FastEthernet0 no cdp enable ! interface FastEthernet1 no cdp enable ! interface FastEthernet2 no cdp enable ! interface FastEthernet3 no cdp enable ! interface FastEthernet4 description WAN Connection$ETH-WAN$ ip address 61.11.1.14 255.255.254.0 ip nat outside ip virtual-reassembly duplex auto speed auto no cdp enable ! interface Dot11Radio0 no ip address shutdown ! encryption mode ciphers tkip speed basic-1.0 basic-2.0 basic-5.5 6.0 9.0 basic-11.0 12.0 18.0 24.0 36.0 48.0 54.0 station-role root no cdp enable ! interface Dot11Radio0.1 encapsulation dot1Q 1 native no cdp enable bridge-group 1 bridge-group 1 subscriber-loop-control bridge-group 1 spanning-disabled bridge-group 1 block-unknown-source no bridge-group 1 source-learning no bridge-group 1 unicast-flooding ! interface Dot11Radio0.20 ip access-group Guest-ACL in no cdp enable ! interface Vlan1 description Internal Network ip address 192.168.2.60 255.255.255.0 ip nat inside ip nat enable ip virtual-reassembly ! ip forward-protocol nd ip route 0.0.0.0 0.0.0.0 61.11.2.14 ! ip http server no ip http secure-server ip nat inside source list 1 interface FastEthernet4 overload ! ip access-list extended Guest-ACL deny ip any 192.0.0.0 0.0.0.255 permit ip any any ! access-list 1 permit 192.0.0.0 0.0.0.255 access-list 100 remark SDM_ACL Category=2 access-list 100 permit ip 192.0.0.0 0.0.0.255 any no cdp run ! control-plane ! !

    Read the article

  • ASP.NET MVC 3 Hosting :: New Features in ASP.NET MVC 3

    - by mbridge
    Razor View Engine The Razor view engine is a new view engine option for ASP.NET MVC that supports the Razor templating syntax. The Razor syntax is a streamlined approach to HTML templating designed with the goal of being a code driven minimalist templating approach that builds on existing C#, VB.NET and HTML knowledge. The result of this approach is that Razor views are very lean and do not contain unnecessary constructs that get in the way of you and your code. ASP.NET MVC 3 Preview 1 only supports C# Razor views which use the .cshtml file extension. VB.NET support will be enabled in later releases of ASP.NET MVC 3. For more information and examples, see Introducing “Razor” – a new view engine for ASP.NET on Scott Guthrie’s blog. Dynamic View and ViewModel Properties A new dynamic View property is available in views, which provides access to the ViewData object using a simpler syntax. For example, imagine two items are added to the ViewData dictionary in the Index controller action using code like the following: public ActionResult Index() {          ViewData["Title"] = "The Title";          ViewData["Message"] = "Hello World!"; } Those properties can be accessed in the Index view using code like this: <h2>View.Title</h2> <p>View.Message</p> There is also a new dynamic ViewModel property in the Controller class that lets you add items to the ViewData dictionary using a simpler syntax. Using the previous controller example, the two values added to the ViewData dictionary can be rewritten using the following code: public ActionResult Index() {     ViewModel.Title = "The Title";     ViewModel.Message = "Hello World!"; } “Add View” Dialog Box Supports Multiple View Engines The Add View dialog box in Visual Studio includes extensibility hooks that allow it to support multiple view engines, as shown in the following figure: Service Location and Dependency Injection Support ASP.NET MVC 3 introduces improved support for applying Dependency Injection (DI) via Inversion of Control (IoC) containers. ASP.NET MVC 3 Preview 1 provides the following hooks for locating services and injecting dependencies: - Creating controller factories. - Creating controllers and setting dependencies. - Setting dependencies on view pages for both the Web Form view engine and the Razor view engine (for types that derive from ViewPage, ViewUserControl, ViewMasterPage, WebViewPage). - Setting dependencies on action filters. Using a Dependency Injection container is not required in order for ASP.NET MVC 3 to function properly. Global Filters ASP.NET MVC 3 allows you to register filters that apply globally to all controller action methods. Adding a filter to the global filters collection ensures that the filter runs for all controller requests. To register an action filter globally, you can make the following call in the Application_Start method in the Global.asax file: GlobalFilters.Filters.Add(new MyActionFilter()); The source of global action filters is abstracted by the new IFilterProvider interface, which can be registered manually or by using Dependency Injection. This allows you to provide your own source of action filters and choose at run time whether to apply a filter to an action in a particular request. New JsonValueProviderFactory Class The new JsonValueProviderFactory class allows action methods to receive JSON-encoded data and model-bind it to an action-method parameter. This is useful in scenarios such as client templating. 
Client templates enable you to format and display a single data item or set of data items by using a fragment of HTML. ASP.NET MVC 3 lets you connect client templates easily with an action method that both returns and receives JSON data. Support for .NET Framework 4 Validation Attributes and IvalidatableObject The ValidationAttribute class was improved in the .NET Framework 4 to enable richer support for validation. When you write a custom validation attribute, you can use a new IsValid overload that provides a ValidationContext instance. This instance provides information about the current validation context, such as what object is being validated. This change enables scenarios such as validating the current value based on another property of the model. The following example shows a sample custom attribute that ensures that the value of PropertyOne is always larger than the value of PropertyTwo: public class CompareValidationAttribute : ValidationAttribute {     protected override ValidationResult IsValid(object value,              ValidationContext validationContext) {         var model = validationContext.ObjectInstance as SomeModel;         if (model.PropertyOne > model.PropertyTwo) {            return ValidationResult.Success;         }         return new ValidationResult("PropertyOne must be larger than PropertyTwo");     } } Validation in ASP.NET MVC also supports the .NET Framework 4 IValidatableObject interface. This interface allows your model to perform model-level validation, as in the following example: public class SomeModel : IValidatableObject {     public int PropertyOne { get; set; }     public int PropertyTwo { get; set; }     public IEnumerable<ValidationResult> Validate(ValidationContext validationContext) {         if (PropertyOne <= PropertyTwo) {            yield return new ValidationResult(                "PropertyOne must be larger than PropertyTwo");         }     } } New IClientValidatable Interface The new IClientValidatable interface allows the validation framework to discover at run time whether a validator has support for client validation. This interface is designed to be independent of the underlying implementation; therefore, where you implement the interface depends on the validation framework in use. For example, for the default data annotations-based validator, the interface would be applied on the validation attribute. Support for .NET Framework 4 Metadata Attributes ASP.NET MVC 3 now supports .NET Framework 4 metadata attributes such as DisplayAttribute. New IMetadataAware Interface The new IMetadataAware interface allows you to write attributes that simplify how you can contribute to the ModelMetadata creation process. Before this interface was available, you needed to write a custom metadata provider in order to have an attribute provide extra metadata. This interface is consumed by the AssociatedMetadataProvider class, so support for the IMetadataAware interface is automatically inherited by all classes that derive from that class (notably, the DataAnnotationsModelMetadataProvider class). New Action Result Types In ASP.NET MVC 3, the Controller class includes two new action result types and corresponding helper methods. HttpNotFoundResult Action The new HttpNotFoundResult action result is used to indicate that a resource requested by the current URL was not found. The status code is 404. This class derives from HttpStatusCodeResult. 
New Action Result Types

In ASP.NET MVC 3, the Controller class includes two new action result types and corresponding helper methods.

HttpNotFoundResult Action

The new HttpNotFoundResult action result is used to indicate that a resource requested by the current URL was not found. The status code is 404. This class derives from HttpStatusCodeResult. The Controller class includes an HttpNotFound method that returns an instance of this action result type, as shown in the following example:

public ActionResult List(int id) {
    if (id < 0) {
        return HttpNotFound();
    }
    return View();
}

HttpStatusCodeResult Action

The new HttpStatusCodeResult action result is used to set the response status code and description.

Permanent Redirect

The HttpRedirectResult class has a new Boolean Permanent property that is used to indicate whether a permanent redirect should occur. A permanent redirect uses the HTTP 301 status code. Corresponding to this change, the Controller class now has several methods for performing permanent redirects:

- RedirectPermanent
- RedirectToRoutePermanent
- RedirectToActionPermanent

These methods return an instance of HttpRedirectResult with the Permanent property set to true.

Breaking Changes

The order of execution for exception filters has changed for exception filters that have the same Order value. In ASP.NET MVC 2 and earlier, exception filters on the controller with the same Order as those on an action method were executed before the exception filters on the action method. This would typically be the case when exception filters were applied without a specified Order value. In MVC 3, this order has been reversed so that the most specific exception handler executes first. As in earlier versions, if the Order property is explicitly specified, the filters are run in the specified order.

Known Issues

When you are editing a Razor view (CSHTML file), the Go To Controller menu item in Visual Studio will not be available, and there are no code snippets.

    Read the article

  • Parallelism in .NET – Part 10, Cancellation in PLINQ and the Parallel class

    - by Reed
    Many routines are parallelized because they are long-running processes. When writing an algorithm that will run for a long period of time, it's typically a good practice to allow that routine to be cancelled. I previously discussed terminating a parallel loop from within, but have not demonstrated how a routine can be cancelled from the caller's perspective. Cancellation in PLINQ and the Task Parallel Library is handled through a new, unified cooperative cancellation model introduced with .NET 4.0.

Cancellation in .NET 4 is based around a new, lightweight struct called CancellationToken. A CancellationToken is a small, thread-safe value type which is generated via a CancellationTokenSource. There are many goals which led to this design. For our purposes, we will focus on a couple of specific design decisions:

- Cancellation is cooperative. A calling method can request a cancellation, but it's up to the processing routine to terminate – it is not forced.
- Cancellation is consistent. A single method call requests a cancellation on every copied CancellationToken in the routine.

Let's begin by looking at how we can cancel a PLINQ query. Suppose we wanted to provide the option to cancel our query from Part 6:

double min = collection
    .AsParallel()
    .Min(item => item.PerformComputation());

We would rewrite this to allow for cancellation by adding a call to ParallelEnumerable.WithCancellation as follows:

var cts = new CancellationTokenSource();

// Pass cts here to a routine that could,
// in parallel, request a cancellation

try
{
    double min = collection
        .AsParallel()
        .WithCancellation(cts.Token)
        .Min(item => item.PerformComputation());
}
catch (OperationCanceledException e)
{
    // Query was cancelled before it finished
}

Here, if the user calls cts.Cancel() before the PLINQ query completes, the query will stop processing, and an OperationCanceledException will be raised. Be aware, however, that cancellation will not be instantaneous. When cts.Cancel() is called, the query will only stop after the current item.PerformComputation() elements all finish processing. cts.Cancel() will prevent PLINQ from scheduling a new task for a new element, but will not stop items which are currently being processed. This goes back to the first goal I mentioned – cancellation is cooperative. Here, we're requesting the cancellation, but it's up to PLINQ to terminate.
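For completeness, here is a minimal sketch of what the caller's side might look like (the five-second watchdog task below is a hypothetical trigger; any other thread or event handler could call cts.Cancel() in the same way):

var cts = new CancellationTokenSource();

// Hypothetical watchdog: request cancellation if the query runs longer
// than five seconds. Requires System.Threading and System.Threading.Tasks.
Task.Factory.StartNew(() =>
{
    Thread.Sleep(TimeSpan.FromSeconds(5));
    cts.Cancel();
});

try
{
    double min = collection
        .AsParallel()
        .WithCancellation(cts.Token)
        .Min(item => item.PerformComputation());
}
catch (OperationCanceledException)
{
    // Query was cancelled before it finished
}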
If we wanted to allow cancellation to occur within our routine, we would need to change our routine to accept a CancellationToken, and modify it to handle this specific case:

public void PerformComputation(CancellationToken token)
{
    for (int i = 0; i < this.iterations; ++i)
    {
        // Add a check to see if we've been canceled
        // If a cancel was requested, we'll throw here
        token.ThrowIfCancellationRequested();

        // Do our processing now
        this.RunIteration(i);
    }
}

With this overload of PerformComputation, each internal iteration checks to see if a cancellation request was made, and will throw an OperationCanceledException at that point, instead of waiting until the method returns. This is good, since it allows us, as developers, to plan for cancellation, and terminate our routine in a clean, safe state. This is handled by changing our PLINQ query to:

try
{
    double min = collection
        .AsParallel()
        .WithCancellation(cts.Token)
        .Min(item => item.PerformComputation(cts.Token));
}
catch (OperationCanceledException e)
{
    // Query was cancelled before it finished
}

PLINQ is very good about handling this exception, as well. There is a very good chance that multiple items will raise this exception, since the entire purpose of PLINQ is to have multiple items be processed concurrently. PLINQ will take all of the OperationCanceledException instances raised within these methods, and merge them into a single OperationCanceledException in the call stack. This is done internally because we added the call to ParallelEnumerable.WithCancellation. If, however, a different exception is raised by any of the elements, the OperationCanceledException as well as the other Exception will be merged into a single AggregateException.

The Task Parallel Library uses the same cancellation model, as well. Here, we supply our CancellationToken as part of the configuration. The ParallelOptions class contains a property for the CancellationToken. This allows us to cancel a Parallel.For or Parallel.ForEach routine in a very similar manner to our PLINQ query. As an example, we could rewrite our Parallel.ForEach loop from Part 2 to support cancellation by changing it to:

try
{
    var cts = new CancellationTokenSource();
    var options = new ParallelOptions() { CancellationToken = cts.Token };

    Parallel.ForEach(customers, options, customer =>
    {
        // Run some process that takes some time...
        DateTime lastContact = theStore.GetLastContact(customer);
        TimeSpan timeSinceContact = DateTime.Now - lastContact;

        // Check for cancellation here
        options.CancellationToken.ThrowIfCancellationRequested();

        // If it's been more than two weeks, send an email, and update...
        if (timeSinceContact.Days > 14)
        {
            theStore.EmailCustomer(customer);
            customer.LastEmailContact = DateTime.Now;
        }
    });
}
catch (OperationCanceledException e)
{
    // The loop was cancelled
}

Notice that here we use the same approach taken in PLINQ. The Task Parallel Library will automatically handle our cancellation in the same manner as PLINQ, providing a clean, unified model for cancellation of any parallel routine. The TPL performs the same aggregation of the cancellation exceptions as PLINQ, as well, which is why a single exception handler for OperationCanceledException will cleanly handle this scenario. This works because we're using the same CancellationToken provided in the ParallelOptions.
If a different exception was thrown by one thread, or a CancellationToken from a different CancellationTokenSource was used to raise our exception, we would instead receive all of our individual exceptions merged into one AggregateException.
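To make that last point concrete, one possible way to handle both outcomes – shown here only as a minimal sketch, reusing the customers, options, and theStore names assumed in the loop above – is to catch both exception types:

try
{
    Parallel.ForEach(customers, options, customer =>
    {
        // ... the same loop body as above, which may observe the token
        // from ParallelOptions, a different token, or throw outright ...
    });
}
catch (OperationCanceledException)
{
    // Cancelled via the token supplied in ParallelOptions.
}
catch (AggregateException ae)
{
    foreach (var ex in ae.Flatten().InnerExceptions)
    {
        if (ex is OperationCanceledException)
        {
            // A worker observed a cancellation from a different token source.
        }
        else
        {
            // A genuine failure thrown by one of the loop bodies.
        }
    }
}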

    Read the article

  • Parallelism in .NET – Part 5, Partitioning of Work

    - by Reed
    When parallelizing any routine, we start by decomposing the problem. Once the problem is understood, we need to break our work into separate tasks, so each task can be run on a different processing element. This process is called partitioning.

Partitioning our tasks is a challenging feat. There are opposing forces at work here: too many partitions add overhead, while too few partitions leave processors idle. Striking the right balance between the two extremes is the goal for which we should aim. Luckily, the Task Parallel Library automatically handles much of this process. However, there are situations where the default partitioning may not be appropriate, and knowledge of our routines may allow us to guide the framework to making better decisions.

First off, I'd like to say that this is a more advanced topic. It is perfectly acceptable to use the parallel constructs in the framework without considering the partitioning taking place. The default behavior in the Task Parallel Library is very well-behaved, even for unusual workloads, and should rarely be adjusted. I have found few situations where the default partitioning behavior in the TPL is not as good as or better than my own hand-written partitioning routines, and recommend using the defaults unless there is a strong, measured, and profiled reason to avoid them. However, understanding partitioning, and how the TPL partitions your data, helps in understanding the proper usage of the TPL.

I indirectly mentioned partitioning while discussing aggregation. Typically, our systems will have a limited number of Processing Elements (PE), which is the terminology used for hardware capable of processing a stream of instructions. For example, in a standard Intel i7 system, there are four processor cores, each of which has two potential hardware threads due to Hyperthreading. This gives us a total of 8 PEs – theoretically, we can have up to eight operations occurring concurrently within our system.

In order to fully exploit this power, we need to partition our work into Tasks. A task is a simple set of instructions that can be run on a PE. Ideally, we want to have at least one task per PE in the system, since fewer tasks means that some of our processing power will be sitting idle. A naive implementation would be to just take our data, and partition it with one element in our collection being treated as one task. When we loop through our collection in parallel using this approach, we'd just process one item at a time, then reuse that thread to process the next, and so on. There's a flaw in this approach, however: it will tend to be slower than necessary, often slower than processing the data serially.

The problem is that there is overhead associated with each task. When we take a simple foreach loop body and implement it using the TPL, we add overhead. First, we change the body from a simple statement to a delegate, which must be invoked. In order to invoke the delegate on a separate thread, the delegate gets added to the ThreadPool's current work queue, and the ThreadPool must pull this off the queue, assign it to a free thread, then execute it. If our collection had one million elements, the overhead of trying to spawn one million tasks would destroy our performance.

The answer, here, is to partition our collection into groups, and have each group of elements treated as a single task.
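To make the chunking idea concrete, here is a rough sketch (the data array and ProcessElement method are hypothetical placeholders) that uses the built-in range partitioner to hand each worker a block of indices rather than a single element:

// Each Tuple<int, int> range is a single unit of work, so the per-task
// overhead is paid once per chunk rather than once per element.
// Requires System.Collections.Concurrent and System.Threading.Tasks.
var rangePartitioner = Partitioner.Create(0, data.Length);

Parallel.ForEach(rangePartitioner, range =>
{
    for (int i = range.Item1; i < range.Item2; ++i)
    {
        ProcessElement(data[i]); // hypothetical per-element work
    }
});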
By adding a partitioning step, we can break our total work into small enough tasks to keep our processors busy, but large enough tasks to avoid overburdening the ThreadPool. There are two clear, opposing goals here: always try to keep each processor working, but also try to keep the individual partitions as large as possible.

When using Parallel.For, the partitioning is always handled automatically. At first, partitioning here seems simple. A naive implementation would merely split the total element count up by the number of PEs in the system, and assign a chunk of data to each processor. Many hand-written partitioning schemes work in exactly this manner. This perfectly balanced, static partitioning scheme works very well if the amount of work is constant for each element. However, this is rarely the case. Often, the length of time required to process an element grows as we progress through the collection, especially if we're doing numerical computations. In this case, the first PEs will finish early, and sit idle waiting on the last chunks to finish. Sometimes, work can decrease as we progress, since previous computations may be used to speed up later computations. In this situation, the first chunks will be working far longer than the last chunks. In order to balance the workload, many implementations create many small chunks, and reuse threads. This adds overhead, but does provide better load balancing, which in turn improves performance.

The Task Parallel Library handles this more elaborately. Chunks are determined at runtime, and start small. They grow slowly over time, getting larger and larger. This tends to lead to near-optimum load balancing, even in odd cases such as increasing or decreasing workloads.

Parallel.ForEach is a bit more complicated, however. When working with a generic IEnumerable<T>, the number of items required for processing is not known in advance, and must be discovered at runtime. In addition, since we don't have direct access to each element, the scheduler must enumerate the collection to process it. Since IEnumerable<T> is not thread safe, it must lock on elements as it enumerates, create temporary collections for each chunk to process, and schedule this out. By default, it uses a partitioning method similar to the one described above. We can see this directly by looking at the Visual Partitioning sample shipped by the Task Parallel Library team, and available as part of the Samples for Parallel Programming. When we run the sample, with four cores and the default, Load Balancing partitioning scheme, we see this:

The colored bands represent each processing core. You can see that, when we started (at the top), we begin with very small bands of color. As the routine progresses through the Parallel.ForEach, the chunks get larger and larger (seen by larger and larger stripes).

Most of the time, this is fantastic behavior, and most likely will outperform any custom-written partitioning. However, if your routine is not scaling well, it may be due to a failure in the default partitioning to handle your specific case. With prior knowledge about your work, it may be possible to partition data more meaningfully than the default Partitioner.

There is the option to use an overload of Parallel.ForEach which takes a Partitioner<T> instance. The Partitioner<T> class is an abstract class which allows for both static and dynamic partitioning.
By overriding Partitioner<T>.SupportsDynamicPartitions, you can specify whether a dynamic approach is available. If not, your custom Partitioner<T> subclass would override GetPartitions(int), which returns a list of IEnumerator<T> instances. These are then used by the Parallel class to split work up amongst processors. When dynamic partitioning is available, GetDynamicPartitions() is used, which returns an IEnumerable<T> for each partition. If you do decide to implement your own Partitioner<T>, keep in mind the goals and tradeoffs of different partitioning strategies, and design appropriately.

The Samples for Parallel Programming project includes a ChunkPartitioner class in the ParallelExtensionsExtras project. This provides example code for implementing your own, custom allocation strategies, including a static allocator of a given chunk size. Although implementing your own Partitioner<T> is possible, as I mentioned above, this is rarely required or useful in practice. The default behavior of the TPL is very good, often better than any hand-written partitioning strategy.
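If you do reach for the Partitioner<T> overload, a minimal sketch of the call site might look like the following (the items list and DoWork method are hypothetical placeholders); Partitioner.Create over an IList<T> with the second argument set to true asks for the built-in load-balancing partitioner:

// Pass a partitioner, rather than the raw collection, to Parallel.ForEach.
// Requires System.Collections.Concurrent and System.Threading.Tasks.
var partitioner = Partitioner.Create(items, true); // true = load balancing

Parallel.ForEach(partitioner, item =>
{
    DoWork(item); // hypothetical per-element work
});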

    Read the article
