Search Results

Search found 1037 results on 42 pages for 'lambda calculus'.

Page 18 of 42

  • How to type all the math, stat, greek, equations EFFICIENTLY in libreoffice?

    - by kernel_panic
    I am preparing a physics report that is full of Greek letters, statistics, and calculus notation. I know there is an existing question about how to insert a Greek symbol, but my problem is that I can't fiddle with a drop-down/scroll list for every symbol (my paper is FULL of them). Is there a way to do something with my keyboard layout and turn it into something like the one Tony Stark uses in Iron Man (I am not kidding)? I have literally spent half the day on this fiddle-work and have completed just 2 sheets.

    Read the article

  • Reason for perpetual dynamic DNS updates?

    - by mad_vs
    I'm using dynamic DNS (the "adult" version from RFC 2136, not à la DynDNS), and for a while now I've been seeing my laptops with MacOS 10.6.x churning out updates about every 10 seconds. And seemingly redundant updates at that, as the IP is more or less stable (consumer broadband). I don't remember seeing that frequency in the (distant...) past. The lowest time-to-live that MacOS pushes on the entries is 2 minutes, so I have no clue what's going on.

    ...
    Jan 12 13:17:18 lambda named[18683]: info: client 84.208.X.X#48715: updating zone 'dynamic.foldr.org/IN': deleting rrset at 'rCosinus._afpovertcp._tcp.dynamic.foldr.org' SRV
    Jan 12 13:17:18 lambda named[18683]: info: client 84.208.X.X#48715: updating zone 'dynamic.foldr.org/IN': adding an RR at 'rCosinus._afpovertcp._tcp.dynamic.foldr.org' SRV
    Jan 12 13:17:26 lambda named[18683]: info: client 84.208.X.X#48715: updating zone 'dynamic.foldr.org/IN': deleting rrset at 'rcosinus.dynamic.foldr.org' AAAA
    ...

    Additionally, I can't find out what triggers the updates on the laptop-side. Is this a known problem, and how would I go about debugging it? One of the machines is freshly purchased and installed. The only "major" change was installation of the Miredo client for IPv6/Teredo, but even disabling it didn't make a change (except that AAAA records are no longer published).

    Read the article

  • How can I sort just part of a list using vb .net?

    - by Eyal
    In VB .NET, the generic Lists have a Sort function that accepts an IComparer or a Comparison. I'd like to sort just part of a list, ideally by specifying the start index, the count of elements to sort, and a lambda function. It looks like you can only use lambda functions if you're sorting the entire list. Is that right, or did I miss something?

    Read the article

  • What exactly are administrative redexes after CPS conversion?

    - by eljenso
    In the context of Scheme and CPS conversion, I'm having a little trouble deciding what administrative redexes (lambdas) exactly are: (a) all the lambda expressions that are introduced by the CPS conversion, or (b) only the lambda expressions that are introduced by the CPS conversion but that you wouldn't have written if you did the conversion "by hand" or through a smarter CPS-converter. If possible, a good reference would be welcome.

    Read the article

  • Trying to understand the usage of class_eval

    - by eMxyzptlk
    Hello everyone. I'm using the rails-settings gem, and I'm trying to understand how you add functions to ActiveRecord classes (I'm building my own library for card games). I noticed that this gem uses a meta-programming technique to add the function to the ActiveRecord::Base class (I'm far from a meta-programming master in Ruby, but I'm trying to learn):

    module RailsSettings
      class Railtie < Rails::Railtie
        initializer 'rails_settings.initialize', :after => :after_initialize do
          Railtie.extend_active_record
        end
      end

      class Railtie
        def self.extend_active_record
          ActiveRecord::Base.class_eval do
            def self.has_settings
              class_eval do
                def settings
                  RailsSettings::ScopedSettings.for_thing(self)
                end

                scope :with_settings,
                      :joins => "JOIN settings ON (settings.thing_id = #{self.table_name}.#{self.primary_key} AND settings.thing_type = '#{self.base_class.name}')",
                      :select => "DISTINCT #{self.table_name}.*"
                scope :with_settings_for, lambda { |var|
                  { :joins => "JOIN settings ON (settings.thing_id = #{self.table_name}.#{self.primary_key} AND settings.thing_type = '#{self.base_class.name}') AND settings.var = '#{var}'" }
                }
                scope :without_settings,
                      :joins => "LEFT JOIN settings ON (settings.thing_id = #{self.table_name}.#{self.primary_key} AND settings.thing_type = '#{self.base_class.name}')",
                      :conditions => 'settings.id IS NULL'
                scope :without_settings_for, lambda { |var|
                  { :joins => "LEFT JOIN settings ON (settings.thing_id = #{self.table_name}.#{self.primary_key} AND settings.thing_type = '#{self.base_class.name}') AND settings.var = '#{var}'",
                    :conditions => 'settings.id IS NULL' }
                }
              end
            end
          end
        end
      end
    end

    What I don't understand is why he uses class_eval on ActiveRecord::Base; wouldn't it be easier to just reopen the ActiveRecord::Base class and define the functions, especially since there's nothing dynamic in the block? (What I mean by dynamic is when you call class_eval or instance_eval on a string containing variables.) Something like this:

    module ActiveRecord
      class Base
        def self.has_settings
          class_eval do
            def settings
              RailsSettings::ScopedSettings.for_thing(self)
            end

            scope :with_settings,
                  :joins => "JOIN settings ON (settings.thing_id = #{self.table_name}.#{self.primary_key} AND settings.thing_type = '#{self.base_class.name}')",
                  :select => "DISTINCT #{self.table_name}.*"
            scope :with_settings_for, lambda { |var|
              { :joins => "JOIN settings ON (settings.thing_id = #{self.table_name}.#{self.primary_key} AND settings.thing_type = '#{self.base_class.name}') AND settings.var = '#{var}'" }
            }
            scope :without_settings,
                  :joins => "LEFT JOIN settings ON (settings.thing_id = #{self.table_name}.#{self.primary_key} AND settings.thing_type = '#{self.base_class.name}')",
                  :conditions => 'settings.id IS NULL'
            scope :without_settings_for, lambda { |var|
              { :joins => "LEFT JOIN settings ON (settings.thing_id = #{self.table_name}.#{self.primary_key} AND settings.thing_type = '#{self.base_class.name}') AND settings.var = '#{var}'",
                :conditions => 'settings.id IS NULL' }
            }
          end
        end
      end
    end

    I understand that the second class_eval (the one before def settings) is there to define the functions on the fly on every class that calls has_settings, right? Same question here: I think he could have used "def self.settings" instead of "class_eval ... def settings", no?

    Read the article

  • Make: how make all hidden files in the current makefile?

    - by HH
    It traverses to bottom dirs for some unknown reason.

    Error:

    /bin/sh: .??*: not found
    make[23]: Entering directory `/m/user/files/dir'
    make clean

    Makefile:

    all:
        make clean
        #The wildcard is the bug. I want to make all hidden files in the current makefile.
        #It should match .<some char><some char><any char arbitrary times>
        make $$(.??*)
        #I want to replace below-like-things with a wildcard above
        # make .lambda
        # make .lambda_t

    clean:
        -rm .??*

    .lambda:
        #do something

    .lambda_t:

    Read the article

  • Exporting dates properly formatted on Google Appengine in Python

    - by Chris M
    I think this is right, but Google App Engine seems to get to a certain point and cop out. Firstly, is this code actually right? And secondly, is there a way to skip a record if it can't be output (like an ignore-errors-and-continue)?

    class TrackerExporter(bulkloader.Exporter):
        def __init__(self):
            bulkloader.Exporter.__init__(self, 'SearchRec',
                [('__key__', lambda key: key.name(), None),
                 ('WebSite', str, None),
                 ('DateStamp', lambda x: datetime.datetime.strptime(x, '%d-%m-%Y').date(), None),
                 ('IP', str, None),
                 ('UserAgent', str, None)])

    Thanks
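    One way to get the "skip on error" behaviour the question asks about is to wrap the fragile date conversion in a small helper that swallows parse failures. This is only a hedged sketch, not an official bulkloader feature; the helper name and the None fallback are my own choices, and the helper simply stands in for the inline lambda in the column list above.

    import datetime

    def safe_date(fmt='%d-%m-%Y'):
        def convert(value):
            try:
                return datetime.datetime.strptime(value, fmt).date()
            except (TypeError, ValueError):
                return None   # leave the field blank instead of aborting the whole export
        return convert

    # used in the exporter's column list in place of the inline lambda:
    #   ('DateStamp', safe_date(), None),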

    Read the article

  • Red Hat Yum not working out of the box?

    - by Tucker
    I have a server runnning Red Hat Enterprise Linux v5.6 in the cloud. My project constraints do not allow me to use another OS. When I created the cloud server, I was able to SSH into it and access the shell. I next ran the command: sudo yum update But the command failed. About a month ago I created another server with the same machine image and didn't have that error. Why is it failing now? The following is the terminal output sudo yum update Loaded plugins: security Repository rhel-server is listed more than once in the configuration Traceback (most recent call last): File "/usr/bin/yum", line 29, in ? yummain.user_main(sys.argv[1:], exit_code=True) File "/usr/share/yum-cli/yummain.py", line 309, in user_main errcode = main(args) File "/usr/share/yum-cli/yummain.py", line 178, in main result, resultmsgs = base.doCommands() File "/usr/share/yum-cli/cli.py", line 345, in doCommands self._getTs(needTsRemove) File "/usr/lib/python2.4/site-packages/yum/depsolve.py", line 101, in _getTs self._getTsInfo(remove_only) File "/usr/lib/python2.4/site-packages/yum/depsolve.py", line 112, in _getTsInfo pkgSack = self.pkgSack File "/usr/lib/python2.4/site-packages/yum/__init__.py", line 662, in <lambda> pkgSack = property(fget=lambda self: self._getSacks(), File "/usr/lib/python2.4/site-packages/yum/__init__.py", line 502, in _getSacks self.repos.populateSack(which=repos) File "/usr/lib/python2.4/site-packages/yum/repos.py", line 260, in populateSack sack.populate(repo, mdtype, callback, cacheonly) File "/usr/lib/python2.4/site-packages/yum/yumRepo.py", line 168, in populate if self._check_db_version(repo, mydbtype): File "/usr/lib/python2.4/site-packages/yum/yumRepo.py", line 226, in _check_db_version return repo._check_db_version(mdtype) File "/usr/lib/python2.4/site-packages/yum/yumRepo.py", line 1233, in _check_db_version repoXML = self.repoXML File "/usr/lib/python2.4/site-packages/yum/yumRepo.py", line 1406, in <lambda> repoXML = property(fget=lambda self: self._getRepoXML(), File "/usr/lib/python2.4/site-packages/yum/yumRepo.py", line 1398, in _getRepoXML self._loadRepoXML(text=self) File "/usr/lib/python2.4/site-packages/yum/yumRepo.py", line 1388, in _loadRepoXML return self._groupLoadRepoXML(text, ["primary"]) File "/usr/lib/python2.4/site-packages/yum/yumRepo.py", line 1372, in _groupLoadRepoXML if self._commonLoadRepoXML(text): File "/usr/lib/python2.4/site-packages/yum/yumRepo.py", line 1208, in _commonLoadRepoXML result = self._getFileRepoXML(local, text) File "/usr/lib/python2.4/site-packages/yum/yumRepo.py", line 989, in _getFileRepoXML cache=self.http_caching == 'all') File "/usr/lib/python2.4/site-packages/yum/yumRepo.py", line 826, in _getFile http_headers=headers, File "/usr/lib/python2.4/site-packages/urlgrabber/mirror.py", line 412, in urlgrab return self._mirror_try(func, url, kw) File "/usr/lib/python2.4/site-packages/urlgrabber/mirror.py", line 398, in _mirror_try return func_ref( *(fullurl,), **kwargs ) File "/usr/lib/python2.4/site-packages/urlgrabber/grabber.py", line 936, in urlgrab return self._retry(opts, retryfunc, url, filename) File "/usr/lib/python2.4/site-packages/urlgrabber/grabber.py", line 854, in _retry r = apply(func, (opts,) + args, {}) File "/usr/lib/python2.4/site-packages/urlgrabber/grabber.py", line 922, in retryfunc fo = URLGrabberFileObject(url, filename, opts) File "/usr/lib/python2.4/site-packages/urlgrabber/grabber.py", line 1010, in __init__ self._do_open() File "/usr/lib/python2.4/site-packages/urlgrabber/grabber.py", line 1093, in _do_open fo, 
hdr = self._make_request(req, opener) File "/usr/lib/python2.4/site-packages/urlgrabber/grabber.py", line 1202, in _make_request fo = opener.open(req) File "/usr/lib64/python2.4/urllib2.py", line 358, in open response = self._open(req, data) File "/usr/lib64/python2.4/urllib2.py", line 376, in _open '_open', req) File "/usr/lib64/python2.4/urllib2.py", line 337, in _call_chain result = func(*args) File "/usr/lib64/python2.4/site-packages/M2Crypto/m2urllib2.py", line 82, in https_open h.request(req.get_method(), req.get_selector(), req.data, headers) File "/usr/lib64/python2.4/httplib.py", line 810, in request self._send_request(method, url, body, headers) File "/usr/lib64/python2.4/httplib.py", line 833, in _send_request self.endheaders() File "/usr/lib64/python2.4/httplib.py", line 804, in endheaders self._send_output() File "/usr/lib64/python2.4/httplib.py", line 685, in _send_output self.send(msg) File "/usr/lib64/python2.4/httplib.py", line 652, in send self.connect() File "/usr/lib64/python2.4/site-packages/M2Crypto/httpslib.py", line 47, in connect self.sock.connect((self.host, self.port)) File "/usr/lib64/python2.4/site-packages/M2Crypto/SSL/Connection.py", line 174, in connect ret = self.connect_ssl() File "/usr/lib64/python2.4/site-packages/M2Crypto/SSL/Connection.py", line 167, in connect_ssl return m2.ssl_connect(self.ssl, self._timeout) M2Crypto.SSL.SSLError: certificate verify failed

    Read the article

  • C#: System.Collections.Concurrent.ConcurrentQueue vs. Queue

    - by James Michael Hare
    I love new toys, so of course when .NET 4.0 came out I felt like the proverbial kid in the candy store!  Now, some people get all excited about the IDE and it’s new features or about changes to WPF and Silver Light and yes, those are all very fine and grand.  But me, I get all excited about things that tend to affect my life on the backside of development.  That’s why when I heard there were going to be concurrent container implementations in the latest version of .NET I was salivating like Pavlov’s dog at the dinner bell. They seem so simple, really, that one could easily overlook them.  Essentially they are implementations of containers (many that mirror the generic collections, others are new) that have either been optimized with very efficient, limited, or no locking but are still completely thread safe -- and I just had to see what kind of an improvement that would translate into. Since part of my job as a solutions architect here where I work is to help design, develop, and maintain the systems that process tons of requests each second, the thought of extremely efficient thread-safe containers was extremely appealing.  Of course, they also rolled out a whole parallel development framework which I won’t get into in this post but will cover bits and pieces of as time goes by. This time, I was mainly curious as to how well these new concurrent containers would perform compared to areas in our code where we manually synchronize them using lock or some other mechanism.  So I set about to run a processing test with a series of producers and consumers that would be either processing a traditional System.Collections.Generic.Queue or a System.Collection.Concurrent.ConcurrentQueue. Now, I wanted to keep the code as common as possible to make sure that the only variance was the container, so I created a test Producer and a test Consumer.  The test Producer takes an Action<string> delegate which is responsible for taking a string and placing it on whichever queue we’re testing in a thread-safe manner: 1: internal class Producer 2: { 3: public int Iterations { get; set; } 4: public Action<string> ProduceDelegate { get; set; } 5: 6: public void Produce() 7: { 8: for (int i = 0; i < Iterations; i++) 9: { 10: ProduceDelegate(“Hello”); 11: } 12: } 13: } Then likewise, I created a consumer that took a Func<string> that would read from whichever queue we’re testing and return either the string if data exists or null if not.  Then, if the item doesn’t exist, it will do a 10 ms wait before testing again.  Once all the producers are done and join the main thread, a flag will be set in each of the consumers to tell them once the queue is empty they can shut down since no other data is coming: 1: internal class Consumer 2: { 3: public Func<string> ConsumeDelegate { get; set; } 4: public bool HaltWhenEmpty { get; set; } 5: 6: public void Consume() 7: { 8: bool processing = true; 9: 10: while (processing) 11: { 12: string result = ConsumeDelegate(); 13: 14: if(result == null) 15: { 16: if (HaltWhenEmpty) 17: { 18: processing = false; 19: } 20: else 21: { 22: Thread.Sleep(TimeSpan.FromMilliseconds(10)); 23: } 24: } 25: else 26: { 27: DoWork(); // do something non-trivial so consumers lag behind a bit 28: } 29: } 30: } 31: } Okay, now that we’ve done that, we can launch threads of varying numbers using lambdas for each different method of production/consumption.  
First let's look at the lambdas for a typical System.Collections.Generics.Queue with locking: 1: // lambda for putting to typical Queue with locking... 2: var productionDelegate = s => 3: { 4: lock (_mutex) 5: { 6: _mutexQueue.Enqueue(s); 7: } 8: }; 9:  10: // and lambda for typical getting from Queue with locking... 11: var consumptionDelegate = () => 12: { 13: lock (_mutex) 14: { 15: if (_mutexQueue.Count > 0) 16: { 17: return _mutexQueue.Dequeue(); 18: } 19: } 20: return null; 21: }; Nothing new or interesting here.  Just typical locks on an internal object instance.  Now let's look at using a ConcurrentQueue from the System.Collections.Concurrent library: 1: // lambda for putting to a ConcurrentQueue, notice it needs no locking! 2: var productionDelegate = s => 3: { 4: _concurrentQueue.Enqueue(s); 5: }; 6:  7: // lambda for getting from a ConcurrentQueue, once again, no locking required. 8: var consumptionDelegate = () => 9: { 10: string s; 11: return _concurrentQueue.TryDequeue(out s) ? s : null; 12: }; So I pass each of these lambdas and the number of producer and consumers threads to launch and take a look at the timing results.  Basically I’m timing from the time all threads start and begin producing/consuming to the time that all threads rejoin.  I won't bore you with the test code, basically it just launches code that creates the producers and consumers and launches them in their own threads, then waits for them all to rejoin.  The following are the timings from the start of all threads to the Join() on all threads completing.  The producers create 10,000,000 items evenly between themselves and then when all producers are done they trigger the consumers to stop once the queue is empty. These are the results in milliseconds from the ordinary Queue with locking: 1: Consumers Producers 1 2 3 Time (ms) 2: ---------- ---------- ------ ------ ------ --------- 3: 1 1 4284 5153 4226 4554.33 4: 10 10 4044 3831 5010 4295.00 5: 100 100 5497 5378 5612 5495.67 6: 1000 1000 24234 25409 27160 25601.00 And the following are the results in milliseconds from the ConcurrentQueue with no locking necessary: 1: Consumers Producers 1 2 3 Time (ms) 2: ---------- ---------- ------ ------ ------ --------- 3: 1 1 3647 3643 3718 3669.33 4: 10 10 2311 2136 2142 2196.33 5: 100 100 2480 2416 2190 2362.00 6: 1000 1000 7289 6897 7061 7082.33 Note that even though obviously 2000 threads is quite extreme, the concurrent queue actually scales really well, whereas the traditional queue with simple locking scales much more poorly. I love the new concurrent collections, they look so much simpler without littering your code with the locking logic, and they perform much better.  All in all, a great new toy to add to your arsenal of multi-threaded processing!

    Read the article

  • We've completed the first iteration

    - by CliveT
    There are a lot of features in C# that are implemented by the compiler and not by the underlying platform. One such feature is a lambda expression. Since local variables cannot be accessed once the current method activation finishes, the compiler has to go out of its way to generate a new class which acts as a home for any variable whose lifetime needs to be extended past the activation of the procedure. Take the following example:     Random generator = new Random();     Func func = () = generator.Next(10); In this case, the compiler generates a new class called c_DisplayClass1 which is marked with the CompilerGenerated attribute. [CompilerGenerated] private sealed class c__DisplayClass1 {     // Fields     public Random generator;     // Methods     public int b__0()     {         return this.generator.Next(10);     } } Two quick comments on this: (i)    A display was the means that compilers for languages like Algol recorded the various lexical contours of the nested procedure activations on the stack. I imagine that this is what has led to the name. (ii)    It is a shame that the same attribute is used to mark all compiler generated classes as it makes it hard to figure out what they are being used for. Indeed, you could imagine optimisations that the runtime could perform if it knew that classes corresponded to certain high level concepts. We can see that the local variable generator has been turned into a field in the class, and the body of the lambda expression has been turned into a method of the new class. The code that builds the Func object simply constructs an instance of this class and initialises the fields to their initial values.     c__DisplayClass1 class2 = new c__DisplayClass1();     class2.generator = new Random();     Func func = new Func(class2.b__0); Reflector already contains code to spot this pattern of code and reproduce the form containing the lambda expression, so this is example is correctly decompiled. The use of compiler generated code is even more spectacular in the case of iterators. C# introduced the idea of a method that could automatically store its state between calls, so that it can pick up where it left off. The code can express the logical flow with yield return and yield break denoting places where the method should return a particular value and be prepared to resume.         {             yield return 1;             yield return 2;             yield return 3;         } Of course, there was already a .NET pattern for expressing the idea of returning a sequence of values with the computation proceeding lazily (in the sense that the work for the next value is executed on demand). This is expressed by the IEnumerable interface with its Current property for fetching the current value and the MoveNext method for forcing the computation of the next value. The sequence is terminated when this method returns false. The C# compiler links these two ideas together so that an IEnumerator returning method using the yield keyword causes the compiler to produce the implementation of an Iterator. Take the following piece of code.         IEnumerable GetItems()         {             yield return 1;             yield return 2;             yield return 3;         } The compiler implements this by defining a new class that implements a state machine. This has an integer state that records which yield point we should go to if we are resumed. It also has a field that records the Current value of the enumerator and a field for recording the thread. 
This latter value is used for optimising the creation of iterator instances. [CompilerGenerated] private sealed class d__0 : IEnumerable, IEnumerable, IEnumerator, IEnumerator, IDisposable {     // Fields     private int 1__state;     private int 2__current;     public Program 4__this;     private int l__initialThreadId; The body gets converted into the code to construct and initialize this new class. private IEnumerable GetItems() {     d__0 d__ = new d__0(-2);     d__.4__this = this;     return d__; } When the class is constructed we set the state, which was passed through as -2 and the current thread. public d__0(int 1__state) {     this.1__state = 1__state;     this.l__initialThreadId = Thread.CurrentThread.ManagedThreadId; } The state needs to be set to 0 to represent a valid enumerator and this is done in the GetEnumerator method which optimises for the usual case where the returned enumerator is only used once. IEnumerator IEnumerable.GetEnumerator() {     if ((Thread.CurrentThread.ManagedThreadId == this.l__initialThreadId)               && (this.1__state == -2))     {         this.1__state = 0;         return this;     } The state machine itself is implemented inside the MoveNext method. private bool MoveNext() {     switch (this.1__state)     {         case 0:             this.1__state = -1;             this.2__current = 1;             this.1__state = 1;             return true;         case 1:             this.1__state = -1;             this.2__current = 2;             this.1__state = 2;             return true;         case 2:             this.1__state = -1;             this.2__current = 3;             this.1__state = 3;             return true;         case 3:             this.1__state = -1;             break;     }     return false; } At each stage, the current value of the state is used to determine how far we got, and then we generate the next value which we return after recording the next state. Finally we return false from the MoveNext to signify the end of the sequence. Of course, that example was really simple. The original method body didn't have any local variables. Any local variables need to live between the calls to MoveNext and so they need to be transformed into fields in much the same way that we did in the case of the lambda expression. More complicated MoveNext methods are required to deal with resources that need to be disposed when the iterator finishes, and sometimes the compiler uses a temporary variable to hold the return value. Why all of this explanation? We've implemented the de-compilation of iterators in the current EAP version of Reflector (7). This contrasts with previous version where all you could do was look at the MoveNext method and try to figure out the control flow. There's a fair amount of things we have to do. We have to spot the use of a CompilerGenerated class which implements the Enumerator pattern. We need to go to the class and figure out the fields corresponding to the local variables. We then need to go to the MoveNext method and try to break it into the various possible states and spot the state transitions. We can then take these pieces and put them back together into an object model that uses yield return to show the transition points. After that Reflector can carry on optimising using its usual optimisations. The pattern matching is currently a little too sensitive to changes in the code generation, and we only do a limited analysis of the MoveNext method to determine use of the compiler generated fields. 
In some ways, it is a pity that iterators are compiled away and there is no metadata that reflects the original intent. Without it, we are always going to be dependent on our knowledge of the compiler's implementation. For example, we have noticed that the Async CTP changes the way that iterators are code generated, so we'll have to do some more work to support that. However, with that warning in place, we seem to do a reasonable job of decompiling the iterators that are built into the framework. Hopefully, the EAP will give us a chance to find examples where we don't spot the pattern correctly or regenerate the wrong code, and we can improve things. Please give it a go, and report any problems.

    Read the article

  • GPU Debugging with VS 11

    - by Daniel Moth
    With VS 11 Developer Preview we have invested tremendously in parallel debugging for both CPU (managed and native) and GPU debugging. I'll be doing a whole bunch of blog posts on those topics, and in this post I just wanted to get people started with GPU debugging, i.e. with debugging C++ AMP code. First I invite you to watch 6 minutes of a glimpse of the C++ AMP debugging experience though this video (ffw to minute 51:54, up until minute 59:16). Don't read the rest of this post, just go watch that video, ideally download the High Quality WMV. Summary GPU debugging essentially means debugging the lambda that you pass to the parallel_for_each call (plus any functions you call from the lambda, of course). CPU debugging means debugging all the code above and below the parallel_for_each call, i.e. all the code except the restrict(direct3d) lambda and the functions that it calls. With VS 11 you have to choose what debugger you want to use for a particular debugging session, CPU or GPU. So you can place breakpoints all over your code, then choose what debugger you want (CPU or GPU), and you'll only be able to hit breakpoints for the code type that the debugger engine understands – the remaining breakpoints will appear as unbound. If you want to hit the unbound breakpoints, you'd have to stop debugging, and start again with the other debugger. Sorry. We suck. We know. But once you are past that limitation, I think you'll find the experience truly rewarding – seriously! Switching debugger engines With the Developer Preview bits, one way to switch the debugger engine is through the project properties – see the screenshots that follow. This one is showing the CPU option selected, which is basically the default that you are all familiar with: This screenshot is showing the GPU option selected, by changing the debugger launcher (notice that this applies for both the local and remote case): You actually do not have to open the project properties just for switching the debugger engine, you can switch the selection from the toolbar in VS 11 Developer Preview too – see following screenshot (the effect is the same as if you opened the project properties and switched there) Breakpoint behavior Here are two screenshots, one showing a debugging session for CPU and the other a debugging session for GPU (notice the unbound breakpoints in each case) …and here is the GPU case (where we cannot bind the CPU breakpoints but can the GPU breakpoint, which is actually hit) Give C++ AMP debugging a try So to debug your C++ AMP code, pull down the drop down under the 'play' button to select the 'GPU C++ Direct3D Compute Debugger' menu option, then hit F5 (or the 'play' button itself). Then you can explore debugging by exploring the menus under the Debug and under the Debug->Windows menus. One way to do that exploration is through the C++ AMP debugging walkthrough on MSDN. Another way to explore the C++ AMP debugging experience, you can use the moth.cpp code file, which is what I used in my BUILD session debugger demo. Note that for my demo I was using the latest internal VS11 bits, so your experience with the Developer Preview bits won't be identical to what you saw me demonstrate, but it shouldn't be far off. Stay tuned for a lot more content on the parallel debugger in VS 11, both CPU and GPU, both managed and native. Comments about this post by Daniel Moth welcome at the original blog.

    Read the article

  • Is this Hybrid of Interface / Composition kosher?

    - by paul
    I'm working on a project in which I'm considering using a hybrid of interfaces and composition as a single thing. What I mean by this is having a contain*ee* class be used as a front for functionality implemented in a contain*er* class, where the container exposes the containee as a public property. Example (pseudocode): class Visibility(lambda doShow, lambda doHide, lambda isVisible) public method Show() {...} public method Hide() {...} public property IsVisible public event Shown public event Hidden class SomeClassWithVisibility private member visibility = new Visibility(doShow, doHide, isVisible) public property Visibility with get() = visibility private method doShow() {...} private method doHide() {...} private method isVisible() {...} There are three reasons I'm considering this: The language in which I'm working (F#) has some annoyances w.r.t. implementing interfaces the way I need to (unless I'm missing something) and this will help avoid a lot of boilerplate code. The containee classes could really be considered properties of the container class(es); i.e. there seems to be a fairly strong has-a relationship. The containee classes will likely implement code which would have been pretty much the same when implemented in all the container classes, so why not do it once in one place? In the above example, this would include managing and emitting the Shown/Hidden events. Does anyone see any isseus with this Composiface/Intersition method, or know of a better way? EDIT 2012.07.26 - It seems a little background information is warranted: Where I work, we have a bunch of application front-ends that have limited access to system resources -- they need access to these resources to fully function. To remedy this we have a back-end application that can access the needed resources, with which the front-ends can communicate. (There is an API written for the front-ends for accessing back-end functionality as though it were part of the front-end.) The back-end program is out of date and its functionality is incomplete. It has made the transition from company to company a couple of times and we can't even compile it anymore. So I'm trying to rewrite it in my spare time. I'm trying to update things to make a nice(r) interface/API for the front-ends (while allowing for backwards compatibility with older front-ends), hopefully something full of OOPy goodness. The thing is, I don't want to write the front-end API after I've written pretty much the same code in F# for implementing the back-end; so, what I'm planning on doing is applying attributes to classes/methods/properties that I would like to have code for in the API then generate this code from the F# assembly using reflection. The method outlined in this question is a possible alternative I'm considering instead of implementing straight interfaces on the classes in F# because they're kind of a bear: In order to access something of an interface that has been implemented in a class, you have to explicitly cast an instance of that class to the interface type. This would make things painful when getting calls from the front-ends. If you don't want to have to do this, you have to call out all of the interface's methods/properties again in the class, outside of the interface implementation (which is separate from regular class members), and call the implementation's members. This is basically repeating the same code, which is what I'm trying to avoid!

    Read the article

  • Looping Redirect with PyFacebook and Google App Engine

    - by Nick Gotch
    I have a Python Facebook project hosted on Google App Engine and use the following code to handle initialization of the Facebook API using PyFacebook. # Facebook Initialization def initialize_facebook(f): # Redirection handler def redirect(self, url): logger.info('Redirecting the user to: ' + url) self.response.headers.add_header("Cache-Control", "max-age=0") self.response.headers.add_header("Pragma", "no-cache") self.response.out.write('<html><head><script>parent.location.replace(\'' + url + '\');</script></head></html>') return 'Moved temporarily' auth_token = request.params.get('auth_token', None) fbapi = Facebook(settings['FACEBOOK_API_KEY'], settings['FACEBOOK_SECRET_KEY'], auth_token=auth_token) if not fbapi: logger.error('Facebook failed to initialize') if fbapi.check_session(request) or auth_token: pass else: logger.info('User not logged into Facebook') return lambda a: redirect(a, fbapi.get_login_url()) if fbapi.added: pass else: logger.info('User does not have ' + settings['FACEBOOK_APP_NAME'] + ' added') return lambda a: redirect(a, fbapi.get_add_url()) # Return the validated API logger.info('Facebook successfully initialized') return lambda a: f(a, fbapi=fbapi) I'm trying to set it up so that I can drop this decorator on any page handler method and verify that the user has everything set up correctly. The issue is that when the redirect handler gets called, it starts an infinite loop of redirection. I tried using an HTTP 302 redirection in place of the JavaScript but that kept failing too. Does anyone know what I can do to fix this? I saw this similar question but there are no answers.

    Read the article

  • PLT Scheme Memory

    - by Eric
    So I need some help with implementing a make-memory program using Scheme. I need two messages, 'write and 'read, so the usage would be (define mymem (make-memory 100)), then (mymem 'write 34 -116) and (mymem 'read 99), right? How would I implement this in Scheme, using an alist? I have this code, which makes make-memory a procedure, and when you run mymem you get ((99 . 0)); what I need to do is recur so I get an alist of dotted pairs down to ((0 . 0)). Any suggestions on how to code this? Does anyone have ideas on how to recur and handle the 'write and 'read messages?

    (define make-memory
      (lambda (n)
        (letrec ((mem '())
                 (dump (display mem)))
          (lambda ()
            (if (= n 0)
                (cons (cons n 0) mem)
                mem)
            (cons (cons (- n 1) 0) mem))
          (lambda (msg loc val)
            (cond ((equal? msg 'read) (display (cons n val)) (set! n (- n 1)))
                  ((equal? msg 'write) (set! mem (cons val loc)) (set! n (- n 1)) (display mem)))))))

    (define mymem (make-memory 100))

    Yes, this is an assignment, but I wrote this code; I just need some help or direction. And yes, I do know about variable-length argument lists.
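    For what it's worth, the overall shape being asked about (a closure that owns an association list of (address . value) pairs and dispatches on a message symbol) is easier to see in a short sketch. The one below is written in Python purely for illustration, not as a Scheme answer, and every name in it is made up.

    def make_memory(n):
        mem = [(addr, 0) for addr in range(n)]      # the "alist": (address, value) pairs

        def dispatch(msg, loc, val=None):
            if msg == 'read':
                return next(v for a, v in mem if a == loc)
            elif msg == 'write':
                mem.insert(0, (loc, val))           # newest pair shadows older entries
            else:
                raise ValueError('unknown message: %r' % (msg,))

        return dispatch

    mymem = make_memory(100)
    mymem('write', 34, -116)
    print mymem('read', 34)    # -116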

    Read the article

  • LinqKit System.InvalidCastException When Invoking method-provided expression on member property.

    - by mdworkin
    Given a simple parent/child class structure, I want to use LinqKit to apply a child lambda expression on the parent. I also want the lambda expression to be provided by a utility method.

    public class Foo
    {
        public Bar Bar { get; set; }
    }

    public class Bar
    {
        public string Value { get; set; }

        public static Expression<Func<Bar, bool>> GetLambdaX()
        {
            return c => c.Value == "A";
        }
    }

    ...
    Expression<Func<Foo, bool>> lx = c => Bar.GetLambdaX().Invoke(c.Bar);
    Console.WriteLine(lx.Expand());

    The above code throws:

    System.InvalidCastException: Unable to cast object of type 'System.Linq.Expressions.MethodCallExpression' to type 'System.Linq.Expressions.LambdaExpression'.
       at LinqKit.ExpressionExpander.VisitMethodCall(MethodCallExpression m)
       at LinqKit.ExpressionVisitor.Visit(Expression exp)
       at LinqKit.ExpressionVisitor.VisitLambda(LambdaExpression lambda)
       at LinqKit.ExpressionVisitor.Visit(Expression exp)
       at LinqKit.Extensions.Expand<TDelegate>(Expression`1 expr)
    ....

    Please help!

    Read the article

  • Creating a new workbook in Excel from Python breaks

    - by Marcelo Cantos
    I am trying to use the stock standard win32com approach to drive Excel 2007 from Python. However, when I try to create a new workbook, things go pear-shaped: Python 2.6.4 (r264:75706, Nov 3 2009, 13:23:17) [MSC v.1500 32 bit (Intel)] on win32 ... >>> import win32com.client >>> excel = win32com.client.Dispatch("Excel.Application") >>> wb = excel.Workbooks.Add() Traceback (most recent call last): File "<pyshell#3>", line 1, in <module> wb = excel.Workbooks.Add() File "C:\Python26\lib\site-packages\win32com\client\dynamic.py", line 467, in __getattr__ if self._olerepr_.mapFuncs.has_key(attr): return self._make_method_(attr) File "C:\Python26\lib\site-packages\win32com\client\dynamic.py", line 295, in _make_method_ methodCodeList = self._olerepr_.MakeFuncMethod(self._olerepr_.mapFuncs[name], methodName,0) File "C:\Python26\lib\site-packages\win32com\client\build.py", line 297, in MakeFuncMethod return self.MakeDispatchFuncMethod(entry, name, bMakeClass) File "C:\Python26\lib\site-packages\win32com\client\build.py", line 318, in MakeDispatchFuncMethod s = linePrefix + 'def ' + name + '(self' + BuildCallList(fdesc, names, defNamedOptArg, defNamedNotOptArg, defUnnamedArg, defOutArg) + '):' File "C:\Python26\lib\site-packages\win32com\client\build.py", line 604, in BuildCallList argName = MakePublicAttributeName(argName) File "C:\Python26\lib\site-packages\win32com\client\build.py", line 542, in MakePublicAttributeName return filter( lambda char: char in valid_identifier_chars, className) File "C:\Python26\lib\site-packages\win32com\client\build.py", line 542, in <lambda> return filter( lambda char: char in valid_identifier_chars, className) UnicodeDecodeError: 'ascii' codec can't decode byte 0x83 in position 52: ordinal not in range(128) >>> What is going wrong here? Have I done something silly, or is Python/win32com/Excel somehow broken?
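    For what it's worth, a commonly suggested workaround (an assumption about the cause, not a confirmed fix): the traceback shows win32com failing while it builds a late-bound wrapper on the fly and hits a non-ASCII argument name from Excel's type library. Generating early-bound wrappers with makepy, via gencache.EnsureDispatch, avoids that dynamic code path.

    from win32com.client import gencache

    # Early binding: makepy generates Python wrappers from Excel's type library once,
    # instead of building method signatures dynamically on each call.
    excel = gencache.EnsureDispatch("Excel.Application")
    wb = excel.Workbooks.Add()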

    Read the article

  • Converting a Linq expression tree that relies on SqlMethods.Like() for use with the Entity Framework

    - by JohnnyO
    I recently switched from using Linq to Sql to the Entity Framework. One of the things that I've been really struggling with is getting a general purpose IQueryable extension method that was built for Linq to Sql to work with the Entity Framework. This extension method has a dependency on the Like() method of SqlMethods, which is Linq to Sql specific. What I really like about this extension method is that it allows me to dynamically construct a Sql Like statement on any object at runtime, by simply passing in a property name (as string) and a query clause (also as string). Such an extension method is very convenient for using grids like flexigrid or jqgrid. Here is the Linq to Sql version (taken from this tutorial: http://www.codeproject.com/KB/aspnet/MVCFlexigrid.aspx): public static IQueryable<T> Like<T>(this IQueryable<T> source, string propertyName, string keyword) { var type = typeof(T); var property = type.GetProperty(propertyName); var parameter = Expression.Parameter(type, "p"); var propertyAccess = Expression.MakeMemberAccess(parameter, property); var constant = Expression.Constant("%" + keyword + "%"); var like = typeof(SqlMethods).GetMethod("Like", new Type[] { typeof(string), typeof(string) }); MethodCallExpression methodExp = Expression.Call(null, like, propertyAccess, constant); Expression<Func<T, bool>> lambda = Expression.Lambda<Func<T, bool>>(methodExp, parameter); return source.Where(lambda); } With this extension method, I can simply do the following: someList.Like("FirstName", "mike"); or anotherList.Like("ProductName", "widget"); Is there an equivalent way to do this with Entity Framework? Thanks in advance.

    Read the article

  • Why is python decode replacing more than the invalid bytes from an encoded string?

    - by dangra
    Trying to decode an invalid encoded utf-8 html page gives different results in python, firefox and chrome. The invalid encoded fragment from test page looks like 'PREFIX\xe3\xabSUFFIX' >>> fragment = 'PREFIX\xe3\xabSUFFIX' >>> fragment.decode('utf-8', 'strict') ... UnicodeDecodeError: 'utf8' codec can't decode bytes in position 6-8: invalid data What follows is the summary of replacement policies used to handle decoding errors by python, firefox and chrome. Note how the three differs, and specially how python builtin removes the valid S (plus the invalid sequence of bytes). by Python The builtin replace error handler replaces the invalid \xe3\xab plus the S from SUFFIX by U+FFFD >>> fragment.decode('utf-8', 'replace') u'PREFIX\ufffdUFFIX' >>> print _ PREFIX?UFFIX The python implementation builtin replace error handler looks like: >>> python_replace = lambda exc: (u'\ufffd', exc.end) As expected, trying this gives same result than builtin: >>> codecs.register_error('python_replace', python_replace) >>> fragment.decode('utf-8', 'python_replace') u'PREFIX\ufffdUFFIX' >>> print _ PREFIX?UFFIX by Firefox Firefox replaces each invalid byte by U+FFFD >>> firefox_replace = lambda exc: (u'\ufffd', exc.start+1) >>> codecs.register_error('firefox_replace', firefox_replace) >>> test_string.decode('utf-8', 'firefox_replace') u'PREFIX\ufffd\ufffdSUFFIX' >>> print _ PREFIX??SUFFIX by Chrome Chrome replaces each invalid sequence of bytes by U+FFFD >>> chrome_replace = lambda exc: (u'\ufffd', exc.end-1) >>> codecs.register_error('chrome_replace', chrome_replace) >>> fragment.decode('utf-8', 'chrome_replace') u'PREFIX\ufffdSUFFIX' >>> print _ PREFIX?SUFFIX The main question is why builtin replace error handler for str.decode is removing the S from SUFFIX. Also, is there any unicode's official recommended way for handling decoding replacements?
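    A small usage note: since the three custom policies above are registered with codecs.register_error, they can be compared side by side against the builtin handler in one loop (this assumes the registrations shown above have already been run).

    fragment = 'PREFIX\xe3\xabSUFFIX'
    for name in ('replace', 'python_replace', 'firefox_replace', 'chrome_replace'):
        print name, repr(fragment.decode('utf-8', name))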

    Read the article

  • Scheme - Memory System

    - by Eric
    I am trying to make a memory system where you store a value in a slot of memory. What I am doing is building an alist in which the car of each pair is the memory location and the cdr is the value. I need the program to understand two messages, 'read and 'write: read displays the selected memory location and the value assigned to it, and write changes the value at that location or address. How do I make my code read from and write to the location you ask for? Feel free to test this yourself. Any help would be much appreciated. This is what I have:

    (define make-memory
      (lambda (n)
        (letrec ((mem '())
                 (dump (display mem)))
          (lambda ()
            (if (= n 0)
                (cons (cons n 0) mem)
                mem)
            (cons (cons (- n 1) 0) mem))
          (lambda (msg loc val)
            (cond ((equal? msg 'read) (display (cons n val)) (set! n (- n 1)))
                  ((equal? msg 'write) (set! mem (cons val loc)) (set! n (- n 1)) (display mem)))))))

    (define mymem (make-memory 100))

    Read the article

  • Swig: No Constructor defined

    - by wheaties
    I added %allowexcept to my *.i file when building a Python <-- C++ bridge using Swig. Then I removed that preprocessing directive. Now I can't get the Python produced code to recognize the constructor of a C++ class. Here's the class file: #include <exception> class Swig_Foo{ public: Swig_Foo(){} virtual ~Swig_Foo(){} virtual getInt() =0; virtual throwException() { throw std::exception(); } }; And here's the code Swig produces from it: class Swig_Foo: __swig_setmethods__ = {} __setattr__ = lambda self, name, value: _swig_setattr(self, Swig_Foo, name, value) __swig_getmethods__ = {} __getattr__ = lambda self, name: _swig_getattr(self, Swig_Foo, name) def __init__(self): raise AttributeError, "No constructor defined" __repr__ = _swig_repr __swig_destroy__ = _foo.delete_Swig_Foo __del__ = lambda self : None; def getInt(*args): return apply(_foo.Swig_Foo_getInt, args) def throwOut(*args): return apply(_foo.Swig_Foo_throwOut, args) Swig_Foo_swigregister = _foo.Swig_Foo_swigregister Swig_Foo_swigregister(Swig_Foo) The problem is the def __init__self): raise AttributeError, "No constructor defined" portion. It never did this before I added the %allowexception and now that I've removed it, I can't get it to create a real constructor. All the other classes have actual constructors. Quite baffled. Anyone know how I can get it to stop doing this?

    Read the article

  • python VTE Terminal weirdness

    - by mykhal
    I'm trying to use the terminal from the Python VTE binding (python-vte from Debian squeeze) as a virtual terminal emulator (just for ANSI/control-character text processing). In the interactive Python console, everything looks (almost) all right:

    >>> import vte
    >>> term = vte.Terminal()
    >>> term.feed("a\nb")
    >>> print repr(term.get_text(lambda *a: True).rstrip())
    'a\n b'

    However, launching this code (slightly modified) as a Python script yields a different result:

    $ python vte_wiredness_1.py
    ''

    Strangely enough, pasting the code back into a (new) interactive Python session also yields an empty string:

    >>> import vte
    >>> term = vte.Terminal()
    >>> term.feed("a\nb")
    >>> print repr(term.get_text(lambda *a: True).rstrip())
    ''
    >>>

    The first thing that came to my mind was that the only difference between the two cases is timing, so there had to be some delay before get_text. Unfortunately, preceding get_text with a sleep of a few seconds did not help. Then I thought it had something to do with the X window environment, but the results are the same on a pure Linux console (with a warning about missing graphics). I wonder what causes such unpredictable behavior (interactive console, pasted vs. typed; it's not the delay, and the interactive console has nothing to do with the vte terminal object, I guess). Can someone explain what is happening? Is it possible to use the VTE terminal this way? That the "b" letter in the output is preceded by a space is another strangeness (each consecutive line is preceded by more spaces; it looks like I have to send a carriage return before the string). (The lambda *a: True argument to get_text is a dummy callback; it's some SlotSelectedCallback, and I'd be grateful for an explanation of it as well :) )
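    One possible explanation (my assumption, not something confirmed by the VTE docs): vte processes fed data from the GLib/GTK main loop, and PyGTK's interactive-console integration happens to iterate that loop while you sit at the prompt, which would explain why typed sessions differ from scripts and pasted blocks. Below is a sketch, assuming PyGTK 2.x, that pumps the loop briefly before reading the text back; the \r also relates to the column-offset puzzle, since a bare \n moves down a line without returning to column 0.

    import time
    import gtk
    import vte

    term = vte.Terminal()
    term.feed("a\r\nb")                 # CR+LF keeps 'b' in column 0

    # Pump the GTK main loop briefly so vte can process the fed bytes
    # before we read the screen contents back.
    deadline = time.time() + 0.2
    while time.time() < deadline:
        while gtk.events_pending():
            gtk.main_iteration()
        time.sleep(0.01)

    print repr(term.get_text(lambda *a: True).rstrip())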

    Read the article

  • Closures in Ruby

    - by Isaac Cambron
    I'm kind of new to Ruby, and some of the closure logic has me confused. Consider this code:

    array = []
    for i in (1..5)
      array << lambda {i}
    end
    array.map{|f| f.call}
    => [5, 5, 5, 5, 5]

    This makes sense to me because i is bound outside the loop, so the same variable is captured on each trip through the loop. It also makes sense to me that using an each block can fix this:

    array = []
    (1..5).each{|i| array << lambda {i}}
    array.map{|f| f.call}
    => [1, 2, 3, 4, 5]

    ...because i is now being declared separately each time through. But now I get lost: why can't I also fix it by introducing an intermediate variable?

    array = []
    for i in 1..5
      j = i
      array << lambda {j}
    end
    array.map{|f| f.call}
    => [5, 5, 5, 5, 5]

    Because j is new each time through the loop, I'd think a different variable would be captured on each pass. For example, this is definitely how C# works, and how -- I think -- Lisp behaves with a let. But in Ruby, not so much. It almost looks like = is aliasing the variable instead of copying the reference, but that's just speculation on my part. What's really happening?

    Read the article

  • Can you dynamically combine multiple conditional functions into one in Python?

    - by erich
    I'm curious if it's possible to take several conditional functions and create one function that checks them all (e.g. the way a generator takes a procedure for iterating through a series and creates an iterator). The basic usage case would be when you have a large number of conditional parameters (e.g. "max_a", "min_a", "max_b", "min_b", etc.), many of which could be blank. They would all be passed to this "function creating" function, which would then return one function that checks them all. Below is an example of a naive way of doing what I'm asking:

    def combining_function(max_a, min_a, max_b, min_b, ...):
        f_array = []
        if max_a is not None:
            f_array.append( lambda x: x.a < max_a )
        if min_a is not None:
            f_array.append( lambda x: x.a > min_a )
        ...
        return lambda x: all( [ f(x) for f in f_array ] )

    What I'm wondering is: what is the most efficient way to achieve what's being done above? It seems like executing a function call for every function in f_array would create a decent amount of overhead, but perhaps I'm engaging in premature/unnecessary optimization. Regardless, I'd be interested to see if anyone else has come across usage cases like this and how they proceeded. Also, if this isn't possible in Python, is it possible in other (perhaps more functional) languages?
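    As a rough sketch of the same idea with the per-call overhead trimmed a little: build the list of checks once, and let all() short-circuit over a generator expression instead of materialising a list on every call. The parameter names mirror the question, and the "..." from the question is filled in here only for the a/b pair as an illustration.

    def make_checker(max_a=None, min_a=None, max_b=None, min_b=None):
        checks = []
        if max_a is not None:
            checks.append(lambda x: x.a < max_a)
        if min_a is not None:
            checks.append(lambda x: x.a > min_a)
        if max_b is not None:
            checks.append(lambda x: x.b < max_b)
        if min_b is not None:
            checks.append(lambda x: x.b > min_b)
        checks = tuple(checks)                       # fixed once, reused on every call
        return lambda x: all(f(x) for f in checks)   # short-circuits on the first failure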

    Read the article

  • problem using 'as_json' in my model and 'render :json' => in my controller (rails)

    - by patrick
    Hi everyone. I am trying to create a unique JSON data structure, and I have run into a problem that I can't seem to figure out. In my controller, I am doing:

    favorite_ids = Favorites.all.map(&:photo_id)
    data = {
      :albums => PhotoAlbum.all.to_json,
      :photos => Photo.all.to_json(:favorite => lambda {|photo| favorite_ids.include?(photo.id)})
    }
    render :json => data

    and in my model:

    def as_json(options = {})
      {
        :name => self.name,
        :favorite => options[:favorite].is_a?(Proc) ? options[:favorite].call(self) : options[:favorite]
      }
    end

    The problem is, Rails encodes the values of 'photos' & 'albums' (in my data hash) as JSON twice, and this breaks everything. The only way I could get this to work is to call 'as_json' instead of 'to_json':

    data = {
      :albums => PhotoAlbum.all.as_json,
      :photos => Photo.all.as_json(:favorite => lambda {|photo| favorite_ids.include?(photo.id)})
    }

    However, when I do this, my :favorite => lambda option no longer makes it into the model's as_json method. So, I either need a way to tell 'render :json' not to encode the values of the hash (so I can use 'to_json' on the values myself), or I need a way to get the parameters passed into 'as_json' to actually show up there. I hope someone here can help. Thanks!

    Read the article

  • Mathematics for Computer Science Students

    - by Ender
    To cut a long story short, I am a CS student that has received no formal Post-16 Maths education for years. Right now even my Algebra is extremely rusty and I have a couple of months to shape up my skills. I've got a couple of video lectures in my bookmarks, consisting of: Pre-Calculus Algebra Calculus Probability Introduction to Statistics Differential Equations Linear Algebra My aim as of today is to be able to read the CLRS book Introduction to Algorithms and be able to follow the Mathematical notation in that, as well as being able to confidently read and back-up any arguments written in Mathematical notation. Aside from these video lectures, can anyone recommend any good books to help teach someone wishing to go from a low-foundation level to a more advanced level of Mathematics? Just as a note, I've taken a first-year module in Analytical Modelling, so I understand some of the basic concepts of Discrete Mathematics. EDIT: Just a note to those that are looking to learn Linear Algebra using the Video Lectures I have posted up. Peteris Krumins' Blog contains a run-through of these lecture notes as well as his own commentary and lecture notes, an invaluable resource for those looking to follow the lectures too.

    Read the article
