Search Results

Search found 13867 results on 555 pages for 'avoid learning'.

Page 523/555

  • CSS menu items flickering in IE6

    - by Quick Joe Smith
    Edit #1: I have just discovered this flicker bug affects IE8 (and therefore most likely IE7) as well. I am putting together a pure-CSS dropdown menu (mostly a learning exercise) and have hit a point in IE where the submenu items are flickering as the mouse moves around within the <li> but outside the inner <a>. Source code is as follows: The included csshover3.htc is downloadable from Peter Nederlof's page for Whatever:hover <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <title>CSS Menu</title> <style type="text/css"> body { behavior: url("csshover3.htc"); } div#navbar { background-color:#333; font-size:1.4em; overflow:auto; } div#navbar ul { display:inline-block; /* ie6 float container bug */ list-style:none; margin:0px; padding:0px; } div#navbar ul.menu li { float:left; display:inline; /* ie6 double-margin bug */ } div#navbar ul.menu a { display:block; text-decoration:none; color:#fff; padding:5px 10px; } div#navbar ul li a:link, div#navbar ul li a:visited { text-decoration:none; } div#navbar ul li a:hover { color:#333; background-color:#f6c323; } div#navbar ul.menu ul { display:none; } div#navbar ul.menu li:hover ul { display:block; position:absolute; background-color:#333; } div#navbar ul.menu li:hover ul li { float:none; } div#navbar ul.menu li:hover ul ul { display:none; } div#navbar ul.menu li:hover li:hover { position:relative; } div#navbar ul.menu li:hover li:hover ul { display:block; position:absolute; left:100%; top:0; } </style> </head> <body> <h1>CSS Menu</h1> <div id="navbar"> <ul class="menu"> <li><a href="#">A</a></li> <li> <a>B</a> <ul> <li><a href="#">123</a></li> <li><a href="#">2</a></li> <li> <a>Tweee</a> <ul> <li><a href="#">Phwoar</a></li> <li><a href="#">Gr</a></li> </ul> </li> </ul> </li> <li><a href="#">C</a></li> </ul> </div> </body> </html> Live demo: http://jsfiddle.net/4q6Vw/ Any help is appreciated.
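
    A workaround often suggested for this flavour of IE hover flicker (offered here as a hedged sketch to try, not a confirmed fix for this exact menu): give the hovered li itself a background, since IE6/7 via the :hover behaviour tends to re-evaluate hover on elements that have no background of their own.

      /* sketch: a background on the li itself often stabilises :hover in IE6/7 */
      div#navbar ul.menu li:hover {
          background-color: #333;
      }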

    Read the article

  • Generic Type constraint in .net

    - by Jose
    Okay I'm looking for some input, I'm pretty sure this is not currently supported in .NET 3.5 but here goes. I want to require a generic type passed into my class to have a constructor like this: new(IDictionary<string,object>) so the class would look like this public MyClass<T> where T : new(IDictionary<string,object>) { T CreateObject(IDictionary<string,object> values) { return new T(values); } } But the compiler doesn't support this, it doesn't really know what I'm asking. Some of you might ask, why do you want to do this? Well I'm working on a pet project of an ORM so I get values from the DB and then create the object and load the values. I thought it would be cleaner to allow the object just create itself with the values I give it. As far as I can tell I have two options: 1) Use reflection(which I'm trying to avoid) to grab the PropertyInfo[] array and then use that to load the values. 2) require T to support an interface like so: public interface ILoadValues { void LoadValues(IDictionary values); } and then do this public MyClass<T> where T:new(),ILoadValues { T CreateObject(IDictionary<string,object> values) { T obj = new T(); obj.LoadValues(values); return obj; } } The problem I have with the interface I guess is philosophical, I don't really want to expose a public method for people to load the values. Using the constructor the idea was that if I had an object like this namespace DataSource.Data { public class User { protected internal User(IDictionary<string,object> values) { //Initialize } } } As long as the MyClass<T> was in the same assembly the constructor would be available. I personally think that the Type constraint in my opinion should ask (Do I have access to this constructor? I do, great!) Anyways any input is welcome.
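
    A third option, sketched below, keeps the parameterised construction without reflection and without a public LoadValues method: let the caller hand MyClass a factory delegate. Because the delegate is written where the protected internal constructor is visible, the constructor never has to be exposed any wider (the factory parameter is an addition of this sketch, not part of the question's code).

      using System;
      using System.Collections.Generic;

      public class MyClass<T>
      {
          private readonly Func<IDictionary<string, object>, T> _factory;

          public MyClass(Func<IDictionary<string, object>, T> factory)
          {
              _factory = factory;
          }

          T CreateObject(IDictionary<string, object> values)
          {
              return _factory(values);
          }
      }

      // In the same assembly as User, where the constructor is accessible:
      // var orm = new MyClass<User>(values => new User(values));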

    Read the article

  • WPF: Improving Performance for Running on Older PCs

    - by Phil Sandler
    So, I'm building a WPF app and did a test deployment today, and found that it performed pretty poorly. I was surprised, as we are really not doing much in the way of visual effects or animations. I deployed on two machines: the fastest and the slowest that will need to run the application (the slowest PC has an Intel Celeron 1.80GHz with 2GB RAM). The application ran pretty well on the faster machine, but was choppy on the slower machine. And when I say "choppy", I mean the cursor jumped even just passing it over any open window of the app that had focus. I opened the Task Manager Performance window, and could see that the CPU usage jumped whenever the app had focus and the cursor was moving over it. If I gave focus to another (e.g. Excel), the CPU usage went back down after a second. This happened on both machines, but the choppiness was only noticeable on the slower machine. I had very limited time to tinker on the deployment machines, so didn't do a lot of detailed testing. The app runs fine on my development machine, but I also see the CPU spiking up to 10% there, just running the cursor over the window. I downloaded the WPF performance tool from MS and have been tinkering with it (on my dev machine). The docs say this about the "Frame Rate" metric in the Perforator tool: For applications without animation, this value should be near 0. The app is not doing any heavy animation, but the frame rate stays near 50 when the cursor is over any window. The screens I tested on have column headers in a grid that "highlight" and buttons that change color and appearance when scrolled over. Even moving the mouse on blank areas of the windows cause the same Frame rate and CPU usage (doesn't seem to be related to these minor animations). (Also, I am unable to figure out how to get anything but the two default tools--Perforator and Visual Profiler--installed into the WPF performance tool. That is probably a separate question). I also have Redgate's profiling tool, but I'm not sure if that can shed any light on rendering performance. So, I realize this is not an easy thing to troubleshoot without specifics or sample code (which I can't post). My questions are: What are some general things to look for (or avoid) in the code to improve performance? What steps can I take using the WPF performance tool to narrow down the problem? Is the PC spec listed above (Intel Celeron 1.80GHz with 2GB RAM) too slow to be running even vanilla WPF applications?
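
    One quick check worth running on the Celeron machine (a suggestion, not something established by the question): WPF falls back to software rendering on weak or outdated graphics hardware, and software rendering produces exactly this kind of CPU spike on simple mouse movement. The rendering tier can be read at runtime:

      // 0 = no hardware acceleration, 1 = partial, 2 = full
      int renderingTier = System.Windows.Media.RenderCapability.Tier >> 16;
      System.Diagnostics.Debug.WriteLine("WPF rendering tier: " + renderingTier);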

    Read the article

  • Implementing coroutines in Java

    - by JUST MY correct OPINION
    This question is related to my question on existing coroutine implementations in Java. If, as I suspect, it turns out that there is no full implementation of coroutines currently available in Java, what would be required to implement them? As I said in that question, I know about the following: You can implement "coroutines" as threads/thread pools behind the scenes. You can do tricksy things with JVM bytecode behind the scenes to make coroutines possible. The so-called "Da Vinci Machine" JVM implementation has primitives that make coroutines doable without bytecode manipulation. There are various JNI-based approaches to coroutines also possible. I'll address each one's deficiencies in turn. Thread-based coroutines This "solution" is pathological. The whole point of coroutines is to avoid the overhead of threading, locking, kernel scheduling, etc. Coroutines are supposed to be light and fast and to execute only in user space. Implementing them in terms of full-tilt threads with tight restrictions gets rid of all the advantages. JVM bytecode manipulation This solution is more practical, albeit a bit difficult to pull off. This is roughly the same as jumping down into assembly language for coroutine libraries in C (which is how many of them work) with the advantage that you have only one architecture to worry about and get right. It also ties you down to only running your code on fully-compliant JVM stacks (which means, for example, no Android) unless you can find a way to do the same thing on the non-compliant stack. If you do find a way to do this, however, you have now doubled your system complexity and testing needs. The Da Vinci Machine The Da Vinci Machine is cool for experimentation, but since it is not a standard JVM its features aren't going to be available everywhere. Indeed I suspect most production environments would specifically forbid the use of the Da Vinci Machine. Thus I could use this to make cool experiments but not for any code I expect to release to the real world. This also has the added problem similar to the JVM bytecode manipulation solution above: won't work on alternative stacks (like Android's). JNI implementation This solution renders the point of doing this in Java at all moot. Each combination of CPU and operating system requires independent testing and each is a point of potentially frustrating subtle failure. Alternatively, of course, I could tie myself down to one platform entirely but this, too, makes the point of doing things in Java entirely moot. So... Is there any way to implement coroutines in Java without using one of these four techniques? Or will I be forced to use the one of those four that smells the least (JVM manipulation) instead?
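
    For concreteness, this is roughly what the thread-backed approach dismissed above looks like: a "generator" whose yield is a blocking handoff to its consumer. It works, but every yield/resume pays for kernel scheduling, which is exactly the overhead coroutines are supposed to avoid (class and method names are illustrative only).

      import java.util.concurrent.SynchronousQueue;

      class ThreadBackedGenerator {
          private final SynchronousQueue<Integer> channel = new SynchronousQueue<Integer>();

          void start() {
              Thread producer = new Thread(new Runnable() {
                  public void run() {
                      try {
                          for (int i = 0; i < 3; i++) {
                              channel.put(i);   // "yield": blocks until the consumer takes it
                          }
                      } catch (InterruptedException ignored) { }
                  }
              });
              producer.setDaemon(true);
              producer.start();
          }

          int next() throws InterruptedException {
              return channel.take();            // "resume": blocks until a value is produced
          }
      }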

    Read the article

  • IOException during blocking network NIO in JDK 1.7

    - by Bass
    I'm just learning NIO, and here's the short example I've written to test how a blocking NIO can be interrupted: class TestBlockingNio { private static final boolean INTERRUPT_VIA_THREAD_INTERRUPT = true; /** * Prevent the socket from being GC'ed */ static Socket socket; private static SocketChannel connect(final int port) { while (true) { try { final SocketChannel channel = SocketChannel.open(new InetSocketAddress(port)); channel.configureBlocking(true); return channel; } catch (final IOException ioe) { try { Thread.sleep(1000); } catch (final InterruptedException ie) { } continue; } } } private static byte[] newBuffer(final int length) { final byte buffer[] = new byte[length]; for (int i = 0; i < length; i++) { buffer[i] = (byte) 'A'; } return buffer; } public static void main(final String args[]) throws IOException, InterruptedException { final int portNumber = 10000; new Thread("Reader") { public void run() { try { final ServerSocket serverSocket = new ServerSocket(portNumber); socket = serverSocket.accept(); /* * Fully ignore any input from the socket */ } catch (final IOException ioe) { ioe.printStackTrace(); } } }.start(); final SocketChannel channel = connect(portNumber); final Thread main = Thread.currentThread(); final Thread interruptor = new Thread("Inerruptor") { public void run() { System.out.println("Press Enter to interrupt I/O "); while (true) { try { System.in.read(); } catch (final IOException ioe) { ioe.printStackTrace(); } System.out.println("Interrupting..."); if (INTERRUPT_VIA_THREAD_INTERRUPT) { main.interrupt(); } else { try { channel.close(); } catch (final IOException ioe) { System.out.println(ioe.getMessage()); } } } } }; interruptor.setDaemon(true); interruptor.start(); final ByteBuffer buffer = ByteBuffer.allocate(32768); int i = 0; try { while (true) { buffer.clear(); buffer.put(newBuffer(buffer.capacity())); buffer.flip(); channel.write(buffer); System.out.print('X'); if (++i % 80 == 0) { System.out.println(); Thread.sleep(100); } } } catch (final ClosedByInterruptException cbie) { System.out.println("Closed via Thread.interrupt()"); } catch (final AsynchronousCloseException ace) { System.out.println("Closed via Channel.close()"); } } } In the above example, I'm writing to a SocketChannel, but noone is reading from the other side, so eventually the write operation hangs. This example works great when run by JDK-1.6, with the following output: Press Enter to interrupt I/O XXXX Interrupting... Closed via Thread.interrupt() — meaning that only 128k of data was written to the TCP socket's buffer. When run by JDK-1.7 (1.7.0_25-b15 and 1.7.0-u40-b37), however, the very same code bails out with an IOException: Press Enter to interrupt I/O XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX XXXXXXXXXXX Exception in thread "main" java.io.IOException: Broken pipe at sun.nio.ch.FileDispatcherImpl.write0(Native Method) at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:47) at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:93) at sun.nio.ch.IOUtil.write(IOUtil.java:65) at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:487) at com.example.TestBlockingNio.main(TestBlockingNio.java:109) Can anyone explain this different behaviour?

    Read the article

  • functional dependencies vs type families

    - by mhwombat
    I'm developing a framework for running experiments with artificial life, and I'm trying to use type families instead of functional dependencies. Type families seems to be the preferred approach among Haskellers, but I've run into a situation where functional dependencies seem like a better fit. Am I missing a trick? Here's the design using type families. (This code compiles OK.) {-# LANGUAGE TypeFamilies, FlexibleContexts #-} import Control.Monad.State (StateT) class Agent a where agentId :: a -> String liveALittle :: Universe u => a -> StateT u IO a -- plus other functions class Universe u where type MyAgent u :: * withAgent :: (MyAgent u -> StateT u IO (MyAgent u)) -> String -> StateT u IO () -- plus other functions data SimpleUniverse = SimpleUniverse { mainDir :: FilePath -- plus other fields } defaultWithAgent :: (MyAgent u -> StateT u IO (MyAgent u)) -> String -> StateT u IO () defaultWithAgent = undefined -- stub -- plus default implementations for other functions -- -- In order to use my framework, the user will need to create a typeclass -- that implements the Agent class... -- data Bug = Bug String deriving (Show, Eq) instance Agent Bug where agentId (Bug s) = s liveALittle bug = return bug -- stub -- -- .. and they'll also need to make SimpleUniverse an instance of Universe -- for their agent type. -- instance Universe SimpleUniverse where type MyAgent SimpleUniverse = Bug withAgent = defaultWithAgent -- boilerplate -- plus similar boilerplate for other functions Is there a way to avoid forcing my users to write those last two lines of boilerplate? Compare with the version using fundeps, below, which seems to make things simpler for my users. (The use of UndecideableInstances may be a red flag.) (This code also compiles OK.) {-# LANGUAGE MultiParamTypeClasses, FunctionalDependencies, FlexibleInstances, UndecidableInstances #-} import Control.Monad.State (StateT) class Agent a where agentId :: a -> String liveALittle :: Universe u a => a -> StateT u IO a -- plus other functions class Universe u a | u -> a where withAgent :: Agent a => (a -> StateT u IO a) -> String -> StateT u IO () -- plus other functions data SimpleUniverse = SimpleUniverse { mainDir :: FilePath -- plus other fields } instance Universe SimpleUniverse a where withAgent = undefined -- stub -- plus implementations for other functions -- -- In order to use my framework, the user will need to create a typeclass -- that implements the Agent class... -- data Bug = Bug String deriving (Show, Eq) instance Agent Bug where agentId (Bug s) = s liveALittle bug = return bug -- stub -- -- And now my users only have to write stuff like... -- u :: SimpleUniverse u = SimpleUniverse "mydir"

    Read the article

  • Template function as a template argument

    - by Kos
    I've just got confused how to implement something in a generic way in C++. It's a bit convoluted, so let me explain step by step. Consider such code: void a(int) { // do something } void b(int) { // something else } void function1() { a(123); a(456); } void function2() { b(123); b(456); } void test() { function1(); function2(); } It's easily noticable that function1 and function2 do the same, with the only different part being the internal function. Therefore, I want to make function generic to avoid code redundancy. I can do it using function pointers or templates. Let me choose the latter for now. My thinking is that it's better since the compiler will surely be able to inline the functions - am I correct? Can compilers still inline the calls if they are made via function pointers? This is a side-question. OK, back to the original point... A solution with templates: void a(int) { // do something } void b(int) { // something else } template<void (*param)(int) > void function() { param(123); param(456); } void test() { function<a>(); function<b>(); } All OK. But I'm running into a problem: Can I still do that if a and b are generics themselves? template<typename T> void a(T t) { // do something } template<typename T> void b(T t) { // something else } template< ...param... > // ??? void function() { param<SomeType>(someobj); param<AnotherType>(someotherobj); } void test() { function<a>(); function<b>(); } I know that a template parameter can be one of: a type, a template type, a value of a type. None of those seems to cover my situation. My main question is hence: How do I solve that, i.e. define function() in the last example? (Yes, function pointers seem to be a workaround in this exact case - provided they can also be inlined - but I'm looking for a general solution for this class of problems).
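
    Since a function template cannot itself be a template argument, one common workaround (sketched below with stand-in argument types) is to wrap each function template in a small functor whose call operator is templated, and pass the functor type instead; the compiler can still see through and inline the calls.

      #include <iostream>

      struct A {
          template<typename T>
          void operator()(T t) const { std::cout << "a(" << t << ")\n"; }
      };

      struct B {
          template<typename T>
          void operator()(T t) const { std::cout << "b(" << t << ")\n"; }
      };

      template<typename Param>
      void function() {
          Param p;
          p(123);    // T deduced as int
          p(4.56);   // T deduced as double
      }

      int main() {
          function<A>();
          function<B>();
      }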

    Read the article

  • Nginx & Apache: Cannot get try_files to work with permalinks

    - by tcherokee
    I have been working on this for the past two weeks not and for some reason I cannot seem to get nginx's try_files to work with my wordpress permalinks. I am hoping someone will be able to tell me where I am going wrong and also hopefully tell me if I made any major errors with my configurations as well (I am an nginx newbie... but learning :) ). Here are my Configuration files nginx.conf user www-data; worker_processes 4; pid /var/run/nginx.pid; events { worker_connections 768; # multi_accept on; } http { ## # Basic Settings ## sendfile on; tcp_nopush on; tcp_nodelay on; keepalive_timeout 65; types_hash_max_size 2048; # server_tokens off; # server_names_hash_bucket_size 64; # server_name_in_redirect off; include /etc/nginx/mime.types; default_type application/octet-stream; ## # Logging Settings ## # Defines the cache log format, cache log location # and the main access log location. log_format cache '***$time_local ' '$upstream_cache_status ' 'Cache-Control: $upstream_http_cache_control ' 'Expires: $upstream_http_expires ' '$host ' '"$request" ($status) ' '"$http_user_agent" ' ; access_log /var/log/nginx/access.log; error_log /var/log/nginx/error.log; include /etc/nginx/conf.d/*.conf; include /etc/nginx/sites-enabled/*; } mydomain.com.conf server { listen 123.456.78.901:80; # IP goes here. server_name www.mydomain.com mydomain.com; #root /var/www/mydomain.com/prod; index index.php; ## mydomain.com -> www.mydomain.com (301 - Permanent) if ($host !~* ^(www|dev)) { rewrite ^/(.*)$ $scheme://www.$host/$1 permanent; } # Add trailing slash to */wp-admin requests. rewrite /wp-admin$ $scheme://$host$uri/ permanent; # All media (including uploaded) is under wp-content/ so # instead of caching the response from apache, we're just # going to use nginx to serve directly from there. location ~* ^/(wp-content|wp-includes)/(.*)\.(jpg|png|gif|jpeg|css|js|m$ root /var/www/mydomain.com/prod; } # Don't cache these pages. location ~* ^/(wp-admin|wp-login.php) { proxy_pass http://backend; } location / { if ($http_cookie ~* "wordpress_logged_in_[^=]*=([^%]+)%7C") { set $do_not_cache 1; } proxy_cache_key "$scheme://$host$request_uri $do_not_cache"; proxy_cache main; proxy_pass http://backend; proxy_cache_valid 30m; # 200, 301 and 302 will be cached. # Fallback to stale cache on certain errors. # 503 is deliberately missing, if we're down for maintenance # we want the page to display. #try_files $uri $uri/ /index.php?q=$uri$args; #try_files $uri =404; proxy_cache_use_stale error timeout invalid_header http_500 http_502 http_504 http_404; } # Cache purge URL - works in tandem with WP plugin. # location ~ /purge(/.*) { # proxy_cache_purge main "$scheme://$host$1"; # } # No access to .htaccess files. location ~ /\.ht { deny all; } } # End server gzip.conf # Gzip Configuration. 
gzip on; gzip_disable msie6; gzip_static on; gzip_comp_level 4; gzip_proxied any; gzip_types text/plain text/css application/x-javascript text/xml application/xml application/xml+rss text/javascript; proxy.conf # Set proxy headers for the passthrough proxy_redirect off; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_max_temp_file_size 0; client_max_body_size 10m; client_body_buffer_size 128k; proxy_connect_timeout 90; proxy_send_timeout 90; proxy_read_timeout 90; proxy_buffer_size 4k; proxy_buffers 4 32k; proxy_busy_buffers_size 64k; proxy_temp_file_write_size 64k; add_header X-Cache-Status $upstream_cache_status; backend.conf upstream backend { # Defines backends. # Extracting here makes it easier to load balance # in the future. Needs to be specific IP as Plesk # doesn't have Apache listening on localhost. ip_hash; server 127.0.0.1:8001; # IP goes here. } cache.conf # Proxy cache and temp configuration. proxy_cache_path /var/www/nginx_cache levels=1:2 keys_zone=main:10m max_size=1g inactive=30m; proxy_temp_path /var/www/nginx_temp; proxy_cache_key "$scheme://$host$request_uri"; proxy_redirect off; # Cache different return codes for different lengths of time # We cached normal pages for 10 minutes proxy_cache_valid 200 302 10m; proxy_cache_valid 404 1m; The two commented out try_files in location \ of the mydomain config files are the ones I tried. This error I found in the error log can be found below. ...rewrite or internal redirection cycle while internally redirecting to "/index.php" Thanks in advance
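
    One arrangement that usually avoids the "internal redirection cycle" (a sketch only, noting that the root for location / is commented out in the config above, which is very likely why try_files never finds anything): give try_files a real document root to check against, and make its final fallback a named location that proxies to Apache rather than /index.php itself.

      location / {
          root /var/www/mydomain.com/prod;
          try_files $uri $uri/ @apache;
      }

      location @apache {
          # the existing proxy_cache_* / proxy_pass directives can move here
          proxy_pass http://backend;
      }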

    Read the article

  • Abnormally disconnected TCP sockets and write timeout

    - by James
    Hello I will try to explain the problem in shortest possible words. I am using c++ builder 2010. I am using TIdTCPServer and sending voice packets to a list of connected clients. Everything works ok untill any client is disconnected abnormally, For example power failure etc. I can reproduce similar disconnect by cutting the ethernet connection of a connected client. So now we have a disconnected socket but as you know it is not yet detected at server side so server will continue to try to send data to that client too. But when server try to write data to that disconnected client ...... Write() or WriteLn() HANGS there in trying to write, It is like it is wating for somekind of Write timeout. This hangs the hole packet distribution process as a result creating a lag in data transmission to all other clients. After few seconds "Socket Connection Closed" Exception is raised and data flow continues. Here is the code try { EnterCriticalSection(&SlotListenersCriticalSection); for(int i=0;i<SlotListeners->Count;i++) { try { //Here the process will HANG for several seconds on a disconnected socket ((TIdContext*) SlotListeners->Objects[i])->Connection->IOHandler->WriteLn("Some DATA"); }catch(Exception &e) { SlotListeners->Delete(i); } } }__finally { LeaveCriticalSection(&SlotListenersCriticalSection); } Ok i already have a keep alive mechanism which disconnect the socket after n seconds of inactivity. But as you can imagine, still this mechnism cant sync exactly with this braodcasting loop because this braodcasting loop is running almost all the time. So is there any Write timeouts i can specify may be through iohandler or something ? I have seen many many threads about "Detecting disconnected tcp socket" but my problem is little different, i need to avoid that hangup for few seconds during the write attempt. So is there any solution ? Or should i consider using some different mechanism for such data broadcasting for example the broadcasting loop put the data packet in some kind of FIFO buffer and client threads continuously check for available data and pick and deliver it to themselves ? This way if one thread hangs it will not stop/delay the over all distribution thread. Any ideas please ? Thanks for your time and help. Regards Jams

    Read the article

  • jQuery AJAX POST gives undefined index

    - by Sebastian
    My eventinfo.php is giving the following output: <br /> <b>Notice</b>: Undefined index: club in <b>/homepages/19/d361310357/htdocs/guestvibe/wp-content/themes/yasmin/guestvibe/eventinfo.php</b> on line <b>11</b><br /> [] HTML (index.php): <select name="club" class="dropdown" id="club"> <?php getClubs(); ?> </select> jQuery (index.php): <script type="text/javascript"> $(document).ready(function() { $.ajax({ type: "POST", url: "http://www.guestvibe.com/wp-content/themes/yasmin/guestvibe/eventinfo.php", data: $('#club').serialize(), success: function(data) { $('#rightbox_inside').html('<h2>' + $('#club').val() + '<span style="font-size: 14px"> (' + data[0].day + ')</h2><hr><p><b>Entry:</b> ' + data[0].entry + '</p><p><b>Queue jump:</b> ' + data[0].queuejump + '</p><br><p><i>Guestlist closes at ' + data[0].closing + '</i></p>'); }, dataType: "json" }); }); $('#club').change(function(event) { $.ajax({ type: "POST", url: "http://www.guestvibe.com/wp-content/themes/yasmin/guestvibe/eventinfo.php", data: $(this).serialize(), success: function(data) { $('#rightbox_inside').hide().html('<h2>' + $('#club').val() + '<span style="font-size: 14px"> (' + data[0].day + ')</h2><hr><p><b>Entry:</b> ' + data[0].entry + '</p><p><b>Queue jump:</b> ' + data[0].queuejump + '</p><br><p><i>Guestlist closes at ' + data[0].closing + '</i></p>').fadeIn('500'); }, dataType: "json" }); }); </script> I can run alerts from the jQuery, so it is active. I've copied this as is from an old version of the website, but I've changed the file structure (through to move to WordPress) so I suspect the variables might not even be reaching eventinfo.php in the first place... index.php is in wp-content/themes/cambridge and eventinfo.php is in wp-content/themes/yasmin/guestvibe but I've tried to avoid structuring issues by referencing the URL in full. Any ideas?
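
    A quick way to confirm whether anything is reaching eventinfo.php at all (a debugging sketch rather than a fix): guard the index instead of reading it blindly, and log the whole POST array when it is missing.

      $club = isset($_POST['club']) ? $_POST['club'] : null;
      if ($club === null) {
          // nothing arrived under 'club'; log exactly what did arrive
          error_log('eventinfo.php received: ' . print_r($_POST, true));
      }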

    Read the article

  • Specializing a template function that takes a universal reference parameter

    - by David Stone
    How do I specialize a template function that takes a universal reference parameter? foo.hpp: template<typename T> void foo(T && t) // universal reference parameter foo.cpp template<> void foo<Class>(Class && class) { // do something complicated } Here, Class is no longer a deduced type and thus is Class exactly; it cannot possibly be Class &, so reference collapsing rules will not help me here. I could perhaps create another specialization that takes a Class & parameter (I'm not sure), but that implies duplicating all of the code contained within foo for every possible combination of rvalue / lvalue references for all parameters, which is what universal references are supposed to avoid. Is there some way to accomplish this? To be more specific about my problem in case there is a better way to solve it: I have a program that can connect to multiple game servers, and each server, for the most part, calls everything by the same name. However, they have slightly different versions for a few things. There are a few different categories that these things can be: a move, an item, etc. I have written a generic sort of "move string to move enum" set of functions for internal code to call, and my server interface code has similar functions. However, some servers have their own internal ID that they communicate with, some use strings, and some use both in different situations. Now what I want to do is make this a little more generic. I want to be able to call something like ServerNamespace::server_cast<Destination>(source). This would allow me to cast from a Move to a std::string or ServerMoveID. Internally, I may need to make a copy (or move from) because some servers require that I keep a history of messages sent. Universal references seem to be the obvious solution to this problem. The header file I'm thinking of right now would expose simply this: namespace ServerNamespace { template<typename Destination, typename Source> Destination server_cast(Source && source); } And the implementation file would define all legal conversions as template specializations.
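
    One pattern that sidesteps function-template specialization entirely (a sketch, with Class standing in for the real type): keep the universal-reference function as a thin forwarder and push the per-type work into a class template, which can be specialized for the lvalue and rvalue deductions separately, sharing a common body between them where that makes sense.

      #include <utility>

      struct Class {};

      template<typename T>
      struct foo_impl {
          static void apply(T&& t) { /* generic case */ }
      };

      template<>
      struct foo_impl<Class> {     // foo(Class{}): T deduced as Class (rvalue)
          static void apply(Class&& c) { /* do something complicated */ }
      };

      template<>
      struct foo_impl<Class&> {    // foo(c) on an lvalue: T deduced as Class&
          static void apply(Class& c) { /* lvalue path, may delegate to shared code */ }
      };

      template<typename T>
      void foo(T&& t) {
          foo_impl<T>::apply(std::forward<T>(t));
      }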

    Read the article

  • Subband decomposition using Daubechies filter

    - by misha
    I have the following two 8-tap filters: h0 ['-0.010597', '0.032883', '0.030841', '-0.187035', '-0.027984', '0.630881', '0.714847', '0.230378'] h1 ['-0.230378', '0.714847', '-0.630881', '-0.027984', '0.187035', '0.030841', '-0.032883', '-0.010597'] Here they are on a graph: I'm using it to obtain the approximation (lower subband of an image). This is a(m,n) in the following diagram: I got the coefficients and diagram from the book Digital Image Processing, 3rd Edition, so I trust that they are correct. The star symbol denotes one dimensional convolution (either over rows or over columns). The down arrow denotes downsampling in one dimension (either over rows, or columns). My problem is that the filter coefficients for h0 and h1 sum to greater than 1 (approximately 1.4 or sqrt(2) to be exact). Naturally, if I convolve any image with the filter, the image will get brighter. Indeed, here's what I get (expected result on right): Can somebody suggest what the problem is here? Why should it work if the convolution filter coefficients sum to greater than 1? I have the source code, but it's quite long so I'm hoping to avoid posting it here. If it's absolutely necessary, I'll put it up later. EDIT What I'm doing is: Decompose into subbands Filter one of the subbands Recompose subbands into original image Note that the point isn't just to have a displayable subband-decomposed image -- I have to be able to perfectly reconstruct the original image from the subbands as well. So if I scale the filtered image in order to compensate for my decomposition filter making the image brighter, this is what I will have to do: Decompose into subbands Apply intensity scaling Filter one of the subbands Apply inverse intensity scaling Recompose subbands into original image Step 2 performs the scaling. This is what @Benjamin is suggesting. The problem is that then step 4 becomes necessary, or the original image will not be properly reconstructed. This longer method will work. However, the textbook explicitly says that no scaling is performed on the approximation subband. Of course, it's possible that the textbook is wrong. However, what's more possible is I'm misunderstanding something about the way this all works -- this is why I'm asking this question.
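
    A quick numerical check (sketched in NumPy) makes the situation clearer: the coefficients above have unit energy but a DC gain of sqrt(2), which is the usual normalisation for an orthonormal wavelet filter bank; the analysis/synthesis pair reconstructs with unit gain overall, which is why the textbook applies no scaling to the approximation band even though a single filtering pass looks "too bright" on screen.

      import numpy as np

      h0 = np.array([-0.010597, 0.032883, 0.030841, -0.187035,
                     -0.027984, 0.630881, 0.714847, 0.230378])

      print(h0.sum())          # ~1.41421 (sqrt(2)): DC gain of the lowpass branch
      print(np.sum(h0 ** 2))   # ~1.0: unit energy, the orthonormality condition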

    Read the article

  • How does Sentry aggregate errors?

    - by Hugo Rodger-Brown
    I am using Sentry (in a django project), and I'd like to know how I can get the errors to aggregate properly. I am logging certain user actions as errors, so there is no underlying system exception, and am using the culprit attribute to set a friendly error name. The message is templated, and contains a common message ("User 'x' was unable to perform action because 'y'"), but is never exactly the same (different users, different conditions). Sentry clearly uses some set of attributes under the hood to determine whether to aggregate errors as the same exception, but despite having looked through the code, I can't work out how. Can anyone short-cut my having to dig further into the code and tell me what properties I need to set in order to manage aggregation as I would like? [UPDATE 1: event grouping] This line appears in sentry.models.Group: class Group(MessageBase): """ Aggregated message which summarizes a set of Events. """ ... class Meta: unique_together = (('project', 'logger', 'culprit', 'checksum'),) ... Which makes sense - project, logger and culprit I am setting at the moment - the problem is checksum. I will investigate further, however 'checksum' suggests that binary equivalence, which is never going to work - it must be possible to group instances of the same exception, with differenct attributes? [UPDATE 2: event checksums] The event checksum comes from the sentry.manager.get_checksum_from_event method: def get_checksum_from_event(event): for interface in event.interfaces.itervalues(): result = interface.get_hash() if result: hash = hashlib.md5() for r in result: hash.update(to_string(r)) return hash.hexdigest() return hashlib.md5(to_string(event.message)).hexdigest() Next stop - where do the event interfaces come from? [UPDATE 3: event interfaces] I have worked out that interfaces refer to the standard mechanism for describing data passed into sentry events, and that I am using the standard sentry.interfaces.Message and sentry.interfaces.User interfaces. Both of these will contain different data depending on the exception instance - and so a checksum will never match. Is there any way that I can exclude these from the checksum calculation? (Or at least the User interface value, as that has to be different - the Message interface value I could standardise.) [UPDATE 4: solution] Here are the two get_hash functions for the Message and User interfaces respectively: # sentry.interfaces.Message def get_hash(self): return [self.message] # sentry.interfaces.User def get_hash(self): return [] Looking at these two, only the Message.get_hash interface will return a value that is picked up by the get_checksum_for_event method, and so this is the one that will be returned (hashed etc.) The net effect of this is that the the checksum is evaluated on the message alone - which in theory means that I can standardise the message and keep the user definition unique. I've answered my own question here, but hopefully my investigation is of use to others having the same problem. (As an aside, I've also submitted a pull request against the Sentry documentation as part of this ;-)) (Note to anyone using / extending Sentry with custom interfaces - if you want to avoid your interface being use to group exceptions, return an empty list.)
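
    A practical consequence of the investigation above, sketched with the standard logging integration (variable names invented): keep the message template constant and pass the variable parts as arguments, so Message.get_hash() sees the same string every time and the events group, while the per-user detail stays visible on the event.

      import logging

      logger = logging.getLogger(__name__)

      username, reason = "x", "y"   # stand-ins for the real values

      # constant template -> constant checksum; the arguments are carried
      # separately as parameters rather than baked into the message
      logger.error("User %s was unable to perform action because %s",
                   username, reason)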

    Read the article

  • Dynamically change MySQL query within a PHP file using jQuery .post?

    - by John
    Hi, Been trying this for quite a while now and I need help. Basically I have a PHP file that queries database and I want to change the query based on a logged in users name. What happens on my site is that a user logs on with Twitter Oauth and I can display their details (twitter username etc.). I have a database which the user has added information to and I what I would like to happen is when the user logs in with Twitter Oauth, I could use jQuery to take the users username and update the mysql query to show only the results where the user_name = that particular users name. At the moment the mysql query is: "SELECT * FROM markers WHERE user_name = 'dave'" I've tried something like: "SELECT * FROM markers WHERE user_name = '$user_name'" And elsewhere in the PHP file I have $user_name = $_POST['user_name'];. In a separate file (the one in which the user is redirected to after they log in through Twitter) I have some jQuery like this: $(document).ready(function(){ $.post('phpsqlinfo_resultb.php',{user_name:"<?PHP echo $profile_name?>"})}); $profile_name has been defined earlier on that page. I know i'm clearly doing something wrong, i'm still learning. Is there a way to achieve what I want using jQuery to post the users username to the PHP file to change the mysql query to display only the results related to the user that is logged in. I've included the PHP file with the query below: <?php // create a new XML document //$doc = domxml_new_doc('1.0'); $doc = new DomDocument('1.0'); //$root = $doc->create_element('markers'); //$root = $doc->append_child($root); $root = $doc->createElement('markers'); $root = $doc->appendChild($root); $table_id = 'marker'; $user_name = $_POST['user_name']; // Make a MySQL Connection include("phpsqlinfo_addrow.php"); $result = mysql_query("SELECT * FROM markers WHERE user_name = '$user_name'") or die(mysql_error()); // process one row at a time //header("Content-type: text/xml"); header('Content-type: text/xml; charset=utf-8'); while($row = mysql_fetch_assoc($result)) { // add node for each row $occ = $doc->createElement($table_id); $occ = $root->appendChild($occ); $occ->setAttribute('lat', $row['lat']); $occ->setAttribute('lng', $row['lng']); $occ->setAttribute('type', $row['type']); $occ->setAttribute('user_name', utf8_encode($row['user_name'])); $occ->setAttribute('name', utf8_encode($row['name'])); $occ->setAttribute('tweet', utf8_encode($row['tweet'])); $occ->setAttribute('image', utf8_encode($row['image'])); } // while $xml_string = $doc->saveXML(); $user_name2->response; echo $xml_string; ?> This is for use with a google map mashup im trying to do. Many thanks if you can help me. If my question isn't clear enough, please say and i'll try to clarify for you. I'm sure this is a simple fix, i'm just relatively inexperienced to do it. Been at this for two days and i'm running out of time unfortunately.
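
    Whatever fixes the POST itself, the value should not be dropped into the query raw. A small guard on the PHP side (sketched against the same old mysql_* API the file already uses) avoids the notice when nothing arrives and escapes the name before it reaches the query:

      $user_name = isset($_POST['user_name']) ? $_POST['user_name'] : '';
      $user_name = mysql_real_escape_string($user_name);   // requires the open connection

      $result = mysql_query("SELECT * FROM markers WHERE user_name = '$user_name'")
                or die(mysql_error());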

    Read the article

  • C# Serialization Surrogate - Cannot access a disposed object

    - by crushhawk
    I have an image class (VisionImage) that is a black box to me. I'm attempting to serialize the image object to file using Serialization Surrogates as explained on this page. Below is my surrogate code. sealed class VisionImageSerializationSurrogate : ISerializationSurrogate { public void GetObjectData(Object obj, SerializationInfo info, StreamingContext context) { VisionImage image = (VisionImage)obj; byte[,] temp = image.ImageToArray().U8; info.AddValue("width", image.Width); info.AddValue("height", image.Height); info.AddValue("pixelvalues", temp, temp.GetType()); } public Object SetObjectData(Object obj, SerializationInfo info, StreamingContext context, ISurrogateSelector selector) { VisionImage image = (VisionImage)obj; Int32 width = info.GetInt32("width"); Int32 height = info.GetInt32("height"); byte[,] temp = new byte[height, width]; temp = (byte[,])info.GetValue("pixelvalues", temp.GetType()); PixelValue2D tempPixels = new PixelValue2D(temp); image.ArrayToImage(tempPixels); return image; } } I've stepped through it to write to binary. It seems to be working fine (file is getting bigger as the images are captured). I tried to test it read the file back in. The values read back in are correct as far as the "info" object is concerned. When I get to the line image.ArrayToImage(tempPixels); It throws the "Cannot access a disposed object" exception. Upon further inspection, the object and the resulting image are both marked as disposed. My code behind the form spawns an "acquisitionWorker" and runs the following code. void acquisitionWorker_LoadImages(object sender, DoWorkEventArgs e) { // This is the main function of the acquisition background worker thread. // Perform image processing here instead of the UI thread to avoid a // sluggish or unresponsive UI. BackgroundWorker worker = (BackgroundWorker)sender; try { uint bufferNumber = 0; // Loop until we tell the thread to cancel or we get an error. When this // function completes the acquisitionWorker_RunWorkerCompleted method will // be called. while (!worker.CancellationPending) { VisionImage savedImage = (VisionImage) formatter.Deserialize(fs); CommonAlgorithms.Copy(savedImage, imageViewer.Image); // Update the UI by calling ReportProgress on the background worker. // This will call the acquisition_ProgressChanged method in the UI // thread, where it is safe to update UI elements. Do not update UI // elements directly in this thread as doing so could result in a // deadlock. worker.ReportProgress(0, bufferNumber); bufferNumber++; } } catch (ImaqException ex) { // If an error occurs and the background worker thread is not being // cancelled, then pass the exception along in the result so that // it can be handled in the acquisition_RunWorkerCompleted method. if (!worker.CancellationPending) e.Result = ex; } } What am I missing here? Why would the object be immediately disposed?

    Read the article

  • Endianness conversion and g++ warnings

    - by SuperBloup
    I've got the following C++ code : template <int isBigEndian, typename val> struct EndiannessConv { inline static val fromLittleEndianToHost( val v ) { union { val outVal __attribute__ ((used)); uint8_t bytes[ sizeof( val ) ] __attribute__ ((used)); } ; outVal = v; std::reverse( &bytes[0], &bytes[ sizeof(val) ] ); return outVal; } inline static void convertArray( val v[], uint32_t size ) { // TODO : find a way to map the array for (uint32_t i = 0; i < size; i++) for (uint32_t i = 0; i < size; i++) v[i] = fromLittleEndianToHost( v[i] ); } }; Which work and has been tested (without the used attributes). When compiling I obtain the following errors from g++ (version 4.4.1) || g++ -Wall -Wextra -O3 -o t t.cc || t.cc: In static member function 'static val EndiannessConv<isBigEndian, val>::fromLittleEndianToHost(val)': t.cc|98| warning: 'used' attribute ignored t.cc|99| warning: 'used' attribute ignored || t.cc: In static member function 'static val EndiannessConv<isBigEndian, val>::fromLittleEndianToHost(val) [with int isBigEndian = 1, val = double]': t.cc|148| instantiated from here t.cc|100| warning: unused variable 'outVal' t.cc|100| warning: unused variable 'bytes' I've tried to use the following code : template <int size, typename valType> struct EndianInverser { /* should not compile */ }; template <typename valType> struct EndianInverser<4, valType> { static inline valType reverseEndianness( const valType &val ) { uint32_t castedVal = *reinterpret_cast<const uint32_t*>( &val ); castedVal = (castedVal & 0x000000FF << (3 * 8)) | (castedVal & 0x0000FF00 << (1 * 8)) | (castedVal & 0x00FF0000 >> (1 * 8)) | (castedVal & 0xFF000000 >> (3 * 8)); return *reinterpret_cast<valType*>( &castedVal ); } }; but it break when enabling optimizations due to the type punning. So, why does my used attribute got ignored? Is there a workaround to convert endianness (I rely on the enum to avoid type punning) in templates?
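
    A memcpy-based version (sketched below) avoids both the anonymous union and the reinterpret_cast punning, so there is nothing for the used attribute to rescue and nothing for strict-aliasing optimisation to break; compilers generally optimise the small fixed-size copies away.

      #include <algorithm>
      #include <cstring>

      template <typename val>
      inline val fromLittleEndianToHost(val v)
      {
          unsigned char bytes[sizeof(val)];
          std::memcpy(bytes, &v, sizeof(val));   // well-defined, unlike type punning
          std::reverse(bytes, bytes + sizeof(val));
          val out;
          std::memcpy(&out, bytes, sizeof(val));
          return out;
      }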

    Read the article

  • Helping linqtosql datacontext use implicit conversion between varchar column in the database and tab

    - by user213256
    I am creating an mssql database table, "Orders", that will contain a varchar(50) field, "Value" containing a string that represents a slightly complex data type, "OrderValue". I am using a linqtosql datacontext class, which automatically types the "Value" column as a string. I gave the "OrderValue" class implicit conversion operators to and from a string, so I can easily use implicit conversion with the linqtosql classes like this: // get an order from the orders table MyDataContext db = new MyDataContext(); Order order = db.Orders(o => o.id == 1); // use implicit converstion to turn the string representation of the order // value into the complex data type. OrderValue value = order.Value; // adjust one of the fields in the complex data type value.Shipping += 10; // use implicit conversion to store the string representation of the complex // data type back in the linqtosql order object order.Value = value; // save changes db.SubmitChanges(); However, I would really like to be able to tell the linqtosql class to type this field as "OrderValue" rather than as "string". Then I would be able to avoid complex code and re-write the above as: // get an order from the orders table MyDataContext db = new MyDataContext(); Order order = db.Orders(o => o.id == 1); // The Value field is already typed as the "OrderValue" type rather than as string. // When a string value was read from the database table, it was implicity converted // to "OrderValue" type. order.Value.Shipping += 10; // save changes db.SubmitChanges(); In order to achieve this desired goal, I looked at the datacontext designer and selected the "Value" field of the "Order" table. Then, in properties, I changed "Type" to "global::MyApplication.OrderValue". The "Server Data Type" property was left as "VarChar(50) NOT NULL" The project built without errors. However, when reading from the database table, I was presented with the following error message: Could not convert from type 'System.String' to type 'MyApplication.OrderValue'. at System.Data.Linq.DBConvert.ChangeType(Object value, Type type) at Read_Order(ObjectMaterializer1 ) at System.Data.Linq.SqlClient.ObjectReaderCompiler.ObjectReader2.MoveNext() at System.Linq.Buffer1..ctor(IEnumerable1 source) at System.Linq.Enumerable.ToArray[TSource](IEnumerable`1 source) at Example.OrdersProvider.GetOrders() at ... etc From the stack trace, I believe this error is happening while reading the data from the table. When presented with converting a string to my custom data type, even though the implicit conversion operators are present, the DBConvert class gets confused and throws an error. Is there anything I can do to help it not get confused and do the implicit conversion? Thanks in advance, and apologies if I have posted in the wrong forum. cheers / Ben
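
    One workaround that leaves the generated mapping alone (a sketch; the property name is invented): keep the column typed as string in the designer and add an unmapped convenience property in a partial class. DBConvert then never has to perform the conversion when materialising rows, while calling code still works with OrderValue directly through the implicit operators described above.

      public partial class Order
      {
          // not mapped, so LINQ to SQL never tries to read it from the data reader
          public OrderValue ParsedValue
          {
              get { return this.Value; }    // implicit string -> OrderValue
              set { this.Value = value; }   // implicit OrderValue -> string
          }
      }

      // usage:
      // OrderValue v = order.ParsedValue;
      // v.Shipping += 10;
      // order.ParsedValue = v;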

    Read the article

  • Efficient file buffering & scanning methods for large files in python

    - by eblume
    The description of the problem I am having is a bit complicated, and I will err on the side of providing more complete information. For the impatient, here is the briefest way I can summarize it: What is the fastest (least execution time) way to split a text file in to ALL (overlapping) substrings of size N (bound N, eg 36) while throwing out newline characters. I am writing a module which parses files in the FASTA ascii-based genome format. These files comprise what is known as the 'hg18' human reference genome, which you can download from the UCSC genome browser (go slugs!) if you like. As you will notice, the genome files are composed of chr[1..22].fa and chr[XY].fa, as well as a set of other small files which are not used in this module. Several modules already exist for parsing FASTA files, such as BioPython's SeqIO. (Sorry, I'd post a link, but I don't have the points to do so yet.) Unfortunately, every module I've been able to find doesn't do the specific operation I am trying to do. My module needs to split the genome data ('CAGTACGTCAGACTATACGGAGCTA' could be a line, for instance) in to every single overlapping N-length substring. Let me give an example using a very small file (the actual chromosome files are between 355 and 20 million characters long) and N=8 import cStringIO example_file = cStringIO.StringIO("""\ header CAGTcag TFgcACF """) for read in parse(example_file): ... print read ... CAGTCAGTF AGTCAGTFG GTCAGTFGC TCAGTFGCA CAGTFGCAC AGTFGCACF The function that I found had the absolute best performance from the methods I could think of is this: def parse(file): size = 8 # of course in my code this is a function argument file.readline() # skip past the header buffer = '' for line in file: buffer += line.rstrip().upper() while len(buffer) = size: yield buffer[:size] buffer = buffer[1:] This works, but unfortunately it still takes about 1.5 hours (see note below) to parse the human genome this way. Perhaps this is the very best I am going to see with this method (a complete code refactor might be in order, but I'd like to avoid it as this approach has some very specific advantages in other areas of the code), but I thought I would turn this over to the community. Thanks! Note, this time includes a lot of extra calculation, such as computing the opposing strand read and doing hashtable lookups on a hash of approximately 5G in size. Post-answer conclusion: It turns out that using fileobj.read() and then manipulating the resulting string (string.replace(), etc.) took relatively little time and memory compared to the remainder of the program, and so I used that approach. Thanks everyone!
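
    For reference, the whole-file approach described in the conclusion looks roughly like this (a sketch that keeps the same generator interface, trading memory for the per-line buffer juggling):

      def parse(file, size=8):
          file.readline()                                # skip the FASTA header
          data = file.read().replace('\n', '').upper()   # one big string, no newlines
          for i in range(len(data) - size + 1):          # every overlapping window
              yield data[i:i + size]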

    Read the article

  • jQuery - Loading a page with .load and selector doesn't execute script?

    - by PirateKitten
    I'm trying to load one page into another using the .load() method. This loaded page contains a script that I want to execute when it has finished loading. I've put together a barebones example to demonstrate: Index.html: <html> <head> <title>Jquery Test</title> <script type="text/javascript" src="script/jquery-1.3.2.min.js"></script> <script type="text/javascript"> $(document).ready(function() { $('#nav a').click(function() { $('#contentHolder').load('content.html #toLoad', '', function() {}); return false; }); }); </script> </head> <body> <div id="nav"> <a href="content.html">Click me!</a> </div> <hr /> <div id="contentHolder"> Content loaded will go here </div> </body> </html> Content.html: <div id="toLoad"> This content is from content.html <div id="contentDiv"> This should fade away. </div> <script type="text/javascript"> $('#contentDiv').fadeOut('slow', function() {} ); </script> </div> When the link is clicked, the content should load and the second paragraph should fade away. However it doesn't execute. If I stick a simple alert("") in the script of content.html it doesn't execute either. However, if I do away with the #toLoad selector in the .load() call, it works fine. I am not sure why this is, as the block is clearly in the scope of the #toLoad div. I don't want to avoid using the selector, as in reality the content.html will be a full HTML page, and I'll only want a select part out of it. Any ideas? If the script from content.html was in the .load() callback, it works fine, however I obviously don't want that logic contained within index.html. I could possibly have the callback use .getScript() to load "content.html.js" afterwards and have the logic in there, that seems to work? I'd prefer to keep the script in content.html, if possible, so that it executes fine when loaded normally too. In fact, I might do this anyway, but I would like to know why the above doesn't work.
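
    The getScript variant floated above would look something like this (a sketch; "content.js" is an assumed file holding the fade-out logic). It keeps index.html free of content-specific code while still running it once the fragment is in place:

      $('#nav a').click(function() {
          $('#contentHolder').load('content.html #toLoad', function() {
              // jQuery strips <script> blocks when a selector is supplied,
              // so pull the behaviour in explicitly after the load completes
              $.getScript('content.js');
          });
          return false;
      });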

    Read the article

  • Screenshot in Android

    - by ujjawal
    The following is the code I am using to take a screen shot using GLSurfaceView. But I dont know why the onDraw() method in the GLSurfaceView.Renderer Class is not being called. Please if some one can look at the code below and point out what am I doing wrong.`public class MainActivity extends Activity { private GLSurfaceView mGLView; int x,y,w,h; Display disp; /** Called when the activity is first created. */ @Override public void onCreate(Bundle icicle) { super.onCreate(icicle); // ToDo add your GUI initialization code here setContentView(R.layout.main); x=0; y=0; disp = getWindowManager().getDefaultDisplay(); w = disp.getWidth(); h = disp.getHeight(); mGLView = new ClearGLSurfaceView(this); } class ClearGLSurfaceView extends GLSurfaceView { public ClearGLSurfaceView(Context context) { super(context); setDebugFlags(DEBUG_CHECK_GL_ERROR | DEBUG_LOG_GL_CALLS); mRenderer = new ClearRenderer(); setRenderer(mRenderer); } ClearRenderer mRenderer; } class ClearRenderer implements GLSurfaceView.Renderer { public void onSurfaceCreated(GL10 gl, EGLConfig config) { // Do nothing special. } public void onSurfaceChanged(GL10 gl, int w, int h) { //gl.glViewport(0, 0, w, h); } public void onDrawFrame(GL10 gl) { //gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT); int b[]=new int[w*(y+h)]; int bt[]=new int[w*h]; IntBuffer ib=IntBuffer.wrap(b); ib.position(0); gl.glReadPixels(x, 0, w, y+h, GL10.GL_RGBA, GL10.GL_UNSIGNED_BYTE, ib); for(int i=0, k=0; i<h; i++, k++) {//remember, that OpenGL bitmap is incompatible with Android bitmap //and so, some correction need. for(int j=0; j<w; j++) { int pix=b[i*w+j]; int pb=(pix>>16)&0xff; int pr=(pix<<16)&0x00ff0000; int pix1=(pix&0xff00ff00) | pr | pb; bt[(h-k-1)*w+j]=pix1; } } Bitmap bmp = Bitmap.createBitmap(bt, w, h,Bitmap.Config.ARGB_8888); try { File f = new File("/sdcard/testpicture.png"); f.createNewFile(); FileOutputStream fos=new FileOutputStream(f); bmp.compress(CompressFormat.PNG, 100, fos); try { fos.flush(); } catch (IOException e) { // TODO Auto-generated catch block e.printStackTrace(); } try { fos.close(); } catch (IOException e) { // TODO Auto-generated catch block e.printStackTrace(); } } catch (FileNotFoundException e) { // TODO Auto-generated catch block e.printStackTrace(); } catch(IOException e) { e.printStackTrace(); } } } } ` Please someone help me out. I have just started learning to work on android.
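
    One thing that stands out in the activity above (offered as a likely cause, not a certain one): mGLView is constructed but never attached to the window, and a GLSurfaceView that is not attached and visible never receives its renderer callbacks, onDrawFrame included. The minimal change would be along these lines:

      @Override
      public void onCreate(Bundle icicle) {
          super.onCreate(icicle);
          mGLView = new ClearGLSurfaceView(this);
          setContentView(mGLView);   // attach the GL view so the renderer actually runs
      }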

    Read the article

  • Java - is this an idiom or pattern, behavior classes with no state

    - by Berlin Brown
    I am trying to incorporate more functional programming idioms into my java development. One pattern that I like the most and avoids side effects is building classes that have behavior but they don't necessarily have any state. The behavior is locked into the methods but they only act on the parameters passed in. The code below is code I am trying to avoid: public class BadObject { private Map<String, String> data = new HashMap<String, String>(); public BadObject() { data.put("data", "data"); } /** * Act on the data class. But this is bad because we can't * rely on the integrity of the object's state. */ public void execute() { data.get("data").toString(); } } The code below is nothing special but I am acting on the parameters and state is contained within that class. We still may run into issues with this class but that is an issue with the method and the state of the data, we can address issues in the routine as opposed to not trusting the entire object. Is this some form of idiom? Is this similar to any pattern that you use? public class SemiStatefulOOP { /** * Private class implies that I can access the members of the <code>Data</code> class * within the <code>SemiStatefulOOP</code> class and I can also access * the getData method from some other class. * * @see Test1 * */ class Data { protected int counter = 0; public int getData() { return counter; } public String toString() { return Integer.toString(counter); } } /** * Act on the data class. */ public void execute(final Data data) { data.counter++; } /** * Act on the data class. */ public void updateStateWithCallToService(final Data data) { data.counter++; } /** * Similar to CLOS (Common Lisp Object System) make instance. */ public Data makeInstance() { return new Data(); } } // End of Class // Issues with the code above: I wanted to declare the Data class private, but then I can't really reference it outside of the class: I can't override the SemiStateful class and access the private members. Usage: final SemiStatefulOOP someObject = new SemiStatefulOOP(); final SemiStatefulOOP.Data data = someObject.makeInstance(); someObject.execute(data); someObject.updateStateWithCallToService(data);

    Read the article

  • C++: Trouble with Pointers, loop variables, and structs

    - by Rosarch
    Consider the following example: #include <iostream> #include <sstream> #include <vector> #include <wchar.h> #include <stdlib.h> using namespace std; struct odp { int f; wchar_t* pstr; }; int main() { vector<odp> vec; ostringstream ss; wchar_t base[5]; wcscpy_s(base, L"1234"); for (int i = 0; i < 4; i++) { odp foo; foo.f = i; wchar_t loopStr[1]; foo.pstr = loopStr; // wchar_t* = wchar_t ? Why does this work? foo.pstr[0] = base[i]; vec.push_back(foo); } for (vector<odp>::iterator iter = vec.begin(); iter != vec.end(); iter++) { cout << "Vec contains: " << iter->f << ", " << *(iter->pstr) << endl; } } This produces: Vec contains: 0, 52 Vec contains: 1, 52 Vec contains: 2, 52 Vec contains: 3, 52 I would hope that each time, iter->f and iter->pstr would yield a different result. Unfortunately, iter->pstr is always the same. My suspicion is that each time through the loop, a new loopStr is created. Instead of copying it into the struct, I'm only copying a pointer. The location that the pointer writes to is getting overwritten. How can I avoid this? Is it possible to solve this problem without allocating memory on the heap?
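
    To stay off the heap, the struct can simply own its characters instead of pointing at a loop-local array; a fixed-size buffer (or a single wchar_t, or std::wstring if heap use were acceptable) is copied along with the struct by push_back. A sketch of the minimal change:

      struct odp {
          int f;
          wchar_t str[2];      // owned storage, so nothing dangles
      };

      // inside the loop:
      odp foo;
      foo.f = i;
      foo.str[0] = base[i];
      foo.str[1] = L'\0';
      vec.push_back(foo);      // copies the whole struct, buffer included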

    Read the article

  • WinForm-style Invoke() in unmanaged C++

    - by Matt Green
    I've been playing with a DataBus-type design for a hobby project, and I ran into an issue. Back-end components need to notify the UI that something has happened. My implementation of the bus delivers the messages synchronously with respect to the sender. In other words, when you call Send(), the method blocks until all the handlers have called. (This allows callers to use stack memory management for event objects.) However, consider the case where an event handler updates the GUI in response to an event. If the handler is called, and the message sender lives on another thread, then the handler cannot update the GUI due to Win32's GUI elements having thread affinity. More dynamic platforms such as .NET allow you to handle this by calling a special Invoke() method to move the method call (and the arguments) to the UI thread. I'm guessing they use the .NET parking window or the like for these sorts of things. A morbid curiosity was born: can we do this in C++, even if we limit the scope of the problem? Can we make it nicer than existing solutions? I know Qt does something similar with the moveToThread() function. By nicer, I'll mention that I'm specifically trying to avoid code of the following form: if(! this->IsUIThread()) { Invoke(MainWindowPresenter::OnTracksAdded, e); return; } being at the top of every UI method. This dance was common in WinForms when dealing with this issue. I think this sort of concern should be isolated from the domain-specific code and a wrapper object made to deal with it. My implementation consists of: DeferredFunction - functor that stores the target method in a FastDelegate, and deep copies the single event argument. This is the object that is sent across thread boundaries. UIEventHandler - responsible for dispatching a single event from the bus. When the Execute() method is called, it checks the thread ID. If it does not match the UI thread ID (set at construction time), a DeferredFunction is allocated on the heap with the instance, method, and event argument. A pointer to it is sent to the UI thread via PostThreadMessage(). Finally, a hook function for the thread's message pump is used to call the DeferredFunction and de-allocate it. Alternatively, I can use a message loop filter, since my UI framework (WTL) supports them. Ultimately, is this a good idea? The whole message hooking thing makes me leery. The intent is certainly noble, but are there are any pitfalls I should know about? Or is there an easier way to do this?
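
    A variant that avoids hooking the message pump altogether (a sketch; WM_INVOKE, the window handle and the use of std::function are assumptions, a FastDelegate would do equally well): post a heap-allocated callable to a window owned by the UI thread, then execute and delete it in that window's WndProc. Unlike PostThreadMessage, a message posted to an HWND is still dispatched while the thread sits in a modal loop (menu, message box, move/size).

      #include <windows.h>
      #include <functional>

      const UINT WM_INVOKE = WM_APP + 1;

      void InvokeOnUiThread(HWND uiWindow, std::function<void()> fn)
      {
          std::function<void()>* heapFn = new std::function<void()>(fn);
          ::PostMessage(uiWindow, WM_INVOKE, 0, reinterpret_cast<LPARAM>(heapFn));
      }

      // in the UI window's WndProc / WTL message map:
      // case WM_INVOKE: {
      //     std::function<void()>* fn = reinterpret_cast<std::function<void()>*>(lParam);
      //     (*fn)();
      //     delete fn;
      //     return 0;
      // }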

    Read the article

  • Javascript self contained sandbox events and client side stack

    - by amnon
    I'm in the process of moving a JSF heavy web application to a REST and mainly JS module application . I've watched "scalable javascript application architecture" by Nicholas Zakas on yui theater (excellent video) and implemented much of the talk with good success but i have some questions : I found the lecture a little confusing in regards to the relationship between modules and sandboxes , on one had to my understanding modules should not be effected by something happening outside of their sandbox and this is why they publish events via the sandbox (and not via the core as they do access the core for hiding base libary) but each module in the application gets a new sandbox ? , shouldn't the sandbox limit events to the modoules using it ? or should events be published cross page ? e.g. : if i have two editable tables but i want to contain each one in a different sandbox and it's events effect only the modules inside that sandbox something like messabe box per table which is a different module/widget how can i do that with sandbox per module , ofcourse i can prefix the events with the moduleid but that creates coupling that i want to avoid ... and i don't want to package modules toghter as one module per combination as i already have 6-7 modules ? while i can hide the base library for small things like id selector etc.. i would still like to use the base library for module dependencies and resource loading and use something like yui loader or dojo.require so in fact i'm hiding the base library but the modules themself are defined and loaded by the base library ... seems a little strange to me libraries don't return simple js objects but usualy wrap them e.g. : u can do something like $$('.classname').each(.. which cleans the code alot , it makes no sense to wrap the base and then in the module create a dependency for the base library by executing .each but not using those features makes a lot of code written which can be left out ... and implemnting that functionality is very bug prone does anyonen have any experience with building a front side stack of this order ? how easy is it to change a base library and/or have modules from different libraries , using yui datatable but doing form validation with dojo ... ? some what of a combination of 2+4 if u choose to do something like i said and load dojo form validation widgets for inputs via yui loader would that mean dojocore is a module and the form module is dependant on it ? Thanks .
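
    On the first point, one way to make the scoping concrete (a minimal sketch, not Zakas's actual implementation): give each sandbox its own listener table, so publish/subscribe only ever reaches modules created with that sandbox, and anything cross-sandbox has to go through the core deliberately rather than by accident.

      function Sandbox(core) {
          var listeners = {};
          return {
              subscribe: function (name, fn) {
                  (listeners[name] = listeners[name] || []).push(fn);
              },
              publish: function (name, data) {
                  var fns = listeners[name] || [];
                  for (var i = 0; i < fns.length; i++) {
                      fns[i](data);
                  }
              }
          };
      }

      // var tableOne = Sandbox(core);   // events published here stay here
      // var tableTwo = Sandbox(core);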

    Read the article

  • Java replacement for C macros

    - by thkala
    Recently I refactored the code of a 3rd party hash function from C++ to C. The process was relatively painless, with only a few changes of note. Now I want to write the same function in Java and I came upon a slight issue. In the C/C++ code there is a C preprocessor macro that takes a few integer variables names as arguments and performs a bunch of bitwise operations with their contents and a few constants. That macro is used in several different places, therefore its presence avoids a fair bit of code duplication. In Java, however, there is no equivalent for the C preprocessor. There is also no way to affect any basic type passed as an argument to a method - even autoboxing produces immutable objects. Coupled with the fact that Java methods return a single value, I can't seem to find a simple way to rewrite the macro. Avenues that I considered: Expand the macro by hand everywhere: It would work, but the code duplication could make things interesting in the long run. Write a method that returns an array: This would also work, but it would repeatedly result into code like this: long tmp[] = bitops(k, l, m, x, y, z); k = tmp[0]; l = tmp[1]; m = tmp[2]; x = tmp[3]; y = tmp[4]; z = tmp[5]; Write a method that takes an array as an argument: This would mean that all variable names would be reduced to array element references - it would be rather hard to keep track of which index corresponds to which variable. Create a separate class e.g. State with public fields of the appropriate type and use that as an argument to a method: This is my current solution. It allows the method to alter the variables, while still keeping their names. It has the disadvantage, however, that the State class will get more and more complex, as more macros and variables are added, in order to avoid copying values back and forth among different State objects. How would you rewrite such a C macro in Java? Is there a more appropriate way to deal with this, using the facilities provided by the standard Java 6 Development Kit (i.e. without 3rd party libraries or a separate preprocessor)?

    Read the article
