Search Results

Search found 5311 results on 213 pages for 'begin'.


  • Blocking 'good' bots in nginx with multiple conditions for certain off-limits URLs where humans can go

    - by Glenn Plas
    After 2 days of searching/trying/failing I decided to post this here; I haven't found any example of someone doing the same, nor does what I tried seem to be working. I'm trying to send a 403 to bots that don't respect the robots.txt file (even after downloading it several times) -- specifically Googlebot. It should obey the following robots.txt definition:

        User-agent: *
        Disallow: /*/*/page/

    The intent is to allow Google to browse whatever they can find on the site, but return a 403 for the following type of request. Googlebot seems to keep on nesting these links eternally, adding paging block after block:

        my_domain.com:80 - 66.x.67.x - - [25/Apr/2012:11:13:54 +0200] "GET /2011/06/page/3/?/page/2//page/3//page/2//page/3//page/2//page/2//page/4//page/4//page/1/&wpmp_switcher=desktop HTTP/1.1" 403 135 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"

    It's a WordPress site, btw. I don't want those pages to show up, even though after the robots.txt info got through they stopped for a while, only to begin crawling again later. It just never stops. I do want real people to see these pages. As you can see, Google gets a 403, but when I try this myself in a browser I get a 404 back. I want browsers to pass.

        root@my_domain:# nginx -V
        nginx version: nginx/1.2.0

    I tried different approaches, using a map and plain old 'nono' if's, and they both act the same. Under the http section:

        map $http_user_agent $is_bot {
            default 0;
            ~crawl|Googlebot|Slurp|spider|bingbot|tracker|click|parser|spider 1;
        }

    Under the server section:

        location ~ /(\d+)/(\d+)/page/ {
            if ($is_bot) {
                return 403; # Please respect the robots.txt file !
            }
        }

    I recently had to polish up my Apache skills for a client where I did about the same thing, like this:

        # Block real engines not respecting robots.txt, but allow correct calls to pass
        # Google
        RewriteCond %{HTTP_USER_AGENT} ^Mozilla/5\.0\ \(compatible;\ Googlebot/2\.[01];\ \+http://www\.google\.com/bot\.html\)$ [NC,OR]
        # Bing
        RewriteCond %{HTTP_USER_AGENT} ^Mozilla/5\.0\ \(compatible;\ bingbot/2\.[01];\ \+http://www\.bing\.com/bingbot\.htm\)$ [NC,OR]
        # msnbot
        RewriteCond %{HTTP_USER_AGENT} ^msnbot-media/1\.[01]\ \(\+http://search\.msn\.com/msnbot\.htm\)$ [NC,OR]
        # Slurp
        RewriteCond %{HTTP_USER_AGENT} ^Mozilla/5\.0\ \(compatible;\ Yahoo!\ Slurp;\ http://help\.yahoo\.com/help/us/ysearch/slurp\)$ [NC]
        # block all page searches, the rest may pass
        RewriteCond %{REQUEST_URI} ^(/[0-9]{4}/[0-9]{2}/page/) [OR]
        # or with the wpmp_switcher=mobile parameter set
        RewriteCond %{QUERY_STRING} wpmp_switcher=mobile
        # ISSUE 403 / SERVE ERRORDOCUMENT
        RewriteRule .* - [F,L]
        # End if match

    This does a bit more than I asked nginx to do, but it's about the same principle; I'm having a hard time figuring this out for nginx. So my question would be: why would nginx serve my browser a 404? Why isn't it passing? The regex isn't matching for my UA:

        "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/536.5 (KHTML, like Gecko) Chrome/19.0.1084.30 Safari/536.5"

    There are tons of examples of blocking based on UA alone, and that's easy. It also looks like the matching location is final, i.e. it's not 'falling through' for regular users; I'm pretty certain that this has some correlation with the 404 I get in the browser. As a cherry on top of things, I also want Google to disregard the parameter wpmp_switcher=mobile; wpmp_switcher=desktop is fine, but I just don't want the same content being crawled multiple times.
    Even though I ended up adding wpmp_switcher=mobile via the Google Webmaster Tools pages (requiring me to sign up ....), that also stopped for a while, but today they are back spidering the mobile sections. So in short, I need to find a way for nginx to enforce the robots.txt definitions. Can someone shell out a few minutes of their lives and push me in the right direction, please? I really appreciate ANY response that makes me think harder ;-)
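    A minimal sketch of the kind of fall-through that appears to be missing (an editor's guess, untested): a regex location that matches wins outright, so human requests for these URLs never reach the WordPress front controller and die as 404s on the static-file lookup. Keeping the bot check but routing everyone else back through index.php inside that same location would preserve both behaviours; the $is_bot map is the one defined above:

        location ~ /(\d+)/(\d+)/page/ {
            if ($is_bot) {
                return 403;   # bots ignoring robots.txt
            }
            # humans fall through to the normal WordPress handling
            try_files $uri $uri/ /index.php?$args;
        }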

    Read the article

  • ssh authentication nfs

    - by user40135
    Hi all. I would like to ssh from machine "ub0" to another machine "ub1" without using passwords. I set this up using NFS on "ub0", but I am still asked to enter a password. Here is my scenario:

      * Machines ub0 and ub1 have the same user "mpiu", with the same password, the same user id, and the same group id.
      * The two servers share a folder that is the HOME directory for "mpiu".
      * I did a chmod 700 on the .ssh directory.
      * I created a key using ssh-keygen -t dsa.
      * I did "cat id_dsa.pub >> authorized_keys"; on this last file I also tried chmod 600 and chmod 640.
      * Of course I can guarantee that on machine ub1 the user "shared_user" can see the same folder that was mounted, with no problem.

    Below is the content of my .ssh folder:

        authorized_keys  id_dsa  id_dsa.pub  known_hosts

    After all of this, calling anything like "ssh ub1 hostname" still asks for my password. Do you know what I can try? I also uncommented this line in the ssh_config file on both machines:

        IdentityFile ~/.ssh/id_dsa

    I also tried:

        ssh -i $HOME/.ssh/id_dsa mpiu@ub1

    Below is the ssh -vv output:

        OpenSSH_5.1p1 Debian-3ubuntu1, OpenSSL 0.9.8g 19 Oct 2007
        debug1: Reading configuration data /etc/ssh/ssh_config
        debug1: Applying options for *
        debug2: ssh_connect: needpriv 0
        debug1: Connecting to ub1 [192.168.2.9] port 22.
        debug1: Connection established.
        debug2: key_type_from_name: unknown key type '-----BEGIN'
        debug2: key_type_from_name: unknown key type '-----END'
        debug1: identity file /mirror/mpiu/.ssh/id_dsa type 2
        debug1: Checking blacklist file /usr/share/ssh/blacklist.DSA-1024
        debug1: Checking blacklist file /etc/ssh/blacklist.DSA-1024
        debug1: Remote protocol version 2.0, remote software version lshd-2.0.4 lsh - a GNU ssh
        debug1: no match: lshd-2.0.4 lsh - a GNU ssh
        debug1: Enabling compatibility mode for protocol 2.0
        debug1: Local version string SSH-2.0-OpenSSH_5.1p1 Debian-3ubuntu1
        debug2: fd 3 setting O_NONBLOCK
        debug1: SSH2_MSG_KEXINIT sent
        debug1: SSH2_MSG_KEXINIT received
        debug2: kex_parse_kexinit: diffie-hellman-group-exchange-sha256,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1
        debug2: kex_parse_kexinit: ssh-rsa,ssh-dss
        debug2: kex_parse_kexinit: aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,arcfour128,arcfour256,arcfour,aes192-cbc,aes256-cbc,rijndael-cbc@lysator.liu.se,aes128-ctr,aes192-ctr,aes256-ctr
        debug2: kex_parse_kexinit: aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,arcfour128,arcfour256,arcfour,aes192-cbc,aes256-cbc,rijndael-cbc@lysator.liu.se,aes128-ctr,aes192-ctr,aes256-ctr
        debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,umac-64@openssh.com,hmac-ripemd160,hmac-ripemd160@openssh.com,hmac-sha1-96,hmac-md5-96
        debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,umac-64@openssh.com,hmac-ripemd160,hmac-ripemd160@openssh.com,hmac-sha1-96,hmac-md5-96
        debug2: kex_parse_kexinit: none,zlib@openssh.com,zlib
        debug2: kex_parse_kexinit: none,zlib@openssh.com,zlib
        debug2: kex_parse_kexinit:
        debug2: kex_parse_kexinit:
        debug2: kex_parse_kexinit: first_kex_follows 0
        debug2: kex_parse_kexinit: reserved 0
        debug2: kex_parse_kexinit: diffie-hellman-group14-sha1,diffie-hellman-group1-sha1
        debug2: kex_parse_kexinit: ssh-rsa,spki-sign-rsa
        debug2: kex_parse_kexinit: aes256-cbc,3des-cbc,blowfish-cbc,arcfour
        debug2: kex_parse_kexinit: aes256-cbc,3des-cbc,blowfish-cbc,arcfour
        debug2: kex_parse_kexinit: hmac-sha1,hmac-md5
        debug2: kex_parse_kexinit: hmac-sha1,hmac-md5
        debug2: kex_parse_kexinit: none,zlib
        debug2: kex_parse_kexinit: none,zlib
        debug2: kex_parse_kexinit:
        debug2: kex_parse_kexinit:
        debug2: kex_parse_kexinit: first_kex_follows 0
        debug2: kex_parse_kexinit: reserved 0
        debug2: mac_setup: found hmac-md5
        debug1: kex: server->client 3des-cbc hmac-md5 none
        debug2: mac_setup: found hmac-md5
        debug1: kex: client->server 3des-cbc hmac-md5 none
        debug2: dh_gen_key: priv key bits set: 183/384
        debug2: bits set: 1028/2048
        debug1: sending SSH2_MSG_KEXDH_INIT
        debug1: expecting SSH2_MSG_KEXDH_REPLY
        debug1: Host 'ub1' is known and matches the RSA host key.
        debug1: Found key in /mirror/mpiu/.ssh/known_hosts:1
        debug2: bits set: 1039/2048
        debug1: ssh_rsa_verify: signature correct
        debug2: kex_derive_keys
        debug2: set_newkeys: mode 1
        debug1: SSH2_MSG_NEWKEYS sent
        debug1: expecting SSH2_MSG_NEWKEYS
        debug2: set_newkeys: mode 0
        debug1: SSH2_MSG_NEWKEYS received
        debug1: SSH2_MSG_SERVICE_REQUEST sent
        debug2: service_accept: ssh-userauth
        debug1: SSH2_MSG_SERVICE_ACCEPT received
        debug2: key: /mirror/mpiu/.ssh/id_dsa (0xb874b098)
        debug1: Authentications that can continue: password,publickey
        debug1: Next authentication method: publickey
        debug1: Offering public key: /mirror/mpiu/.ssh/id_dsa
        debug2: we sent a publickey packet, wait for reply
        debug1: Authentications that can continue: password,publickey
        debug2: we did not send a packet, disable method
        debug1: Next authentication method: password
        mpiu@ub1's password:

    It hangs here!
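    Not part of the question, but the standard checklist for this symptom, as a hedged sketch (paths assume the shared home described above). Note from the log that the server is lshd (GNU lsh), not OpenSSH, so it may not read ~/.ssh/authorized_keys the way sshd does -- lsh ships its own lsh-authorize tool for registering public keys, which seems worth investigating here:

        # permissions most SSH servers insist on before trusting the key files
        chmod go-w ~
        chmod 700 ~/.ssh
        chmod 600 ~/.ssh/authorized_keys

        # then watch the server's auth log on ub1 while retrying from ub0
        tail -f /var/log/auth.log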

    Read the article

  • Intermittent 404 on select assets, LAMP stack

    - by Tom Lagier
    We have a LAMP-stack WordPress server that is serving most assets correctly. However, one plugin's CSS file and several images are returning soft 404s roughly 20% of the time. I can't find any reference to the 404 in the access logs, but the browser is definitely receiving a 404 response from somewhere (WordPress, I would assume). When I use an alias URL that does not match the site URL but does resolve to the asset path, the resource loads correctly 100% of the time. However, using the site URL, the problematic assets only resolve 20% of the time. You can test one of the problematic assets here:

        http://www.mreco.org/wp-content/uploads/2014/05/zero-cost.jpg

    However, the alias link always resolves correctly:

        http://mr-eco.wordpress.promocampaigns.com/wp-content/uploads/2014/05/zero-cost.jpg

    Stranger still, if I attempt to access outdated content that definitely does not exist on the server at the live URL, it returns the content roughly 50% of the time. Using the alias link, it 404s 100% of the time -- the correct behavior. The error log and PHP error log are clean. A sample access log (pulled from grep 'zero-cost.jpg' /var/log/httpd/mr-eco-access_log) from several refreshes of the live direct link (where I am not seeing any 404s):

        10.166.202.202 - - [28/May/2014:20:27:41 +0000] "GET /wp-content/uploads/2014/05/zero-cost.jpg HTTP/1.1" 304 -
        10.166.202.202 - - [28/May/2014:20:27:42 +0000] "GET /wp-content/uploads/2014/05/zero-cost.jpg HTTP/1.1" 304 -
        10.166.202.202 - - [28/May/2014:20:27:43 +0000] "GET /wp-content/uploads/2014/05/zero-cost.jpg HTTP/1.1" 304 -
        10.166.202.202 - - [28/May/2014:20:27:43 +0000] "GET /wp-content/uploads/2014/05/zero-cost.jpg HTTP/1.1" 304 -
        10.176.201.37 - - [28/May/2014:20:27:56 +0000] "GET /wp-content/uploads/2014/05/zero-cost.jpg HTTP/1.1" 200 57027

    Chrome's dev tools list the following network activity before displaying 404 page content:

        zero-cost.jpg /wp-content/uploads/2014/05 GET 404 Not Found text/html Other 15.9 KB 73.2 KB 953 ms 947 ms

    My Apache configuration is standard; I've listed the virtual host entry and .htaccess file below. I can provide other parts of the Apache config if necessary. Virtual host:

        <VirtualHost *:80>
            DocumentRoot /var/www/public_html/mr-eco.wordpress.promocampaigns.com
            ServerName www.mreco.org
            ServerAlias mreco.org mr-eco.wordpress.promocampaigns.com
            ErrorLog logs/mr-eco-error_log
            CustomLog logs/mr-eco-access_log common
            <Directory /var/www/public_html/mr-eco.wordpress.promocampaigns.com>
                AllowOverride All
                SetOutputFilter DEFLATE
            </Directory>
        </VirtualHost>

    .htaccess:

        # BEGIN WordPress
        <IfModule mod_rewrite.c>
            RewriteEngine On
            RewriteBase /
            RewriteRule ^index\.php$ - [L]
            RewriteCond %{REQUEST_FILENAME} !-f
            RewriteCond %{REQUEST_FILENAME} !-d
            RewriteRule . /index.php [L]
        </IfModule>
        # END WordPress

    I have checked for multiple A records and can confirm that there is a single A record pointing at the domain:

        ;; ANSWER SECTION:
        mreco.org.  60  IN  A  50.18.58.174

    I'm fairly new to systems administration, and at a complete loss as to what could cause this. In the past, inconsistently 404ing assets have been caused by out-of-sync instances behind a load balancer; in this case, it is a single instance behind the load balancer. Because of the inconsistency, it feels like a caching issue. We don't make use of Apache caching, and as far as I know WordPress should not be caching either.
    What I've done so far:

      * Reset WordPress permalinks
      * Disabled WordPress plugins
      * Re-generated the WordPress .htaccess file
      * Swapped ServerName and ServerAlias directives
      * Cleared browser cache
      * Confirmed disk location of resources
      * Checked PHP, access, and error logs
      * Confirmed correct DNS setup (can post if necessary)

    I'm at a total loss. Thanks for helping me out!
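    A quick way to quantify the flakiness (an editor's suggestion, not from the post; the URL is the problematic asset above) is to request it repeatedly and tally the status codes, once per hostname:

        for i in $(seq 1 50); do
          curl -s -o /dev/null -w '%{http_code}\n' \
            http://www.mreco.org/wp-content/uploads/2014/05/zero-cost.jpg
        done | sort | uniq -c

    If the mix of 200/304 versus 404 tracks the hostname, that points at name-based routing (load balancer or vhost selection) rather than the filesystem.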

    Read the article

  • Performance Tuning a High-Load Apache Server

    - by futureal
    I am looking to understand some server performance problems I am seeing with a (for us) heavily loaded web server. The environment is as follows:

      * Debian Lenny (all stable packages + patched with security updates)
      * Apache 2.2.9
      * PHP 5.2.6
      * Amazon EC2 large instance

    The behavior we're seeing is that the web typically feels responsive, but with a slight delay to begin handling a request -- sometimes a fraction of a second, sometimes 2-3 seconds at our peak usage times. The actual load on the server is being reported as very high -- often 10.xx or 20.xx as reported by top. Further, running other things on the server during these times (even vi) is very slow, so the load is definitely up there. Oddly enough, Apache remains very responsive other than that initial delay. We have Apache configured as follows, using prefork:

        StartServers          5
        MinSpareServers       5
        MaxSpareServers      10
        MaxClients          150
        MaxRequestsPerChild   0

    And KeepAlive as:

        KeepAlive On
        MaxKeepAliveRequests 100
        KeepAliveTimeout 5

    Looking at the server-status page, even at these times of heavy load we are rarely hitting the client cap, usually serving between 80-100 requests, many of those in the keepalive state. That tells me to rule out the initial request slowness as "waiting for a handler", but I may be wrong. Amazon's CloudWatch monitoring tells me that even when our OS is reporting a load of 15, our instance CPU utilization is between 75-80%. Example output from top:

        top - 15:47:06 up 31 days, 1:38, 8 users, load average: 11.46, 7.10, 6.56
        Tasks: 221 total, 28 running, 193 sleeping, 0 stopped, 0 zombie
        Cpu(s): 66.9%us, 22.1%sy, 0.0%ni, 2.6%id, 3.1%wa, 0.0%hi, 0.7%si, 4.5%st
        Mem: 7871900k total, 7850624k used, 21276k free, 68728k buffers
        Swap: 0k total, 0k used, 0k free, 3750664k cached

    The majority of the processes look like:

        24720 www-data 15 0 202m 26m 4412 S 9 0.3 0:02.97 apache2
        24530 www-data 15 0 212m 35m 4544 S 7 0.5 0:03.05 apache2
        24846 www-data 15 0 209m 33m 4420 S 7 0.4 0:01.03 apache2
        24083 www-data 15 0 211m 35m 4484 S 7 0.5 0:07.14 apache2
        24615 www-data 15 0 212m 35m 4404 S 7 0.5 0:02.89 apache2

    Example output from vmstat at the same time as the above:

        procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
         r  b  swpd   free   buff    cache  si so  bi   bo    in   cs  us sy id wa
         8  0  0    215084  68908 3774864   0  0  154  228     5    7  32 12 42  9
         6 21  0    198948  68936 3775740   0  0  676 2363  4022 1047  56 16  9 15
        23  0  0    169460  68936 3776356   0  0  432 1372  3762  835  76 21  0  0
        23  1  0    140412  68936 3776648   0  0  280    0  3157  827  70 25  0  0
        20  1  0    115892  68936 3776792   0  0  188    8  2802  532  68 24  0  0
         6  1  0    133368  68936 3777780   0  0  752   71  3501  878  67 29  0  1
         0  1  0    146656  68944 3778064   0  0  308 2052  3312  850  38 17 19 24
         2  0  0    202104  68952 3778140   0  0   28   90  2617  700  44 13 33  5
         9  0  0    188960  68956 3778200   0  0    8    0  2226  475  59 17  6  2
         3  0  0    166364  68956 3778252   0  0    0   21  2288  386  65 19  1  0

    And finally, output from Apache's server-status:

        Server uptime: 31 days 2 hours 18 minutes 31 seconds
        Total accesses: 60102946 - Total Traffic: 974.5 GB
        CPU Usage: u209.62 s75.19 cu0 cs0 - .0106% CPU load
        22.4 requests/sec - 380.3 kB/second - 17.0 kB/request
        107 requests currently being processed, 6 idle workers

        C.KKKW..KWWKKWKW.KKKCKK..KKK.KKKK.KK._WK.K.K.KKKKK.K.R.KK..C.C.K
        K.C.K..WK_K..KKW_CK.WK..W.KKKWKCKCKW.W_KKKKK.KKWKKKW._KKK.CKK...
        KK_KWKKKWKCKCWKK.KKKCK..........................................
        ................................................................

    From my limited experience I draw the following conclusions/questions:

      * We may be allowing far too many KeepAlive requests.
      * I do see some time spent waiting for IO in the vmstat output, although not consistently and not a lot (I think?), so I am not sure whether this is a big concern; I am less experienced with vmstat.
      * Also in vmstat, I see in some iterations a number of processes waiting to be served, which is what I am attributing the initial page-load delay on our web server to, possibly erroneously.
      * We serve a mixture of static content (75% or higher) and script content, and the script content is often fairly processor intensive, so finding the right balance between the two is important; long term we want to move statics elsewhere to optimize both servers, but our software is not ready for that today.

    I am happy to provide additional information if anybody has any ideas. One other note: this is a high-availability production installation, so I am wary of making tweak after tweak, which is why I haven't played with things like the KeepAlive value myself yet.
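    Purely to illustrate the first bullet above (an editor's sketch, not advice validated against this box): with prefork, every keepalive connection pins a worker for the full KeepAliveTimeout, so the usual first experiment is to shrink that window and watch the idle-worker count on server-status:

        KeepAlive On
        MaxKeepAliveRequests 100
        KeepAliveTimeout 2

    The trade-off is a few more TCP reconnects from browsers against workers being returned to the pool roughly twice as fast.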

    Read the article

  • Cisco VPNClient from Mac won't connect using iPhone Tethering

    - by Dan Short
    I just set up iPhone tethering from my Snow Leopard MacBook Pro to my iPhone 3GS with the DataPro 4GB plan from AT&T. When attempting to connect to my corporate VPN from the MacBook Pro with Cisco VPNClient 4.9.01 (0100), I get the following log information:

        Cisco Systems VPN Client Version 4.9.01 (0100)
        Copyright (C) 1998-2006 Cisco Systems, Inc. All Rights Reserved.
        Client Type(s): Mac OS X
        Running on: Darwin 10.6.0 Darwin Kernel Version 10.6.0: Wed Nov 10 18:13:17 PST 2010; root:xnu-1504.9.26~3/RELEASE_I386 i386
        Config file directory: /etc/opt/cisco-vpnclient

        1  13:02:50.791 02/22/2011 Sev=Info/4    CM/0x43100002    Begin connection process
        2  13:02:50.791 02/22/2011 Sev=Warning/2 CVPND/0x83400011 Error -28 sending packet. Dst Addr: 0x0AD337FF, Src Addr: 0x0AD33702 (DRVIFACE:1158).
        3  13:02:50.791 02/22/2011 Sev=Warning/2 CVPND/0x83400011 Error -28 sending packet. Dst Addr: 0x0A2581FF, Src Addr: 0x0A258102 (DRVIFACE:1158).
        4  13:02:50.792 02/22/2011 Sev=Info/4    CM/0x43100004    Establish secure connection using Ethernet
        5  13:02:50.792 02/22/2011 Sev=Info/4    CM/0x43100024    Attempt connection with server "209.235.253.115"
        6  13:02:50.792 02/22/2011 Sev=Info/4    CVPND/0x43400019 Privilege Separation: binding to port: (500).
        7  13:02:50.793 02/22/2011 Sev=Info/4    CVPND/0x43400019 Privilege Separation: binding to port: (4500).
        8  13:02:50.793 02/22/2011 Sev=Info/6    IKE/0x4300003B   Attempting to establish a connection with 209.235.253.115.
        9  13:02:51.293 02/22/2011 Sev=Warning/2 CVPND/0x83400018 Output size mismatch. Actual: 0, Expected: 237. (DRVIFACE:1319)
        10 13:02:51.894 02/22/2011 Sev=Warning/2 CVPND/0x83400018 Output size mismatch. Actual: 0, Expected: 237. (DRVIFACE:1319)
        11 13:02:52.495 02/22/2011 Sev=Warning/2 CVPND/0x83400018 Output size mismatch. Actual: 0, Expected: 237. (DRVIFACE:1319)
        12 13:02:53.096 02/22/2011 Sev=Warning/2 CVPND/0x83400018 Output size mismatch. Actual: 0, Expected: 237. (DRVIFACE:1319)
        13 13:02:53.698 02/22/2011 Sev=Warning/2 CVPND/0x83400018 Output size mismatch. Actual: 0, Expected: 237. (DRVIFACE:1319)
        14 13:02:54.299 02/22/2011 Sev=Warning/2 CVPND/0x83400018 Output size mismatch. Actual: 0, Expected: 237. (DRVIFACE:1319)
        15 13:02:54.299 02/22/2011 Sev=Info/4    IKE/0x43000075   Unable to acquire local IP address after 5 attempts (over 5 seconds), probably due to network socket failure.
        16 13:02:54.299 02/22/2011 Sev=Warning/2 IKE/0xC300009A   Failed to set up connection data
        17 13:02:54.299 02/22/2011 Sev=Info/4    CM/0x4310001C    Unable to contact server "209.235.253.115"
        18 13:02:54.299 02/22/2011 Sev=Info/5    CM/0x43100025    Initializing CVPNDrv
        19 13:02:54.300 02/22/2011 Sev=Info/4    CVPND/0x4340001F Privilege Separation: restoring MTU on primary interface.
        20 13:02:54.300 02/22/2011 Sev=Info/4    IKE/0x43000001   IKE received signal to terminate VPN connection
        21 13:02:54.300 02/22/2011 Sev=Info/4    IPSEC/0x43700008 IPSec driver successfully started
        22 13:02:54.300 02/22/2011 Sev=Info/4    IPSEC/0x43700014 Deleted all keys
        23 13:02:54.300 02/22/2011 Sev=Info/4    IPSEC/0x4370000D Key(s) deleted by Interface (192.168.0.171)
        24 13:02:54.300 02/22/2011 Sev=Info/4    IPSEC/0x43700014 Deleted all keys
        25 13:02:54.300 02/22/2011 Sev=Info/4    IPSEC/0x43700014 Deleted all keys
        26 13:02:54.300 02/22/2011 Sev=Info/4    IPSEC/0x43700014 Deleted all keys
        27 13:02:54.300 02/22/2011 Sev=Info/4    IPSEC/0x4370000A IPSec driver successfully stopped

    The key line is 15:

        15 13:02:54.299 02/22/2011 Sev=Info/4 IKE/0x43000075 Unable to acquire local IP address after 5 attempts (over 5 seconds), probably due to network socket failure.

    I can't find anything online about this. I found a single entry for the error message in Google, and it was a Swedish (or some other Nordic-language) site that didn't have an answer to the question. I've tried connecting through both USB and Bluetooth tethering to the iPhone, and they both return the exact same results. I don't have direct control over the firewall, but if changes are necessary to make it work, I may be able to get the powers-that-be to make adjustments. A solution that doesn't require reconfiguring the firewall would be far better, of course... Does anyone know what I can do to make this behave? Thanks, Dan

    Read the article

  • Command-line video editing in Linux (cut, join and preview)

    - by sdaau
    I have rather simple editing needs -- I need to cut up some videos, maybe insert some PNGs in between them, and join these videos (I don't need transitions, effects, etc.). Basically, PiTiVi would do what I want -- except I use 640x480 30 fps AVIs from a camera, and as soon as I put in over a couple of minutes of that kind of material, PiTiVi starts freezing on preview and thus becomes unusable. So I started looking for a command-line tool for Linux; I guess only ffmpeg (command line - Using ffmpeg to cut up video - Super User) and mplayer (Sam - Edit video file with mencoder under linux) are candidates so far, but I cannot find examples of the use I have in mind.

    Basically, I'd imagine there are encoder and player tools (like ffmpeg vs ffplay, or mencoder vs mplayer) such that, to begin with, the edit sequence could be specified directly on the command line, preferably with frame resolution -- pseudocode would look like:

        videnctool -compose \
          --file=vid1.avi --start=00:00:30:12 --end=00:01:45:00 \
          --file=vid2.avi --start=00:05:00:00 --end=00:07:12:25 \
          --file=mypicture.png --duration=00:00:02:00 \
          --file=vid3.avi --start=00:02:00:00 --end=00:02:45:10 \
          --output=editedvid.avi

    ... or it could have a "playlist" text file, like:

        vid1.avi       00:00:30:12  00:01:45:00
        vid2.avi       00:05:00:00  00:07:12:25
        mypicture.png  -            00:00:02:00
        vid3.avi       00:02:00:00  00:02:45:10

    ... so it could be called with:

        videnctool -compose --playlist=playlist.txt --output=editedvid.avi

    The idea here would be that all of the videos are in the same format, allowing the tool to avoid transcoding and just do a "raw copy" instead (as in mencoder's copy codec: "-oac copy -ovc copy") -- or, failing that, uncompressed audio/video would be OK (although it would eat a bit of space). In the case of the still image, the tool would use the encoding set by the video files.

    The thing is, I can so far see that mencoder and ffmpeg operate on individual files; e.g. they can cut a single section from a single file, or join files (mencoder also has Edit Decision Lists (EDL), which can be used to do frame-exact cutting -- so you can define multiple cut regions, but that's again attributed to a single file). This implies I have to first cut pieces from individual files (each of which would demand its own temporary file on disk) and then join them in a final video file.

    I would then imagine that there is a corresponding player tool, which can read the same command-line option format / playlist file as the encoding tool -- except it will not generate an output file, but instead play the video; e.g. in pseudocode:

        vidplaytool --playlist=playlist.txt --start=00:01:14 --end=00:03:13

    ... and, given there's enough memory, it would generate a low-res video preview in RAM and play it back in a window, while offering some limited interaction (like mplayer's keyboard shortcuts for play, pause, rewind, step frame). Of course, I'd imagine the start and end times to refer to the entire playlist and to include any file that may end up in that region of the playlist. Thus, the end result of all this would be: command-line operation; no temporary files while doing the editing -- and also no temporary files (nor transcoding) when rendering the final output... which I myself think would be nice.

    So, while I think all of the above may be a bit of a stretch -- does there exist anything that would approximate the workflow described above?
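    For what it's worth, recent ffmpeg builds come close to the playlist half of this with the concat demuxer; a hedged sketch (times and filenames invented, and with -c copy the pieces must share codec parameters and cuts snap to keyframes rather than being frame-exact):

        # cut the pieces without transcoding
        ffmpeg -ss 00:00:30 -t 00:01:15 -i vid1.avi -c copy part1.avi
        ffmpeg -ss 00:05:00 -t 00:02:12 -i vid2.avi -c copy part2.avi

        # playlist.txt contains:
        #   file 'part1.avi'
        #   file 'part2.avi'

        ffmpeg -f concat -safe 0 -i playlist.txt -c copy editedvid.avi
        ffplay editedvid.avi    # preview the result

    Temporary files are still needed for the cut pieces, so this only approximates the no-intermediate-files goal.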

    Read the article

  • Windows XP Video Configuration Issues

    - by Matt
    Recently I had my motherboard burn out on me. Needing the machine for work, I purchased a different motherboard and installed that. Generally a reinstall of Windows is a good idea at that point, but I am not in a position to do that, so I decided I would just live with it for now. Once I can log in, everything works fine; what doesn't work is getting to the log-in prompt to begin with.

    Basically, when I first installed the new mobo, every time I rebooted the machine I would not get the Windows login prompt. One of the monitors would receive a signal, but the screen would be black. Moving the mouse would not show the cursor, and hitting the up arrow key, typing my password, and hitting Enter (which will normally log you in without the mouse) wouldn't change anything. I would then change the monitor configuration around (two LCDs and a CRT) and reboot, and at least one of the monitors would work and display the login prompt. I could then go into display properties and turn on the other monitors. However, if I rebooted again, I would get the black screen on one monitor again. I would then have to change the configuration again to one not used before, and I could redo the manual setup at that point. I think Windows saves the configurations, so I had to keep giving it new ones. Needless to say, I've been trying not to turn off my machine.

    Early this week I actually got the prompt to come up without playing musical monitors. Thinking everything was getting better, I found no harm in rebooting to install the latest Windows updates. Boy was I wrong. Now no matter what I do I can't get a Windows log-in prompt to display. I've tried almost every conceivable combination. The new mobo has onboard video, so I set that in the BIOS to be the primary video (yes, the BIOS screen always displays fine; it's not until Windows boots that there is a problem). Still no luck. I have two other graphics cards in the machine which I'm using. Tried all kinds of configurations between those and onboard, but I still get this black screen of death.

    I read somewhere that deleting the video drivers would reset the configurations. I logged into safe mode (which works on one monitor) and uninstalled the display drivers. Still no luck, and when I booted back into safe mode, it wanted to install new hardware and the display adapters weren't there, as expected.

    Anyone have any ideas? A fresh install would be a pain, and I might be getting my old board back from RMA soon, so I'm not sure I want to go through with that just yet. The only thing I can think of is to continue trying other combinations, like physically removing the graphics cards. They are both EVGA 8600 cards, and the Windows boot screen does display, fwiw.

    Read the article

  • dns queries not using nscd for caching

    - by xenoterracide
    I'm trying to use nscd (Name Service Cache Daemon) to cache DNS locally so I can stop using BIND to do it. I've gotten it started, and ntpd seems to attempt to use it, but everything else for hosts seems to ignore it; e.g. if I dig apache.org 3 times, none of them will hit the cache. I'm viewing the cache stats using nscd -g to determine whether it's been used. I've also turned the debug log level up to see if I can see it hitting, and the queries don't even reach nscd.

    nsswitch.conf:

        # Begin /etc/nsswitch.conf
        passwd: files
        group: files
        shadow: files
        publickey: files

        hosts: cache files dns
        networks: files

        protocols: files
        services: files
        ethers: files
        rpc: files
        netgroup: files
        # End /etc/nsswitch.conf

    nscd.conf:

        #
        # /etc/nscd.conf
        #
        # An example Name Service Cache config file. This file is needed by nscd.
        #
        # Legal entries are:
        #
        # logfile               <file>
        # debug-level           <level>
        # threads               <initial #threads to use>
        # max-threads           <maximum #threads to use>
        # server-user           <user to run server as instead of root>
        #   server-user is ignored if nscd is started with -S parameters
        # stat-user             <user who is allowed to request statistics>
        # reload-count          unlimited|<number>
        # paranoia              <yes|no>
        # restart-interval      <time in seconds>
        #
        # enable-cache          <service> <yes|no>
        # positive-time-to-live <service> <time in seconds>
        # negative-time-to-live <service> <time in seconds>
        # suggested-size        <service> <prime number>
        # check-files           <service> <yes|no>
        # persistent            <service> <yes|no>
        # shared                <service> <yes|no>
        # max-db-size           <service> <number bytes>
        # auto-propagate        <service> <yes|no>
        #
        # Currently supported cache names (services): passwd, group, hosts, services
        #

        logfile /var/log/nscd.log
        threads 4
        max-threads 32
        server-user nobody
        # stat-user somebody
        debug-level 9
        # reload-count 5
        paranoia no
        # restart-interval 3600

        enable-cache passwd yes
        positive-time-to-live passwd 600
        negative-time-to-live passwd 20
        suggested-size passwd 211
        check-files passwd yes
        persistent passwd yes
        shared passwd yes
        max-db-size passwd 33554432
        auto-propagate passwd yes

        enable-cache group yes
        positive-time-to-live group 3600
        negative-time-to-live group 60
        suggested-size group 211
        check-files group yes
        persistent group yes
        shared group yes
        max-db-size group 33554432
        auto-propagate group yes

        enable-cache hosts yes
        positive-time-to-live hosts 3600
        negative-time-to-live hosts 20
        suggested-size hosts 211
        check-files hosts yes
        persistent hosts yes
        shared hosts yes
        max-db-size hosts 33554432

        enable-cache services yes
        positive-time-to-live services 28800
        negative-time-to-live services 20
        suggested-size services 211
        check-files services yes
        persistent services yes
        shared services yes
        max-db-size services 33554432

    resolv.conf:

        # Generated by dhcpcd from eth0
        nameserver 127.0.0.1
        domain westell.com
        nameserver 192.168.1.1
        nameserver 208.67.222.222
        nameserver 208.67.220.220

    As kind of a side note, I'm using Arch Linux.
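    One thing worth knowing when testing this (an editor's observation, not from the post): dig speaks DNS directly to the servers in resolv.conf and never consults nsswitch.conf, so it cannot exercise nscd at all. A lookup that goes through the hosts database is a fairer probe:

        getent hosts apache.org    # resolves via nsswitch, so it can use the cache
        nscd -g                    # the hosts-cache hit counters should now move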

    Read the article

  • Forwarding rsyslog to syslog-ng, with FQDN and facility separation

    - by Joshua Miller
    I'm attempting to configure my rsyslog clients to forward messages to my syslog-ng log repository systems. Forwarding messages works "out of the box", but my clients are logging short names, not FQDNs. As a result, the messages on the syslog repo use short names as well, which is a problem because one can't easily determine which system a message originated from. My clients get their names through DHCP/DNS. I've tried a number of solutions to get this working, but without success. I'm using rsyslog 4.6.2 and syslog-ng 3.2.5.

      * I've tried setting $PreserveFQDN on as the first directive in /etc/rsyslog.conf (and restarting rsyslog, of course). It seems to have no effect. hostname --fqdn on the client returns the proper FQDN, so the problem isn't whether the system can actually figure out its own FQDN.
      * $LocalHostName <fqdn> looked promising, but this directive isn't available in my version of rsyslog (available since 4.7.4+, 5.7.3+, 6.1.3+). Upgrading isn't an option at the moment.
      * Configuring the syslog-ng server to populate names based on reverse lookups via DNS isn't an option. There are complexities with reverse DNS and the public cloud.
      * Specifying a custom template for the forwarder seems like a viable option at first glance. I can specify the following, which causes local logging to begin using the FQDN on the syslog-ng repo:

            $template MyTemplate, "%timestamp% <FQDN> %syslogtag%%msg%"
            $ActionForwardDefaultTemplate MyTemplate

        However, when I put this in place, syslog-ng seems unable to categorize messages by facility or priority. Messages come in with the FQDN, but everything is put into user.log. When I don't use the custom template, messages are properly categorized under facility and priority, but with the short name.

    So, in summary, if I manually trick rsyslog into including the FQDN, the priority and facility details become lost to syslog-ng. How can I get rsyslog to do FQDN logging that works properly going to a syslog-ng repository?

    rsyslog client config:

        $ModLoad imuxsock.so   # provides support for local system logging (e.g. via logger command)
        $ModLoad imklog.so     # provides kernel logging support (previously done by rklogd)

        $ActionFileDefaultTemplate RSYSLOG_TraditionalFileFormat

        *.info;mail.none;authpriv.none;cron.none    /var/log/messages
        authpriv.*                                  /var/log/secure
        mail.*                                      -/var/log/maillog
        cron.*                                      /var/log/cron
        *.emerg                                     *
        uucp,news.crit                              /var/log/spooler
        local7.*                                    /var/log/boot.log

        $WorkDirectory /var/spool/rsyslog    # where to place spool files
        $ActionQueueFileName fwdRule1        # unique name prefix for spool files
        $ActionQueueMaxDiskSpace 1g          # 1gb space limit (use as much as possible)
        $ActionQueueSaveOnShutdown on        # save messages to disk on shutdown
        $ActionQueueType LinkedList          # run asynchronously
        $ActionResumeRetryCount -1           # infinite retries if host is down

        *.* @syslog-ng1.example.com
        *.* @syslog-ng2.example.com

    syslog-ng configuration (abridged for brevity):

        options {
            flush_lines (0);
            time_reopen (10);
            log_fifo_size (1000);
            long_hostnames (off);
            use_dns (no);
            use_fqdn (yes);
            create_dirs (no);
            keep_hostname (yes);
        };

        source src {
            unix-stream("/dev/log");
            internal();
            udp(ip(0.0.0.0) port(514));
        };

        destination per_host_destination {
            file( "/var/log/syslog-ng/devices/$HOST/$FACILITY.log"
                owner("root") group("root") perm(0644)
                dir_owner(root) dir_group(root) dir_perm(0775)
                create_dirs(yes));
        };

        log { source(src); destination(per_facility_destination); };
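    A hedged guess at why the custom template breaks categorization (an editor's note, untested): the templated message no longer starts with the syslog PRI header ("<facility*8+severity>"), which is exactly what syslog-ng parses facility and priority from, so everything lands in the default user facility. Restoring the header while keeping the hard-coded-FQDN idea might look like this (host name invented for illustration):

        $template MyTemplate,"<%PRI%>%timestamp% host01.example.com %syslogtag%%msg%"
        $ActionForwardDefaultTemplate MyTemplate

    This mimics rsyslog's built-in forwarding format, which also leads with <%PRI%>.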

    Read the article

  • Router 2wire, Slackware desktop in DMZ mode, iptables policy against ping, but still pingable

    - by user135501
    I'm in DMZ mode, so I'm firewalling myself. The stealth tests are all OK, but I get failing results from Shields Up saying that there are pings. Yesterday I couldn't get a connection to game servers to work because ping blocking was enabled on the router. I disabled it, but now my machine answers pings even though my own firewall should be dropping them. What is the relationship between my machine and the router in DMZ mode (my machine is the DMZ host; there are a bunch of other machines behind the router firewall)? The router's setting seems to decide whether I'm pingable, and when the router is set not to block ping, the rules in my iptables for this scenario do not work. Please ignore the commented rules; I uncomment them as I want. These two should do the job, right?

        iptables -A INPUT -p icmp --icmp-type echo-request -j DROP
        echo 1 > /proc/sys/net/ipv4/icmp_echo_ignore_all

    Here are my iptables:

        #!/bin/sh
        # Begin /bin/firewall-start

        # Insert connection-tracking modules (not needed if built into the kernel).
        #modprobe ip_tables
        #modprobe iptable_filter
        #modprobe ip_conntrack
        #modprobe ip_conntrack_ftp
        #modprobe ipt_state
        #modprobe ipt_LOG

        # allow local-only connections
        iptables -A INPUT -i lo -j ACCEPT

        # free output on any interface to any ip for any service
        # (equal to -P ACCEPT)
        iptables -A OUTPUT -j ACCEPT

        # permit answers on already established connections
        # and permit new connections related to established ones (eg active-ftp)
        iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT

        #Gamespy&NWN
        #iptables -A INPUT -p tcp -m tcp -m multiport --ports 5120:5129 -j ACCEPT
        #iptables -A INPUT -p tcp -m tcp --dport 6667 --tcp-flags SYN,RST,ACK SYN -j ACCEPT
        #iptables -A INPUT -p tcp -m tcp --dport 28910 --tcp-flags SYN,RST,ACK SYN -j ACCEPT
        #iptables -A INPUT -p tcp -m tcp --dport 29900 --tcp-flags SYN,RST,ACK SYN -j ACCEPT
        #iptables -A INPUT -p tcp -m tcp --dport 29901 --tcp-flags SYN,RST,ACK SYN -j ACCEPT
        #iptables -A INPUT -p tcp -m tcp --dport 29920 --tcp-flags SYN,RST,ACK SYN -j ACCEPT
        #iptables -A INPUT -p udp -m udp -m multiport --ports 5120:5129 -j ACCEPT
        #iptables -A INPUT -p udp -m udp --dport 6500 -j ACCEPT
        #iptables -A INPUT -p udp -m udp --dport 27900 -j ACCEPT
        #iptables -A INPUT -p udp -m udp --dport 27901 -j ACCEPT
        #iptables -A INPUT -p udp -m udp --dport 29910 -j ACCEPT

        # Log everything else: What's Windows' latest exploitable vulnerability?
        iptables -A INPUT -j LOG --log-prefix "FIREWALL:INPUT"

        # set a sane policy: everything not accepted > /dev/null
        iptables -P INPUT DROP
        iptables -P FORWARD DROP
        iptables -P OUTPUT DROP

        iptables -A INPUT -p icmp --icmp-type echo-request -j DROP

        # be verbose on dynamic ip-addresses (not needed in case of static IP)
        echo 2 > /proc/sys/net/ipv4/ip_dynaddr

        # disable ExplicitCongestionNotification - too many routers are still ignorant
        echo 0 > /proc/sys/net/ipv4/tcp_ecn

        #ping death
        echo 1 > /proc/sys/net/ipv4/icmp_echo_ignore_all

        # If you are frequently accessing ftp-servers or enjoy chatting you might
        # notice certain delays because some implementations of these daemons have
        # the feature of querying an identd on your box for your username for
        # logging. Although there's really no harm in this, having an identd
        # running is not recommended because some implementations are known to be
        # vulnerable.
        # To avoid these delays you could reject the requests with a 'tcp-reset':
        #iptables -A INPUT -p tcp --dport 113 -j REJECT --reject-with tcp-reset
        #iptables -A OUTPUT -p tcp --sport 113 -m state --state RELATED -j ACCEPT

        # To log and drop invalid packets, mostly harmless packets that came in
        # after netfilter's timeout, sometimes scans:
        #iptables -I INPUT 1 -p tcp -m state --state INVALID -j LOG --log-prefix "FIREWALL:INVALID"
        #iptables -I INPUT 2 -p tcp -m state --state INVALID -j DROP

        # End /bin/firewall-start
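    Not part of the original post, but the usual way to tell whether echo-requests are actually reaching this box (as opposed to being answered upstream by the 2Wire itself) is to ping from outside while watching rule counters and the wire; the interface name here is an assumption:

        iptables -L INPUT -v -n --line-numbers    # do the ICMP rule's packet counters move?
        tcpdump -n -i eth0 icmp                   # does the echo-request ever arrive here?

    If neither shows traffic while the outside host still gets replies, the router is answering the pings itself and no local rule can change that.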

    Read the article

  • Vista 64-bit, DISK BOOT FAILURE

    - by weka
    So I have this Acer Aspire AX3200-U3600A with Windows Vista (64-bit). Every night I turn it off and turn it back on in the morning. Around three weeks ago, I did a fresh factory reimage. Good as new. Then around two days ago, when I turned it on, I noticed it was running extremely slow. As in, it would often freeze up while I had multiple applications open, when it usually never froze up. So I decided to restart my computer. Big mistake. My computer froze right after I clicked shut down. I waited a while. Nothing. Waited some minutes. Nope. I decided to shut it down by pressing the power button.

    Here is where the problems begin. When I turned it back on, I saw the Windows logo and loading bar, and then it loaded to black. I turned it off again forcefully by the power button, and then once more... then I got:

        AMD Data Change... Update New Data to DMI!

    Then the screen clears and I get:

        AHCI Option ROM BIOS Revision: 01.05.92 Date: 02-19-2008
        Copyright (c) 2006-2008 Phoenix Technologies, LTD
        Port 01: Reset Port Error!!
        Port 02:

    Then the screen clears again, but this time this loads from the bottom:

        Nvidia Boot Agent 249.0542 (copyright stuff... blah blah)
        PXE-E61: Media test failure, check cable.
        PXE-M0F: Exiting Nvidia Boot Agent
        DISK BOOT FAILURE, INSERT SYSTEM DISK AND PRESS ENTER.

    So I try to go into Safe Mode. Well, first of all, it doesn't load as fast. After it loads disk.sys from windows/drivers, it waits a while (2-3 mins) and THEN loads. However, it loads the Acer eRecovery Management tool. I have three options: reset computer to factory default, restore computer from user's backup, or exit. However, the top two options are gray and disabled, whereas Exit is in blue and definitely clickable. So obviously Safe Mode is not there...

    A strong thing to note: in the beginning, when all of this started, I did a "Boot Windows Normal" from pressing F8 and I got to my desktop! It logged me in. I could see the icons on my files. However, my desktop was extremely slow; when I clicked the Start menu, it would wait a while, then load up the menu with JUST the gradient -- no text or icons... so as you can see, it saw my HDD?

    Also, before anyone asks: I have NO USB plugged in. My mouse and keyboard are not USB inputs, I assure you. And this came without a recovery CD, AND when I went into the BIOS to change the BOOT ORDER, I did NOT see a CD-ROM option. And when I tried pressing ALT+F10 to get into Acer eRecovery Management, the top two options were disabled as well. But sometimes on start-up, I get:

        Windows has encountered a problem communicating with a device connected to your computer.
        This error can be caused by unplugging a removable storage device such as an external USB
        drive while the device is in use, or by faulty hardware such as a hard drive or CD-ROM
        drive that is failing. Make sure any removeable storage is properly connected and then
        restart your computer. If you continue to receive this error message, contact the
        hardware manufacturer.
        Status: 0xc00000e9
        Info: An unexpected I/O error has occured.

    Then I tried Last Known Good Configuration Settings; that gives me a BSOD. What should I do?

    Read the article

  • SQL Server 2000 and SSL Encryption

    - by Angry_IT_Guru
    We are a datacenter that hosts a SQL Server 2000 environment which provides database services for a product we sell; the product is installed as a rich-client application at each of our many clients and their workstations. Currently, the application uses straight ODBC connections from the client site to our datacenter. We need to begin encrypting the credentials -- since everything is clear-text today and the authentication is weakly encrypted -- and I'm trying to determine the best way to implement SSL on the server while minimizing the impact on the clients. A few things, however:

    1) We have our own Windows domain, and all our servers are joined to our private domain. Our clients know nothing of our domain.

    2) Typically, our clients connect to our datacenter servers either by:
       a) using the TCP/IP address, or
       b) using a DNS name that we publish via the internet, zone transfers from our DNS servers to our customers, or static HOSTS entries the client adds.

    3) From what I understand about enabling encryption, I can go to the Network Utility and select the "encryption" option for the protocol that I wish to encrypt, such as TCP/IP.

    4) When the encryption option is selected, I have a choice of installing a third-party certificate or a self-signed one. I have tested the self-signed route but have potential issues; I'll explain in a bit. If I go with a third-party cert, such as VeriSign or Network Solutions... what kind of certificate do I request? These aren't IIS certificates? When I create a self-signed one via Microsoft's certificate server, I have to select "Authentication certificate". What does this translate to in the third-party world?

    5) If I create a self-signed certificate, I understand that the "issued to" name has to match the FQDN of the server that is running SQL. In my case, I have to use my private domain name. If I use this, what does this do to my clients trying to connect to my SQL Server? Surely they cannot resolve my private DNS names on their network... I've also verified that when the self-signed certificate is installed, it has to be in the local personal store for the user account that is running SQL Server. SQL Server will only start if the FQDN matches the "issued to" of the certificate and SQL is running under the account that has the certificate installed. If I use a self-signed certificate, does this mean every one of my clients has to install it to verify?

    6) If I use a third-party certificate, which sounds like the best option, do all my clients have to have internet access when accessing my private servers over their private WAN connection in order to verify the certificate? What do I do about the FQDN? It sounds like they have to use my private domain name -- which is not published -- and can no longer use the one that I set up for them to use?

    7) I plan on upgrading to SQL Server 2005 soon. Is setup of SSL any easier/better with SQL 2005 than SQL 2000?

    Any help or guidance would be appreciated.

    Read the article

  • JPA exception: Object: ... is not a known entity type.

    - by Toto
    I'm new to JPA and I'm having problems with the autogeneration of primary key values. I have the following entity:

        package jpatest.entities;

        import java.io.Serializable;
        import javax.persistence.Entity;
        import javax.persistence.GeneratedValue;
        import javax.persistence.GenerationType;
        import javax.persistence.Id;

        @Entity
        public class MyEntity implements Serializable {
            private static final long serialVersionUID = 1L;

            @Id
            @GeneratedValue(strategy = GenerationType.AUTO)
            private Long id;

            public Long getId() { return id; }
            public void setId(Long id) { this.id = id; }

            private String someProperty;

            public String getSomeProperty() { return someProperty; }
            public void setSomeProperty(String someProperty) { this.someProperty = someProperty; }

            public MyEntity() { }

            public MyEntity(String someProperty) { this.someProperty = someProperty; }

            @Override
            public String toString() {
                return "jpatest.entities.MyEntity[id=" + id + "]";
            }
        }

    and the following main method in another class:

        public static void main(String[] args) {
            EntityManagerFactory emf = Persistence.createEntityManagerFactory("JPATestPU");
            EntityManager em = emf.createEntityManager();

            em.getTransaction().begin();
            MyEntity e = new MyEntity("some value");
            em.persist(e); /* (exception thrown here) */
            em.getTransaction().commit();

            em.close();
            emf.close();
        }

    This is my persistence unit:

        <?xml version="1.0" encoding="UTF-8"?>
        <persistence version="1.0" xmlns="http://java.sun.com/xml/ns/persistence" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://java.sun.com/xml/ns/persistence http://java.sun.com/xml/ns/persistence/persistence_1_0.xsd">
          <persistence-unit name="JPATestPU" transaction-type="RESOURCE_LOCAL">
            <provider>oracle.toplink.essentials.PersistenceProvider</provider>
            <class>jpatest.entities.MyEntity</class>
            <properties>
              <property name="toplink.jdbc.user" value="..."/>
              <property name="toplink.jdbc.password" value="..."/>
              <property name="toplink.jdbc.url" value="jdbc:mysql://localhost:3306/jpatest"/>
              <property name="toplink.jdbc.driver" value="com.mysql.jdbc.Driver"/>
              <property name="toplink.ddl-generation" value="create-tables"/>
            </properties>
          </persistence-unit>
        </persistence>

    When I execute the program I get the following exception at the line marked with the comment above:

        Exception in thread "main" java.lang.IllegalArgumentException: Object: jpatest.entities.MyEntity[id=null] is not a known entity type.
            at oracle.toplink.essentials.internal.sessions.UnitOfWorkImpl.registerNewObjectForPersist(UnitOfWorkImpl.java:3212)
            at oracle.toplink.essentials.internal.ejb.cmp3.base.EntityManagerImpl.persist(EntityManagerImpl.java:205)
            at jpatest.Main.main(Main.java:...)

    What am I missing?

    Read the article

  • An offscreen MKMapView behaves differently in 3.2, 4.0

    - by Duane Fields
    In 3.1 I've been using an "offscreen" MKMapView to create map images that I can rotate, crop, and so forth before presenting them to the user. In 3.2 and 4.0 this technique no longer works quite right. Here's some code that illustrates the problem, followed by my theory.

        // create map view
        _mapView = [[MKMapView alloc] initWithFrame:CGRectMake(0, 0, MAP_FRAME_SIZE, MAP_FRAME_SIZE)];
        _mapView.zoomEnabled = NO;
        _mapView.scrollEnabled = NO;
        _mapView.delegate = self;
        _mapView.mapType = MKMapTypeSatellite;

        // zoom in to something enough to fill the screen
        MKCoordinateRegion region;
        CLLocationCoordinate2D center = {30.267222, -97.763889};
        region.center = center;
        MKCoordinateSpan span = {0.1, 0.1};
        region.span = span;
        _mapView.region = region;

        // set scrollview content size to fill the imageView
        _scrollView.contentSize = _imageView.frame.size;

        // force it to load
        #ifndef __IPHONE_3_2
        // in 3.1 we can render to an offscreen context to force a load
        UIGraphicsBeginImageContext(_mapView.frame.size);
        [_mapView.layer renderInContext:UIGraphicsGetCurrentContext()];
        UIGraphicsEndImageContext();
        #else
        // in 3.2 and above, the renderInContext trick doesn't work...
        // this at least causes the map to render, but it's clipped to what appears
        // to be the viewport size, plus some padding
        [self.view addSubview:_mapView];
        #endif

    When the map is done loading, I snap a picture of it and stuff it in my scroll view:

        - (void)mapViewDidFinishLoadingMap:(MKMapView *)mapView {
            NSLog(@"[MapBuilder] mapViewDidFinishLoadingMap");
            // render the map to a UIImage
            UIGraphicsBeginImageContext(mapView.bounds.size);
            // the first sublayer is just the map, the second is the google layer;
            // this sublayer structure might change of course
            [[[mapView.layer sublayers] objectAtIndex:0] renderInContext:UIGraphicsGetCurrentContext()];

            // we are done with the mapView at this point, we need its ram!
            _mapView.delegate = nil;
            [_mapView release];
            [_mapView removeFromSuperview];
            _mapView = nil;

            UIImage* mapImage = [UIGraphicsGetImageFromCurrentImageContext() retain];
            UIGraphicsEndImageContext();

            _imageView.image = mapImage;
            [mapImage release], mapImage = nil;
        }

    The first problem is that in 3.1, rendering to a context would trigger the map to begin loading. This no longer works in 3.2/4.0. The only thing I have found that triggers the load is to temporarily add the map to the view (i.e. make it visible). The problem is that the map then only renders to the visible area of the screen, plus a little padding. The frame/bounds are fine, but it appears to "helpfully" optimize the loading, limiting the tiles to those visible on the screen or close to it. Any ideas how to force the map to load at full size? Anyone else have this issue?

    Read the article

  • Json.Net Issues: StackOverflowException is thrown when serialising a circularly dependent ISerializable object with ReferenceLoopHandling.Ignore

    - by keyr
    I have a legacy application that used binary serialisation to persist its data. Now we want to use Json.NET 4.5 to serialise the data without many changes to the existing classes. Things were working nicely until we hit a circularly dependent class. Is there any workaround for this problem? Sample code is shown below:

        [Serializable]
        class Department : ISerializable
        {
            public Employee Manager { get; set; }
            public string Name { get; set; }

            public Department() { }

            public Department( SerializationInfo info, StreamingContext context )
            {
                Manager = ( Employee )info.GetValue( "Manager", typeof( Employee ) );
                Name = ( string )info.GetValue( "Name", typeof( string ) );
            }

            public void GetObjectData( SerializationInfo info, StreamingContext context )
            {
                info.AddValue( "Manager", Manager );
                info.AddValue( "Name", Name );
            }
        }

        [Serializable]
        class Employee : ISerializable
        {
            [NonSerialized] //This does not work
            [XmlIgnore]     //This does not work
            private Department mDepartment;

            public Department Department
            {
                get { return mDepartment; }
                set { mDepartment = value; }
            }

            public string Name { get; set; }

            public Employee() { }

            public Employee( SerializationInfo info, StreamingContext context )
            {
                Department = ( Department )info.GetValue( "Department", typeof( Department ) );
                Name = ( string )info.GetValue( "Name", typeof( string ) );
            }

            public void GetObjectData( SerializationInfo info, StreamingContext context )
            {
                info.AddValue( "Department", Department );
                info.AddValue( "Name", Name );
            }
        }

    And the test code:

        Department department = new Department();
        department.Name = "Dept1";

        Employee emp1 = new Employee { Name = "Emp1", Department = department };
        department.Manager = emp1;

        Employee emp2 = new Employee() { Name = "Emp2", Department = department };

        IList<Employee> employees = new List<Employee>();
        employees.Add( emp1 );
        employees.Add( emp2 );

        var memoryStream = new MemoryStream();
        var formatter = new BinaryFormatter();
        formatter.Serialize( memoryStream, employees );
        memoryStream.Seek( 0, SeekOrigin.Begin );
        IList<Employee> deserialisedEmployees = formatter.Deserialize( memoryStream ) as IList<Employee>; //Works nicely

        JsonSerializerSettings jsonSS = new JsonSerializerSettings();
        jsonSS.TypeNameHandling = TypeNameHandling.Objects;
        jsonSS.TypeNameAssemblyFormat = FormatterAssemblyStyle.Full;
        jsonSS.Formatting = Formatting.Indented;
        jsonSS.ReferenceLoopHandling = ReferenceLoopHandling.Ignore; //This is not working!!
        //jsonSS.ReferenceLoopHandling = ReferenceLoopHandling.Serialize; //This is also not working!!
        jsonSS.PreserveReferencesHandling = PreserveReferencesHandling.All;

        string jsonAll = JsonConvert.SerializeObject( employees, jsonSS ); //Throws StackOverflowException

    Edit 1: The issue has been reported to Json.NET (http://json.codeplex.com/workitem/23668).

    Read the article

  • Getting an output parameter (SYS_REFCURSOR) from an Oracle stored procedure in iBATIS 3 (using annotations)

    - by yjacket
    I found an example of how to call an Oracle stored procedure in iBATIS 3 without a map file, and I now understand how to make the call. But I have run into another problem: how do I get the result back from the output parameter (an Oracle cursor)? Part of the exception message is "There is no setter for property named 'rs' in 'class java.lang.Class'". My code is below. Can anyone help me?

    Oracle stored procedure:

        CREATE OR REPLACE PROCEDURE getProducts
        (
          rs OUT SYS_REFCURSOR
        )
        IS
        BEGIN
          OPEN rs FOR SELECT * FROM Products;
        END getProducts;

    Interface:

        public interface ProductMapper {
            @Select("call getProducts(#{rs,mode=OUT,jdbcType=CURSOR})")
            @Options(statementType = StatementType.CALLABLE)
            List<Product> getProducts();
        }

    DAO:

        public class ProductDAO {
            public List<Product> getProducts() {
                return mapper.getProducts(); // mapper is ProductMapper
            }
        }

    Full error message:

        Exception in thread "main" org.apache.ibatis.exceptions.IbatisException:
        ### Error querying database. Cause: org.apache.ibatis.reflection.ReflectionException: Could not set property 'rs' of 'class org.apache.ibatis.reflection.MetaObject$NullObject' with value 'oracle.jdbc.driver.OracleResultSetImpl@1a001ff' Cause: org.apache.ibatis.reflection.ReflectionException: There is no setter for property named 'rs' in 'class java.lang.Class'
        ### The error may involve defaultParameterMap
        ### The error occurred while setting parameters
        ### Cause: org.apache.ibatis.reflection.ReflectionException: Could not set property 'rs' of 'class org.apache.ibatis.reflection.MetaObject$NullObject' with value 'oracle.jdbc.driver.OracleResultSetImpl@1a001ff' Cause: org.apache.ibatis.reflection.ReflectionException: There is no setter for property named 'rs' in 'class java.lang.Class'
          at org.apache.ibatis.exceptions.ExceptionFactory.wrapException(ExceptionFactory.java:8)
          at org.apache.ibatis.session.defaults.DefaultSqlSession.selectList(DefaultSqlSession.java:61)
          at org.apache.ibatis.session.defaults.DefaultSqlSession.selectList(DefaultSqlSession.java:53)
          at org.apache.ibatis.binding.MapperMethod.executeForList(MapperMethod.java:82)
          at org.apache.ibatis.binding.MapperMethod.execute(MapperMethod.java:63)
          at org.apache.ibatis.binding.MapperProxy.invoke(MapperProxy.java:35)
          at $Proxy8.getList(Unknown Source)
          at com.dao.ProductDAO.getList(ProductDAO.java:42)
          at com.Ibatis3Test.main(Ibatis3Test.java:30)
        Caused by: org.apache.ibatis.reflection.ReflectionException: Could not set property 'rs' of 'class org.apache.ibatis.reflection.MetaObject$NullObject' with value 'oracle.jdbc.driver.OracleResultSetImpl@1a001ff' Cause: org.apache.ibatis.reflection.ReflectionException: There is no setter for property named 'rs' in 'class java.lang.Class'
          at org.apache.ibatis.reflection.wrapper.BeanWrapper.setBeanProperty(BeanWrapper.java:154)
          at org.apache.ibatis.reflection.wrapper.BeanWrapper.set(BeanWrapper.java:36)
          at org.apache.ibatis.reflection.MetaObject.setValue(MetaObject.java:120)
          at org.apache.ibatis.executor.resultset.FastResultSetHandler.handleOutputParameters(FastResultSetHandler.java:69)
          at org.apache.ibatis.executor.statement.CallableStatementHandler.query(CallableStatementHandler.java:44)
          at org.apache.ibatis.executor.statement.RoutingStatementHandler.query(RoutingStatementHandler.java:55)
          at org.apache.ibatis.executor.SimpleExecutor.doQuery(SimpleExecutor.java:41)
          at org.apache.ibatis.executor.BaseExecutor.query(BaseExecutor.java:94)
          at org.apache.ibatis.executor.CachingExecutor.query(CachingExecutor.java:72)
          at org.apache.ibatis.session.defaults.DefaultSqlSession.selectList(DefaultSqlSession.java:59)
          ... 7 more
        Caused by: org.apache.ibatis.reflection.ReflectionException: There is no setter for property named 'rs' in 'class java.lang.Class'
          at org.apache.ibatis.reflection.Reflector.getSetInvoker(Reflector.java:300)
          at org.apache.ibatis.reflection.MetaClass.getSetInvoker(MetaClass.java:97)
          at org.apache.ibatis.reflection.wrapper.BeanWrapper.setBeanProperty(BeanWrapper.java:146)
          ... 16 more
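    One possible fix, sketched from the way MyBatis/iBATIS 3 generally handles REF CURSOR out parameters (the names productResult and params are illustrative, not from the original post): the "no setter" error suggests there is no parameter object for the framework to write the out cursor into, so give the mapper method a Map parameter and route the cursor rows through a named result map.

        // Mapper interface: the OUT cursor is written into the map rather than returned.
        public interface ProductMapper {
            @Options(statementType = StatementType.CALLABLE)
            @Select("{call getProducts(#{rs,mode=OUT,jdbcType=CURSOR,javaType=java.sql.ResultSet,resultMap=productResult})}")
            void getProducts(Map<String, Object> params);
        }

        // Caller: after the call, the mapped rows sit under the "rs" key.
        Map<String, Object> params = new HashMap<String, Object>();
        mapper.getProducts(params);
        @SuppressWarnings("unchecked")
        List<Product> products = (List<Product>) params.get("rs");

    A result map named productResult (mapping the Products columns onto the Product class) would still need to be declared, for instance in a small XML fragment, because the inline parameter refers to it by name.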

    Read the article

  • WPF: Animating TranslateTransform from code

    - by ghostskunks
    I have a WPF canvas on which I'm dynamically creating objects from code. These objects are transformed by setting the RenderTransform property, and an animation needs to be applied to one of those transforms. Currently, I can't get the properties of any transform to animate (although no exception is raised and the animation appears to run - the Completed event is raised). In addition, if the animation system is stressed, sometimes the Storyboard.Completed event is never raised. All the examples I've come across animate transforms from XAML. The MSDN documentation suggests that a transform's x:Name property must be set for it to be animatable, but I haven't found a working way to set it from code. Any ideas? Here's the full code listing that reproduces the problem:

        using System;
        using System.Diagnostics;
        using System.Windows;
        using System.Windows.Controls;
        using System.Windows.Input;
        using System.Windows.Media;
        using System.Windows.Media.Animation;
        using System.Windows.Shapes;

        namespace AnimationCompletedTest {
            /// <summary>
            /// Interaction logic for MainWindow.xaml
            /// </summary>
            public partial class MainWindow : Window {
                Canvas panel;

                public MainWindow() {
                    InitializeComponent();
                    MouseDown += DoDynamicAnimation;
                    Content = panel = new Canvas();
                }

                void DoDynamicAnimation(object sender, MouseButtonEventArgs args) {
                    for (int i = 0; i < 12; ++i) {
                        var e = new Ellipse { Width = 16, Height = 16, Fill = SystemColors.HighlightBrush };
                        Canvas.SetLeft(e, Mouse.GetPosition(this).X);
                        Canvas.SetTop(e, Mouse.GetPosition(this).Y);

                        var tg = new TransformGroup();
                        var translation = new TranslateTransform(30, 0);
                        tg.Children.Add(translation);
                        tg.Children.Add(new RotateTransform(i * 30));
                        e.RenderTransform = tg;
                        panel.Children.Add(e);

                        var s = new Storyboard();
                        Storyboard.SetTarget(s, translation);
                        Storyboard.SetTargetProperty(s, new PropertyPath(TranslateTransform.XProperty));
                        s.Children.Add(
                            new DoubleAnimation(3, 100, new Duration(new TimeSpan(0, 0, 0, 1, 0))) {
                                EasingFunction = new PowerEase { EasingMode = EasingMode.EaseOut }
                            });
                        s.Completed += (sndr, evtArgs) => {
                            Debug.WriteLine("Animation {0} completed {1}", s.GetHashCode(), Stopwatch.GetTimestamp());
                            panel.Children.Remove(e);
                        };
                        Debug.WriteLine("Animation {0} started {1}", s.GetHashCode(), Stopwatch.GetTimestamp());
                        s.Begin();
                    }
                }

                [STAThread]
                public static void Main() {
                    var app = new Application();
                    app.Run(new MainWindow());
                }
            }
        }
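    One workaround worth trying, sketched under the assumption that the Storyboard fails because the freezable transform has no registered name for it to resolve: skip the Storyboard entirely and start the animation directly on the transform with BeginAnimation, which needs no name registration. Variable names mirror the listing above.

        // Inside the loop, replacing the Storyboard block:
        var anim = new DoubleAnimation(3, 100, new Duration(TimeSpan.FromSeconds(1))) {
            EasingFunction = new PowerEase { EasingMode = EasingMode.EaseOut }
        };
        anim.Completed += (sndr, evtArgs) => panel.Children.Remove(e);
        // Animate TranslateTransform.X directly; no Storyboard, no x:Name needed.
        translation.BeginAnimation(TranslateTransform.XProperty, anim);

    Alternatively, if a Storyboard is required (e.g. for coordinated timelines), the transform can be registered in a NameScope created on the window and targeted with Storyboard.SetTargetName instead of Storyboard.SetTarget.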

    Read the article

  • CheckBox Command Behaviors for Silverlight MVVM Pattern

    - by Blake Blackwell
    I am trying to detect when an item is checked, and which item is checked, in a ListBox using Silverlight 4 and the Prism framework. I found this example on creating behaviors and tried to follow it, but nothing is happening in the debugger. I have three questions: Why isn't my command executing? How do I determine which item was checked (i.e. pass a command parameter)? How do I debug this (i.e. where can I put breakpoints to begin stepping into it)? Here is my code:

    View:

        <ListBox x:Name="MyListBox" ItemsSource="{Binding PanelItems, Mode=TwoWay}">
          <ListBox.ItemTemplate>
            <DataTemplate>
              <StackPanel Orientation="Horizontal">
                <CheckBox IsChecked="{Binding Enabled}" my:Checked.Command="{Binding Check}" />
                <TextBlock x:Name="DisplayName" Text="{Binding DisplayName}"/>
              </StackPanel>
            </DataTemplate>
          </ListBox.ItemTemplate>
        </ListBox>

    ViewModel:

        public MainPageViewModel() {
            _panelItems.Add(new PanelItem { Enabled = true, DisplayName = "Test1" });
            Check = new DelegateCommand<object>(itemChecked);
        }

        public void itemChecked(object o) {
            // do some stuff
        }

        public DelegateCommand<object> Check { get; set; }

    Behavior class:

        public class CheckedBehavior : CommandBehaviorBase<CheckBox> {
            public CheckedBehavior(CheckBox element) : base(element) {
                element.Checked += new RoutedEventHandler(element_Checked);
            }

            void element_Checked(object sender, RoutedEventArgs e) {
                base.ExecuteCommand();
            }
        }

    Command class:

        public static class Checked {
            public static ICommand GetCommand(DependencyObject obj) {
                return (ICommand) obj.GetValue(CommandProperty);
            }

            public static void SetCommand(DependencyObject obj, ICommand value) {
                obj.SetValue(CommandProperty, value);
            }

            public static readonly DependencyProperty CommandProperty =
                DependencyProperty.RegisterAttached("Command", typeof(CheckBox), typeof(Checked),
                    new PropertyMetadata(OnSetCommandCallback));

            public static readonly DependencyProperty CheckedCommandBehaviorProperty =
                DependencyProperty.RegisterAttached("CheckedCommandBehavior", typeof(CheckedBehavior), typeof(Checked), null);

            private static void OnSetCommandCallback(DependencyObject dependencyObject, DependencyPropertyChangedEventArgs e) {
                CheckBox element = dependencyObject as CheckBox;
                if (element != null) {
                    CheckedBehavior behavior = GetOrCreateBehavior(element);
                    behavior.Command = e.NewValue as ICommand;
                }
            }

            private static CheckedBehavior GetOrCreateBehavior(CheckBox element) {
                CheckedBehavior behavior = element.GetValue(CheckedCommandBehaviorProperty) as CheckedBehavior;
                if (behavior == null) {
                    behavior = new CheckedBehavior(element);
                    element.SetValue(CheckedCommandBehaviorProperty, behavior);
                }
                return behavior;
            }

            public static CheckedBehavior GetCheckCommandBehavior(DependencyObject obj) {
                return (CheckedBehavior) obj.GetValue(CheckedCommandBehaviorProperty);
            }

            public static void SetCheckCommandBehavior(DependencyObject obj, CheckedBehavior value) {
                obj.SetValue(CheckedCommandBehaviorProperty, value);
            }
        }

    I used this article to get me started, but I'll readily admit this is over my head.
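    A hedged observation on why the command may never fire: the attached property is registered with typeof(CheckBox) as its property type, but the value the binding supplies is an ICommand, so the property system will likely reject it and never invoke OnSetCommandCallback. A sketch of the corrected registration:

        // Property type should be ICommand, matching what the binding supplies.
        public static readonly DependencyProperty CommandProperty =
            DependencyProperty.RegisterAttached("Command", typeof(ICommand), typeof(Checked),
                new PropertyMetadata(OnSetCommandCallback));

    With that fixed, a breakpoint inside OnSetCommandCallback should be the first to hit (it runs when the XAML is parsed), followed by element_Checked when a box is clicked. To learn which item was checked, either read ((CheckBox)sender).DataContext inside the handler or add a CommandParameter-style attached property bound to the row's PanelItem and pass it to Command.Execute; the parameter approach is illustrative, not from the original post.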

    Read the article

  • Unable to run Wix Custom Action in MSI

    - by Grandpappy
    I'm trying to create a custom action for my WiX install, and it's just not working, and I'm unsure why. Here's the relevant bit in the WiX file:

        <Binary Id="INSTALLERHELPER" SourceFile=".\Lib\InstallerHelper.dll" />
        <CustomAction Id="HelperAction" BinaryKey="INSTALLERHELPER" DllEntry="CustomAction1" Execute="immediate" />

    Here's the full class file for my custom action:

        using Microsoft.Deployment.WindowsInstaller;

        namespace InstallerHelper {
            public class CustomActions {
                [CustomAction]
                public static ActionResult CustomAction1(Session session) {
                    session.Log("Begin CustomAction1");
                    return ActionResult.Success;
                }
            }
        }

    The action is run by a button press in the UI (for now):

        <Control Id="Next" Type="PushButton" X="248" Y="243" Width="56" Height="17" Default="yes" Text="!(loc.WixUINext)">
          <Publish Event="DoAction" Value="HelperAction">1</Publish>
        </Control>

    When I run the MSI, I get this error in the log:

        MSI (c) (08:5C) [10:08:36:978]: Connected to service for CA interface.
        MSI (c) (08:4C) [10:08:37:030]: Note: 1: 1723 2: SQLHelperAction 3: CustomAction1 4: C:\Users\NATHAN~1.TYL\AppData\Local\Temp\MSI684F.tmp
        Error 1723. There is a problem with this Windows Installer package. A DLL required for this install to complete could not be run. Contact your support personnel or package vendor. Action SQLHelperAction, entry: CustomAction1, library: C:\Users\NATHAN~1.TYL\AppData\Local\Temp\MSI684F.tmp
        MSI (c) (08:4C) [10:08:38:501]: Product: SessionWorks :: Judge Edition -- Error 1723. There is a problem with this Windows Installer package. A DLL required for this install to complete could not be run. Contact your support personnel or package vendor. Action SQLHelperAction, entry: CustomAction1, library: C:\Users\NATHAN~1.TYL\AppData\Local\Temp\MSI684F.tmp
        Action ended 10:08:38: SQLHelperAction. Return value 3.
        DEBUG: Error 2896: Executing action SQLHelperAction failed.
        The installer has encountered an unexpected error installing this package. This may indicate a problem with this package. The error code is 2896. The arguments are: SQLHelperAction, ,

    Neither of the two error codes nor their messages is enough to tell me what's wrong - or perhaps I'm just not understanding what they're telling me. At first I thought it might be because I was using WiX 3.5, so just to be sure I tried WiX 3.0, but I get the same error. Any ideas on what I'm doing wrong?
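    One likely culprit, offered as an educated guess rather than a confirmed diagnosis: managed DTF custom actions have to be packaged inside a native self-extracting wrapper, which the WiX build emits as a separate file with a .CA.dll suffix, and error 1723 is the classic symptom of pointing the Binary element at the raw managed assembly instead. A sketch of the change:

        <!-- Point at the DTF-packaged wrapper (InstallerHelper.CA.dll),
             not the plain managed assembly (InstallerHelper.dll). -->
        <Binary Id="INSTALLERHELPER" SourceFile=".\Lib\InstallerHelper.CA.dll" />
        <CustomAction Id="HelperAction"
                      BinaryKey="INSTALLERHELPER"
                      DllEntry="CustomAction1"
                      Execute="immediate" />

    If the build isn't currently producing a *.CA.dll, the WiX "C# Custom Action Project" template wires up the MakeSfxCA packaging step that generates it.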

    Read the article

  • SSRS Report from Oracle DB - Use stored procedure

    - by Emtucifor
    I am developing a report in SQL Server Reporting Services 2005, connecting to an Oracle 11g database. As you post replies, perhaps it will help to know that I'm skilled in MSSQL Server and inexperienced in Oracle. I have multiple nested subreports and need to use summary data in the outer reports and the same data, but in detail, in the inner reports. In order to spare the DB server multiple executions, I thought to populate some temp tables at the beginning and then query just them the multiple times in the report and the subreports. In SSRS, Datasets are evidently executed in the order they appear in the RDL file, and you can have a dataset that doesn't return a rowset. So I created a stored procedure to populate my four temp tables and made this the first Dataset in my report. This SP works when I run it from SQL Developer, and I can query the data from the temp tables. However, this didn't appear to work out, because SSRS was apparently not reusing the same session: even though the global temporary tables were created with ON COMMIT PRESERVE ROWS, my Datasets were empty. I switched to using "real" tables and am now passing in an additional parameter, a GUID in string form, uniquely generated on each new execution, that is part of the primary key of each table, so I can get back just the rows for this execution. Running this from SQL Developer works fine; for example:

        DECLARE
          ActivityCode varchar2(15) := '1208-0916 ';
          ExecutionID varchar2(32) := SYS_GUID();
        BEGIN
          CIPProjectBudget (ActivityCode, ExecutionID);
        END;

    Never mind that in this example I don't know the GUID; this simply proves it works, because rows are inserted into my four tables. But in the SSRS report I'm still getting no rows in my Datasets, and SQL Developer confirms no rows are being inserted. So I'm thinking along the lines of: Oracle uses implicit transactions and my changes aren't getting committed? Even though I can prove that the non-rowset-returning SP is executing (because if I leave out the parameter mapping, it complains at report rendering time about not having enough parameters), perhaps it's not really executing. Somehow. Wrong execution order isn't the problem, or rows would appear in the tables, and they aren't. I'm interested in any ideas about how to accomplish this (especially the part about not running the main queries multiple times). I'll redesign my whole report. I'll stop using a stored procedure. Suggest anything you like! I just need help getting this working, and I am stuck. If you want more details: in my SSRS report I have a List object (a container that repeats once for each row in a Dataset) that has some header values and then contains a subreport. Eventually there will be four reports in total: one main report with three nested subreports. Each subreport will be in a List on the parent report.
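    Two hedged suggestions, based on general Oracle/SSRS behavior rather than anything confirmed in the post. First, Oracle does not auto-commit DML, and SSRS typically opens a separate connection (and therefore a separate transaction) per dataset, so the inserts made by the setup procedure stay invisible to the other datasets until committed. An explicit COMMIT at the end of the procedure is a cheap thing to try (a sketch with the body abbreviated):

        CREATE OR REPLACE PROCEDURE CIPProjectBudget
        (
          ActivityCode IN varchar2,
          ExecutionID  IN varchar2
        )
        IS
        BEGIN
          -- ... INSERT INTO the four work tables, keyed by ExecutionID ...
          COMMIT;  -- make the rows visible to the other (separate) SSRS connections
        END CIPProjectBudget;

    Second, SSRS references parameters in Oracle queries with :name placeholders (e.g. :ActivityCode, :ExecutionID) rather than @-style, and a silently mismatched Dataset parameter mapping can also yield empty results, so those names are worth double-checking.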

    Read the article

  • Mercurial to Mercurial to Subversion Workflow Problem

    - by Dalroth
    We're migrating from Subversion to Mercurial. To facilitate the migration, we're creating an intermediate Mercurial repository that is a clone of our Subversion repository. All developers will begin switching over to the Mercurial repository, and we'll periodically push changes from the intermediate Mercurial repository to the existing Subversion repository. After a period of time, we'll simply retire the Subversion repository, and the intermediate Mercurial repository will become the new system of record.

        Dev 1 Local --+--> Mercurial --+--> Subversion
        Dev 2 Local --+                +
        Dev 3 Local --+                +
        Dev 4 -------------------------+

    I've been testing this out, but I keep running into a problem when I push changes from my local repository to the intermediate Mercurial repository, and then up into our Subversion repository. On my local machine, I have a changeset that is committed and ready to be pushed to our intermediate Mercurial repository. Here you can see it is revision #2263 with hash 625... I push only this changeset to the remote repository. So far, everything looks good; the changeset has been pushed. I now switch over to the remote repository and update the working directory:

        hg update
        1 files updated, 0 files merged, 0 files removed, 0 files unresolved

    Next, I push the change up to Subversion, which works great:

        hg push
        pushing to svn://...
        searching for changes
        [r3834] bmurphy: database namespace
        pulled 1 revisions
        saving bundle to /srv/hg/repository/.hg/strip-backup/62539f8df3b2-temp
        adding branch
        adding changesets
        adding manifests
        adding file changes
        added 1 changesets with 1 changes to 1 files
        rebase completed

    At this point, the change is in the Subversion repository, and I return my attention to my local client. I pull changes to my local machine. Huh? I've now got two changesets: my original changeset now appears as a local branch, and the other changeset has a new revision number, 2264, and a new hash, 10c1... Anyway, I update my local repo to the new revision, and I'm now switched over. So, I finally click "determine and mark outgoing changesets", and as you can see, Mercurial still wants to push out my previous changeset even though it has already been pushed.

    Clearly, I'm doing something wrong. I also can't merge the two revisions. If I merge them on my local machine, I end up with a "merge" commit. When I push that merge commit out to the intermediate Mercurial repository, I can no longer push changes out to our Subversion repository. I end up with the following problem:

        hg update
        0 files updated, 0 files merged, 0 files removed, 0 files unresolved
        hg push
        pushing to svn://...
        searching for changes
        abort: Sorry, can't find svn parent of a merge revision.

    and I have to roll back the merge to get back to a working state. What am I missing?
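    A hedged sketch of the usual remedy with hgsubversion-style bridges: pushing to Subversion rebases the changeset and gives it a new hash, so the pre-push local copy (2263/625...) becomes a superseded duplicate that should be discarded rather than merged. After pulling the rewritten changeset, strip the old one (revision numbers are the local ones from the post; hg strip ships in the mq/strip extension):

        # on the local clone, after the svn push and a fresh pull
        hg pull
        hg update tip        # move onto the rewritten changeset (2264/10c1...)
        hg strip 2263        # discard the superseded pre-rebase changeset

    This also explains the merge failure: the bridge cannot represent a Mercurial merge commit as a linear Subversion revision, hence "can't find svn parent of a merge revision". Keeping local history linear (pull, then strip or rebase, then push) avoids both symptoms.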

    Read the article

  • IdHttp Post Method Delphi 2010

    - by Dusten S
    Like others before me, I'm having trouble using the TIdHTTP (Indy 10.5.5) component in Delphi 2010. The code works fine in Delphi 7:

        var
          XMLString : AnsiString;
          lService : AnsiString;
          ResponseStream: TMemoryStream;
          InputStringList : TStringList;
        begin
          ResponseStream := TMemoryStream.Create;
          InputStringList := TStringList.Create;
          XMLString := '<?xml version="1.0" encoding="ISO-8859-1"?> '+
            '<!DOCTYPE pnet_imessage_send PUBLIC "-//PeopleNet//pnet_imessage_send" "http://open.peoplenetonline.com/dtd/pnet_imessage_send.dtd"> '+
            '<pnet_imessage_send> '+
            '  <cid>username</cid> '+
            '  <pw>password</pw> '+
            '  <vehicle_number>tr01</vehicle_number> '+
            '  <deliver>now</deliver> '+
            '  <action> '+
            '    <action_type>reply_with_freeform</action_type> '+
            '    <urgent_reply>yes</urgent_reply> '+
            '  </action> '+
            '  <freeform_message>Test Message Version 2</freeform_message> '+
            '</pnet_imessage_send> ';
          lService := 'imessage_send';
          InputStringList.Values['service'] := lService;
          InputStringList.Values['xml'] := XMLString;
          try
            IdHttp1.Request.Accept := '*/*';
            IdHttp1.Request.ContentType := 'text/XML';
            IdHTTP1.Post('http://open.peoplenetonline.com/scripts/open.dll', InputStringList, ResponseStream);
            ...
          finally
            ResponseStream.Free;
            InputStringList.Free;
          end;

    The only differences so far between this and the D7 code are that I've changed the String types to AnsiString and added the HTTP Request properties. The response I get back from the server is 'XML failed to parse. Whitespace expected at Line: 1 Position: 19'. I'm assuming the XML got garbled somewhere in the process, but I can't figure out where I'm going wrong. Any ideas?
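    A hedged guess at the cause, based on how Indy's TStrings overload of Post works: in Indy 10, Post URL-encodes each Name=Value pair before sending, and that encoding behavior changed between the Indy that shipped with Delphi 7 and Indy 10.5.5 (notably around spaces and Unicode conversion), which can mangle an XML payload. One way to take that machinery out of the equation is to build and encode the body manually and post it as a stream; TIdURI lives in the IdURI unit, and the ISO-8859-1 encoding below mirrors the XML declaration:

        uses IdURI, Classes, SysUtils;
        var
          Body: TStringStream;
        begin
          Body := TStringStream.Create(
            'service=' + lService + '&xml=' + TIdURI.ParamsEncode(XMLString),
            TEncoding.GetEncoding(28591)); // 28591 = ISO-8859-1
          try
            IdHTTP1.Request.ContentType := 'application/x-www-form-urlencoded';
            IdHTTP1.Post('http://open.peoplenetonline.com/scripts/open.dll', Body, ResponseStream);
          finally
            Body.Free;
          end;
        end;

    If the TStrings overload must stay, comparing the raw request bytes from D7 and D2010 (e.g. with a proxy) would pinpoint exactly which characters are being re-encoded.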

    Read the article

  • Speech Recognition Grammar Rules using delphi code

    - by XBasic3000
    I need help creating an ISpeechRecoGrammar without using the XML format - that is, building it at runtime in Delphi code. For example:

        procedure TForm1.FormCreate(Sender: TObject);
        var
          AfterCmdState: ISpeechGrammarRuleState;
          temp : OleVariant;
          Grammar: ISpeechRecoGrammar;
          PropertiesRule: ISpeechGrammarRule;
          ItemRule: ISpeechGrammarRule;
          TopLevelRule: ISpeechGrammarRule;
        begin
          SpSharedRecoContext.EventInterests := SREAllEvents;
          Grammar := SpSharedRecoContext.CreateGrammar(m_GrammarId);
          TopLevelRule := Grammar.Rules.Add('TopLevelRule', SRATopLevel Or SRADynamic, 1);
          PropertiesRule := Grammar.Rules.Add('PropertiesRule', SRADynamic, 2);
          ItemRule := Grammar.Rules.Add('ItemRule', SRADynamic, 3);
          AfterCmdState := TopLevelRule.AddState;
          TopLevelRule.InitialState.AddWordTransition(AfterCmdState, 'test', temp, temp, '****', 0, temp, temp);
          Grammar.Rules.Commit;
          Grammar.CmdSetRuleState('TopLevelRule', SGDSActive);
        end;

    Can someone reconstruct or modify the Delphi code above so that it does exactly the same thing as the XML below?

        <GRAMMAR LANGID="409">
          <!-- "Constant" definitions -->
          <DEFINE>
            <ID NAME="RID_start" VAL="1"/>
            <ID NAME="PID_action" VAL="2"/>
            <ID NAME="PID_actionvalue" VAL="3"/>
          </DEFINE>
          <!-- Rule definitions -->
          <RULE NAME="start" ID="RID_start" TOPLEVEL="ACTIVE">
            <P>i am</P>
            <RULEREF NAME="action" PROPNAME="action" PROPID="PID_action" />
            <O>OK</O>
          </RULE>
          <RULE NAME="action">
            <L PROPNAME="actionvalue" PROPID="PID_actionvalue">
              <P VAL="1">albert</P>
              <P VAL="2">francis</P>
              <P VAL="3">alex</P>
            </L>
          </RULE>
        </GRAMMAR>

    Sorry for my English...
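    A rough sketch of an equivalent runtime construction, under several stated assumptions: the AddWordTransition parameter order follows the call already used above (DestState, Text, Separators, Type, PropertyName, PropertyId, PropertyValue, Weight); passing nil as the destination state targets the rule's final state; and an empty Text produces the epsilon transition that makes "OK" optional. All three points should be verified against the SAPI 5 automation documentation.

        procedure TForm1.BuildGrammar;
        var
          Grammar: ISpeechRecoGrammar;
          StartRule, ActionRule: ISpeechGrammarRule;
          AfterIAm, AfterAction: ISpeechGrammarRuleState;
          Nothing, PropVal: OleVariant;
        begin
          Grammar := SpSharedRecoContext.CreateGrammar(1);
          StartRule  := Grammar.Rules.Add('start', SRATopLevel or SRADynamic, 1);  // RID_start
          ActionRule := Grammar.Rules.Add('action', SRADynamic, 2);

          AfterIAm    := StartRule.AddState;
          AfterAction := StartRule.AddState;

          // <P>i am</P> then <RULEREF NAME="action" PROPNAME="action" PROPID="2"/>
          StartRule.InitialState.AddWordTransition(AfterIAm, 'i am', ' ', SGLexical, '', 0, Nothing, 1);
          AfterIAm.AddRuleTransition(AfterAction, ActionRule, 'action', 2, Nothing, 1);

          // <O>OK</O>: a word transition plus an epsilon transition, both to the final state
          AfterAction.AddWordTransition(nil, 'OK', ' ', SGLexical, '', 0, Nothing, 1);
          AfterAction.AddWordTransition(nil, '', '', SGLexical, '', 0, Nothing, 1);

          // <L PROPNAME="actionvalue" PROPID="3">: one word per alternative, each tagged
          PropVal := 1;
          ActionRule.InitialState.AddWordTransition(nil, 'albert', ' ', SGLexical, 'actionvalue', 3, PropVal, 1);
          PropVal := 2;
          ActionRule.InitialState.AddWordTransition(nil, 'francis', ' ', SGLexical, 'actionvalue', 3, PropVal, 1);
          PropVal := 3;
          ActionRule.InitialState.AddWordTransition(nil, 'alex', ' ', SGLexical, 'actionvalue', 3, PropVal, 1);

          Grammar.Rules.Commit;
          Grammar.CmdSetRuleState('start', SGDSActive);
        end;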

    Read the article

  • Prevent recursive CTE visiting nodes multiple times

    - by bacar
    Consider the following simple DAG: 1->2->3->4. And a table, #bar, describing this (I'm using SQL Server 2005):

        parent_id  child_id
        1          2
        2          3
        3          4
        //... other edges, not connected to the subgraph above

    Now imagine that I have some other arbitrary criteria that select the first and last edges, i.e. 1-2 and 3-4. I want to use these to find the rest of my graph. I can write a recursive CTE as follows (I'm using terminology from MSDN):

        with foo(parent_id,child_id) as (
            -- anchor member that happens to select first and last edges:
            select parent_id, child_id from #bar where parent_id in (1,3)
            union all
            -- recursive member:
            select #bar.* from #bar
            join foo on #bar.parent_id = foo.child_id
        )
        select parent_id, child_id from foo

    However, this results in edge 3-4 being selected twice:

        parent_id  child_id
        1          2
        3          4
        2          3
        3          4   -- 2nd appearance!

    How can I prevent the query from recursing into subgraphs that have already been described? I could achieve this if, in the "recursive member" part of the query, I could reference all the data retrieved by the recursive CTE so far (and supply a predicate in the recursive member that excludes nodes already visited). However, I think I can access only the data returned by the last iteration of the recursive member. This will not scale well when there is a lot of such repetition. Is there a way of preventing this unnecessary additional recursion? Note that I could use "select distinct" in the last line of my statement to achieve the desired results, but this seems to be applied after all the (repeated) recursion is done, so I don't think it is an ideal solution.

    Edit - hainstech suggests stopping the recursion by adding a predicate that excludes recursing down paths that were explicitly in the starting set, i.e. recurse only where foo.child_id not in (1,3). That works for the case above only because it is simple - all the repeated sections begin within the anchor set of nodes. It doesn't solve the general case, where they may not. E.g., consider adding edges 1-4 and 4-5 to the above set: edge 4-5 will be captured twice, even with the suggested predicate. :(
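    A hedged sketch of the standard workaround: carry the list of visited nodes along each path in a string column and filter on it in the recursive member. Note its honest limitation, which matches the question's concern: it only prevents a single path from re-walking its own nodes, so duplicates that arrive via genuinely different paths still need the final DISTINCT.

        with foo (parent_id, child_id, visited) as (
            select parent_id, child_id,
                   cast('.' + cast(parent_id as varchar(10)) + '.' as varchar(max))
            from #bar
            where parent_id in (1, 3)
            union all
            select b.parent_id, b.child_id,
                   f.visited + cast(b.parent_id as varchar(10)) + '.'
            from #bar b
            join foo f on b.parent_id = f.child_id
            -- skip edges whose parent is already on this path:
            where f.visited not like '%.' + cast(b.parent_id as varchar(10)) + '.%'
        )
        select distinct parent_id, child_id from foo;

    The true "shared visited set" the question asks for isn't expressible inside a recursive CTE; when the duplication is heavy, a WHILE loop that inserts each frontier of edges into an indexed temp table (joining against it to skip already-seen nodes) is the scalable alternative.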

    Read the article

  • Rx Reactive extensions: Unit testing with FromAsyncPattern

    - by Andrew Anderson
    The Reactive Extensions have a sexy little hook to simplify calling async methods:

        var func = Observable.FromAsyncPattern<InType, OutType>(
            myWcfService.BeginDoStuff, myWcfService.EndDoStuff);
        func(inData).ObserveOnDispatcher().Subscribe(x => Foo(x));

    I am using this in a WPF project, and it works great at runtime. Unfortunately, when trying to unit test methods that use this technique, I am experiencing random failures: roughly three out of every five executions of a test containing this code fail. Here is a sample test (implemented using a Rhino/Unity auto-mocking container):

        [TestMethod()]
        public void SomeTest()
        {
            // arrange
            var container = GetAutoMockingContainer();
            container.Resolve<IMyWcfServiceClient>()
                .Expect(x => x.BeginDoStuff(null, null, null))
                .IgnoreArguments()
                .Do(new Func<Specification, AsyncCallback, object, IAsyncResult>((inData, asyncCallback, state) =>
                {
                    return new CompletedAsyncResult(asyncCallback, state);
                }));
            container.Resolve<IRepositoryServiceClient>()
                .Expect(x => x.EndRetrieveAttributeDefinitionsForSorting(null))
                .IgnoreArguments()
                .Do(new Func<IAsyncResult, OutData>((ar) =>
                {
                    return someMockData;
                }));

            // act
            var target = CreateTestSubject(container);
            target.DoMethodThatInvokesService();

            // Run the dispatcher for everything over background priority
            Dispatcher.CurrentDispatcher.Invoke(DispatcherPriority.Background, new Action(() => { }));

            // assert
            Assert.IsTrue(my operation ran as expected);
        }

    The problem I see is that the code I specified to run when the async action completed (in this case, Foo(x)) is never called. I can verify this by setting breakpoints in Foo and observing that they are never reached. Further, I can force a long delay after calling DoMethodThatInvokesService (which kicks off the async call), and the code is still never run. I do know that the lines of code invoking the Rx framework were called. Other things I've tried:

    I have attempted to modify the second-last line according to the suggestions here: Reactive Extensions Rx - unit testing something with ObserveOnDispatcher. No love.

    I have added .Take(1) to the Rx code as follows: func(inData).ObserveOnDispatcher().Take(1).Subscribe(x => Foo(x)); This improved my failure rate to something like 1 in 5, but failures still occurred.

    I have rewritten the Rx code to use the plain-jane async pattern. This works, but my developer ego would really love to use Rx instead of boring old Begin/End.

    In the end I do have a workaround in hand (i.e. don't use Rx), but I feel it is not ideal. If anyone has run into this problem in the past and found a solution, I'd dearly love to hear it.
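    A hedged suggestion that often fixes exactly this flavor of flaky test: make the scheduler injectable instead of hard-coding ObserveOnDispatcher, then substitute an immediate (or test) scheduler in unit tests so completion no longer depends on a dispatcher pumping at the right moment. Property and member names below are illustrative, and the exact dispatcher-scheduler type name varies between Rx releases:

        // Production code defaults this to the dispatcher scheduler; tests overwrite it.
        public IScheduler ObserveScheduler { get; set; }

        var func = Observable.FromAsyncPattern<InType, OutType>(
            myWcfService.BeginDoStuff, myWcfService.EndDoStuff);
        func(inData).ObserveOn(ObserveScheduler).Subscribe(x => Foo(x));

        // In the test's arrange phase:
        target.ObserveScheduler = Scheduler.Immediate; // callbacks run synchronously

    With Scheduler.Immediate, Foo runs on the thread that completes the async call, so the Dispatcher.CurrentDispatcher.Invoke pump in the test becomes unnecessary and the race disappears.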

    Read the article
