Search Results

Search found 9229 results on 370 pages for 'color theory'.

  • Bash Completion Script Help

    - by inxilpro
    So I'm just starting to learn about bash completion scripts, and I started to work on one for a tool I use all the time. First I built the script using a set list of options: _zf_comp() { local cur prev actions COMPREPLY=() cur="${COMP_WORDS[COMP_CWORD]}" prev="${COMP_WORDS[COMP_CWORD-1]}" actions="change configure create disable enable show" COMPREPLY=($(compgen -W "${actions}" -- ${cur})) return 0 } complete -F _zf_comp zf This works fine. Next, I decided to dynamically create the list of available actions. I put together the following command: zf | grep "Providers and their actions:" -A 100 | grep -P "^\s*\033\[36m\s*zf" | awk '{gsub(/[[:space:]]*/, "", $3); print $3}' | sort | uniq | awk '{sub("\-", "\\-", $1); print $1}' | tr \\n " " | sed 's/^ *\(.*\) *$/\1/' Which basically does the following: Grabs all the text in the "zf" command after "Providers and their actions:" Grabs all the lines that start with "zf" (I had to do some fancy work here 'cause the ZF command prints in color) Grab the second piece of the command and remove any spaces from it (the spaces part is probably not needed any more) Sort the list Get rid of any duplicates Escape dashes (I added this when trying to debug the problem—probably not needed) Trim all new lines Trim all leading and ending spaces The above command produces: $ zf | grep "Providers and their actions:" -A 100 | grep -P "^\s*\033\[36m\s*zf" | awk '{gsub(/[[:space:]]*/, "", $3); print $3}' | sort | uniq | awk '{sub("\-", "\\-", $1); print $1}' | tr \\n " " | sed 's/^ *\(.*\) *$/\1/' change configure create disable enable show $ So it looks to me like it's producing the exact same string as I had in my original script. But when I do: _zf_comp() { local cur prev actions COMPREPLY=() cur="${COMP_WORDS[COMP_CWORD]}" prev="${COMP_WORDS[COMP_CWORD-1]}" actions=`zf | grep "Providers and their actions:" -A 100 | grep -P "^\s*\033\[36m\s*zf" | awk '{gsub(/[[:space:]]*/, "", $3); print $3}' | sort | uniq | awk '{sub("\-", "\\-", $1); print $1}' | tr \\n " " | sed 's/^ *\(.*\) *$/\1/'` COMPREPLY=($(compgen -W "${actions}" -- ${cur})) return 0 } complete -F _zf_comp zf My autocompletion starts acting up. First, it won't autocomplete anything with an "n" in it, and second, when it does autocomplete ("zf create" for example) it won't let me backspace over my completed command. The first issue I'm completely stumped on. The second I'm thinking might have to do with escape characters from the colored text. Any ideas? It's driving me crazy!
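
    A minimal sketch of the suspected fix (untested here): strip the ANSI color escapes from the zf output before the word list ever reaches compgen, so no raw escape bytes end up in COMPREPLY. The awk field index ($3) is carried over from the pipeline above as an assumption and may need adjusting once the escapes are gone.

        _zf_comp() {
            local cur actions esc
            COMPREPLY=()
            cur="${COMP_WORDS[COMP_CWORD]}"
            esc=$'\x1b'
            # Remove ESC[...m color sequences first, then reuse the rest of the pipeline.
            actions=$(zf 2>/dev/null \
                | sed "s/${esc}\[[0-9;]*m//g" \
                | grep "Providers and their actions:" -A 100 \
                | awk '/^[[:space:]]*zf/ {print $3}' \
                | sort -u \
                | tr '\n' ' ')
            COMPREPLY=( $(compgen -W "${actions}" -- "${cur}") )
            return 0
        }
        complete -F _zf_comp zf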

  • Does anyone really understand how HFSC scheduling in Linux/BSD works?

    - by Mecki
    I read the original SIGCOMM '97 PostScript paper about HFSC, it is very technically, but I understand the basic concept. Instead of giving a linear service curve (as with pretty much every other scheduling algorithm), you can specify a convex or concave service curve and thus it is possible to decouple bandwidth and delay. However, even though this paper mentions to kind of scheduling algorithms being used (real-time and link-share), it always only mentions ONE curve per scheduling class (the decoupling is done by specifying this curve, only one curve is needed for that). Now HFSC has been implemented for BSD (OpenBSD, FreeBSD, etc.) using the ALTQ scheduling framework and it has been implemented Linux using the TC scheduling framework (part of iproute2). Both implementations added two additional service curves, that were NOT in the original paper! A real-time service curve and an upper-limit service curve. Again, please note that the original paper mentions two scheduling algorithms (real-time and link-share), but in that paper both work with one single service curve. There never have been two independent service curves for either one as you currently find in BSD and Linux. Even worse, some version of ALTQ seems to add an additional queue priority to HSFC (there is no such thing as priority in the original paper either). I found several BSD HowTo's mentioning this priority setting (even though the man page of the latest ALTQ release knows no such parameter for HSFC, so officially it does not even exist). This all makes the HFSC scheduling even more complex than the algorithm described in the original paper and there are tons of tutorials on the Internet that often contradict each other, one claiming the opposite of the other one. This is probably the main reason why nobody really seems to understand how HFSC scheduling really works. Before I can ask my questions, we need a sample setup of some kind. I'll use a very simple one as seen in the image below: Here are some questions I cannot answer because the tutorials contradict each other: What for do I need a real-time curve at all? Assuming A1, A2, B1, B2 are all 128 kbit/s link-share (no real-time curve for either one), then each of those will get 128 kbit/s if the root has 512 kbit/s to distribute (and A and B are both 256 kbit/s of course), right? Why would I additionally give A1 and B1 a real-time curve with 128 kbit/s? What would this be good for? To give those two a higher priority? According to original paper I can give them a higher priority by using a curve, that's what HFSC is all about after all. By giving both classes a curve of [256kbit/s 20ms 128kbit/s] both have twice the priority than A2 and B2 automatically (still only getting 128 kbit/s on average) Does the real-time bandwidth count towards the link-share bandwidth? E.g. if A1 and B1 both only have 64kbit/s real-time and 64kbit/s link-share bandwidth, does that mean once they are served 64kbit/s via real-time, their link-share requirement is satisfied as well (they might get excess bandwidth, but lets ignore that for a second) or does that mean they get another 64 kbit/s via link-share? So does each class has a bandwidth "requirement" of real-time plus link-share? Or does a class only have a higher requirement than the real-time curve if the link-share curve is higher than the real-time curve (current link-share requirement equals specified link-share requirement minus real-time bandwidth already provided to this class)? 
Is upper limit curve applied to real-time as well, only to link-share, or maybe to both? Some tutorials say one way, some say the other way. Some even claim upper-limit is the maximum for real-time bandwidth + link-share bandwidth? What is the truth? Assuming A2 and B2 are both 128 kbit/s, does it make any difference if A1 and B1 are 128 kbit/s link-share only, or 64 kbit/s real-time and 128 kbit/s link-share, and if so, what difference? If I use the seperate real-time curve to increase priorities of classes, why would I need "curves" at all? Why is not real-time a flat value and link-share also a flat value? Why are both curves? The need for curves is clear in the original paper, because there is only one attribute of that kind per class. But now, having three attributes (real-time, link-share, and upper-limit) what for do I still need curves on each one? Why would I want the curves shape (not average bandwidth, but their slopes) to be different for real-time and link-share traffic? According to the little documentation available, real-time curve values are totally ignored for inner classes (class A and B), they are only applied to leaf classes (A1, A2, B1, B2). If that is true, why does the ALTQ HFSC sample configuration (search for 3.3 Sample configuration) set real-time curves on inner classes and claims that those set the guaranteed rate of those inner classes? Isn't that completely pointless? (note: pshare sets the link-share curve in ALTQ and grate the real-time curve; you can see this in the paragraph above the sample configuration). Some tutorials say the sum of all real-time curves may not be higher than 80% of the line speed, others say it must not be higher than 70% of the line speed. Which one is right or are they maybe both wrong? One tutorial said you shall forget all the theory. No matter how things really work (schedulers and bandwidth distribution), imagine the three curves according to the following "simplified mind model": real-time is the guaranteed bandwidth that this class will always get. link-share is the bandwidth that this class wants to become fully satisfied, but satisfaction cannot be guaranteed. In case there is excess bandwidth, the class might even get offered more bandwidth than necessary to become satisfied, but it may never use more than upper-limit says. For all this to work, the sum of all real-time bandwidths may not be above xx% of the line speed (see question above, the percentage varies). Question: Is this more or less accurate or a total misunderstanding of HSFC? And if assumption above is really accurate, where is prioritization in that model? E.g. every class might have a real-time bandwidth (guaranteed), a link-share bandwidth (not guaranteed) and an maybe an upper-limit, but still some classes have higher priority needs than other classes. In that case I must still prioritize somehow, even among real-time traffic of those classes. Would I prioritize by the slope of the curves? And if so, which curve? The real-time curve? The link-share curve? The upper-limit curve? All of them? Would I give all of them the same slope or each a different one and how to find out the right slope? I still haven't lost hope that there exists at least a hand full of people in this world that really understood HFSC and are able to answer all these questions accurately. And doing so without contradicting each other in the answers would be really nice ;-)
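
    For reference, a sketch of how the sample hierarchy described above (512 kbit/s at the root, A = B = 256 kbit/s, the four leaves at 128 kbit/s, with the [256kbit/s 20ms 128kbit/s] two-piece curve on A1 and B1) might be written with Linux tc. This only illustrates the syntax under discussion; it does not answer any of the semantics questions, and the device name and class numbering are arbitrary.

        DEV=eth0

        tc qdisc add dev $DEV root handle 1: hfsc default 12
        tc class add dev $DEV parent 1:  classid 1:1  hfsc ls m2 512kbit ul m2 512kbit

        # Inner classes A and B
        tc class add dev $DEV parent 1:1 classid 1:10 hfsc ls m2 256kbit
        tc class add dev $DEV parent 1:1 classid 1:20 hfsc ls m2 256kbit

        # Leaves: A1 and B1 get the two-piece real-time curve, A2 and B2 are link-share only
        tc class add dev $DEV parent 1:10 classid 1:11 hfsc rt m1 256kbit d 20ms m2 128kbit ls m2 128kbit
        tc class add dev $DEV parent 1:10 classid 1:12 hfsc ls m2 128kbit
        tc class add dev $DEV parent 1:20 classid 1:21 hfsc rt m1 256kbit d 20ms m2 128kbit ls m2 128kbit
        tc class add dev $DEV parent 1:20 classid 1:22 hfsc ls m2 128kbit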

  • Sysadmin 101: How can I figure out why my server crashes and monitor performance?

    - by bflora
    I have a Drupal-powered site that seems to have never-ending performance problems. It was butt-slow about 5 months ago. I brought in some guys who installed nginx for anonymous visitors, ajaxified a few queries so they wouldn't fire during page load, and helped me find a few bottlenecks in the code. For about a month, the site was significantly faster, though not "fast" by any stretch of the word. Meanwhile, I'm now shelling out $400/month to Slicehost to host a site that gets less than 5,000 uniques a day. Yes, you read that right. Go Drupal. Recently the site started crashing again and is slow again. I can't afford to hire people to come in, study my code from top to bottom, and make changes that may or may not help anymore. And I can't afford to throw more hardware at the problem. So I need to figure out what the problem is myself. Questions: When Apache crashes, is it possible to find out what caused it to crash? There has to be a way, right? If so, how can I do this? Is there software I can use that will tell me which process caused my server to die? (e.g. "Apache crashed because someone visited page X." or "Apache crashed because you were importing too many RSS items from feed X.") There's got to be a way to learn this, right? What's a good, noob-friendly way to monitor my current Apache performance? My developer friends tell me to "just use Top, dude," but Top shows me a bunch of numbers without any context. I have no clue what qualifies as a bad number or a good number in Top, or which processes are relevant and which aren't. Are there any noob-friendly server monitoring tools out there? Ideally, I could have a page that would give me a color-coded indicator of how Apache is performing and then show me a list of processes or pages that are sucking right now. This way, I could know when performance is bad and then what's causing it to be so bad. Why does PHP memory matter? My PHP apparently has a 30MB memory footprint. Will it run faster if I bring that number down? Thanks for any advice. I spent a year or so trying to boost my advertising income so I could hire a contractor to solve my performance woes. I didn't want to have to learn all this sysadmin voodoo. I'm now resigned to the fact that I might not have a choice.
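
    A few first checks that usually answer the "why did Apache die" part, as plain shell commands (log paths assume a Debian/Ubuntu-style layout; adjust for the distribution in use):

        # 1. Apache rarely dies silently -- the error log usually names the reason
        #    (segfault, MaxClients exhausted, out of memory, and so on).
        sudo tail -n 200 /var/log/apache2/error.log

        # 2. If the kernel's OOM killer shot it, the kernel log says so.
        dmesg | grep -iE 'oom|killed process'

        # 3. A quick, readable snapshot of memory and the heaviest processes.
        free -m
        ps aux --sort=-%mem | head -n 15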

  • Excluding child processes from ps

    - by stefpet
    Background: To reload app configuration I need to kill -HUP the parent processes' PIDs. To find PIDs I currently use ps auxf | grep gunicorn with the following example output: $ ps auxf | grep gunicorn stpe 4222 0.0 0.2 64524 11668 pts/2 S+ 11:01 0:00 | \_ /usr/bin/python /usr/local/bin/gunicorn app_api:app -c app_api.ini.py stpe 4225 0.0 0.4 76920 16332 pts/2 S+ 11:01 0:00 | \_ /usr/bin/python /usr/local/bin/gunicorn app_api:app -c app_api.ini.py stpe 4226 0.0 0.4 76932 16340 pts/2 S+ 11:01 0:00 | \_ /usr/bin/python /usr/local/bin/gunicorn app_api:app -c app_api.ini.py stpe 4227 0.0 0.4 76940 16344 pts/2 S+ 11:01 0:00 | \_ /usr/bin/python /usr/local/bin/gunicorn app_api:app -c app_api.ini.py stpe 4228 0.0 0.4 76948 16344 pts/2 S+ 11:01 0:00 | \_ /usr/bin/python /usr/local/bin/gunicorn app_api:app -c app_api.ini.py stpe 4229 0.0 0.4 76960 16356 pts/2 S+ 11:01 0:00 | \_ /usr/bin/python /usr/local/bin/gunicorn app_api:app -c app_api.ini.py stpe 4230 0.0 0.4 76972 16368 pts/2 S+ 11:01 0:00 | \_ /usr/bin/python /usr/local/bin/gunicorn app_api:app -c app_api.ini.py stpe 4231 0.0 0.4 78856 18644 pts/2 S+ 11:01 0:00 | \_ /usr/bin/python /usr/local/bin/gunicorn app_api:app -c app_api.ini.py stpe 4232 0.0 0.4 76992 16376 pts/2 S+ 11:01 0:00 | \_ /usr/bin/python /usr/local/bin/gunicorn app_api:app -c app_api.ini.py stpe 5685 0.0 0.0 22076 908 pts/1 S+ 11:50 0:00 | \_ grep --color=auto gunicorn stpe 5012 0.0 0.2 64512 11656 pts/3 S+ 11:22 0:00 \_ /usr/bin/python /usr/local/bin/gunicorn app_game_api:app -c app_game_api.ini.py stpe 5021 0.0 0.4 77656 17156 pts/3 S+ 11:22 0:00 \_ /usr/bin/python /usr/local/bin/gunicorn app_game_api:app -c app_game_api.ini.py stpe 5022 0.0 0.4 77664 17156 pts/3 S+ 11:22 0:00 \_ /usr/bin/python /usr/local/bin/gunicorn app_game_api:app -c app_game_api.ini.py stpe 5023 0.0 0.4 77672 17164 pts/3 S+ 11:22 0:00 \_ /usr/bin/python /usr/local/bin/gunicorn app_game_api:app -c app_game_api.ini.py stpe 5024 0.0 0.4 77684 17196 pts/3 S+ 11:22 0:00 \_ /usr/bin/python /usr/local/bin/gunicorn app_game_api:app -c app_game_api.ini.py stpe 5025 0.0 0.4 77692 17200 pts/3 S+ 11:22 0:00 \_ /usr/bin/python /usr/local/bin/gunicorn app_game_api:app -c app_game_api.ini.py stpe 5026 0.0 0.4 77700 17208 pts/3 S+ 11:22 0:00 \_ /usr/bin/python /usr/local/bin/gunicorn app_game_api:app -c app_game_api.ini.py stpe 5027 0.0 0.4 77712 17220 pts/3 S+ 11:22 0:00 \_ /usr/bin/python /usr/local/bin/gunicorn app_game_api:app -c app_game_api.ini.py stpe 5028 0.0 0.4 77720 17220 pts/3 S+ 11:22 0:00 \_ /usr/bin/python /usr/local/bin/gunicorn app_game_api:app -c app_game_api.ini.py Based on the above I see that it is 4222 and 5012 I need to HUP. Question: How can I exclude the child processes and only get the parent process (please note however that the processes I want do also have a parent (e.g. bash) that I'm uninterested with)? Using a regexp with grep on how much indentation there is in the ascii tree feels dirty. Is there a better way? Example: The desired output would be something like this. stpe 4222 0.0 0.2 64524 11668 pts/2 S+ 11:01 0:00 | \_ /usr/bin/python /usr/local/bin/gunicorn app_api:app -c app_api.ini.py stpe 5012 0.0 0.2 64512 11656 pts/3 S+ 11:22 0:00 \_ /usr/bin/python /usr/local/bin/gunicorn app_game_api:app -c app_game_api.ini.py This would be easily parseable to be able to automatically find the PIDs in a script that does the HUPing which is the goal.
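
    One possible approach that avoids matching on the indentation of the ASCII tree: list pid, ppid and command for the gunicorn processes, then keep only those whose parent is not itself a gunicorn process (assumption: the workers' PPID is always the master's PID). A sketch, assuming GNU ps and awk:

        ps axo pid=,ppid=,command= \
          | awk '/gunicorn/ && !/awk/ { ppid[$1] = $2; line[$1] = $0 }
                 END { for (p in line) if (!(ppid[p] in line)) print line[p] }'

        # Alternatively, when the app name is known, the oldest matching process
        # is normally the master:
        kill -HUP "$(pgrep -o -f 'gunicorn app_api:app')"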

  • nginx probably delivering wrong filetype for .css file with php tags

    - by Katai
    And again - NGINX is giving me many questions today :) As always, I already tried around for a while, but can't seem to fix this issue: I just configured NGINX to handle my .css files the same as my .php files (to parse PHP tags inside the CSS file). This works perfectly, and the file is found and delivered. I could debug it with Firebug, and everything is OK (it displays the contents of the .css inside the opened <link> tag). So, everything working, right? Wrong. It has the CSS, but it does not interpret it! What I mean by this: apparently, the file type of the CSS (or application type, whatever) is wrong. The page can access the CSS, but doesn't bother at all to actually use it. What I checked / tried: There are no PHP errors inside of the .css, so that one is out. The .css is accessible: I can call the URI manually, or check if the included URL finds it; both work. The .css has no syntax errors (I switched to a CSS file that just has body {background-color: #000; }). It works without NGINX. I deleted the browser cache / restarted NGINX after config rewrites. Here is the configuration: server { listen 80; server_name localhost; access_log /var/log/nginx/board.access_log; error_log /var/log/nginx/board.error_log warn; root /var/www/board/public; index index.php; fastcgi_index index.php; location / { try_files $uri $uri /index.php; } location ~ (\.php|\.css)$ { try_files $uri =404; include /etc/nginx/fastcgi_params; #keepalive_timeout 0; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; fastcgi_pass 127.0.0.1:7777; } } Firebug 'Network' response headers: Connection keep-alive Content-Encoding gzip Content-Type text/html Date Sat, 16 Jun 2012 10:08:40 GMT Server nginx/1.0.5 Transfer-Encoding chunked X-Powered-By PHP/5.3.6-13ubuntu3.7 I think I just answered my own question. Is the Content-Type text/html the problem? How can I remove that? My personal guess is that I have to use this in some way: include /etc/nginx/mime.types; default_type application/octet-stream; But I'm not sure... anyone have an idea how to solve this? TL;DR: The CSS file is delivered correctly, but it doesn't seem to be 'used' as CSS by the browser. (Tested, works on Apache.)
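
    The guess at the end is close, but probably not the whole story: because the stylesheet is routed through the PHP/FastCGI location, the Content-Type comes from PHP (which defaults to text/html), not from nginx's mime.types, which only applies to files nginx serves directly. One common workaround is to have the PHP-parsed stylesheet send header('Content-Type: text/css'); before any output. Whatever fix is used, it is easy to verify from a shell (URL is a placeholder):

        # Show the Content-Type the server actually sends for the stylesheet:
        curl -s -o /dev/null -D - http://localhost/css/style.css | grep -i '^content-type'

        # Expected once fixed:
        #   Content-Type: text/css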

  • How can I get the Terminal raster font to display alt codes in a text editor?

    - by grg-n-sox
    I am working on a project that includes making some ASCII art, except it isn't true ASCII art since I am using a far amount of Windows Alt codes to make it. Anyways, I wanted to make sure that as I am working on it, that it looks exactly how it will in a windows command prompt terminal session. So since command prompt defaults to the Terminal raster font, I figured I would use that. But I quickly noticed that when I use the Terminal typeface in a text editor, it will not render ASCII codes, either at all (as is the case most of the time) or incorrectly. Now, I understand if a font just doesn't support non-ASCII characters, but what I don't get is how the characters do show up correctly in command prompt when they don't in a text editor. I checked the output of the 'chcp' and it was set to 437 by default, which is what I need. Well, either that or 850 but preferably 437 since they got rid of some of the graphics in 437 and replaced them with other Latin characters. Command prompt terminal settings show I am using the Terminal raster font with a 8x12 glyph size. So I try using size 12 in the text editor but no good, even after switching the text encoding to either MS-DOS OEM-US (supposedly an alternative name for CP437) or UTF-8. I just don't get how I am not getting the characters to show up. Also, if it helps, the art I am making is basically modified screen shots from a game I play called Dwarf Fortress that uses characters from the Terminal/Curses typeset, or at least that is how it is reported in the forums by those who make graphics sets to replace the default character set. However, the game doesn't actually use the system's Terminal font. The game's data files includes a bitmap image that is a grid of all the characters the game uses. So it uses this bitmap to render graphics instead of the actual font file. And I basically want to get a text editor to make it so if I type up some ASCII art to look like a screenshot from Dwarf Fortress, that it will actually look like Dwarf Fortress other than the lack of color. Any help?

  • Missing startup screens and slow bootup/login after using WinClone to expand Bootcamp partition

    - by user26453
    I used WinClone to backup my Bootcamp partition, which was a Windows 7 Ultimate install, on my late 2006 Macbook Pro. I desired to expand the Bootcamp partition's size. It worked reasonably well with some hiccups along the way and some remaining issues. First issue I ran into was the Bootcamp Assistant utility - it would not recreate the partition. This was due to a lack of contiguous space that is required for the Bootcamp partition. As a result I wiped the whole drive and reinstalled Snow Leopard, did the minimum amount of system updates, and created and formated a new Bootcamp partition. WinClone restored the image without complaint and the image was automatically resized to the new partition's size. Second issue I ran into was after the first boot into Windows. The first thing I noticed was that instead of the newer "slick" startup screen (4 colors wisping around, a Windows 7 title), there was more of an old school style startup screen (a progress bar with block increments, yellow/greenish color, nothing else really). The initial bootup to a login screen was slow, perhaps as Windows dealt with the partition changes. After logging in, the screen goes blank and the computer seems to hang for a minute, before completing the login. After subsequent restarts, the slick screen is still missing, boot to login screen is normal, but the time from login to desktop active is still very slow. As a side note, this behavior of a long time from login to the desktop finally loaded I've previously only seen when the computer would try to hibernate and fail (battery is really bad). On the next startup, I would see this behavior, but not subsequently. So a potential cause: I imaged the partition after hibernating out of Windows. From reading some posts/guides on the subject, this was not recommended, and perhaps shouldn't even have worked? Could the partition be stuck in some weird mode as a result that makes the boot issues appear? I've attempted to disable hibernation and restart, trying to delete the .sys file that hibernation uses. Other fixes I'm thinking of attempting are booting a Win7 disc and repairing the install/partition. I can't shake the nagging feeling something isn't right as a result of the modified boot screens and the slow login process.

  • Script to check a shared Exchange calendar and then email detail

    - by SJN
    We're running Server and Exchange 2003 here. There's a shared calendar which HR keep up-to-date detailing staff who are on leave. I'm looking for a VB Script (or alternate) which will extract the "appointment" titles of each item for the current day and then email the detail to a mail group, in doing so notifying the group with regard to which staff are on leave for the day. The resulting email body should be: Staff on leave today: Mike Davis James Stead @Paul Robichaux - ADO is the way I went for this in the end, here are the key component for those interested: Dim Rs, Conn, Url, Username, Password, Recipient Set Rs = CreateObject("ADODB.Recordset") Set Conn = CreateObject("ADODB.Connection") 'Configurable variables Username = "Domain\username" ' AD domain\username Password = "password" ' AD password Url = "file://./backofficestorage/domain.com/MBX/username/Calendar" 'path to user's mailbox and folder Recipient = "[email protected]" Conn.Provider = "ExOLEDB.DataSource" Conn.Open Url, Username, Password Set Rs.ActiveConnection = Conn Rs.Source = "SELECT ""DAV:href"", " & _ " ""urn:schemas:httpmail:subject"", " & _ " ""urn:schemas:calendar:dtstart"", " & _ " ""urn:schemas:calendar:dtend"" " & _ "FROM scope('shallow traversal of """"') " Rs.Open Rs.MoveFirst strOutput = "" Do Until Rs.EOF If DateDiff("s", Rs.Fields("urn:schemas:calendar:dtstart"), date) >= 0 And DateDiff("s", Rs.Fields("urn:schemas:calendar:dtend"), date) < 0 Then strOutput = strOutput & "<p><font size='2' color='black' face='verdana'><b>" & Rs.Fields("urn:schemas:httpmail:subject") & "</b><br />" & vbCrLf strOutput = strOutput & "<b>From: </b>" & Rs.Fields("urn:schemas:calendar:dtstart") & vbCrLf strOutput = strOutput & "&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<b>To: </b>" & Rs.Fields("urn:schemas:calendar:dtend") & "<br /><br />" & vbCrLf End If Rs.MoveNext Loop Conn.Close Set Conn = Nothing Set Rec = Nothing After that, you can do what you like with srtOutput, I happened to use CDO to send an email: Set objMessage = CreateObject("CDO.Message") objMessage.Subject = "Subject" objMessage.From = "[email protected]" objMessage.To = Recipient objMessage.HTMLBody = strOutput objMessage.Send S

  • Nagios plugin script not working as expected

    - by Linker3000
    I have modified an off-the-shelf Nagios plugin perl script to (in theory) return a one or zero according to the existence, or not, of a file on a remote linux server. The script runs a remote ssh session and logs in as the nagios user. The remote linux servers have private keys setup for that user, and on the bash command line the script works as expected, but when run as a plugin it always returns '1' (true) even if the file does not exist. Some help with the logic or a comment on why things are not working as expected within Nagios would be appreciated. I'd prefer to use this ssh login method rather than having to install nrpe on all the linux servers. To run from a command line (assuming remote server has a user called nagios with a valid private key): ./check_reboot_required -e ssh -H remote-servers-ip-addr -p 'filename-to-check' -v Ta. #! /usr/bin/perl -w # # # License Information: # This program is free software; you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation; either version 2 of the License, or # (at your option) any later version. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. # ############################################################################ use POSIX; use strict; use Getopt::Long; use lib "/usr/lib/nagios/plugins" ; use vars qw($host $opt_V $opt_h $opt_v $verbose $PROGNAME $pattern $opt_p $mmin $opt_e $opt_t $opt_H $status $state $msg $msg_q $MAILQ $SHELL $device $used $avail $percent $fs $blocks $CMD $RMTOS); use utils qw(%ERRORS &print_revision &support &usage ); sub print_help (); sub print_usage (); sub process_arguments (); $ENV{'PATH'}=''; $ENV{'BASH_ENV'}=''; $ENV{'ENV'}=''; $PROGNAME = "check_reboot_required"; Getopt::Long::Configure('bundling'); $status = process_arguments(); if ($status){ print "ERROR: processing arguments\n"; exit $ERRORS{'UNKNOWN'}; } $SIG{'ALRM'} = sub { print ("ERROR: timed out waiting for $CMD on $host\n"); exit $ERRORS{'WARNING'}; }; $host = $opt_H; $pattern = $opt_p; print "Pattern >" . $pattern . "< " if $verbose; alarm($opt_t); #$CMD = "/usr/bin/find " . $pattern . " -type f 2>/dev/null| /usr/bin/wc -l"; $CMD = "[ -f " . $pattern . " ] && echo 1 || echo 0"; alarm($opt_t); ## get cmd output from remote system if (! open (OUTPUT, "$SHELL $host $CMD|" ) ) { print "ERROR: could not open $CMD on $host\n"; exit $ERRORS{'UNKNOWN'}; } my $perfdata = ""; my $state = "3"; my $msg = "Indeterminate result"; # only first line is relevant in this iteration. while (<OUTPUT>) { my $result = chomp($_); $msg = $result; print "Shell returned >" . $result . "< length is " . length($result) . " " if $verbose; if ( $result == 1 ) { $msg = "Reboot required (NB: Result still not accurate)" . $result ; $state = $ERRORS{'WARNING'}; last; } elsif ( $result == 0 ) { $msg = "No reboot required (NB: Result still not accurate) " . 
$result ; $state = $ERRORS{'OK'}; last; } else { $msg = "Output received, but it was neither a 1 nor a 0" ; last; } } close (OUTPUT); print "$msg | $perfdata\n"; exit $state; ##################################### #### subs sub process_arguments(){ GetOptions ("V" => \$opt_V, "version" => \$opt_V, "v" => \$opt_v, "verbose" => \$opt_v, "h" => \$opt_h, "help" => \$opt_h, "e=s" => \$opt_e, "shell=s" => \$opt_e, "p=s" => \$opt_p, "pattern=s" => \$opt_p, "t=i" => \$opt_t, "timeout=i" => \$opt_t, "H=s" => \$opt_H, "hostname=s" => \$opt_H ); if ($opt_V) { print_revision($PROGNAME,'$Revision: 1.0 $ '); exit $ERRORS{'OK'}; } if ($opt_h) { print_help(); exit $ERRORS{'OK'}; } if (defined $opt_v ){ $verbose = $opt_v; } if (defined $opt_e ){ if ( $opt_e eq "ssh" ) { if (-x "/usr/local/bin/ssh") { $SHELL = "/usr/local/bin/ssh"; } elsif ( -x "/usr/bin/ssh" ) { $SHELL = "/usr/bin/ssh"; } else { print_usage(); exit $ERRORS{'UNKNOWN'}; } } elsif ( $opt_e eq "rsh" ) { $SHELL = "/usr/bin/rsh"; } else { print_usage(); exit $ERRORS{'UNKNOWN'}; } } else { print_usage(); exit $ERRORS{'UNKNOWN'}; } unless (defined $opt_t) { $opt_t = $utils::TIMEOUT ; # default timeout } unless (defined $opt_H) { print_usage(); exit $ERRORS{'UNKNOWN'}; } return $ERRORS{'OK'}; } sub print_usage () { print "Usage: $PROGNAME -e <shell> -H <hostname> -p <directory/file pattern> [-t <timeout>] [-v verbose]\n"; } sub print_help () { print_revision($PROGNAME,'$Revision: 0.1 $'); print "\n"; print_usage(); print "\n"; print " Checks for the presence of a 'reboot-required' file on a remote host via SSH or RSH\n"; print "-e (--shell) = ssh or rsh (required)\n"; print "-H (--hostname) = remote server name (required)"; print "-p (--pattern) = File pattern for find command (default = /var/run/reboot-required)\n"; print "-t (--timeout) = Plugin timeout in seconds (default = $utils::TIMEOUT)\n"; print "-h (--help)\n"; print "-V (--version)\n"; print "-v (--verbose) = debugging output\n"; print "\n\n"; support(); }
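
    Two things may be worth checking here. First, in Perl, chomp() returns the number of characters it removed rather than the trimmed string, so my $result = chomp($_) leaves $result equal to 1 whenever a trailing newline was stripped; that alone could explain why the plugin always reports "reboot required" regardless of what the remote command printed (testing $_ after the chomp would behave differently). Second, it helps to run the exact remote command as the nagios user outside of Nagios, to separate SSH/key problems from plugin logic (host and path below are placeholders):

        # Run the same check the plugin builds, as the nagios user, and look at
        # both the output and the ssh exit status:
        sudo -u nagios ssh remote-servers-ip-addr \
            '[ -f /var/run/reboot-required ] && echo 1 || echo 0'
        echo "ssh exit status: $?"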

  • Linux server is only using 60% of memory, then swapping

    - by Kamil Kisiel
    I've got a Linux server that's running our bacula backup system. The machine is grinding like mad because it's going heavy in to swap. The problem is, it's only using 60% of its physical memory! Here's the output from free -m: free -m total used free shared buffers cached Mem: 3949 2356 1593 0 0 1 -/+ buffers/cache: 2354 1595 Swap: 7629 1804 5824 and some sample output from vmstat 1: procs -----------memory---------- ---swap-- -----io---- -system-- -----cpu------ r b swpd free buff cache si so bi bo in cs us sy id wa st 0 2 1843536 1634512 0 4188 54 13 2524 666 2 1 1 1 89 9 0 1 11 1845916 1640724 0 388 2700 4816 221880 4879 14409 170721 4 3 63 30 0 0 9 1846096 1643952 0 0 4956 756 174832 804 12357 159306 3 4 63 30 0 0 11 1846104 1643532 0 0 4916 540 174320 580 10609 139960 3 4 64 29 0 0 4 1846084 1640272 0 2336 4080 524 140408 548 9331 118287 3 4 63 30 0 0 8 1846104 1642096 0 1488 2940 432 102516 457 7023 82230 2 4 65 29 0 0 5 1846104 1642268 0 1276 3704 452 126520 452 9494 119612 3 5 65 27 0 3 12 1846104 1641528 0 328 6092 608 187776 636 8269 113059 4 3 64 29 0 2 2 1846084 1640960 0 724 5948 0 111480 0 7751 116370 4 4 63 29 0 0 4 1846100 1641484 0 404 4144 1476 125760 1500 10668 105358 2 3 71 25 0 0 13 1846104 1641932 0 0 5872 828 153808 840 10518 128447 3 4 70 22 0 0 8 1846096 1639172 0 3164 3556 556 74884 580 5082 65362 2 2 73 23 0 1 4 1846080 1638676 0 396 4512 28 50928 44 2672 38277 2 2 80 16 0 0 3 1846080 1628808 0 7132 2636 0 28004 8 1358 14090 0 1 78 20 0 0 2 1844728 1618552 0 11140 7680 0 12740 8 763 2245 0 0 82 18 0 0 2 1837764 1532056 0 101504 2952 0 95644 24 802 3817 0 1 87 12 0 0 11 1842092 1633324 0 4416 1748 10900 143144 11024 6279 134442 3 3 70 24 0 2 6 1846104 1642756 0 0 4768 468 78752 468 4672 60141 2 2 76 20 0 1 12 1846104 1640792 0 236 4752 440 140712 464 7614 99593 3 5 58 34 0 0 3 1846084 1630368 0 6316 5104 0 20336 0 1703 22424 1 1 72 26 0 2 17 1846104 1638332 0 3168 4080 1720 211960 1744 11977 155886 3 4 65 28 0 1 10 1846104 1640800 0 132 4488 556 126016 584 8016 106368 3 4 63 29 0 0 14 1846104 1639740 0 2248 3436 428 114188 452 7030 92418 3 3 59 35 0 1 6 1846096 1639504 0 1932 5500 436 141412 460 8261 112210 4 4 63 29 0 0 10 1846104 1640164 0 3052 4028 448 147684 472 7366 109554 4 4 61 30 0 0 10 1846100 1641040 0 2332 4952 632 147452 664 8767 118384 3 4 63 30 0 4 8 1846084 1641092 0 664 4948 276 152264 292 6448 98813 5 5 62 28 0 Furthermore, the output of top sorted by CPU time seems to support the theory that swap is what's bogging down the system: top - 09:05:32 up 37 days, 23:24, 1 user, load average: 9.75, 8.24, 7.12 Tasks: 173 total, 1 running, 172 sleeping, 0 stopped, 0 zombie Cpu(s): 1.6%us, 1.4%sy, 0.0%ni, 76.1%id, 20.6%wa, 0.1%hi, 0.2%si, 0.0%st Mem: 4044632k total, 2405628k used, 1639004k free, 0k buffers Swap: 7812492k total, 1851852k used, 5960640k free, 436k cached PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ TIME COMMAND 4174 root 17 0 63156 176 56 S 8 0.0 2138:52 35,38 bacula-fd 4185 root 17 0 63352 284 104 S 6 0.0 1709:25 28,29 bacula-sd 240 root 15 0 0 0 0 D 3 0.0 831:55.19 831:55 kswapd0 2852 root 10 -5 0 0 0 S 1 0.0 126:35.59 126:35 xfsbufd 2849 root 10 -5 0 0 0 S 0 0.0 119:50.94 119:50 xfsbufd 1364 root 10 -5 0 0 0 S 0 0.0 117:05.39 117:05 xfsbufd 21 root 10 -5 0 0 0 S 1 0.0 48:03.44 48:03 events/3 6940 postgres 16 0 43596 8 8 S 0 0.0 46:50.35 46:50 postmaster 1342 root 10 -5 0 0 0 S 0 0.0 23:14.34 23:14 xfsdatad/4 5415 root 17 0 1770m 108 48 S 0 0.0 15:03.74 15:03 bacula-dir 23 root 10 -5 0 0 0 S 0 0.0 13:09.71 13:09 events/5 5604 root 17 
0 1216m 500 200 S 0 0.0 12:38.20 12:38 java 5552 root 16 0 1194m 580 248 S 0 0.0 11:58.00 11:58 java Here's the same sorted by virtual memory image size: top - 09:08:32 up 37 days, 23:27, 1 user, load average: 8.43, 8.26, 7.32 Tasks: 173 total, 1 running, 172 sleeping, 0 stopped, 0 zombie Cpu(s): 3.6%us, 3.4%sy, 0.0%ni, 62.2%id, 30.2%wa, 0.2%hi, 0.3%si, 0.0%st Mem: 4044632k total, 2404212k used, 1640420k free, 0k buffers Swap: 7812492k total, 1852548k used, 5959944k free, 100k cached PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ TIME COMMAND 5415 root 17 0 1770m 56 44 S 0 0.0 15:03.78 15:03 bacula-dir 5604 root 17 0 1216m 492 200 S 0 0.0 12:38.30 12:38 java 5552 root 16 0 1194m 476 200 S 0 0.0 11:58.20 11:58 java 4598 root 16 0 117m 44 44 S 0 0.0 0:13.37 0:13 eventmond 9614 gdm 16 0 93188 0 0 S 0 0.0 0:00.30 0:00 gdmgreeter 5527 root 17 0 78716 0 0 S 0 0.0 0:00.30 0:00 gdm 4185 root 17 0 63352 284 104 S 20 0.0 1709:52 28,29 bacula-sd 4174 root 17 0 63156 208 88 S 24 0.0 2139:25 35,39 bacula-fd 10849 postgres 18 0 54740 216 108 D 0 0.0 0:31.40 0:31 postmaster 6661 postgres 17 0 49432 0 0 S 0 0.0 0:03.50 0:03 postmaster 5507 root 15 0 47980 0 0 S 0 0.0 0:00.00 0:00 gdm 6940 postgres 16 0 43596 16 16 S 0 0.0 46:51.39 46:51 postmaster 5304 postgres 16 0 40580 132 88 S 0 0.0 6:21.79 6:21 postmaster 5301 postgres 17 0 40448 24 24 S 0 0.0 0:32.17 0:32 postmaster 11280 root 16 0 40288 28 28 S 0 0.0 0:00.11 0:00 sshd 5534 root 17 0 37580 0 0 S 0 0.0 0:56.18 0:56 X 30870 root 30 15 31668 28 28 S 0 0.0 1:13.38 1:13 snmpd 5305 postgres 17 0 30628 16 16 S 0 0.0 0:11.60 0:11 postmaster 27403 postfix 17 0 30248 0 0 S 0 0.0 0:02.76 0:02 qmgr 10815 postfix 15 0 30208 16 16 S 0 0.0 0:00.02 0:00 pickup 5306 postgres 16 0 29760 20 20 S 0 0.0 0:52.89 0:52 postmaster 5302 postgres 17 0 29628 64 32 S 0 0.0 1:00.64 1:00 postmaster I've tried tuning the swappiness kernel parameter to both high and low values, but nothing appears to change the behavior here. I'm at a loss to figure out what's going on. How can I find out what's causing this? Update: The system is a fully 64-bit system, so there should be no question of memory limitations due to 32-bit issues. Update2: As I mentioned in the original question, I've already tried tuning swappiness to all sorts of values, including 0. The result is always the same, with approximately 1.6 GB of memory remaining unused. Update3: Added top output to the above info.
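
    For reference, the swappiness checks mentioned in the updates look like this from a shell; on a multi-socket (NUMA) machine it can also be worth comparing per-node free memory with the global free -m numbers, since one node can be exhausted while the machine as a whole still shows free RAM (numactl may need installing):

        cat /proc/sys/vm/swappiness            # current value
        sudo sysctl -w vm.swappiness=10        # temporary change, as tried above

        # NUMA checks: per-node memory and the zone reclaim setting
        numactl --hardware
        cat /proc/sys/vm/zone_reclaim_mode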

  • Android - HorizontalScrollView within ScrollView Touch Handling

    - by Joel
    Hi, I have a ScrollView that surrounds my entire layout so that the entire screen is scrollable. The first element I have in this ScrollView is a HorizontalScrollView block that has features that can be scrolled through horizontally. I've added an ontouchlistener to the horizontalscrollview to handle touch events and force the view to "snap" to the closest image on the ACTION_UP event. So the effect I'm going for is like the stock android homescreen where you can scroll from one to the other and it snaps to one screen when you lift your finger. This all works great except for one problem: I need to swipe left to right almost perfectly horizontally for an ACTION_UP to ever register. If I swipe vertically in the very least (which I think many people tend to do on their phones when swiping side to side), I will receive an ACTION_CANCEL instead of an ACTION_UP. My theory is that this is because the horizontalscrollview is within a scrollview, and the scrollview is hijacking the vertical touch to allow for vertical scrolling. How can I disable the touch events for the scrollview from just within my horizontal scrollview, but still allow for normal vertical scrolling elsewhere in the scrollview? Here's a sample of my code: public class HomeFeatureLayout extends HorizontalScrollView { private ArrayList<ListItem> items = null; private GestureDetector gestureDetector; View.OnTouchListener gestureListener; private static final int SWIPE_MIN_DISTANCE = 5; private static final int SWIPE_THRESHOLD_VELOCITY = 300; private int activeFeature = 0; public HomeFeatureLayout(Context context, ArrayList<ListItem> items){ super(context); setLayoutParams(new LayoutParams(LayoutParams.FILL_PARENT, LayoutParams.WRAP_CONTENT)); setFadingEdgeLength(0); this.setHorizontalScrollBarEnabled(false); this.setVerticalScrollBarEnabled(false); LinearLayout internalWrapper = new LinearLayout(context); internalWrapper.setLayoutParams(new LayoutParams(LayoutParams.FILL_PARENT, LayoutParams.FILL_PARENT)); internalWrapper.setOrientation(LinearLayout.HORIZONTAL); addView(internalWrapper); this.items = items; for(int i = 0; i< items.size();i++){ LinearLayout featureLayout = (LinearLayout) View.inflate(this.getContext(),R.layout.homefeature,null); TextView header = (TextView) featureLayout.findViewById(R.id.featureheader); ImageView image = (ImageView) featureLayout.findViewById(R.id.featureimage); TextView title = (TextView) featureLayout.findViewById(R.id.featuretitle); title.setTag(items.get(i).GetLinkURL()); TextView date = (TextView) featureLayout.findViewById(R.id.featuredate); header.setText("FEATURED"); Image cachedImage = new Image(this.getContext(), items.get(i).GetImageURL()); image.setImageDrawable(cachedImage.getImage()); title.setText(items.get(i).GetTitle()); date.setText(items.get(i).GetDate()); internalWrapper.addView(featureLayout); } gestureDetector = new GestureDetector(new MyGestureDetector()); setOnTouchListener(new View.OnTouchListener() { @Override public boolean onTouch(View v, MotionEvent event) { if (gestureDetector.onTouchEvent(event)) { return true; } else if(event.getAction() == MotionEvent.ACTION_UP || event.getAction() == MotionEvent.ACTION_CANCEL ){ int scrollX = getScrollX(); int featureWidth = getMeasuredWidth(); activeFeature = ((scrollX + (featureWidth/2))/featureWidth); int scrollTo = activeFeature*featureWidth; smoothScrollTo(scrollTo, 0); return true; } else{ return false; } } }); } class MyGestureDetector extends SimpleOnGestureListener { @Override public boolean onFling(MotionEvent e1, 
MotionEvent e2, float velocityX, float velocityY) { try { //right to left if(e1.getX() - e2.getX() > SWIPE_MIN_DISTANCE && Math.abs(velocityX) > SWIPE_THRESHOLD_VELOCITY) { activeFeature = (activeFeature < (items.size() - 1))? activeFeature + 1:items.size() -1; smoothScrollTo(activeFeature*getMeasuredWidth(), 0); return true; } //left to right else if (e2.getX() - e1.getX() > SWIPE_MIN_DISTANCE && Math.abs(velocityX) > SWIPE_THRESHOLD_VELOCITY) { activeFeature = (activeFeature > 0)? activeFeature - 1:0; smoothScrollTo(activeFeature*getMeasuredWidth(), 0); return true; } } catch (Exception e) { // nothing } return false; } } }

  • How to reduce iOS AVPlayer start delay

    - by Bernt Habermeier
    Note, for the below question: All assets are local on the device -- no network streaming is taking place. The videos contain audio tracks. I'm working on an iOS application that requires playing video files with minimum delay to start the video clip in question. Unfortunately we do not know what specific video clip is next until we actually need to start it up. Specifically: When one video clip is playing, we will know what the next set of (roughly) 10 video clips are, but we don't know which one exactly, until it comes time to 'immediately' play the next clip. What I've done to look at actual start delays is to call addBoundaryTimeObserverForTimes on the video player, with a time period of one millisecond to see when the video actually started to play, and I take the difference of that time stamp with the first place in the code that indicates which asset to start playing. From what I've seen thus-far, I have found that using the combination of AVAsset loading, and then creating an AVPlayerItem from that once it's ready, and then waiting for AVPlayerStatusReadyToPlay before I call play, tends to take between 1 and 3 seconds to start the clip. I've since switched to what I think is roughly equivalent: calling [AVPlayerItem playerItemWithURL:] and waiting for AVPlayerItemStatusReadyToPlay to play. Roughly same performance. One thing I'm observing is that the first AVPlayer item load is slower than the rest. Seems one idea is to pre-flight the AVPlayer with a short / empty asset before trying to play the first video might be of good general practice. [http://stackoverflow.com/questions/900461/slow-start-for-avaudioplayer-the-first-time-a-sound-is-played] I'd love to get the video start times down as much as possible, and have some ideas of things to experiment with, but would like some guidance from anyone that might be able to help. Update: idea 7, below, as-implemented yields switching times of around 500 ms. This is an improvement, but it it'd be nice to get this even faster. Idea 1: Use N AVPlayers (won't work) Using ~ 10 AVPPlayer objects and start-and-pause all ~ 10 clips, and once we know which one we really need, switch to, and un-pause the correct AVPlayer, and start all over again for the next cycle. I don't think this works, because I've read there is roughly a limit of 4 active AVPlayer's in iOS. There was someone asking about this on StackOverflow here, and found out about the 4 AVPlayer limit: fast-switching-between-videos-using-avfoundation Idea 2: Use AVQueuePlayer (won't work) I don't believe that shoving 10 AVPlayerItems into an AVQueuePlayer would pre-load them all for seamless start. AVQueuePlayer is a queue, and I think it really only makes the next video in the queue ready for immediate playback. I don't know which one out of ~10 videos we do want to play back, until it's time to start that one. ios-avplayer-video-preloading Idea 3: Load, Play, and retain AVPlayerItems in background (not 100% sure yet -- but not looking good) I'm looking at if there is any benefit to load and play the first second of each video clip in the background (suppress video and audio output), and keep a reference to each AVPlayerItem, and when we know which item needs to be played for real, swap that one in, and swap the background AVPlayer with the active one. Rinse and Repeat. The theory would be that recently played AVPlayer/AVPlayerItem's may still hold some prepared resources which would make subsequent playback faster. 
So far, I have not seen benefits from this, but I might not have the AVPlayerLayer setup correctly for the background. I doubt this will really improve things from what I've seen. Idea 4: Use a different file format -- maybe one that is faster to load? I'm currently using .m4v's (video-MPEG4) H.264 format. I have not played around with other formats, but it may well be that some formats are faster to decode / get ready than others. Possible still using video-MPEG4 but with a different codec, or maybe quicktime? Maybe a lossless video format where decoding / setup is faster? Idea 5: Combination of lossless video format + AVQueuePlayer If there is a video format that is fast to load, but maybe where the file size is insane, one idea might be to pre-prepare the first 10 seconds of each video clip with a version that is boated but faster to load, but back that up with an asset that is encoded in H.264. Use an AVQueuePlayer, and add the first 10 seconds in the uncompressed file format, and follow that up with one that is in H.264 which gets up to 10 seconds of prepare/preload time. So I'd get 'the best' of both worlds: fast start times, but also benefits from a more compact format. Idea 6: Use a non-standard AVPlayer / write my own / use someone else's Given my needs, maybe I can't use AVPlayer, but have to resort to AVAssetReader, and decode the first few seconds (possibly write raw file to disk), and when it comes to playback, make use of the raw format to play it back fast. Seems like a huge project to me, and if I go about it in a naive way, it's unclear / unlikely to even work better. Each decoded and uncompressed video frame is 2.25 MB. Naively speaking -- if we go with ~ 30 fps for the video, I'd end up with ~60 MB/s read-from-disk requirement, which is probably impossible / pushing it. Obviously we'd have to do some level of image compression (perhaps native openGL/es compression formats via PVRTC)... but that's kind crazy. Maybe there is a library out there that I can use? Idea 7: Combine everything into a single movie asset, and seekToTime One idea that might be easier than some of the above, is to combine everything into a single movie, and use seekToTime. The thing is that we'd be jumping all around the place. Essentially random access into the movie. I think this may actually work out okay: avplayer-movie-playing-lag-in-ios5 Which approach do you think would be best? So far, I've not made that much progress in terms of reducing the lag.

  • Getting started with Exchange Web Services 2010

    - by Adam Tuttle
    I've been tasked with writing a SOAP web-service in .Net to be middleware between EWS2010 and an application server that previously used WebDAV to connect to Exchange. (As I understand it, WebDAV is going away with EWS2010, so the application server will no longer be able to connect as it previously did, and it is exponentially harder to connect to EWS without WebDAV. The theory is that doing it in .Net should be easier than anything else... Right?!) My end goal is to be able to get and create/update email, calendar items, contacts, and to-do list items for a specified Exchange account. (Deleting is not currently necessary, but I may build it in for future consideration, if it's easy enough). I was originally given some sample code, which did in fact work, but I quickly realized that it was outdated. The types and classes used appear nowhere in the current documentation. For example, the method used to create a connection to the Exchange server was: ExchangeService svc = new ExchangeService(); svc.Credentials = new WebCredentials(AuthEmailAddress, AuthEmailPassword); svc.AutodiscoverUrl(AutoDiscoverEmailAddress); For what it's worth, this was using an assembly that came with the sample code: Microsoft.Exchange.WebServices.dll ("MEWS"). Before I realized that this wasn't the current standard way to accomplish the connection, and it worked, I tried to build on it and add a method to create calendar items, which I copied from here: static void CreateAppointment(ExchangeServiceBinding esb) { // Create the appointment. CalendarItemType appointment = new CalendarItemType(); ... } Right away, I'm confronted with the difference between ExchangeService and ExchangeServiceBinding ("ESB"); so I started Googling to try and figure out how to get an ESB definition so that the CreateAppointment method will compile. I found this blog post that explains how to generate a proxy class from a WSDL, which I did. Unfortunately, this caused some conflicts where types that were defined in the original Assembly, Microsoft.Exchange.WebServices.dll (that came with the sample code) overlapped with Types in my new EWS.dll assembly (which I compiled from the code generated from the services.wsdl provided by the Exchange server). I excluded the MEWS assembly, which only made things worse. I went from a handful of errors and warnings to 25 errors and 2,510 warnings. All kinds of types and methods were not found. Something is clearly wrong, here. So I went back on the hunt. I found instructions on adding service references and web references (i.e. the extra steps it takes in VS2008), and I think I'm back on the right track. I removed (actually, for now, just excluded) all previous assemblies I had been trying; and I added a service reference for https://my.exchange-server.com/ews/services.wsdl Now I'm down to just 1 error and 1 warning. Warning: The element 'transport' cannot contain child element 'extendedProtectionPolicy' because the parent element's content model is empty. This is in reference to a change that was made to web.config when I added the service reference; and I just found a fix for that here on SO. I've commented that section out as indicated, and it did make the warning go away, so woot for that. The error hasn't been so easy to get around, though: Error: The type or namespace name 'ExchangeService' could not be found (are you missing a using directive or an assembly reference?) 
This is in reference to the function I was using to create the EWS connection, called by each of the web methods: private ExchangeService getService(String AutoDiscoverEmailAddress, String AuthEmailAddress, String AuthEmailPassword) { ExchangeService svc = new ExchangeService(); svc.Credentials = new WebCredentials(AuthEmailAddress, AuthEmailPassword); svc.AutodiscoverUrl(AutoDiscoverEmailAddress); return svc; } This function worked perfectly with the MEWS assembly from the sample code, but the ExchangeService type is no longer available. (Nor is ExchangeServiceBinding, that was the first thing I checked.) At this point, since I'm not following any directions from the documentation (I couldn't find anywhere in the documentation that said to add a service reference to your Exchange server's services.wsdl -- but that does seem to be the best/farthest I've gotten so far), I feel like I'm flying blind. I know I need to figure out whatever it is that should replace ExchangeService / ExchangeServiceBinding, implement that, and then work through whatever errors crop up as a result of that switch... But I have no idea how to do that, or where to look for how to do it. Googling "ExchangeService" and "ExchangeServiceBinding" only seem to lead back to outdated blog posts and MSDN, neither of which has proven terribly helpful thus far. Help me, Obi-Wan, you're my only hope!

  • NHibernate: Collection was modified; enumeration operation may not execute

    - by Daoming Yang
    Hi all, I'm currently struggling with this "Collection was modified; enumeration operation may not execute" issue. I have searched for this error message, and it's all related to the foreach statement. I do have some foreach statements, but they simply display the data; I am not using any remove or add inside the foreach statements. NOTE: The error happens randomly (about 4-5 times a day). The application is an MVC website. About 5 users operate this application (about 150 orders a day). Could it be that another user modified the collection, and that then triggered this error? I have log4net set up and the settings can be found here. Regarding "Make sure that the controller has a parameterless public constructor": I do have a parameterless public constructor in AdminProductController. Does anyone know why this happens and how to resolve this issue? A friend (Oskar) mentioned: "Theory: Maybe the problem is that your configuration and session factory is initialized on the first request after application restart. If a second request comes in before the first request is finished, maybe it will also try to initialize and then triggering this problem somehow." Many thanks, Daoming. Here is the error message: System.InvalidOperationException Collection was modified; enumeration operation may not execute. System.InvalidOperationException: An error occurred when trying to create a controller of type 'WebController.Controllers.Admin.AdminProductController'. Make sure that the controller has a parameterless public constructor. --- System.Reflection.TargetInvocationException: Exception has been thrown by the target of an invocation. --- NHibernate.MappingException: Could not configure datastore from input stream DomainModel.Entities.Mappings.OrderProductVariant.hbm.xml --- System.InvalidOperationException: Collection was modified; enumeration operation may not execute.
at System.Collections.ArrayList.ArrayListEnumeratorSimple.MoveNext() at System.Xml.Schema.XmlSchemaSet.AddSchemaToSet(XmlSchema schema) at System.Xml.Schema.XmlSchemaSet.Add(String targetNamespace, XmlSchema schema) at System.Xml.Schema.XmlSchemaSet.Add(XmlSchema schema) at NHibernate.Cfg.Configuration.LoadMappingDocument(XmlReader hbmReader, String name) at NHibernate.Cfg.Configuration.AddInputStream(Stream xmlInputStream, String name) --- End of inner exception stack trace --- at NHibernate.Cfg.Configuration.LogAndThrow(Exception exception) at NHibernate.Cfg.Configuration.AddInputStream(Stream xmlInputStream, String name) at NHibernate.Cfg.Configuration.AddResource(String path, Assembly assembly) at NHibernate.Cfg.Configuration.AddAssembly(Assembly assembly) at DomainModel.RepositoryBase..ctor() at WebController.Controllers._baseController..ctor() at WebController.Controllers.Admin.AdminProductController..ctor() at System.RuntimeType.CreateInstanceImpl(Boolean publicOnly, Boolean skipVisibilityChecks, Boolean fillCache) --- End of inner exception stack trace --- at System.RuntimeType.CreateInstanceImpl(Boolean publicOnly, Boolean skipVisibilityChecks, Boolean fillCache) at System.Activator.CreateInstance(Type type, Boolean nonPublic) at System.Web.Mvc.DefaultControllerFactory.GetControllerInstance(RequestContext requestContext, Type controllerType) --- End of inner exception stack trace --- at System.Web.Mvc.DefaultControllerFactory.GetControllerInstance(RequestContext requestContext, Type controllerType) at System.Web.Mvc.DefaultControllerFactory.CreateController(RequestContext requestContext, String controllerName) at System.Web.Mvc.MvcHandler.ProcessRequestInit(HttpContextBase httpContext, IController& controller, IControllerFactory& factory) at System.Web.Mvc.MvcHandler.BeginProcessRequest(HttpContextBase httpContext, AsyncCallback callback, Object state) at System.Web.HttpApplication.CallHandlerExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute() at System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously) UPDATE CODE: In my Global.asax.cs, I'm doing this: protected void Application_BeginRequest(object sender, EventArgs e) { ManagedWebSessionContext.Bind(HttpContext.Current, SessionManager.SessionFactory.OpenSession()); } protected void Application_EndRequest(object sender, EventArgs e) { ISession session = ManagedWebSessionContext.Unbind(HttpContext.Current, SessionManager.SessionFactory); if (session != null) { try { if (session.Transaction != null && session.Transaction.IsActive) { session.Transaction.Rollback(); } else { session.Flush(); } } finally { session.Close(); } } } In the SessionManager class, I'm doing: public class SessionManager { private readonly ISessionFactory sessionFactory; public static ISessionFactory SessionFactory { get { return Instance.sessionFactory; } } private ISessionFactory GetSessionFactory() { return sessionFactory; } public static SessionManager Instance { get { return NestedSessionManager.sessionManager; } } public static ISession OpenSession() { return Instance.GetSessionFactory().OpenSession(); } public static ISession CurrentSession { get { return Instance.GetSessionFactory().GetCurrentSession(); } } private SessionManager() { Configuration config = new Configuration().Configure(); config.AddAssembly(Assembly.GetExecutingAssembly()); sessionFactory = config.BuildSessionFactory(); } class NestedSessionManager { internal static readonly SessionManager sessionManager = new SessionManager(); } } In 
the Repository, I'm doing this: public IEnumerable<User> GetAll() { ICriteria criteria = SessionManager.CurrentSession.CreateCriteria(typeof(User)); return criteria.List<User>(); } In the Controller, I'm doing this: public class UserController : _baseController { IUserRoleRepository _userRoleRepository; internal static readonly ILogger log = LogManager.GetLogger(typeof(UserController)); public UserController() { _userRoleRepository = new UserRoleRepository(); } public ActionResult UserList() { var myList = _usersRepository.GetAll(); return View(myList); } }
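    If Oskar's theory is right, the race is on the one-time Configuration/BuildSessionFactory work being started by two "first" requests at once, so one possible mitigation is to force that work to finish exactly once before any request can reach it (for instance at application startup rather than lazily on first use). A minimal sketch of the idea, written in Java purely to illustrate the pattern - the class and method names are made up and are not part of NHibernate:

        // Hypothetical sketch: do the expensive, non-thread-safe configuration work
        // exactly once, before concurrent requests can race on it.
        public final class SessionFactoryHolder {

            // The JVM guarantees class initialization runs exactly once, even if many
            // request threads touch this class at the same moment.
            private static final SessionFactoryHolder INSTANCE = new SessionFactoryHolder();

            private final Object factory; // stand-in for the real session factory type

            private SessionFactoryHolder() {
                this.factory = buildFactory(); // all one-time configuration happens here
            }

            private static Object buildFactory() {
                // placeholder for the Configure() + BuildSessionFactory() equivalent
                return new Object();
            }

            public static Object factory() {
                return INSTANCE.factory;
            }
        }

    The point is only that the initialization is complete before the first caller returns, so a second concurrent request can never observe, or repeat, a half-built configuration.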

    Read the article

  • UITableView: Juxtaposing row, header, and footer insertions/deletions

    - by jdandrea
    Consider a very simple UITableView with one of two states. First state: One (overall) table footer One section containing two rows, a section header, and a section footer Second state: No table footer One section containing four rows and no section header/footer In both cases, each row is essentially one of four possible UITableViewCell objects, each containing its own UITextField. We don't even bother with reuse or caching, since we're only dealing with four known cells in this case. They've been created in an accompanying XIB, so we already have them all wired up and ready to go. Now consider we want to toggle between the two states. Sounds easy enough. Let's suppose our view controller's right bar button item provides the toggling support. We'll also track the current state with an ivar and enumeration. To be explicit for a sec, here's how one might go from state 1 to 2. (Presume we handle the bar button item's title as well.) In short, we want to clear out our table's footer view, then insert the third and fourth rows. We batch this inside an update block like so: // Brute forced references to the third and fourth rows in section 0 NSUInteger row02[] = {0, 2}; NSUInteger row03[] = {0, 3}; [self.tableView beginUpdates]; state = tableStateTwo; // 'internal' iVar, not a property self.tableView.tableFooterView = nil; [self.tableView insertRowsAtIndexPaths:[NSArray arrayWithObjects: [NSIndexPath indexPathWithIndexes:row02 length:2], [NSIndexPath indexPathWithIndexes:row03 length:2], nil] withRowAnimation:UITableViewRowAnimationFade]; [self.tableView endUpdates]; For the reverse, we want to reassign the table footer view (which, like the cells, is in the XIB ready and waiting), and remove the last two rows: // Use row02 and row03 from earlier snippet [self.tableView beginUpdates]; state = tableStateOne; self.tableView.tableFooterView = theTableFooterView; [self.tableView deleteRowsAtIndexPaths:[NSArray arrayWithObjects: [NSIndexPath indexPathWithIndexes:row02 length:2], [NSIndexPath indexPathWithIndexes:row03 length:2], nil] withRowAnimation:UITableViewRowAnimationFade]; [self.tableView endUpdates]; Now, when the table asks for rows, it's very cut and dry. The first two cells are the same in both cases. Only the last two appear/disappear depending on the state. The state ivar is consulted when the Table View asks for things like number of rows in a section, height for header/footer in a section, or view for header/footer in a section. This last bit is also where I'm running into trouble. Using the above logic, section 0's header/footer does not disappear. Specifically, the footer stays below the inserted rows, but the header now overlays the topmost row. If we switch back to state one, the section footer is removed, but the section header remains. How about using [self.tableView reloadData] then? Sure, why not. We take care not to use it inside the update block, per Apple's advisement, and simply add it after endUpdates. This time, good news! The section 0 header/footer disappears. :) However ... Toggling back to state one results in a most exquisite mess! The section 0 header returns, only to overlay the first row once again (instead of appear above it). The section 0 footer is placed below the last row just fine, but the overall table footer - now reinstated - overlays the section footer. Waaaaaah … now what? Just to be sure, let's toggle back to state two again. Yep, that looks fine. Coming back to state one? Yecccch. 
I also tried sprinkling in a few other stunts like using reloadSections:withRowAnimation:, but that only serves to make things worse. NSRange range = {0, 1}; NSIndexSet *indexSet = [NSIndexSet indexSetWithIndexesInRange:range]; ... [self.tableView reloadSections:indexSet withRowAnimation:UITableViewRowAnimationFade]; Case in point: If we invoke reloadSections... just before the end of the update block, changing to state two hides the first two rows from view, even though the space they would otherwise occupy remains. Switching back to state one returns section 0's header/footer to normal, but those first two rows remain invisible. Case two: Moving reloadSections... to just after the update block but before reloadData results in all rows becoming invisible! (I refer to the row as being invisible because, during tracing, tableView:cellForRowAtIndexPath: is returning bona-fide cell objects for those rows.) Case three: Moving reloadSections... after tableView:cellForRowAtIndexPath: brings us a bit closer, but the section 0 header/footer never returns when switching back to state one. Hmm. Perhaps it's a faux pas using both reloadSections... and reloadData, based on what I'm seeing at trace-time, which brings us to: Case four: Replacing reloadData with reloadSections... outright. All cells in state two disappear. All cells in state one remain missing as well (though the space is kept). So much for that theory. :) Tracing through the code, the cell and view objects, as well as the section heights, are all where they should be at the opportune times. They just aren't rendering sanely. So, how to crack this case? Clues welcome/appreciated!

    Read the article

  • Thread scheduling Round Robin / scheduling dispatch

    - by MRP
    #include <pthread.h> #include <stdio.h> #include <stdlib.h> #include <semaphore.h> #define NUM_THREADS 4 #define COUNT_LIMIT 13 int done = 0; int count = 0; int quantum = 2; int thread_ids[4] = {0,1,2,3}; int thread_runtime[4] = {0,5,4,7}; pthread_mutex_t count_mutex; pthread_cond_t count_threshold_cv; void * inc_count(void * arg); static sem_t count_sem; int quit = 0; ///////// Inc_Count//////////////// void *inc_count(void *t) { long my_id = (long)t; int i; sem_wait(&count_sem); /////////////CRIT SECTION////////////////////////////////// printf("run_thread = %d\n",my_id); printf("%d \n",thread_runtime[my_id]); for( i=0; i < thread_runtime[my_id];i++) { printf("runtime= %d\n",thread_runtime[my_id]); pthread_mutex_lock(&count_mutex); count++; if (count == COUNT_LIMIT) { pthread_cond_signal(&count_threshold_cv); printf("inc_count(): thread %ld, count = %d Threshold reached.\n", my_id, count); } printf("inc_count(): thread %ld, count = %d, unlocking mutex\n",my_id, count); pthread_mutex_unlock(&count_mutex); sleep(1) ; }//End For sem_post(&count_sem); // Next Thread Enters Crit Section pthread_exit(NULL); } /////////// Count_Watch //////////////// void *watch_count(void *t) { long my_id = (long)t; printf("Starting watch_count(): thread %ld\n", my_id); pthread_mutex_lock(&count_mutex); if (count<COUNT_LIMIT) { pthread_cond_wait(&count_threshold_cv, &count_mutex); printf("watch_count(): thread %ld Condition signal received.\n", my_id); printf("watch_count(): thread %ld count now = %d.\n", my_id, count); } pthread_mutex_unlock(&count_mutex); pthread_exit(NULL); } ////////////////// Main //////////////// int main (int argc, char *argv[]) { int i; long t1=0, t2=1, t3=2, t4=3; pthread_t threads[4]; pthread_attr_t attr; sem_init(&count_sem, 0, 1); /* Initialize mutex and condition variable objects */ pthread_mutex_init(&count_mutex, NULL); pthread_cond_init (&count_threshold_cv, NULL); /* For portability, explicitly create threads in a joinable state */ pthread_attr_init(&attr); pthread_attr_setdetachstate(&attr, PTHREAD_CREATE_JOINABLE); pthread_create(&threads[0], &attr, watch_count, (void *)t1); pthread_create(&threads[1], &attr, inc_count, (void *)t2); pthread_create(&threads[2], &attr, inc_count, (void *)t3); pthread_create(&threads[3], &attr, inc_count, (void *)t4); /* Wait for all threads to complete */ for (i=0; i<NUM_THREADS; i++) { pthread_join(threads[i], NULL); } printf ("Main(): Waited on %d threads. Done.\n", NUM_THREADS); /* Clean up and exit */ pthread_attr_destroy(&attr); pthread_mutex_destroy(&count_mutex); pthread_cond_destroy(&count_threshold_cv); pthread_exit(NULL); } I am trying to learn thread scheduling, there is a lot of technical coding that I don't know. I do know in theory how it should work, but having trouble getting started in code... I know, at least I think, this program is not real time and its not meant to be. Some how I need to create a scheduler dispatch to control the threads in the order they should run... RR FCFS SJF ect. Right now I don't have a dispatcher. What I do have is semaphores/ mutex to control the threads. This code does run FCFS... and I have been trying to use semaphores to create a RR.. but having a lot of trouble. I believe it would be easier to create a dispatcher but I dont know how. I need help, I am not looking for answers just direction.. some sample code will help to understand a bit more. Thank you.
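    For direction only: one way to picture a round-robin dispatcher is a shared "turn" variable guarded by a lock; each worker may only do one quantum of work while the turn is its own, then it hands the turn to the next thread that still has runtime left. A rough sketch of that shape in Java (the per-thread runtimes are taken from the question; mapping it back to pthreads means replacing synchronized/wait/notifyAll with a pthread_mutex_t plus pthread_cond_t and waiting on the predicate turn == my_id):

        public class RoundRobin {
            private final Object lock = new Object();
            private final int[] remaining = {5, 4, 7};  // per-thread runtimes from the question
            private int turn = 0;                       // id of the thread allowed to run
            private static final int QUANTUM = 2;

            public static void main(String[] args) throws InterruptedException {
                RoundRobin rr = new RoundRobin();
                Thread[] workers = new Thread[rr.remaining.length];
                for (int i = 0; i < workers.length; i++) {
                    final int id = i;
                    workers[i] = new Thread(() -> rr.work(id));
                    workers[i].start();
                }
                for (Thread t : workers) t.join();
            }

            private void work(int id) {
                boolean done = false;
                while (!done) {
                    synchronized (lock) {
                        while (turn != id) {
                            try { lock.wait(); } catch (InterruptedException e) { return; }
                        }
                        int slice = Math.min(QUANTUM, remaining[id]);
                        remaining[id] -= slice;         // "run" for one quantum
                        System.out.println("thread " + id + " ran " + slice
                                + ", " + remaining[id] + " left");
                        done = remaining[id] <= 0;
                        advance();                      // hand the CPU to the next thread
                    }
                }
            }

            private void advance() {
                // pass the turn to the next worker that still has work left
                for (int i = 1; i <= remaining.length; i++) {
                    int next = (turn + i) % remaining.length;
                    if (remaining[next] > 0) { turn = next; break; }
                }
                lock.notifyAll();
            }
        }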

    Read the article

  • What strategy do you use for package naming in Java projects and why?

    - by Tim Visher
    I thought about this awhile ago and it recently resurfaced as my shop is doing its first real Java web app. As an intro, I see two main package naming strategies. (To be clear, I'm not referring to the whole 'domain.company.project' part of this, I'm talking about the package convention beneath that.) Anyway, the package naming conventions that I see are as follows: Functional: Naming your packages according to their function architecturally rather than their identity according to the business domain. Another term for this might be naming according to 'layer'. So, you'd have a *.ui package and a *.domain package and a *.orm package. Your packages are horizontal slices rather than vertical. This is much more common than logical naming. In fact, I don't believe I've ever seen or heard of a project that does this. This of course makes me leery (sort of like thinking that you've come up with a solution to an NP problem) as I'm not terribly smart and I assume everyone must have great reasons for doing it the way they do. On the other hand, I'm not opposed to people just missing the elephant in the room and I've never heard a an actual argument for doing package naming this way. It just seems to be the de facto standard. Logical: Naming your packages according to their business domain identity and putting every class that has to do with that vertical slice of functionality into that package. I have never seen or heard of this, as I mentioned before, but it makes a ton of sense to me. I tend to approach systems vertically rather than horizontally. I want to go in and develop the Order Processing system, not the data access layer. Obviously, there's a good chance that I'll touch the data access layer in the development of that system, but the point is that I don't think of it that way. What this means, of course, is that when I receive a change order or want to implement some new feature, it'd be nice to not have to go fishing around in a bunch of packages in order to find all the related classes. Instead, I just look in the X package because what I'm doing has to do with X. From a development standpoint, I see it as a major win to have your packages document your business domain rather than your architecture. I feel like the domain is almost always the part of the system that's harder to grok where as the system's architecture, especially at this point, is almost becoming mundane in its implementation. The fact that I can come to a system with this type of naming convention and instantly from the naming of the packages know that it deals with orders, customers, enterprises, products, etc. seems pretty darn handy. It seems like this would allow you to take much better advantage of Java's access modifiers. This allows you to much more cleanly define interfaces into subsystems rather than into layers of the system. So if you have an orders subsystem that you want to be transparently persistent, you could in theory just never let anything else know that it's persistent by not having to create public interfaces to its persistence classes in the dao layer and instead packaging the dao class in with only the classes it deals with. Obviously, if you wanted to expose this functionality, you could provide an interface for it or make it public. It just seems like you lose a lot of this by having a vertical slice of your system's features split across multiple packages. I suppose one disadvantage that I can see is that it does make ripping out layers a little bit more difficult. 
Instead of just deleting or renaming a package and then dropping a new one in place with an alternate technology, you have to go in and change all of the classes in all of the packages. However, I don't see that as a big deal. It may be from a lack of experience, but I have to imagine that the number of times you swap out technologies pales in comparison to the number of times you go in and edit vertical feature slices within your system. So I guess the question goes out to you: how do you name your packages and why? Please understand that I don't necessarily think that I've stumbled onto the golden goose or something here. I'm pretty new to all this, with mostly academic experience. However, I can't spot the holes in my reasoning, so I'm hoping you all can, so that I can move on. Thanks in advance!
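    To make the access-modifier point concrete, here is a small, purely hypothetical sketch of the logical layout: everything for the orders feature lives in one package, and the persistence class is package-private, so nothing outside that package even knows how (or whether) orders are persisted. Two files, shown together; the names are invented for illustration.

        // com/example/app/order/OrderService.java
        package com.example.app.order;

        public class OrderService {
            private final OrderDao dao = new OrderDao();   // hidden collaborator

            public void placeOrder(String customerId, String productId) {
                // business logic for the orders feature lives here
                dao.save(customerId + ":" + productId);
            }
        }

        // com/example/app/order/OrderDao.java
        package com.example.app.order;

        // Package-private: only classes inside com.example.app.order can see this,
        // so the persistence mechanism never leaks out of the feature slice.
        class OrderDao {
            void save(String record) {
                System.out.println("persisting " + record);
            }
        }

    With a functional (layer-based) layout, OrderDao would have to be public for the service layer to reach it, and the encapsulation argument above largely disappears.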

    Read the article

  • java runtime 6 with socks v5 proxy - Possible?

    - by rwired
    I have written an application that (amongst other things) runs a local service in Windows that acts as a SOCKS v5 proxy for Firefox. I'm in the debugging phase right now and have found certain websites that don't work correctly. For example, the Java applet for picture uploading on Facebook.com fails because it is unable to look up domains. My app overrides a hidden FF config setting, network.proxy.socks_remote_dns, setting it to true. The whole purpose of the app is to allow access to websites when behind a firewall (e.g. if the user is in China), so this setting is essential to ensure domains are resolved remotely as well (and not just HTTP requests). In the JRE6 settings (documented here) there isn't an equivalent setting, and since remote DNS resolution is a feature of SOCKS v5 and not v4, as the documentation seems to imply, I'm worried that it's just not possible. How can I programmatically make sure the JRE uses a SOCKS v5 proxy for all requests (including DNS)? UPDATE: Steps to reproduce this problem: (1) Make sure you are behind a firewall that blocks (or redirects) internet access, including DNS. (2) Install PuTTY and add a dynamic SSH tunnel on some port number of your choice (e.g. 9870), then log in to a remote server that has full access to the internet. (3) Launch Firefox and you will not be able to browse the web. (4) In FF network settings, set the SOCKS v5 proxy to localhost:9870. (5) In FF, go to about:config and change network.proxy.socks_remote_dns to true. You will now be able to browse the web. (6) Go to facebook.com, log in, go to your profile and attempt to use the picture uploader Java applet to add some pictures. It will fail with a series of class not found errors looking similar to: load: class com.facebook.facebookphotouploader5.FacebookPhotoUploader5.class not found. I believe this is failing because the JRE is unable to resolve the domain that the class resides on. I'm basing this belief on the fact that the documentation (http://java.sun.com/javase/6/docs/technotes/guides/deployment/deployment-guide/properties.html) talks only about SOCKS v4 (which, as far as I know, does not support remote DNS). My deployment.properties file is located in %APPDATA%\Sun\Java\Deployment. I can confirm that modifications I make in the Java Control Panel get written into that file. If, instead of "Use browser settings", I override the network settings for Java and attempt to use the SOCKS proxy settings manually, I still have the issue. There does not seem to be an easy way to force the JRE to do DNS remotely through the proxy. UPDATE 2: Without the SOCKS proxy, from my local client www.facebook.com resolves to 203.161.230.171 and upload.facebook.com resolves to 64.33.88.161. Neither host is reachable (because of the firewall). If I log in to the remote server, I get: www.facebook.com 69.63.187.17, upload.facebook.com 69.63.178.32. Both these IPs change after a few minutes, as it seems Facebook uses round-robin DNS and other load-balancing. With the proxy settings set in Firefox, I can navigate to www.facebook.com without any difficulty (since DNS is being resolved remotely on the proxy). When I go to the page with the Java applet, it fails with the stacktrace messages I've already reported. However, if I edit Windows\System32\drivers\etc\hosts, adding the correct IP for upload.facebook.com, I can get the applet to load and work correctly (a restart of FF is sometimes necessary). This evidence seems to support my theory that the Java Runtime is not resolving DNS on the proxy, but instead just routing traffic through it.
My application is for mass-deployment, and needs to work with java applets on other sites (not just facebook). I really need a work-around for this problem. UPDATE 3 Stacktrace dump a requested by ZZ Coder: load: class com.facebook.facebookphotouploader5.FacebookPhotoUploader5.class not found. java.lang.ClassNotFoundException: com.facebook.facebookphotouploader5.FacebookPhotoUploader5.class at sun.plugin2.applet.Applet2ClassLoader.findClass(Unknown Source) at java.lang.ClassLoader.loadClass(Unknown Source) at java.lang.ClassLoader.loadClass(Unknown Source) at sun.plugin2.applet.Plugin2ClassLoader.loadCode(Unknown Source) at sun.plugin2.applet.Plugin2Manager.createApplet(Unknown Source) at sun.plugin2.applet.Plugin2Manager$AppletExecutionRunnable.run(Unknown Source) at java.lang.Thread.run(Unknown Source) Caused by: java.net.SocketException: Connection reset at java.net.SocketInputStream.read(Unknown Source) at java.io.BufferedInputStream.fill(Unknown Source) at java.io.BufferedInputStream.read1(Unknown Source) at java.io.BufferedInputStream.read(Unknown Source) at sun.net.www.http.HttpClient.parseHTTPHeader(Unknown Source) at sun.net.www.http.HttpClient.parseHTTP(Unknown Source) at sun.net.www.http.HttpClient.parseHTTP(Unknown Source) at sun.net.www.protocol.http.HttpURLConnection.getInputStream(Unknown Source) at java.net.HttpURLConnection.getResponseCode(Unknown Source) at sun.plugin2.applet.Applet2ClassLoader.getBytes(Unknown Source) at sun.plugin2.applet.Applet2ClassLoader.access$000(Unknown Source) at sun.plugin2.applet.Applet2ClassLoader$1.run(Unknown Source) at java.security.AccessController.doPrivileged(Native Method) ... 7 more Exception: java.lang.ClassNotFoundException: com.facebook.facebookphotouploader5.FacebookPhotoUploader5.class Dumping class loader cache... Live entry: key=http://upload.facebook.com/controls/2008.10.10_v5.5.8/,FacebookPhotoUploader5.jar,FacebookPhotoUploader5.jar, refCount=1, threadGroup=sun.plugin2.applet.Applet2ThreadGroup[name=http://upload.facebook.com/controls/2008.10.10_v5.5.8/-threadGroup,maxpri=4] Done.
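    For what it's worth, when opening sockets directly (as opposed to whatever the applet plugin does internally), the java.net.Proxy API can be pointed at the tunnel and handed a deliberately unresolved address, which leaves the DNS lookup to the SOCKS v5 server. A rough sketch, reusing the localhost:9870 tunnel from the steps above; whether the plugin's own connections can be made to behave this way is exactly the open question:

        import java.net.InetSocketAddress;
        import java.net.Proxy;
        import java.net.Socket;

        public class SocksSketch {
            public static void main(String[] args) throws Exception {
                // Process-wide hints for code that goes through the default socket factory.
                System.setProperty("socksProxyHost", "localhost");
                System.setProperty("socksProxyPort", "9870");

                // Explicit per-socket proxy; the target address is left unresolved
                // so the SOCKS v5 server performs the name lookup, not the client.
                Proxy proxy = new Proxy(Proxy.Type.SOCKS,
                        new InetSocketAddress("localhost", 9870));
                Socket socket = new Socket(proxy);
                socket.connect(InetSocketAddress.createUnresolved("upload.facebook.com", 80));
                System.out.println("connected via " + proxy);
                socket.close();
            }
        }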

    Read the article

  • Mixing inheritance mapping strategies in NHibernate

    - by MylesRip
    I have a rather large inheritance hierarchy in which some of the subclasses add very little and others add quite a bit. I don't want to map the entire hierarchy using either "table per class hierarchy" or "table per subclass" due to the size and complexity of the hierarchy. Ideally I'd like to mix mapping strategies such that portions of the hierarchy where the subclasses add very little are combined into a common table a la "table per class hierarchy" and subclasses that add a lot are broken out into a separate table. Using this approach, I would expect to have 2 or 3 tables with very little wasted space instead of either 1 table with lots of fields that don't apply to most of the objects, or 20+ tables, several of which would have only a couple of columns. In the NHibernate Reference Documentation version 2.1.0, I found section 8.1.4 "Mixing table per class hierarchy with table per subclass". This approach switches strategies partway down the hierarchy by using: ... <subclass ...> <join ...> <property ...> ... </join> </subclass> ... This is great in theory. In practice, though, I found that the schema was too restrictive in what was allowed inside the "join" element for me to be able to accomplish what I needed. Here is the related part of the schema definition: <xs:element name="join"> <xs:complexType> <xs:sequence> <xs:element ref="subselect" minOccurs="0" /> <xs:element ref="comment" minOccurs="0" /> <xs:element ref="key" /> <xs:choice minOccurs="0" maxOccurs="unbounded"> <xs:element ref="property" /> <xs:element ref="many-to-one" /> <xs:element ref="component" /> <xs:element ref="dynamic-component" /> <xs:element ref="any" /> <xs:element ref="map" /> <xs:element ref="set" /> <xs:element ref="list" /> <xs:element ref="bag" /> <xs:element ref="idbag" /> <xs:element ref="array" /> <xs:element ref="primitive-array" /> </xs:choice> <xs:element ref="sql-insert" minOccurs="0" /> <xs:element ref="sql-update" minOccurs="0" /> <xs:element ref="sql-delete" minOccurs="0" /> </xs:sequence> <xs:attribute name="table" use="required" type="xs:string" /> <xs:attribute name="schema" type="xs:string" /> <xs:attribute name="catalog" type="xs:string" /> <xs:attribute name="subselect" type="xs:string" /> <xs:attribute name="fetch" default="join"> <xs:simpleType> <xs:restriction base="xs:string"> <xs:enumeration value="join" /> <xs:enumeration value="select" /> </xs:restriction> </xs:simpleType> </xs:attribute> <xs:attribute name="inverse" default="false" type="xs:boolean"> </xs:attribute> <xs:attribute name="optional" default="false" type="xs:boolean"> </xs:attribute> </xs:complexType> </xs:element> As you can see, this allows the use of "property" child elements or "component" child elements, but not both. It also doesn't allow for "subclass" child elements to continue the hierarchy below the point at which the strategy was changed. Is there a way to accomplish this?

    Read the article

  • How to Stich to Image objects in Java

    - by Imran
    Hi, I have a scenario in which i`m getting a number of tiles (e.g.12) from my mapping server. Now for buffering and offline functions I need to join them all back again so that we have to deal with 1 single image object instead of 12. I ve tried to do it without JAI my code is below. package imagemerge; import java.awt.*; import java.awt.image.*; import java.awt.event.*; public class ImageSticher extends WindowAdapter { Image tile1; Image tile2; Image result; ColorModel colorModel; int width,height,widthr,heightr; //int t1,t2; int t12[]; public ImageSticher() { } public ImageSticher (Image img1,Image img2,int w,int h) { tile1=img1; tile2=img2; width=w; height=h; colorModel=ColorModel.getRGBdefault(); } public Image horizontalStich() throws Exception { widthr=width*2; heightr=height; t12=new int[widthr * heightr]; int t1[]=new int[width*height]; PixelGrabber p1 =new PixelGrabber(tile1, 0, 0, width, height, t1, 0, width); p1.grabPixels(); int t2[]=new int[width*height]; PixelGrabber p2 =new PixelGrabber(tile2, 0, 0, width, height, t1, 0, width); p2.grabPixels(); int y, x, rp, rpi; int red1, red2, redr; int green1, green2, greenr; int blue1, blue2, bluer; int alpha1, alpha2, alphar; for(y=0;y<heightr;y++) { for(x=0;x<widthr;x++) { //System.out.println(x); rpi=y*widthr+x; // index of resulting pixel; rp=0; //initializing resulting pixel System.out.println(rpi); if(x<(widthr/2)) // x is less than width , copy first tile { //System.out.println("tile1="+x); blue1 = t1[rpi] & 0x00ff; // ERROR occurs here green1=(t1[rpi] >> 8) & 0x00ff; red1=(t1[rpi] >> 16) & 0x00ff; alpha1 = (t1[rpi] >> 24) & 0x00ff; redr = (int)(red1 * 1.0); // copying red band pixel into redresult,,,,1.0 is the alpha valye redr = (redr < 0)?(0):((redr>255)?(255):(redr)); greenr = (int)(green1 * 1.0); // redr = (int)(red1 * 1.0); // greenr = (greenr < 0)?(0):((greenr>255)?(255):(greenr)); bluer = (int)(blue1 * 1.0); bluer = (bluer < 0)?(0):((bluer>255)?(255):(bluer)); alphar = 255; //resulting pixel computed rp = (((((alphar << 8) + (redr & 0x0ff)) << 8) + (greenr & 0x0ff)) << 8) + (bluer & 0x0ff); } else // index is ahead of half way...copy second tile { blue2 = t2[rpi] & 0x00ff; // blue band bit of first tile green2=(t2[rpi] >> 8) & 0x00ff; red2=(t2[rpi] >> 16) & 0x00ff; alpha2 = (t2[rpi] >> 24) & 0x00ff; redr = (int)(red2 * 1.0); // copying red band pixel into redresult,,,,1.0 is the alpha valye redr = (redr < 0)?(0):((redr>255)?(255):(redr)); greenr = (int)(green2 * 1.0); // redr = (int)(red2 * 1.0); // greenr = (greenr < 0)?(0):((greenr>255)?(255):(greenr)); bluer = (int)(blue2 * 1.0); bluer = (bluer < 0)?(0):((bluer>255)?(255):(bluer)); alphar = 255; //resulting pixel computed rp = (((((alphar << 8) + (redr & 0x0ff)) << 8) + (greenr & 0x0ff)) << 8) + (bluer & 0x0ff); } t12[rpi] = rp; // copying resulting pixel in the result int array which will be converted to image } } MemoryImageSource mis; if (t12!=null) { mis = new MemoryImageSource(widthr, heightr, colorModel, t12, 0, widthr); result = Toolkit.getDefaultToolkit().createImage(mis); return result; } return null; } } now to check the my theory Im trying to join or stich two tiles horizontaly but im getting the error : java.lang.ArrayIndexOutOfBoundsException: 90000 at imagemerge.ImageSticher.horizontalStich(ImageSticher.java:69) at imageStream.ImageStream.getImage(ImageStream.java:75) at imageStream.ImageStream.main(ImageStream.java:28) is there some kind of limitation because when stiching two images of 300 x 300 horizontally it means the resulting image will be 600 
x 300... that would make the index size 180000, but it's giving an error at 90000. What am I doing wrong here?
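    Two things stand out in the code as posted: the second PixelGrabber is told to write into t1 rather than t2, so t2 never receives any pixels; and the source arrays are indexed with rpi, which runs over the doubled width (widthr = 600), while t1 and t2 only hold width * height = 90000 entries - that is exactly why the exception fires at index 90000 (row 150, column 0). The source tiles need an index of the form y * width + (x % width). Alternatively, a Graphics2D-based version avoids the manual pixel packing altogether; a short sketch, assuming both tiles are already fully loaded:

        import java.awt.Graphics2D;
        import java.awt.Image;
        import java.awt.image.BufferedImage;

        public class TileStitcher {
            // Stitch two equally sized tiles side by side and return the result.
            public static BufferedImage stitchHorizontally(Image left, Image right,
                                                           int width, int height) {
                BufferedImage result =
                        new BufferedImage(width * 2, height, BufferedImage.TYPE_INT_ARGB);
                Graphics2D g = result.createGraphics();
                g.drawImage(left, 0, 0, null);        // left tile at x = 0
                g.drawImage(right, width, 0, null);   // right tile starts at x = width
                g.dispose();
                return result;
            }
        }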

    Read the article

  • Neural Network Always Produces Same/Similar Outputs for Any Input

    - by l33tnerd
    I have a problem where I am trying to create a neural network for Tic-Tac-Toe. However, for some reason, training the neural network causes it to produce nearly the same output for any given input. I did take a look at Artificial neural networks benchmark, but my network implementation is built for neurons with the same activation function for each neuron, i.e. no constant neurons. To make sure the problem wasn't just due to my choice of training set (1218 board states and moves generated by a genetic algorithm), I tried to train the network to reproduce XOR. The logistic activation function was used. Instead of using the derivative, I multiplied the error by output*(1-output) as some sources suggested that this was equivalent to using the derivative. I can put the Haskell source on HPaste, but it's a little embarrassing to look at. The network has 3 layers: the first layer has 2 inputs and 4 outputs, the second has 4 inputs and 1 output, and the third has 1 output. Increasing to 4 neurons in the second layer didn't help, and neither did increasing to 8 outputs in the first layer. I then calculated errors, network output, bias updates, and the weight updates by hand based on http://hebb.mit.edu/courses/9.641/2002/lectures/lecture04.pdf to make sure there wasn't an error in those parts of the code (there wasn't, but I will probably do it again just to make sure). Because I am using batch training, I did not multiply by x in equation (4) there. I am adding the weight change, though http://www.faqs.org/faqs/ai-faq/neural-nets/part2/section-2.html suggests to subtract it instead. The problem persisted, even in this simplified network. For example, these are the results after 500 epochs of batch training and of incremental training. Input |Target|Output (Batch) |Output(Incremental) [1.0,1.0]|[0.0] |[0.5003781562785173]|[0.5009731800870864] [1.0,0.0]|[1.0] |[0.5003740346965251]|[0.5006347214672715] [0.0,1.0]|[1.0] |[0.5003734471544522]|[0.500589332376345] [0.0,0.0]|[0.0] |[0.5003674110937019]|[0.500095157458231] Subtracting instead of adding produces the same problem, except everything is 0.99 something instead of 0.50 something. 5000 epochs produces the same result, except the batch-trained network returns exactly 0.5 for each case. (Heck, even 10,000 epochs didn't work for batch training.) Is there anything in general that could produce this behavior? Also, I looked at the intermediate errors for incremental training, and the although the inputs of the hidden/input layers varied, the error for the output neuron was always +/-0.12. For batch training, the errors were increasing, but extremely slowly and the errors were all extremely small (x10^-7). Different initial random weights and biases made no difference, either. Note that this is a school project, so hints/guides would be more helpful. Although reinventing the wheel and making my own network (in a language I don't know well!) was a horrible idea, I felt it would be more appropriate for a school project (so I know what's going on...in theory, at least. There doesn't seem to be a computer science teacher at my school). EDIT: Two layers, an input layer of 2 inputs to 8 outputs, and an output layer of 8 inputs to 1 output, produces much the same results: 0.5+/-0.2 (or so) for each training case. I'm also playing around with pyBrain, seeing if any network structure there will work. Edit 2: I am using a learning rate of 0.1. Sorry for forgetting about that. 
Edit 3: Pybrain's "trainUntilConvergence" doesn't get me a fully trained network, either, but 20000 epochs does, with 16 neurons in the hidden layer. 10000 epochs and 4 neurons, not so much, but close. So, in Haskell, with the input layer having 2 inputs & 2 outputs, hidden layer with 2 inputs and 8 outputs, and output layer with 8 inputs and 1 output...I get the same problem with 10000 epochs. And with 20000 epochs. Edit 4: I ran the network by hand again based on the MIT PDF above, and the values match, so the code should be correct unless I am misunderstanding those equations. Some of my source code is at http://hpaste.org/42453/neural_network__not_working; I'm working on cleaning my code somewhat and putting it in a Github (rather than a private Bitbucket) repository. All of the relevant source code is now at https://github.com/l33tnerd/hsann.

    Read the article

  • Problem measuring N times the execution time of a code block

    - by Nazgulled
    EDIT: I just found my problem after writing this long post explaining every little detail... If someone can give me a good answer on what I'm doing wrong and how can I get the execution time in seconds (using a float with 5 decimal places or so), I'll mark that as accepted. Hint: The problem was on how I interpreted the clock_getttime() man page. Hi, Let's say I have a function named myOperation that I need to measure the execution time of. To measure it, I'm using clock_gettime() as it was recommend here in one of the comments. My teacher recommends us to measure it N times so we can get an average, standard deviation and median for the final report. He also recommends us to execute myOperation M times instead of just one. If myOperation is a very fast operation, measuring it M times allow us to get a sense of the "real time" it takes; cause the clock being used might not have the required precision to measure such operation. So, execution myOperation only one time or M times really depends if the operation itself takes long enough for the clock precision we are using. I'm having trouble dealing with that M times execution. Increasing M decreases (a lot) the final average value. Which doesn't make sense to me. It's like this, on average you take 3 to 5 seconds to travel from point A to B. But then you go from A to B and back to A 5 times (which makes it 10 times, cause A to B is the same as B to A) and you measure that. Than you divide by 10, the average you get is supposed to be the same average you take traveling from point A to B, which is 3 to 5 seconds. This is what I want my code to do, but it's not working. If I keep increasing the number of times I go from A to B and back A, the average will be lower and lower each time, it makes no sense to me. Enough theory, here's my code: #include <stdio.h> #include <time.h> #define MEASUREMENTS 1 #define OPERATIONS 1 typedef struct timespec TimeClock; TimeClock diffTimeClock(TimeClock start, TimeClock end) { TimeClock aux; if((end.tv_nsec - start.tv_nsec) < 0) { aux.tv_sec = end.tv_sec - start.tv_sec - 1; aux.tv_nsec = 1E9 + end.tv_nsec - start.tv_nsec; } else { aux.tv_sec = end.tv_sec - start.tv_sec; aux.tv_nsec = end.tv_nsec - start.tv_nsec; } return aux; } int main(void) { TimeClock sTime, eTime, dTime; int i, j; for(i = 0; i < MEASUREMENTS; i++) { printf(" » MEASURE %02d\n", i+1); clock_gettime(CLOCK_REALTIME, &sTime); for(j = 0; j < OPERATIONS; j++) { myOperation(); } clock_gettime(CLOCK_REALTIME, &eTime); dTime = diffTimeClock(sTime, eTime); printf(" - NSEC (TOTAL): %ld\n", dTime.tv_nsec); printf(" - NSEC (OP): %ld\n\n", dTime.tv_nsec / OPERATIONS); } return 0; } Notes: The above diffTimeClock function is from this blog post. I replaced my real operation with myOperation() because it doesn't make any sense to post my real functions as I would have to post long blocks of code, you can easily code a myOperation() with whatever you like to compile the code if you wish. As you can see, OPERATIONS = 1 and the results are: » MEASURE 01 - NSEC (TOTAL): 27456580 - NSEC (OP): 27456580 For OPERATIONS = 100 the results are: » MEASURE 01 - NSEC (TOTAL): 218929736 - NSEC (OP): 2189297 For OPERATIONS = 1000 the results are: » MEASURE 01 - NSEC (TOTAL): 862834890 - NSEC (OP): 862834 For OPERATIONS = 10000 the results are: » MEASURE 01 - NSEC (TOTAL): 574133641 - NSEC (OP): 57413 Now, I'm not a math wiz, far from it actually, but this doesn't make any sense to me whatsoever. 
I've already talked about this with a friend who's on this project with me, and he can't understand the differences either. I don't understand why the value gets lower and lower when I increase OPERATIONS. The operation itself should take the same time (on average, of course, not the exact same time), no matter how many times I execute it. You could tell me that it actually depends on the operation itself, on the data being read, and that some data could already be in the cache and so on, but I don't think that's the problem. In my case, myOperation is reading 5000 lines of text from a CSV file, separating the values by ; and inserting those values into a data structure. For each iteration, I'm destroying the data structure and initializing it again. Now that I think of it, I also think that there's a problem measuring time with clock_gettime(); maybe I'm not using it right. I mean, look at the last example, where OPERATIONS = 10000. The total time it took was 574133641 ns, which would be roughly 0.5 s; that's impossible, as it took a couple of minutes - I couldn't stand looking at the screen waiting and went to eat something.
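    The hint at the top of the question is the key: the printf calls only ever report dTime.tv_nsec, and clock_gettime() splits elapsed time into whole seconds (tv_sec) plus a sub-second remainder (tv_nsec). Once the measured block takes longer than a second, most of the time moves into tv_sec, so the nanosecond field on its own looks as if it shrank. The elapsed time in seconds is tv_sec + tv_nsec / 1e9. The same bookkeeping, sketched in Java only to show the arithmetic and the five-decimal formatting asked for at the top (the sleep stands in for the measured work):

        public class Timing {
            public static void main(String[] args) throws InterruptedException {
                long startNanos = System.nanoTime();
                Thread.sleep(1234);                       // stand-in for myOperation()
                long elapsedNanos = System.nanoTime() - startNanos;

                // Keep the whole-second and sub-second parts together before printing;
                // splitting them and printing only one part is what hid the real total.
                double seconds = elapsedNanos / 1e9;
                System.out.printf("%.5f s%n", seconds);
            }
        }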

    Read the article

  • How to implement Cocoa copyWithZone on derived object in MonoMac C#?

    - by Justin Aquadro
    I'm currently porting a small Winforms-based .NET application to use a native Mac front-end with MonoMac. The application has a TreeControl with icons and text, which does not exist out of the box in Cocoa. So far, I've ported almost all of the ImageAndTextCell code in Apple's DragNDrop example: https://developer.apple.com/library/mac/#samplecode/DragNDropOutlineView/Listings/ImageAndTextCell_m.html#//apple_ref/doc/uid/DTS40008831-ImageAndTextCell_m-DontLinkElementID_6, which is assigned to an NSOutlineView as a custom cell. It seems to be working almost perfectly, except that I have not figured out how to properly port the copyWithZone method. Unfortunately, this means the internal copies that NSOutlineView is making do not have the image field, and it leads to the images briefly vanishing during expand and collapse operations. The objective-c code in question is: - (id)copyWithZone:(NSZone *)zone { ImageAndTextCell *cell = (ImageAndTextCell *)[super copyWithZone:zone]; // The image ivar will be directly copied; we need to retain or copy it. cell->image = [image retain]; return cell; } The first line is what's tripping me up, as MonoMac does not expose a copyWithZone method, and I don't know how to otherwise call it. Update Based on current answers and additional research and testing, I've come up with a variety of models for copying an object. static List<ImageAndTextCell> _refPool = new List<ImageAndTextCell>(); // Method 1 static IntPtr selRetain = Selector.GetHandle ("retain"); [Export("copyWithZone:")] public virtual NSObject CopyWithZone(IntPtr zone) { ImageAndTextCell cell = new ImageAndTextCell() { Title = Title, Image = Image, }; Messaging.void_objc_msgSend (cell.Handle, selRetain); return cell; } // Method 2 [Export("copyWithZone:")] public virtual NSObject CopyWithZone(IntPtr zone) { ImageAndTextCell cell = new ImageAndTextCell() { Title = Title, Image = Image, }; _refPool.Add(cell); return cell; } [Export("dealloc")] public void Dealloc () { _refPool.Remove(this); this.Dispose(); } // Method 3 static IntPtr selRetain = Selector.GetHandle ("retain"); [Export("copyWithZone:")] public virtual NSObject CopyWithZone(IntPtr zone) { ImageAndTextCell cell = new ImageAndTextCell() { Title = Title, Image = Image, }; _refPool.Add(cell); Messaging.void_objc_msgSend (cell.Handle, selRetain); return cell; } // Method 4 static IntPtr selRetain = Selector.GetHandle ("retain"); static IntPtr selRetainCount = Selector.GetHandle("retainCount"); [Export("copyWithZone:")] public virtual NSObject CopyWithZone (IntPtr zone) { ImageAndTextCell cell = new ImageAndTextCell () { Title = Title, Image = Image, }; _refPool.Add (cell); Messaging.void_objc_msgSend (cell.Handle, selRetain); return cell; } public void PeriodicCleanup () { List<ImageAndTextCell> markedForDelete = new List<ImageAndTextCell> (); foreach (ImageAndTextCell cell in _refPool) { uint count = Messaging.UInt32_objc_msgSend (cell.Handle, selRetainCount); if (count == 1) markedForDelete.Add (cell); } foreach (ImageAndTextCell cell in markedForDelete) { _refPool.Remove (cell); cell.Dispose (); } } // Method 5 static IntPtr selCopyWithZone = Selector.GetHandle("copyWithZone:"); [Export("copyWithZone:")] public virtual NSObject CopyWithZone(IntPtr zone) { IntPtr copyHandle = Messaging.IntPtr_objc_msgSendSuper_IntPtr(SuperHandle, selCopyWithZone, zone); ImageAndTextCell cell = new ImageAndTextCell(copyHandle) { Image = Image, }; _refPool.Add(cell); return cell; } Method 1: Increases the retain count of the unmanaged object. 
The unmanaged object will persist forever (I think? dealloc is never called), and the managed object will be harvested early. Seems to be lose-lose all-around, but it runs in practice. Method 2: Saves a reference to the managed object. The unmanaged object is left alone, and dealloc appears to be invoked at a reasonable time by the caller. At this point the managed object is released and disposed. This seems reasonable, but on the downside the base type's dealloc won't be run (I think?). Method 3: Increases the retain count and saves a reference. Unmanaged and managed objects leak forever. Method 4: Extends Method 3 by adding a cleanup function that is run periodically (e.g. during Init of each new ImageAndTextCell object). The cleanup function checks the retain counts of the stored objects. A retain count of 1 means the caller has released it, so we should as well. This should eliminate leaking, in theory. Method 5: Attempts to invoke the copyWithZone method on the base type, and then constructs a new ImageAndTextCell object with the resulting handle. Seems to do the right thing (the base data is cloned). Internally, NSObject bumps the retain count on objects constructed like this, so we also use the PeriodicCleanup function to release these objects when they're no longer used. Based on the above, I believe Method 5 is the best approach, since it should be the only one that results in a truly correct copy of the base type data, but I don't know if the approach is inherently dangerous (I am also making some assumptions about the underlying implementation of NSObject). So far nothing bad has happened "yet", but if anyone is able to vet my analysis I would be more confident going forward.

    Read the article

  • Trouble with copying dictionaries and using deepcopy on an SQLAlchemy ORM object

    - by Az
    Hi there, I'm doing a Simulated Annealing algorithm to optimise a given allocation of students and projects. This is language-agnostic pseudocode from Wikipedia: s ? s0; e ? E(s) // Initial state, energy. sbest ? s; ebest ? e // Initial "best" solution k ? 0 // Energy evaluation count. while k < kmax and e > emax // While time left & not good enough: snew ? neighbour(s) // Pick some neighbour. enew ? E(snew) // Compute its energy. if enew < ebest then // Is this a new best? sbest ? snew; ebest ? enew // Save 'new neighbour' to 'best found'. if P(e, enew, temp(k/kmax)) > random() then // Should we move to it? s ? snew; e ? enew // Yes, change state. k ? k + 1 // One more evaluation done return sbest // Return the best solution found. The following is an adaptation of the technique. My supervisor said the idea is fine in theory. First I pick up some allocation (i.e. an entire dictionary of students and their allocated projects, including the ranks for the projects) from entire set of randomised allocations, copy it and pass it to my function. Let's call this allocation aOld (it is a dictionary). aOld has a weight related to it called wOld. The weighting is described below. The function does the following: Let this allocation, aOld be the best_node From all the students, pick a random number of students and stick in a list Strip (DEALLOCATE) them of their projects ++ reflect the changes for projects (allocated parameter is now False) and lecturers (free up slots if one or more of their projects are no longer allocated) Randomise that list Try assigning (REALLOCATE) everyone in that list projects again Calculate the weight (add up ranks, rank 1 = 1, rank 2 = 2... and no project rank = 101) For this new allocation aNew, if the weight wNew is smaller than the allocation weight wOld I picked up at the beginning, then this is the best_node (as defined by the Simulated Annealing algorithm above). Apply the algorithm to aNew and continue. If wOld < wNew, then apply the algorithm to aOld again and continue. The allocations/data-points are expressed as "nodes" such that a node = (weight, allocation_dict, projects_dict, lecturers_dict) Right now, I can only perform this algorithm once, but I'll need to try for a number N (denoted by kmax in the Wikipedia snippet) and make sure I always have with me, the previous node and the best_node. So that I don't modify my original dictionaries (which I might want to reset to), I've done a shallow copy of the dictionaries. From what I've read in the docs, it seems that it only copies the references and since my dictionaries contain objects, changing the copied dictionary ends up changing the objects anyway. So I tried to use copy.deepcopy().These dictionaries refer to objects that have been mapped with SQLA. Questions: I've been given some solutions to the problems faced but due to my über green-ness with using Python, they all sound rather cryptic to me. Deepcopy isn't playing nicely with SQLA. I've been told thatdeepcopy on ORM objects probably has issues that prevent it from working as you'd expect. Apparently I'd be better off "building copy constructors, i.e. def copy(self): return FooBar(....)." Can someone please explain what that means? I checked and found out that deepcopy has issues because SQLAlchemy places extra information on your objects, i.e. an _sa_instance_state attribute, that I wouldn't want in the copy but is necessary for the object to have. 
I've been told: "There are ways to manually blow away the old _sa_instance_state and put a new one on the object, but the most straightforward is to make a new object with __init__() and set up the attributes that are significant, instead of doing a full deep copy." What exactly does that mean? Do I create a new, unmapped class similar to the old, mapped one? An alternate solution is that I'd have to "implement __deepcopy__() on your objects and ensure that a new _sa_instance_state is set up, there are functions in sqlalchemy.orm.attributes which can help with that." Once again this is beyond me so could someone kindly explain what it means? A more general question: given the above information are there any suggestions on how I can maintain the information/state for the best_node (which must always persist through my while loop) and the previous_node, if my actual objects (referenced by the dictionaries, therefore the nodes) are changing due to the deallocation/reallocation taking place? That is, without using copy?

    Read the article

  • SetWindowHookEx and execution blocking

    - by Kalaz
    Hello, I just wonder... I mainly use .NET but now I started to investigate WINAPI calls. For example I am using this piece of code to hook to the API functions. It starts freezing, when I try to debug the application... using System; using System.Diagnostics; using System.Runtime.InteropServices; using System.Threading; using System.Windows.Forms; public class Keyboard { private const int WH_KEYBOARD_LL = 13; private const int WM_KEYDOWN = 0x0100; private static LowLevelKeyboardProc _proc = HookCallback; private static IntPtr _hookID = IntPtr.Zero; public static event Action<Keys,bool, bool> KeyDown; public static void Hook() { new Thread(new ThreadStart(()=> { _hookID = SetHook(_proc); Application.Run(); })).Start(); } public static void Unhook() { UnhookWindowsHookEx(_hookID); } private static IntPtr SetHook(LowLevelKeyboardProc proc) { using (Process curProcess = Process.GetCurrentProcess()) using (ProcessModule curModule = curProcess.MainModule) { return SetWindowsHookEx(WH_KEYBOARD_LL, proc, GetModuleHandle(curModule.ModuleName), 0); } } private delegate IntPtr LowLevelKeyboardProc( int nCode, IntPtr wParam, IntPtr lParam); private static IntPtr HookCallback( int nCode, IntPtr wParam, IntPtr lParam) { if (nCode >= 0 && wParam == (IntPtr)WM_KEYDOWN) { int vkCode = Marshal.ReadInt32(lParam); Keys k = (Keys) vkCode; if (KeyDown != null) { KeyDown.BeginInvoke(k, IsKeyPressed(VirtualKeyStates.VK_CONTROL), IsKeyPressed(VirtualKeyStates.VK_SHIFT),null,null); } } return CallNextHookEx(_hookID, nCode, wParam, lParam); } private static bool IsKeyPressed(VirtualKeyStates virtualKeyStates) { return (GetKeyState(virtualKeyStates) & (1 << 7))==128; } [DllImport("user32.dll", CharSet = CharSet.Auto, SetLastError = true)] private static extern IntPtr SetWindowsHookEx(int idHook, LowLevelKeyboardProc lpfn, IntPtr hMod, uint dwThreadId); [DllImport("user32.dll", CharSet = CharSet.Auto, SetLastError = true)] [return: MarshalAs(UnmanagedType.Bool)] private static extern bool UnhookWindowsHookEx(IntPtr hhk); [DllImport("user32.dll", CharSet = CharSet.Auto, SetLastError = true)] private static extern IntPtr CallNextHookEx(IntPtr hhk, int nCode, IntPtr wParam, IntPtr lParam); [DllImport("kernel32.dll", CharSet = CharSet.Auto, SetLastError = true)] private static extern IntPtr GetModuleHandle(string lpModuleName); [DllImport("user32.dll")] static extern short GetKeyState(VirtualKeyStates nVirtKey); } enum VirtualKeyStates : int { VK_LBUTTON = 0x01, VK_RBUTTON = 0x02, VK_CANCEL = 0x03, VK_MBUTTON = 0x04, // VK_XBUTTON1 = 0x05, VK_XBUTTON2 = 0x06, // VK_BACK = 0x08, VK_TAB = 0x09, // VK_CLEAR = 0x0C, VK_RETURN = 0x0D, // VK_SHIFT = 0x10, VK_CONTROL = 0x11, VK_MENU = 0x12, VK_PAUSE = 0x13, VK_CAPITAL = 0x14, // VK_KANA = 0x15, VK_HANGEUL = 0x15, /* old name - should be here for compatibility */ VK_HANGUL = 0x15, VK_JUNJA = 0x17, VK_FINAL = 0x18, VK_HANJA = 0x19, VK_KANJI = 0x19, // VK_ESCAPE = 0x1B, // VK_CONVERT = 0x1C, VK_NONCONVERT = 0x1D, VK_ACCEPT = 0x1E, VK_MODECHANGE = 0x1F, // VK_SPACE = 0x20, VK_PRIOR = 0x21, VK_NEXT = 0x22, VK_END = 0x23, VK_HOME = 0x24, VK_LEFT = 0x25, VK_UP = 0x26, VK_RIGHT = 0x27, VK_DOWN = 0x28, VK_SELECT = 0x29, VK_PRINT = 0x2A, VK_EXECUTE = 0x2B, VK_SNAPSHOT = 0x2C, VK_INSERT = 0x2D, VK_DELETE = 0x2E, VK_HELP = 0x2F, // VK_LWIN = 0x5B, VK_RWIN = 0x5C, VK_APPS = 0x5D, // VK_SLEEP = 0x5F, // VK_NUMPAD0 = 0x60, VK_NUMPAD1 = 0x61, VK_NUMPAD2 = 0x62, VK_NUMPAD3 = 0x63, VK_NUMPAD4 = 0x64, VK_NUMPAD5 = 0x65, VK_NUMPAD6 = 0x66, VK_NUMPAD7 = 0x67, VK_NUMPAD8 = 0x68, VK_NUMPAD9 = 
0x69, VK_MULTIPLY = 0x6A, VK_ADD = 0x6B, VK_SEPARATOR = 0x6C, VK_SUBTRACT = 0x6D, VK_DECIMAL = 0x6E, VK_DIVIDE = 0x6F, VK_F1 = 0x70, VK_F2 = 0x71, VK_F3 = 0x72, VK_F4 = 0x73, VK_F5 = 0x74, VK_F6 = 0x75, VK_F7 = 0x76, VK_F8 = 0x77, VK_F9 = 0x78, VK_F10 = 0x79, VK_F11 = 0x7A, VK_F12 = 0x7B, VK_F13 = 0x7C, VK_F14 = 0x7D, VK_F15 = 0x7E, VK_F16 = 0x7F, VK_F17 = 0x80, VK_F18 = 0x81, VK_F19 = 0x82, VK_F20 = 0x83, VK_F21 = 0x84, VK_F22 = 0x85, VK_F23 = 0x86, VK_F24 = 0x87, // VK_NUMLOCK = 0x90, VK_SCROLL = 0x91, // VK_OEM_NEC_EQUAL = 0x92, // '=' key on numpad // VK_OEM_FJ_JISHO = 0x92, // 'Dictionary' key VK_OEM_FJ_MASSHOU = 0x93, // 'Unregister word' key VK_OEM_FJ_TOUROKU = 0x94, // 'Register word' key VK_OEM_FJ_LOYA = 0x95, // 'Left OYAYUBI' key VK_OEM_FJ_ROYA = 0x96, // 'Right OYAYUBI' key // VK_LSHIFT = 0xA0, VK_RSHIFT = 0xA1, VK_LCONTROL = 0xA2, VK_RCONTROL = 0xA3, VK_LMENU = 0xA4, VK_RMENU = 0xA5, // VK_BROWSER_BACK = 0xA6, VK_BROWSER_FORWARD = 0xA7, VK_BROWSER_REFRESH = 0xA8, VK_BROWSER_STOP = 0xA9, VK_BROWSER_SEARCH = 0xAA, VK_BROWSER_FAVORITES = 0xAB, VK_BROWSER_HOME = 0xAC, // VK_VOLUME_MUTE = 0xAD, VK_VOLUME_DOWN = 0xAE, VK_VOLUME_UP = 0xAF, VK_MEDIA_NEXT_TRACK = 0xB0, VK_MEDIA_PREV_TRACK = 0xB1, VK_MEDIA_STOP = 0xB2, VK_MEDIA_PLAY_PAUSE = 0xB3, VK_LAUNCH_MAIL = 0xB4, VK_LAUNCH_MEDIA_SELECT = 0xB5, VK_LAUNCH_APP1 = 0xB6, VK_LAUNCH_APP2 = 0xB7, // VK_OEM_1 = 0xBA, // ';:' for US VK_OEM_PLUS = 0xBB, // '+' any country VK_OEM_COMMA = 0xBC, // ',' any country VK_OEM_MINUS = 0xBD, // '-' any country VK_OEM_PERIOD = 0xBE, // '.' any country VK_OEM_2 = 0xBF, // '/?' for US VK_OEM_3 = 0xC0, // '`~' for US // VK_OEM_4 = 0xDB, // '[{' for US VK_OEM_5 = 0xDC, // '\|' for US VK_OEM_6 = 0xDD, // ']}' for US VK_OEM_7 = 0xDE, // ''"' for US VK_OEM_8 = 0xDF, // VK_OEM_AX = 0xE1, // 'AX' key on Japanese AX kbd VK_OEM_102 = 0xE2, // "<>" or "\|" on RT 102-key kbd. VK_ICO_HELP = 0xE3, // Help key on ICO VK_ICO_00 = 0xE4, // 00 key on ICO // VK_PROCESSKEY = 0xE5, // VK_ICO_CLEAR = 0xE6, // VK_PACKET = 0xE7, // VK_OEM_RESET = 0xE9, VK_OEM_JUMP = 0xEA, VK_OEM_PA1 = 0xEB, VK_OEM_PA2 = 0xEC, VK_OEM_PA3 = 0xED, VK_OEM_WSCTRL = 0xEE, VK_OEM_CUSEL = 0xEF, VK_OEM_ATTN = 0xF0, VK_OEM_FINISH = 0xF1, VK_OEM_COPY = 0xF2, VK_OEM_AUTO = 0xF3, VK_OEM_ENLW = 0xF4, VK_OEM_BACKTAB = 0xF5, // VK_ATTN = 0xF6, VK_CRSEL = 0xF7, VK_EXSEL = 0xF8, VK_EREOF = 0xF9, VK_PLAY = 0xFA, VK_ZOOM = 0xFB, VK_NONAME = 0xFC, VK_PA1 = 0xFD, VK_OEM_CLEAR = 0xFE } It works well even if you put messagebox into the event or something that blocks execution. But it gets bad if you try to put breakpoint into the event. Why? I mean event is not run in the same thread that the windows hook is. That means that It shouldn't block HookCallback. It does however... I would really like to know why is this happening. My theory is that Visual Studio when breaking execution temporarily stops all threads and that means that HookCallback is blocked... Is there any book or valuable resource that would explain concepts behind all of this threading?

    Read the article

< Previous Page | 335 336 337 338 339 340 341 342 343 344 345 346  | Next Page >