Search Results

Search found 9706 results on 389 pages for 'aggregate functions'.


  • Conditional checks against a list

    - by AnnaSexyChick
    I was wondering how computers do this. The most logical way I can think of is that they iterate through all elements of the list until they find one that matches the condition :) For example, if you call function_exists(), PHP should iterate through all defined functions until it meets the one that matches the name you're looking for. Is it true that this is the only way? If it is, it sounds like it's not very efficient :s
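
    For intuition, most runtimes keep defined names in a hash table rather than a plain list, so the check is an average constant-time bucket lookup, not a scan. A minimal sketch in Python; the function names below are made up for illustration:

      # A hash-based set answers "is this name defined?" without
      # comparing against every element.
      defined_functions = {"strlen", "substr", "array_map"}  # hypothetical names

      def function_exists(name):
          # Average O(1): the name is hashed straight to its bucket.
          return name in defined_functions

      print(function_exists("strlen"))  # True
      print(function_exists("foo"))     # False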

    Read the article

  • Handling SQL Server Errors

    This article covers the basics of TRY CATCH error handling in T-SQL, introduced in SQL Server 2005. It covers the use of common functions to return information about the error, and the use of the TRY CATCH block in stored procedures and transactions.
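
    The same begin/attempt/roll-back shape appears in application code as well. A rough analogue sketched in Python with the standard sqlite3 module (the table and statements are invented for illustration):

      import sqlite3

      conn = sqlite3.connect(":memory:")
      conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER)")
      conn.execute("INSERT INTO accounts VALUES (1, 100)")

      try:
          # The TRY block: do the work inside a transaction.
          conn.execute("UPDATE accounts SET balance = balance - 50 WHERE id = 1")
          conn.execute("INSERT INTO no_such_table VALUES (1)")  # this statement fails
          conn.commit()
      except sqlite3.Error as err:
          # The CATCH block: inspect the error, undo the partial work.
          print("rolled back:", err)
          conn.rollback()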

    Read the article

  • Outsource SEO - A Strong Business Case

    Outsourcing became quite popular in the 1990s as companies raced to reduce costs by moving non-essential functions out of the corporate cost structure. One of the main methods for doing this was to outsource. The basic business case for moving any function to a subcontractor was quite simple: subcontractors that focus on only one thing have probably developed a deeper technical understanding of the process and are more effective, and economies of scale allow the outsourcer to provide the same (or higher-quality) service at a lower price.

    Read the article

  • Integrating WordPress Into PHP Scripts

    Learn how to integrate WordPress header and footer functions directly into your PHP scripts. Often you have an existing PHP library that you want to use inside a WordPress installation. The tips in this article will show you clearly how to solve this problem.

    Read the article

  • Website Usability - 4 Tips

    You want your website to be as functional and visitor friendly as possible. After all, it's there for them to use for their benefit, so they need to be able to get around it and use its functions as easily and effectively as possible.

    Read the article

  • Overflow exception while performing parallel factorization using the .NET Task Parallel Library (TPL)

    - by Aviad P.
    Hello, I'm trying to write a not-so-smart factorization program, and trying to do it in parallel using the TPL. However, after about 15 minutes of running on a Core 2 Duo machine, I am getting an AggregateException with an OverflowException inside it. All the entries in the stack trace are part of the .NET Framework; the overflow does not come from my code. Any help would be appreciated in figuring out why this happens. Here's the commented code, hopefully it's simple enough to understand:

      using System;
      using System.Collections.Generic;
      using System.Diagnostics;
      using System.Linq;
      using System.Numerics;

      class Program
      {
          static List<Tuple<BigInteger, int>> factors =
              new List<Tuple<BigInteger, int>>();

          static void Main(string[] args)
          {
              BigInteger theNumber = BigInteger.Parse(
                  "653872562986528347561038675107510176501827650178351386656875178" +
                  "568165317809518359617865178659815012571026531984659218451608845" +
                  "719856107834513527");
              Stopwatch sw = new Stopwatch();
              bool isComposite = false;
              sw.Start();
              do
              {
                  /* Print out the number we are currently working on. */
                  Console.WriteLine(theNumber);

                  /* Find a factor, stop when at least one is found
                     (using the Any operator). */
                  isComposite = Range(theNumber)
                      .AsParallel()
                      .Any(x => CheckAndStoreFactor(theNumber, x));

                  /* Of the factors found, take the one with the lowest base. */
                  var factor = factors.OrderBy(x => x.Item1).First();
                  Console.WriteLine(factor);

                  /* Divide the number by the factor. */
                  theNumber = BigInteger.Divide(
                      theNumber,
                      BigInteger.Pow(factor.Item1, factor.Item2));

                  /* Clear the discovered factors cache, and keep looking. */
                  factors.Clear();
              } while (isComposite);
              sw.Stop();
              Console.WriteLine(isComposite + " " + sw.Elapsed);
          }

          static IEnumerable<BigInteger> Range(BigInteger squareOfTarget)
          {
              BigInteger two = BigInteger.Parse("2");
              BigInteger element = BigInteger.Parse("3");
              while (element * element < squareOfTarget)
              {
                  yield return element;
                  element = BigInteger.Add(element, two);
              }
          }

          static bool CheckAndStoreFactor(BigInteger candidate, BigInteger factor)
          {
              BigInteger remainder, dividend = candidate;
              int exponent = 0;
              do
              {
                  dividend = BigInteger.DivRem(dividend, factor, out remainder);
                  if (remainder.IsZero)
                  {
                      exponent++;
                  }
              } while (remainder.IsZero);
              if (exponent > 0)
              {
                  lock (factors)
                  {
                      factors.Add(Tuple.Create(factor, exponent));
                  }
              }
              return exponent > 0;
          }
      }

    Here's the exception thrown:

      Unhandled Exception: System.AggregateException: One or more errors occurred.
      ---> System.OverflowException: Arithmetic operation resulted in an overflow.
         at System.Linq.Parallel.PartitionedDataSource`1.ContiguousChunkLazyEnumerator.MoveNext(T& currentElement, Int32& currentKey)
         at System.Linq.Parallel.AnyAllSearchOperator`1.AnyAllSearchOperatorEnumerator`1.MoveNext(Boolean& currentElement, Int32& currentKey)
         at System.Linq.Parallel.StopAndGoSpoolingTask`2.SpoolingWork()
         at System.Linq.Parallel.SpoolingTaskBase.Work()
         at System.Linq.Parallel.QueryTask.BaseWork(Object unused)
         at System.Linq.Parallel.QueryTask.<.cctor>b__0(Object o)
         at System.Threading.Tasks.Task.InnerInvoke()
         at System.Threading.Tasks.Task.Execute()
         --- End of inner exception stack trace ---
         at System.Linq.Parallel.QueryTaskGroupState.QueryEnd(Boolean userInitiatedDispose)
         at System.Linq.Parallel.SpoolingTask.SpoolStopAndGo[TInputOutput,TIgnoreKey](QueryTaskGroupState groupState, PartitionedStream`2 partitions, SynchronousChannel`1[] channels, TaskScheduler taskScheduler)
         at System.Linq.Parallel.DefaultMergeHelper`2.System.Linq.Parallel.IMergeHelper<TInputOutput>.Execute()
         at System.Linq.Parallel.MergeExecutor`1.Execute[TKey](PartitionedStream`2 partitions, Boolean ignoreOutput, ParallelMergeOptions options, TaskScheduler taskScheduler, Boolean isOrdered, CancellationState cancellationState, Int32 queryId)
         at System.Linq.Parallel.PartitionedStreamMerger`1.Receive[TKey](PartitionedStream`2 partitionedStream)
         at System.Linq.Parallel.AnyAllSearchOperator`1.WrapPartitionedStream[TKey](PartitionedStream`2 inputStream, IPartitionedStreamRecipient`1 recipient, Boolean preferStriping, QuerySettings settings)
         at System.Linq.Parallel.UnaryQueryOperator`2.UnaryQueryOperatorResults.ChildResultsRecipient.Receive[TKey](PartitionedStream`2 inputStream)
         at System.Linq.Parallel.ScanQueryOperator`1.ScanEnumerableQueryOperatorResults.GivePartitionedStream(IPartitionedStreamRecipient`1 recipient)
         at System.Linq.Parallel.UnaryQueryOperator`2.UnaryQueryOperatorResults.GivePartitionedStream(IPartitionedStreamRecipient`1 recipient)
         at System.Linq.Parallel.QueryOperator`1.GetOpenedEnumerator(Nullable`1 mergeOptions, Boolean suppressOrder, Boolean forEffect, QuerySettings querySettings)
         at System.Linq.Parallel.QueryOpeningEnumerator`1.OpenQuery()
         at System.Linq.Parallel.QueryOpeningEnumerator`1.MoveNext()
         at System.Linq.Parallel.AnyAllSearchOperator`1.Aggregate()
         at System.Linq.ParallelEnumerable.Any[TSource](ParallelQuery`1 source, Func`2 predicate)
         at PFact.Program.Main(String[] args) in d:\myprojects\PFact\PFact\Program.cs:line 34

    Any help would be appreciated. Thanks!

    Read the article

  • How to implement an offline reader-writer lock

    - by Peter Morris
    Some context for the question: All objects in this question are persistent. All requests will be from a Silverlight client talking to an app server via a binary protocol (Hessian), not WCF. Each user will have a session key (not an ASP.NET session) which will be a string, integer, or GUID (undecided so far). Some objects might take a long time to edit (30 or more minutes), so we have decided to use pessimistic offline locking: pessimistic because having to reconcile conflicts would be far too annoying for users, offline because the client is not permanently connected to the server.

    Rather than storing session/object locking information in the object itself, I have decided that any aggregate root that may have its instances locked should implement an interface ILockable:

      public interface ILockable { Guid LockID { get; } }

    This LockID will be the identity of a "Lock" object which holds the information about which session is locking it. Now, if this were simple pessimistic locking I'd be able to achieve this very simply (using an incrementing version number on Lock to identify update conflicts), but what I actually need is reader-writer pessimistic offline locking. The reason is that some parts of the application will perform actions that read these complex structures, including:

    - Reading a single structure to clone it.
    - Reading multiple structures in order to create a binary file to "publish" the data to an external source.

    Read locks will be held for a very short period of time, typically less than a second, although in some circumstances they could be held for about 5 seconds at a guess. Write locks will mostly be held for a long time, as they are mostly held by humans. There is a high probability of two users trying to edit the same aggregate at the same time, and a high probability of many users needing to temporarily read-lock at the same time too.

    I'm looking for suggestions as to how I might implement this. One additional point: if I want to place a write lock and there are some read locks, I would like to "queue" the write lock so that no new read locks are placed. If the read locks are removed within X seconds then the write lock is obtained; if not, the write lock backs off. No new read locks would be placed while a write lock is queued.

    So far I have this idea: the Lock object will have a version number (int) so I can detect multi-update conflicts, reload, and try again. It will have a string[] for read locks, a string to hold the session ID that has a write lock, a string to hold the queued write lock, and possibly a recursion counter to allow the same session to lock multiple times (for both read and write locks), but I'm not sure about this yet. Rules:

    - Can't place a read lock if there is a write lock or a queued write lock.
    - Can't place a write lock if there is a write lock or a queued write lock.
    - If there are no locks at all, then a write lock may be placed.
    - If there are read locks, then the write lock is queued instead of a full write lock being placed. (If after X time the read locks are not gone, the queued lock backs off; otherwise it is upgraded.)
    - Can't queue a write lock for a session that has a read lock.

    Can anyone see any problems? Suggest alternatives? Anything? I'd appreciate feedback before deciding on what approach to take.
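
    The rules above translate almost mechanically into code. A single-process Python sketch of the state machine (the field names are invented; in the real system this state would live in the persisted Lock object, and every transition would be an optimistic update guarded by the version number):

      class OfflineRWLock:
          def __init__(self):
              self.version = 0           # bumped on every successful change
              self.readers = set()       # session ids holding read locks
              self.writer = None         # session id holding the write lock
              self.queued_writer = None  # session id waiting for readers to drain

          def acquire_read(self, session):
              # Rule: no new read locks while a write lock is held or queued.
              if self.writer or self.queued_writer:
                  return False
              self.readers.add(session)
              self.version += 1
              return True

          def acquire_write(self, session):
              # Rule: refuse if a write lock is already held or queued.
              if self.writer or self.queued_writer:
                  return False
              # Rule: can't queue a write for a session holding a read lock.
              if session in self.readers:
                  return False
              if not self.readers:
                  self.writer = session         # no locks at all: immediate grant
              else:
                  self.queued_writer = session  # wait up to X seconds, then back off
              self.version += 1
              return True

          def release_read(self, session):
              self.readers.discard(session)
              # Promote the queued writer once the last reader leaves.
              if not self.readers and self.queued_writer:
                  self.writer, self.queued_writer = self.queued_writer, None
              self.version += 1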

    Read the article

  • WebSEAL and JSP content updated by Ajax

    - by lior chaga
    Hey, I have a problem running an application in an environment with WebSEAL. It is a web application with a Java server that contains many parts that are replaced within the page according to user input. For instance, a form called Outer.jsp may contain a form:options combo box (by spring-forms) such that, upon selection of an option, a certain div is updated with content produced by a JSP and fetched by an Ajax call (the Ajax implementation in the client is done by the Prototype JavaScript framework 1.5.1.2). Let's call the content fetched by Ajax Inner.jsp. So Outer.jsp is fetching Inner.jsp, which in turn uses JS functions in files included by Outer.jsp. This, I think, is where my problem starts: Inner.jsp is not familiar with any of the functions included in Outer.jsp, and so almost any operation performed by Inner.jsp fails miserably. Needless to say, this works perfectly when running in an environment without WebSEAL. Note that scripting is enabled in the WebSEAL junction (with the -J option). I also see that the content returned by the Ajax call includes a document.cookie added by WebSEAL (not sure it matters to this problem). Can anyone assist? Thanks! Lior

    Read the article

  • SQL Server 2008 cluster freezing

    - by Ed Leighton-Dick
    We have run into a strange situation in which a SQL Server 2008 single-node cluster hangs. As background, we are rebuilding a Windows Server 2003/SQL Server 2005 two-node cluster using Windows 2008 and SQL Server 2008. Here's the timeline:

    1. Evicted the passive node (server B) from the Windows 2003/SQL 2005 cluster. The active node now functions as a single-node cluster with no problems.
    2. Wiped server B's disks and installed Windows 2008 and SQL Server 2008 as a single-node cluster. Since we do not want the two clusters to communicate yet, we left the cluster's private network "heartbeat" adapter unconfigured. The cluster comes up and functions normally.
    3. Moved all databases to the new cluster. Cluster continues to function normally.
    4. Turned off server A (old cluster) in preparation for rebuilding it as the second node of the new cluster. The SQL Server instance on server B (new cluster) locks up, even though it should have no knowledge of or interaction with server A.
    5. Restarted server A. The SQL Server instance on server B (new cluster) immediately begins working again.

    Things we have tried:

    - The new cluster's name responds to ping and NETBIOS requests, even while the SQL Server is hung.
    - We have confirmed that no IP address is assigned to the old heartbeat adapter, and it is not pulling an IP address from DHCP. Disabling the heartbeat's network card has the same effect.
    - No errors were generated in any logs, Windows or SQL.
    - When the error first occurred, it sat in the hung state for quite some time (well over 10 minutes) before anyone figured out what was going on. This would seem to eliminate any sort of normal cluster timeout in which it would have been searching for the other node (even if one had been configured).

    Server B is running Windows 2008 SP2, fully patched, and SQL Server 2008 SP1 CU7 (10.0.2775).

    Read the article

  • Selective Pointer device remapping in linux

    - by user6368
    I just got an HP 2710p (HP tablet, with digitizer), and I've played around with Linux for a while now, so I thought I would go ahead and install it. Everything works fine, except normal tablet functions, which is to be expected. I'm working on the screen rotation, and there are on-screen keyboards, etc., but I'm having issues with the stylus. I can tap and left-click with the stylus as normal, but the side button (which in Windows functions as a right mouse button) appears as a 'button 2' to xev (a middle/scroll-wheel button). I can switch 'button 2' and 'button 3' universally using xmodmap, but I'd like to do so exclusively for the stylus so I don't screw up regular pointing devices. Altering xorg.conf (which is surprisingly bare) with the recommended sections (adding sections for each of the stylus buttons) does nothing. I'm running CrunchBang, which is an Ubuntu/Debian variant with Openbox as the window manager. Thanks.

    Also, as a separate note, does anybody know how to detect when I rotate and/or latch the lid shut? I was thinking maybe I could run a script to switch the buttons when I close it, but I can't find any information.
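
    One avenue that may be worth trying (an assumption on my part, not something from the post): the xinput utility can remap buttons per device, which would leave other pointers alone. A small Python wrapper; the device name below is a guess, so check `xinput list` for the real one:

      import subprocess

      STYLUS = "Wacom ISDv4 Stylus"   # hypothetical device name

      def swap_stylus_buttons():
          # Physical button 2 emits logical button 3 (right-click) and
          # vice versa, for this one device only.
          subprocess.check_call(["xinput", "set-button-map", STYLUS,
                                 "1", "3", "2"])

      swap_stylus_buttons()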

    Read the article

  • How flexible is the 'indirect' function?

    - by Chuck
    My curiosity pushes me to ask this question. If I were to have a series of functions that referenced a different column in a worksheet but all ended on the same row of data, is there a way to point the 'row' part of a cell reference to a blank cell and use it as a variable, to show the results of the functions up to a desired row simultaneously? Example:

      =Average('worksheet 1'.$A$1:'worksheet 1'.$A100)
      =Max('worksheet 1'.$B$1:'worksheet 1'.$B100)
      =Min('worksheet 1'.$C$1:'worksheet 1'.$C100)
      =Sum('worksheet 1'.$D$1:'worksheet 1'.$D100)

    Pseudo formulas, where the starred part is what I want to make variable:

      =Average('worksheet 1'.$A$1:'worksheet 1'.$A*('worksheet 2'.$A$1)*)
      =Max('worksheet 1'.$B$1:'worksheet 1'.$B*('worksheet 2'.$A$1)*)
      =Min('worksheet 1'.$C$1:'worksheet 1'.$C*('worksheet 2'.$A$1)*)
      =Sum('worksheet 1'.$D$1:'worksheet 1'.$D*('worksheet 2'.$A$1)*)

    Here 'worksheet 2'.$A$1 would only contain a number corresponding to a row in 'worksheet 1'. After stumbling upon and playing with the INDIRECT() function, I have only been able to replace the entire cell reference (column and row) with any success. The formula so far:

      =SUM('worksheet 1'.C3:INDIRECT(A1))

    where A1 is on 'worksheet 2' and contains a full cell reference pointing to 'worksheet 1'. Any pointers?
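
    For what it's worth, INDIRECT accepts any text, so a common trick is to build only the row part of the reference as a string. A sketch against the reference style used above (assuming the row number lives in 'worksheet 2'.$A$1):

      =SUM('worksheet 1'.$D$1:INDIRECT("'worksheet 1'.D" & 'worksheet 2'.$A$1))

    The same concatenation works for the Average/Max/Min variants by swapping the column letter.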

    Read the article

  • Does Guest WiFi on an Access Point make any sense?

    - by uos??
    I have a Belkin WiFi Router which offers a secondary Guest Access WiFi network. The idea, of course, is that the Guest network doesn't have access to the computers/devices on the main network. I also have a Comcast-issued Cable Modem/Router device with multiple wired ports, but no WiFi capabilities. I prefer to run only one router/DHCP/NAT instead of both the Comcast Router and the Belkin Router, so I can disable the Routing functions of the Belkin and allow the Comcast Router to handle them. But if I disable the Routing functions of the Belkin device, the Guest WiFi network is still available. Is this configuration just as secure as when the Belkin acts as a Router? I guess the question comes down to this: do Guest WiFi networks provide security by 1) only allowing requests to IPs found in front of the device, or do they work by 2) disallowing requests to IPs on the same subnet? 1) would mean that Guest WiFi on an access point provides no benefit; 2) would mean that the Guest WiFi functionality can work even if the device is just an access point. Or maybe something else entirely?

    Read the article

  • puma init.d for centos 6 fails with runuser: user /var/log/puma.log does not exist

    - by Rubytastic
    Trying to get an init.d/puma script to work on CentOS 6. It throws the error "runuser: user /var/log/puma.log does not exist". I run this from the /srv/books/current folder but it fails. I tried to debug the values, but I can't quite work out what is missing and why it throws this error.

      #! /bin/sh
      # puma - this script starts and stops the puma daemon
      #
      # chkconfig: - 85 15
      # description: Puma
      # processname: puma
      # config: /etc/puma.conf
      # pidfile: /srv/books/current/tmp/pids/puma.pid
      # Author: Darío Javier Cravero <[email protected]>
      #
      # Do NOT "set -e"
      # Original script https://github.com/puma/puma/blob/master/tools/jungle/puma
      # It was modified here by Stanislaw Pankevich <[email protected]>
      # to run on CentOS 5.5 boxes.
      # Script works perfectly on CentOS 5: script uses its native daemon().
      # Puma is being stopped/restarted by sending signals, control app is not used.

      # Source function library.
      . /etc/rc.d/init.d/functions

      # Source networking configuration.
      . /etc/sysconfig/network

      # Check that networking is up.
      [ "$NETWORKING" = "no" ] && exit 0

      # PATH should only include /usr/* if it runs after the mountnfs.sh script
      PATH=/usr/local/bin:/usr/local/sbin/:/sbin:/usr/sbin:/bin:/usr/bin
      DESC="Puma rack web server"
      NAME=puma
      DAEMON=$NAME
      SCRIPTNAME=/etc/init.d/$NAME
      CONFIG=/etc/puma.conf
      JUNGLE=`cat $CONFIG`
      RUNPUMA=/usr/local/bin/run-puma

      # Skipping the following non-CentOS string
      # Load the VERBOSE setting and other rcS variables
      # . /lib/init/vars.sh

      # CentOS does not have these functions natively
      log_daemon_msg() { echo "$@"; }
      log_end_msg() { [ $1 -eq 0 ] && RES=OK; logger ${RES:=FAIL}; }

      # Define LSB log_* functions.
      # Depend on lsb-base (>= 3.0-6) to ensure that this file is present.
      . /lib/lsb/init-functions

      #
      # Function that performs a clean up of puma.* files
      #
      cleanup() {
        echo "Cleaning up puma temporary files..."
        echo $1;
        PIDFILE=$1/tmp/puma/puma.pid
        STATEFILE=$1/tmp/puma/puma.state
        SOCKFILE=$1/tmp/puma/puma.sock
        rm -f $PIDFILE $STATEFILE $SOCKFILE
      }

      #
      # Function that starts the jungle
      #
      do_start() {
        log_daemon_msg "=> Running the jungle..."
        for i in $JUNGLE; do
          dir=`echo $i | cut -d , -f 1`
          user=`echo $i | cut -d , -f 2`
          config_file=`echo $i | cut -d , -f 3`
          if [ "$config_file" = "" ]; then
            config_file="$dir/puma/config.rb"
          fi
          log_file=`echo $i | cut -d , -f 4`
          if [ "$log_file" = "" ]; then
            log_file="$dir/puma/puma.log"
          fi
          do_start_one $dir $user $config_file $log_file
        done
      }

      do_start_one() {
        PIDFILE=$1/puma/puma.pid
        if [ -e $PIDFILE ]; then
          PID=`cat $PIDFILE`
          # If the puma isn't running, run it, otherwise restart it.
          if [ "`ps -A -o pid= | grep -c $PID`" -eq 0 ]; then
            do_start_one_do $1 $2 $3 $4
          else
            do_restart_one $1
          fi
        else
          do_start_one_do $1 $2 $3 $4
        fi
      }

      do_start_one_do() {
        log_daemon_msg "--> Woke up puma $1"
        log_daemon_msg "user $2"
        log_daemon_msg "log to $4"
        cleanup $1;
        daemon --user $2 $RUNPUMA $1 $3 $4
      }

      #
      # Function that stops the jungle
      #
      do_stop() {
        log_daemon_msg "=> Putting all the beasts to bed..."
        for i in $JUNGLE; do
          dir=`echo $i | cut -d , -f 1`
          do_stop_one $dir
        done
      }

      #
      # Function that stops the daemon/service
      #
      do_stop_one() {
        log_daemon_msg "--> Stopping $1"
        PIDFILE=$1/tmp/puma/puma.pid
        STATEFILE=$1/tmp/puma/puma.state
        echo $PIDFILE
        if [ -e $PIDFILE ]; then
          PID=`cat $PIDFILE`
          echo "Pid:"
          echo $PID
          if [ "`ps -A -o pid= | grep -c $PID`" -eq 0 ]; then
            log_daemon_msg "---> Puma $1 isn't running."
          else
            log_daemon_msg "---> About to kill PID `cat $PIDFILE`"
            # pumactl --state $STATEFILE stop
            # Many daemons don't delete their pidfiles when they exit.
            kill -9 $PID
          fi
          cleanup $1
        else
          log_daemon_msg "---> No puma here..."
        fi
        return 0
      }

      #
      # Function that restarts the jungle
      #
      do_restart() {
        for i in $JUNGLE; do
          dir=`echo $i | cut -d , -f 1`
          do_restart_one $dir
        done
      }

      #
      # Function that sends a SIGUSR2 to the daemon/service
      #
      do_restart_one() {
        PIDFILE=$1/tmp/puma/puma.pid
        i=`grep $1 $CONFIG`
        dir=`echo $i | cut -d , -f 1`
        if [ -e $PIDFILE ]; then
          log_daemon_msg "--> About to restart puma $1"
          # pumactl --state $dir/tmp/puma/state restart
          kill -s USR2 `cat $PIDFILE`
          # TODO Check if process exist
        else
          log_daemon_msg "--> Your puma was never playing... Let's get it out there first"
          user=`echo $i | cut -d , -f 2`
          config_file=`echo $i | cut -d , -f 3`
          if [ "$config_file" = "" ]; then
            config_file="$dir/config/puma.rb"
          fi
          log_file=`echo $i | cut -d , -f 4`
          if [ "$log_file" = "" ]; then
            log_file="$dir/log/puma.log"
          fi
          do_start_one $dir $user $config_file $log_file
        fi
        return 0
      }

      #
      # Function that statuses the jungle
      #
      do_status() {
        for i in $JUNGLE; do
          dir=`echo $i | cut -d , -f 1`
          do_status_one $dir
        done
      }

      #
      # Function that sends a SIGUSR2 to the daemon/service
      #
      do_status_one() {
        PIDFILE=$1/tmp/puma/pid
        i=`grep $1 $CONFIG`
        dir=`echo $i | cut -d , -f 1`
        if [ -e $PIDFILE ]; then
          log_daemon_msg "--> About to status puma $1"
          pumactl --state $dir/tmp/puma/state stats
          # kill -s USR2 `cat $PIDFILE`
          # TODO Check if process exist
        else
          log_daemon_msg "--> $1 isn't there :(..."
        fi
        return 0
      }

      do_add() {
        str=""
        # App's directory
        if [ -d "$1" ]; then
          if [ "`grep -c "^$1" $CONFIG`" -eq 0 ]; then
            str=$1
          else
            echo "The app is already being managed. Remove it if you want to update its config."
            exit 1
          fi
        else
          echo "The directory $1 doesn't exist."
          exit 1
        fi
        # User to run it as
        if [ "`grep -c "^$2:" /etc/passwd`" -eq 0 ]; then
          echo "The user $2 doesn't exist."
          exit 1
        else
          str="$str,$2"
        fi
        # Config file
        if [ "$3" != "" ]; then
          if [ -e $3 ]; then
            str="$str,$3"
          else
            echo "The config file $3 doesn't exist."
            exit 1
          fi
        fi
        # Log file
        if [ "$4" != "" ]; then
          str="$str,$4"
        fi
        # Add it to the jungle
        echo $str >> $CONFIG
        log_daemon_msg "Added a Puma to the jungle: $str. You still have to start it though."
      }

      do_remove() {
        if [ "`grep -c "^$1" $CONFIG`" -eq 0 ]; then
          echo "There's no app $1 to remove."
        else
          # Stop it first.
          do_stop_one $1
          # Remove it from the config.
          sed -i "\\:^$1:d" $CONFIG
          log_daemon_msg "Removed a Puma from the jungle: $1."
        fi
      }

      case "$1" in
        start)
          [ "$VERBOSE" != no ] && log_daemon_msg "Starting $DESC" "$NAME"
          if [ "$#" -eq 1 ]; then
            do_start
          else
            i=`grep $2 $CONFIG`
            dir=`echo $i | cut -d , -f 1`
            user=`echo $i | cut -d , -f 2`
            config_file=`echo $i | cut -d , -f 3`
            if [ "$config_file" = "" ]; then
              config_file="$dir/config/puma.rb"
            fi
            log_file=`echo $i | cut -d , -f 4`
            if [ "$log_file" = "" ]; then
              log_file="$dir/log/puma.log"
            fi
            do_start_one $dir $user $config_file $log_file
          fi
          case "$?" in
            0|1) [ "$VERBOSE" != no ] && log_end_msg 0 ;;
            2) [ "$VERBOSE" != no ] && log_end_msg 1 ;;
          esac
          ;;
        stop)
          [ "$VERBOSE" != no ] && log_daemon_msg "Stopping $DESC" "$NAME"
          if [ "$#" -eq 1 ]; then
            do_stop
          else
            i=`grep $2 $CONFIG`
            dir=`echo $i | cut -d , -f 1`
            do_stop_one $dir
          fi
          case "$?" in
            0|1) [ "$VERBOSE" != no ] && log_end_msg 0 ;;
            2) [ "$VERBOSE" != no ] && log_end_msg 1 ;;
          esac
          ;;
        status)
          # TODO Implement.
          log_daemon_msg "Status $DESC" "$NAME"
          if [ "$#" -eq 1 ]; then
            do_status
          else
            i=`grep $2 $CONFIG`
            dir=`echo $i | cut -d , -f 1`
            do_status_one $dir
          fi
          case "$?" in
            0|1) [ "$VERBOSE" != no ] && log_end_msg 0 ;;
            2) [ "$VERBOSE" != no ] && log_end_msg 1 ;;
          esac
          ;;
        restart)
          log_daemon_msg "Restarting $DESC" "$NAME"
          if [ "$#" -eq 1 ]; then
            do_restart
          else
            i=`grep $2 $CONFIG`
            dir=`echo $i | cut -d , -f 1`
            do_restart_one $dir
          fi
          case "$?" in
            0|1) [ "$VERBOSE" != no ] && log_end_msg 0 ;;
            2) [ "$VERBOSE" != no ] && log_end_msg 1 ;;
          esac
          ;;
        add)
          if [ "$#" -lt 3 ]; then
            echo "Please, specify the app's directory and the user that will run it at least."
            echo "  Usage: $SCRIPTNAME add /path/to/app user /path/to/app/config/puma.rb /path/to/app/config/log/puma.log"
            echo "  config and log are optionals."
            exit 1
          else
            do_add $2 $3 $4 $5
          fi
          case "$?" in
            0|1) [ "$VERBOSE" != no ] && log_end_msg 0 ;;
            2) [ "$VERBOSE" != no ] && log_end_msg 1 ;;
          esac
          ;;
        remove)
          if [ "$#" -lt 2 ]; then
            echo "Please, specify the app's directory to remove."
            exit 1
          else
            do_remove $2
          fi
          case "$?" in
            0|1) [ "$VERBOSE" != no ] && log_end_msg 0 ;;
            2) [ "$VERBOSE" != no ] && log_end_msg 1 ;;
          esac
          ;;
        *)
          echo "Usage:" >&2
          echo "  Run the jungle: $SCRIPTNAME {start|stop|status|restart}" >&2
          echo "  Add a Puma: $SCRIPTNAME add /path/to/app user /path/to/app/config/puma.rb /path/to/app/config/log/puma.log"
          echo "  config and log are optionals."
          echo "  Remove a Puma: $SCRIPTNAME remove /path/to/app"
          echo "  On a Puma: $SCRIPTNAME {start|stop|status|restart} PUMA-NAME" >&2
          exit 3
          ;;
      esac

      :
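
    A guess at the failure, reading the error text against the script (not something stated in the post): daemon --user $2 receives whatever sits in field 2 of the matching /etc/puma.conf line, so "runuser: user /var/log/puma.log does not exist" suggests that line's comma-separated fields are not in dir,user,config,log order, or that `grep $2 $CONFIG` matched the wrong line. A quick field-order check, sketched in Python (the 'deploy' user and paths are hypothetical):

      # Expected line format: dir,user,config_file,log_file - e.g.
      #   /srv/books/current,deploy,/srv/books/current/config/puma.rb,/var/log/puma.log
      import pwd

      def check_jungle(path="/etc/puma.conf"):
          for line in open(path):
              fields = line.strip().split(",")
              if len(fields) < 2:
                  print("too few fields:", line.strip())
                  continue
              try:
                  pwd.getpwnam(fields[1])   # the same lookup runuser performs
              except KeyError:
                  print("field 2 is not a user:", fields[1])

      check_jungle()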

    Read the article

  • How can I change the default location/action of 'Open Outlook Data File' in Outlook 2010?

    - by Chadddada
    I have recently deployed a Remote Desktop Host server that functions as a remote Microsoft Office 2010 workspace for users. As part of locking down this server I have installed all programs on the D: drive and, through the use of Group Policy, hidden all the drives on the server from standard users. In addition to hiding these drives, I am not allowing users to save anything locally (on the server) or open Libraries. However, one of the functions of the server is to provide the Outlook client. Often users will have the .PST file stored on a network location and want to open this in Outlook. Can I change the default action or location that File > Open > Open Outlook Data File looks in or tries to pull the file from? The default location seems to be under Users / Libraries. When clicking 'Open' you get a warning: "This operation has been cancelled due to restrictions in effect on this computer." Clicking OK drops the user into a small menu that shows attached network drives under Computer. Can I instead have the 'Open' click drop the users in a defined network drive, or just open Computer and allow them to select a share? I don't want them to see the error message. A solution that looks to have been used for Office 2000/03 is:

      Key: HKEY_CURRENT_USER\Software\Microsoft\Office\<version>\Outlook
      Value name: ForceOSTPath
      Value type: REG_EXPAND_SZ
      Value: path to your storage folder

    I am not sure if there is a better way to do this now, or if this even works with Office 2010.

    Read the article

  • umask seems to vary by user

    - by paullb
    I've got a development Ubuntu system for which I have several users: myself (with full sudo) and about 5 other users. (I've set up the system so everything in this respect is still at its default setting.) I'm trying to set the system up so that multiple people can collaborate in a single directory by using grouping, and I want the default permissions to be 664. However, when some users edit files the permissions were 644. After a lot of investigating, most users have a umask (checked at the prompt) of 0002, and when they create files they are 664 (as expected), but there are 2 (myself and one other) who have a 0022 umask (so the files that come out are 644 and nobody else can write to them). I've looked everywhere but can't figure out why a couple of users wind up with a different umask (e.g. there is nothing in .bash_profile or anything like that). Any ideas for the source of the discrepancy?

    /etc/bashrc:

      if [ $UID -gt 199 ] && [ "`id -gn`" = "`id -un`" ]; then
          umask 002
      else
          umask 022
      fi

    /etc/profile:

      if [ $UID -gt 199 ] && [ "`id -gn`" = "`id -un`" ]; then
          umask 002
      else
          umask 022
      fi

    EDIT: My (bad) ~/.bashrc:

      # .bashrc

      # Source global definitions
      if [ -f /etc/bashrc ]; then
          . /etc/bashrc
      fi

      # User specific aliases and functions
      export LANG=en_US.utf8

    Other user's (good) .bashrc:

      # .bashrc

      # Source global definitions
      if [ -f /etc/bashrc ]; then
          . /etc/bashrc
      fi

      # User specific aliases and functions
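
    Given the quoted condition, one plausible source of the discrepancy (an inference, not something visible in the post): for the two affected accounts, `id -gn` does not match `id -un` - they may share a common primary group such as "users" instead of a per-user private group - so the else branch sets 022. A quick check, sketched in Python:

      import os, pwd, grp

      pw = pwd.getpwuid(os.getuid())
      group = grp.getgrgid(pw.pw_gid).gr_name
      # Mirrors the /etc/profile test: umask 002 only if UID > 199 and the
      # primary group name equals the user name.
      if pw.pw_uid > 199 and pw.pw_name == group:
          print("expect umask 002")
      else:
          print("expect umask 022 (uid=%d user=%s group=%s)"
                % (pw.pw_uid, pw.pw_name, group))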

    Read the article

  • sudoer scheme to allow useful access to another web developer yet retain future control of a virtual server

    - by Tchalvak
    Background: Virtual Private Server. I have a virtual private server that I'm looking to host multiple websites on, and to provide access to another web developer. I don't care about putting too many constraints on him, though I wouldn't mind isolating the site that he'll be developing from other sites on the server that I will develop.

    The problem: retain control. Mainly what I want is to make sure that I retain control over the server in the future. I want to reserve the ability to create/promote/demote accounts and other administrative functions that don't deal with web software. If I make him an admin, he can sudo su - and become root and remove root control from me, for example.

    I need him not to be able to:

    - take away other admin permissions
    - change the root password
    - have control over other security/administrative functions

    I would like him to still be able to:

    - install software (through apt-get)
    - restart apache
    - access mysql
    - configure mysql/apache
    - reboot
    - edit web-development configuration type files in /etc/

    Other standard setups would be happily considered. I've never really set up a good sudoers file, so simple example setups would be very useful, even if they're only somewhat similar to the settings that I'm hoping for above.

    Edit: I have not yet finalized permissions, so standard, useful sudo setups are certainly an option; the lists above are more what I'm hoping I can do, and I don't know that that setup can be done. I'm sure that people have solved this type of problem before somehow, though, and I'd like to go with something somewhat tested as opposed to something I've homegrown.
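
    Not an authoritative policy, but a minimal sudoers sketch along the lines requested above (the user name webdev and the command paths are assumptions; edit with visudo and verify the paths on the actual box):

      # /etc/sudoers.d/webdev - hypothetical example
      Cmnd_Alias WEBDEV_CMDS = /usr/bin/apt-get, \
                               /usr/sbin/apache2ctl, \
                               /sbin/reboot
      webdev ALL = (root) WEBDEV_CMDS

    One caveat worth stating: anything that can spawn a shell or an editor (and apt-get runs package maintainer scripts as root) can be used to escalate, so a list like this limits accidents more than it limits a determined admin-equivalent user. Group write permissions on specific files may be a safer route for the /etc/ edits than sudo.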

    Read the article

  • How can I automate or script daily downloads for any new anti-virus databases, and then have the program scan my drive?

    - by Macgrimm
    Howdy all Super Users. I humbly ask if any Super User can direct this long-time, gray-haired Apple Tech in the right direction on this issue. I believe there probably are many ways to skin this cat, but I am looking for simply the best, most unattended way to get it done. Any help will be greatly appreciated. Also, I know there are much better softwares out there for the Mac, so please don't go there! The politics of this company dictate which anti-virus we have to use. Anyway, without any further wait: basically I am trying to automate 2 very important functions of McAfee Anti-Virus for Mac. First I want to automate the process of retrieving new virus definition files, and second I want to automate the process of scanning for viruses. It turns out that in McAfee Anti-Virus for the Mac, both are manual functions, left up to the user (per user account) to perform. Depending on all of about 150 Mac users to perform these 2 tasks themselves gets around 65% compliance. My question then is: I can use the command line, such as (open /Applications/McAfee\ Security.app), to open up the Security Console, but how can I make McAfee go out and grab the definition files and scan the computer? I have to admit I am at a crossroads and "Macaltimers" has set in. I would really appreciate it if any of you Super Users can help me out with this. Thanks to all up front, Macgrimm
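
    One way this is commonly handled, sketched under assumptions rather than from McAfee's documented interface (the scanner path and flags below are placeholders to be replaced with whatever the product actually installs): wrap the update and scan steps in a script and schedule it system-wide with a launchd daemon or root cron entry, which also sidesteps the per-user-account problem.

      import datetime, subprocess

      SCANNER = "/usr/local/bin/uvscan"   # hypothetical CLI scanner path
      UPDATE = [SCANNER, "--update"]      # hypothetical flags
      SCAN = [SCANNER, "--recursive", "/Users"]

      def daily_job():
          # Keep a dated log per run for compliance records.
          with open("/var/log/av-%s.log" % datetime.date.today(), "w") as log:
              for cmd in (UPDATE, SCAN):
                  subprocess.call(cmd, stdout=log, stderr=subprocess.STDOUT)

      if __name__ == "__main__":
          daily_job()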

    Read the article

  • Receiving and processing SMS messages through a script?

    - by ShankarG
    I am attempting to set up a system to receive and process SMS messages automatically. The system is intended for use in a context (an unfunded migrant workers' union in India) where both finances and sysadmin skills are extremely constrained (I would be the only person, in the near future, who would be administering the system). The intention is to make some functions - registration of members, generation of ID cards, communication of alerts and other information - easier. However, for receiving and sending SMS, I have not been able to find any email-to-SMS or other kind of gateway that functions in India. Perhaps there is one (edit: apparently Clickatell does have an India service, but the prices appear astronomical). If not, can one rely on a USB mobile modem (such as those provided by many mobile providers in India)? It seems like, with utilities such as gammu or bitpim, SMS operations on such a modem could be scripted. Is this actually feasible, though? Thanks in advance for your thoughts and suggestions. (Edit: the original first question was removed since the two questions had little to do with each other; it has been asked separately.)
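
    If the USB modem route proves workable, one common pattern is to let gammu-smsd poll the modem and write each inbound message to a file, then post-process those files with a script. A sketch of the processing side in Python (the inbox path follows gammu-smsd's files-backend convention, so check your smsdrc; the "REG" keyword and CSV output are invented for illustration):

      import os

      INBOX = "/var/spool/gammu/inbox"   # files-backend directory; check smsdrc

      def register_member(name):
          with open("members.csv", "a") as f:
              f.write(name + "\n")

      def process_messages():
          for name in sorted(os.listdir(INBOX)):
              path = os.path.join(INBOX, name)
              with open(path) as f:
                  text = f.read().strip()
              # Hypothetical convention: "REG <member name>" registers a member.
              if text.upper().startswith("REG "):
                  register_member(text[4:])
              os.remove(path)   # consume the message

      process_messages()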

    Read the article

  • file doesn't open, running outside of debugger results in seg fault (c++)

    - by misterich
    Hello (and thanks in advance). I'm in a bit of a quandary; I can't seem to figure out why I'm seg faulting. A couple of notes:

    - It's for a course, and sadly I am required to use C-strings instead of std::string. Please don't fix my code (I won't learn that way and I will keep bugging you); please just point out the flaws in my logic and suggest a different function/way.
    - Platform: gcc version 4.4.1 on SUSE Linux 11.2 (2.6.31 kernel)

    Here's the code. main.cpp:

      // INCLUDES (C/C++ Std Library)
      #include <cstdlib>   /// EXIT_SUCCESS, EXIT_FAILURE
      #include <iostream>  /// cin, cout, ifstream
      #include <cassert>   /// assert

      // DEPENDENCIES (custom header files)
      #include "dict.h"    /// Header for the dictionary class

      // PRE-PROCESSOR CONSTANTS
      #define ENTER '\n'   /// Used to accept new lines, quit program.
      #define SPACE ' '    /// One way to end the program

      // CUSTOM DATA TYPES
      /// File Namespace -- keep it local
      namespace
      {
          /// Possible program prompts to display for the user.
          enum FNS_Prompts
          {
              fileName_,  /// prints out the name of the file
              noFile_,    /// no file was passed to the program
              tooMany_,   /// more than one file was passed to the program
              noMemory_,  /// Not enough memory to use the program
              usage_,     /// how to use the program
              word_,      /// ask the user to define a word.
              notFound_,  /// the word is not in the dictionary
              done_,      /// the program is closing normally
          };
      }

      using namespace std;  /// Nothing special in the way of namespaces

      // FUNCTIONS
      /** prompt() prompts the user to do something, uses enum Prompts for parameter. */
      void prompt(FNS_Prompts msg /** determines the prompt to use*/)
      {
          switch(msg)
          {
              case fileName_ :
                  cout << ENTER << ENTER << "The file name is: ";
                  break;
              case noFile_ :
                  cout << ENTER << ENTER << "...Sorry, a dictionary file is needed. Try again." << endl;
                  break;
              case tooMany_ :
                  cout << ENTER << ENTER << "...Sorry, you can only specify one dictionary file. Try again." << endl;
                  break;
              case noMemory_ :
                  cout << ENTER << ENTER << "...Sorry, there isn't enough memory available to run this program." << endl;
                  break;
              case usage_ :
                  cout << "USAGE:" << endl << "   lookup.exe [dictionary file name]" << endl << endl;
                  break;
              case done_ :
                  cout << ENTER << ENTER << "like Master P says, \"Word.\"" << ENTER << endl;
                  break;
              case word_ :
                  cout << ENTER << ENTER << "Enter a word in the dictionary to get it's definition." << ENTER
                       << "Enter \"?\" to get a sorted list of all words in the dictionary." << ENTER
                       << "... Press the Enter key to quit the program: ";
                  break;
              case notFound_ :
                  cout << ENTER << ENTER << "...Sorry, that word is not in the dictionary." << endl;
                  break;
              default :
                  cout << ENTER << ENTER << "something passed an invalid enum to prompt()." << endl;
                  assert(false);  /// something passed in an invalid enum
          }
      }

      /** useDictionary() uses the dictionary created by createDictionary
       * - prompts user to lookup a word
       * - ends when the user enters an empty word */
      void useDictionary(Dictionary &d)
      {
          char *userEntry = new char;  /// user's input on the command line
          if( !userEntry )  // check the pointer to the heap
          {
              cout << ENTER << MEM_ERR_MSG << endl;
              exit(EXIT_FAILURE);
          }
          do
          {
              prompt(word_);
              // test code
              cout << endl << "----------------------------------------" << endl << "Enter something: ";
              cin.getline(userEntry, INPUT_LINE_MAX_LEN, ENTER);
              cout << ENTER << userEntry << endl;
          } while ( userEntry[0] != NIL && userEntry[0] != SPACE );

          // GARBAGE COLLECTION
          delete[] userEntry;
      }

      /** Program Entry
       * Reads in the required, single file from the command prompt.
       * - If there is no file, state such and error out.
       * - If there is more than one file, state such and error out.
       * - If there is a single file:
       *   - Create the database object
       *   - Populate the database object
       *   - Prompt the user for entry
       * main() will return EXIT_SUCCESS upon termination. */
      int main(int argc,     /// the number of files being passed into the program
               char *argv[]  /// pointer to the filename being passed into the program
               )
      {
          // EXECUTE
          /* Testing code * /
          char tempFile[INPUT_LINE_MAX_LEN] = {NIL};
          cout << "enter filename: ";
          cin.getline(tempFile, INPUT_LINE_MAX_LEN, '\n');
          */
          // uncomment after successful debugging
          if(argc <= 1)
          {
              prompt(noFile_);
              prompt(usage_);
              return EXIT_FAILURE;  /// no file was passed to the program
          }
          else if(argc > 2)
          {
              prompt(tooMany_);
              prompt(usage_);
              return EXIT_FAILURE;  /// more than one file was passed to the program
          }
          else
          {
              prompt(fileName_);
              cout << argv[1];  // print out name of dictionary file
              if( !argv[1] )
              {
                  prompt(noFile_);
                  prompt(usage_);
                  return EXIT_FAILURE;  /// file does not exist
              }
              /*
              file.open( argv[1] );               // open file
              numEntries >> in.getline(file);     // determine number of dictionary objects to create
              file.close();                       // close file
              Dictionary[ numEntries ](argv[1]);  // create the dictionary object
              */
              // TEMPORARY FILE FOR TESTING!!!!
              //Dictionary scrabble(tempFile);
              Dictionary scrabble(argv[1]);  // creaate the dicitonary object
              //*/
              useDictionary(scrabble);  // prompt the user, use the dictionary
          }
          // exit
          return EXIT_SUCCESS;  /// terminate program.
      }

    dict.h:

      #ifndef DICT_H
      #define DICT_H
      // DEPENDENCIES (Custom header files)
      #include "entry.h"  /// class for dictionary entries

      // PRE-PROCESSOR MACROS
      #define INPUT_LINE_MAX_LEN 256  /// Maximum length of each line in the dictionary file

      class Dictionary
      {
      public :
          //
          // Do NOT modify the public section of this class
          //
          typedef void (*WordDefFunc)(const char *word, const char *definition);

          Dictionary( const char *filename );
          ~Dictionary();

          const char *lookupDefinition( const char *word );
          void forEach( WordDefFunc func );

      private :
          //
          // You get to provide the private members
          //

          // VARIABLES
          int m_numEntries;        /// stores the number of entries in the dictionary
          Entry *m_DictEntry_ptr;  /// points to an array of class Entry

          // Private Functions
      };
      #endif

    dict.cpp:

      // INCLUDES (C/C++ Std Library)
      #include <iostream>  /// cout, getline
      #include <fstream>   /// ifstream
      #include <cstring>   /// strchr

      // DEPENDENCIES (custom header files)
      #include "dict.h"    /// Header file required by assignment
      //#include "entry.h"   /// Dicitonary Entry Class

      // PRE-PROCESSOR MACROS
      #define COMMA ','   /// Delimiter for file
      #define ENTER '\n'  /// Carriage return character
      #define FILE_ERR_MSG "The data file could not be opened. Program will now terminate."
      #pragma warning(disable : 4996)  /// turn off MS compiler warning about strcpy()

      using namespace std;

      // PRIVATE MEMBER FUNCTIONS
      /**
       * Sorts the dictionary entries.
       */
      /*
      static void sortDictionary(?)
      {
          // sort through the words using qsort
      }
      */

      /** NO LONGER NEEDED??
       * parses out the length of the first cell in a delimited cell
       * /
      int getWordLength(char *str /// string of data to parse
                        )
      {
          return strcspn(str, COMMA);
      }
      */

      // PUBLIC MEMBER FUNCTIONS
      /** constructor for the class
       * - opens/reads in file
       * - creates initializes the array of member vars
       * - creates pointers to entry objects
       * - stores pointers to entry objects in member var
       * - ? sort now or later?
       */
      Dictionary::Dictionary( const char *filename )
      {
          // Create a filestream, open the file to be read in
          ifstream dataFile(filename, ios::in );
          /*
          if( dataFile.fail() )
          {
              cout << FILE_ERR_MSG << endl;
              exit(EXIT_FAILURE);
          }
          */
          if( dataFile.is_open() )
          {
              // read first line of data
              // TEST CODE in.getline(dataFile, INPUT_LINE_MAX_LEN) >> m_numEntries;
              // TEST CODE char temp[INPUT_LINE_MAX_LEN] = {NIL};
              // TEST CODE dataFile.getline(temp,INPUT_LINE_MAX_LEN,'\n');
              dataFile >> m_numEntries;  /** Number of terms in the dictionary file
                                          * \todo find out how many lines in the file,
                                          * subtract one, ingore first line */

              //create the array of entries
              m_DictEntry_ptr = new Entry[m_numEntries];
              // check for valid memory allocation
              if( !m_DictEntry_ptr )
              {
                  cout << MEM_ERR_MSG << endl;
                  exit(EXIT_FAILURE);
              }

              // loop thru each line of the file, parsing words/def's and
              // populating entry objects
              for(int EntryIdx = 0; EntryIdx < m_numEntries; ++EntryIdx)
              {
                  // VARIABLES
                  char *tempW_ptr;  /// points to a temporary word
                  char *tempD_ptr;  /// points to a temporary def
                  char *w_ptr;      /// points to the word in the Entry object
                  char *d_ptr;      /// points to the definition in the Entry
                  int tempWLen;     /// length of the temp word string
                  int tempDLen;     /// length of the temp def string
                  char tempLine[INPUT_LINE_MAX_LEN] = {NIL};  /// stores a single line from the file

                  // EXECUTE
                  // getline(dataFile, tempLine) // get a "word,def" line from the file
                  dataFile.getline(tempLine, INPUT_LINE_MAX_LEN);  // get a "word,def" line from the file

                  // Parse the string
                  tempW_ptr = tempLine;                 // point the temp word pointer at the first char in the line
                  tempD_ptr = strchr(tempLine, COMMA);  // point the def pointer at the comma
                  *tempD_ptr = NIL;                     // replace the comma with a NIL
                  ++tempD_ptr;                          // increment the temp def pointer

                  // find the string lengths... +1 to account for terminator
                  tempWLen = strlen(tempW_ptr) + 1;
                  tempDLen = strlen(tempD_ptr) + 1;

                  // Allocate heap memory for the term and defnition
                  w_ptr = new char[ tempWLen ];
                  d_ptr = new char[ tempDLen ];
                  // check memory allocation
                  if( !w_ptr && !d_ptr )
                  {
                      cout << MEM_ERR_MSG << endl;
                      exit(EXIT_FAILURE);
                  }

                  // copy the temp word, def into the newly allocated memory and terminate the strings
                  strcpy(w_ptr,tempW_ptr);
                  w_ptr[tempWLen] = NIL;
                  strcpy(d_ptr,tempD_ptr);
                  d_ptr[tempDLen] = NIL;

                  // set the pointers for the entry objects
                  m_DictEntry_ptr[ EntryIdx ].setWordPtr(w_ptr);
                  m_DictEntry_ptr[ EntryIdx ].setDefPtr(d_ptr);
              }
              // close the file
              dataFile.close();
          }
          else
          {
              cout << ENTER << FILE_ERR_MSG << endl;
              exit(EXIT_FAILURE);
          }
      }

      /**
       * cleans up dynamic memory
       */
      Dictionary::~Dictionary()
      {
          delete[] m_DictEntry_ptr;  /// thou shalt not have memory leaks.
      }

      /**
       * Looks up definition
       */
      /*
      const char *lookupDefinition( const char *word )
      {
          // print out the word ---- definition
      }
      */

      /**
       * prints out the entire dictionary in sorted order
       */
      /*
      void forEach( WordDefFunc func )
      {
          // to sort before or now.... that is the question
      }
      */

    entry.h:

      #ifndef ENTRY_H
      #define ENTRY_H
      // INCLUDES (C++ Std lib)
      #include <cstdlib>  /// EXIT_SUCCESS, NULL

      // PRE-PROCESSOR MACROS
      #define NIL '\0'  /// C-String terminator
      #define MEM_ERR_MSG "Memory allocation has failed. Program will now terminate."

      // CLASS DEFINITION
      class Entry
      {
      public:
          Entry(void) : m_word_ptr(NULL), m_def_ptr(NULL) { /* default constructor */ };

          void setWordPtr(char *w_ptr);  /// sets the pointer to the word - only if the pointer is empty
          void setDefPtr(char *d_ptr);   /// sets the ponter to the definition - only if the pointer is empty

          /// returns what is pointed to by the word pointer
          char getWord(void) const { return *m_word_ptr; }
          /// returns what is pointed to by the definition pointer
          char getDef(void) const { return *m_def_ptr; }

      private:
          char *m_word_ptr;  /** points to a dictionary word */
          char *m_def_ptr;   /** points to a dictionary definition */
      };
      #endif

    entry.cpp:

      // DEPENDENCIES (custom header files)
      #include "entry.h"  /// class header file

      // PUBLIC FUNCTIONS
      /*
       * only change the word member var if it is in its initial state
       */
      void Entry::setWordPtr(char *w_ptr)
      {
          if(m_word_ptr == NULL)
          {
              m_word_ptr = w_ptr;
          }
      }

      /*
       * only change the def member var if it is in its initial state
       */
      void Entry::setDefPtr(char *d_ptr)
      {
          if(m_def_ptr == NULL)
          {
              m_word_ptr = d_ptr;
          }
      }

    Read the article

  • Dealing with external processes

    - by Jesse Aldridge
    I've been working on a gui app that needs to manage external processes. Working with external processes leads to a lot of issues that can make a programmer's life difficult. I feel like maintenence on this app is taking an unacceptably long time. I've been trying to list the things that make working with external processes difficult so that I can come up with ways of mitigating the pain. This kind of turned into a rant which I thought I'd post here in order to get some feedback and to provide some guidance to anybody thinking about sailing into these very murky waters. Here's what I've got so far: Output from the child can get mixed up with output from the parent. This can make both outputs misleading and hard to read. It can be hard to tell what came from where. It becomes harder to figure out what's going on when things are asynchronous. Here's a contrived example: import textwrap, os, time from subprocess import Popen test_path = 'test_file.py' with open(test_path, 'w') as file: file.write(textwrap.dedent(''' import time for i in range(3): print 'Hello %i' % i time.sleep(1)''')) proc = Popen('python -B "%s"' % test_path) for i in range(3): print 'Hello %i' % i time.sleep(1) os.remove(test_path) I guess I could have the child process write its output to a file. But it can be annoying to have to open up a file every time I want to see the result of a print statement. If I have code for the child process I could add a label, something like print 'child: Hello %i', but it can be annoying to do that for every print. And it adds some noise to the output. And of course I can't do it if I don't have access to the code. I could manually manage the process output. But then you open up a huge can of worms with threads and polling and stuff like that. A simple solution is to treat processes like synchronous functions, that is, no further code executes until the process completes. In other words, make the process block. But that doesn't work if you're building a gui app. Which brings me to the next problem... Blocking processes cause the gui to become unresponsive. import textwrap, sys, os from subprocess import Popen from PyQt4.QtGui import * from PyQt4.QtCore import * test_path = 'test_file.py' with open(test_path, 'w') as file: file.write(textwrap.dedent(''' import time for i in range(3): print 'Hello %i' % i time.sleep(1)''')) app = QApplication(sys.argv) button = QPushButton('Launch process') def launch_proc(): # Can't move the window until process completes proc = Popen('python -B "%s"' % test_path) proc.communicate() button.connect(button, SIGNAL('clicked()'), launch_proc) button.show() app.exec_() os.remove(test_path) Qt provides a process wrapper of its own called QProcess which can help with this. You can connect functions to signals to capture output relatively easily. This is what I'm currently using. But I'm finding that all these signals behave suspiciously like goto statements and can lead to spaghetti code. I think I want to get sort-of blocking behavior by having the 'finished' signal from QProcess call a function containing all the code that comes after the process call. I think that should work but I'm still a bit fuzzy on the details... Stack traces get interrupted when you go from the child process back to the parent process. If a normal function screws up, you get a nice complete stack trace with filenames and line numbers. If a subprocess screws up, you'll be lucky if you get any output at all. 
You end up having to do a lot more detective work everytime something goes wrong. Speaking of which, output has a way of disappearing when dealing external processes. Like if you run something via the windows 'cmd' command, the console will pop up, execute the code, and then disappear before you have a chance to see the output. You have to pass the /k flag to make it stick around. Similar issues seem to crop up all the time. I suppose both problems 3 and 4 have the same root cause: no exception handling. Exception handling is meant to be used with functions, it doesn't work with processes. Maybe there's some way to get something like exception handling for processes? I guess that's what stderr is for? But dealing with two different streams can be annoying in itself. Maybe I should look into this more... Processes can hang and stick around in the background without you realizing it. So you end up yelling at your computer cuz it's going so slow until you finally bring up your task manager and see 30 instances of the same process hanging out in the background. Also, hanging background processes can interefere with other instances of the process in various fun ways, such as causing permissions errors by holding a handle to a file or someting like that. It seems like an easy solution to this would be to have the parent process kill the child process on exit if the child process didn't close itself. But if the parent process crashes, cleanup code might not get called and the child can be left hanging. Also, if the parent waits for the child to complete, and the child is in an infinite loop or something, you can end up with two hanging processes. This problem can tie in to problem 2 for extra fun, causing your gui to stop responding entirely and force you to kill everything with the task manager. F***ing quotes Parameters often need to be passed to processes. This is a headache in itself. Especially if you're dealing with file paths. Say... 'C:/My Documents/whatever/'. If you don't have quotes, the string will often be split at the space and interpreted as two arguments. If you need nested quotes you can use ' and ". But if you need to use more than two layers of quotes, you have to do some nasty escaping, for example: "cmd /k 'python \'path 1\' \'path 2\''". A good solution to this problem is passing parameters as a list rather than as a single string. Subprocess allows you to do this. Can't easily return data from a subprocess. You can use stdout of course. But what if you want to throw a print in there for debugging purposes? That's gonna screw up the parent if it's expecting output formatted a certain way. In functions you can print one string and return another and everything works just fine. Obscure command-line flags and a crappy terminal based help system. These are problems I often run into when using os level apps. Like the /k flag I mentioned, for holding a cmd window open, who's idea was that? Unix apps don't tend to be much friendlier in this regard. Hopefully you can use google or StackOverflow to find the answer you need. But if not, you've got a lot of boring reading and frusterating trial and error to do. External factors. This one's kind of fuzzy. But when you leave the relatively sheltered harbor of your own scripts to deal with external processes you find yourself having to deal with the "outside world" to a much greater extent. And that's a scary place. All sorts of things can go wrong. Just to give a random example: the cwd in which a process is run can modify it's behavior. 
There are probably other issues, but those are the ones I've written down so far. Any other snags you'd like to add? Any suggestions for dealing with these problems?
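
For what it's worth, here's a minimal sketch of the list-based-arguments fix mentioned above, in Python since that's where subprocess lives. The script path and file names are made up for illustration. Passing arguments as a list sidesteps the quoting problem, and the timeout keeps a hung child from lingering in the background.

import subprocess

# Hypothetical paths; no manual quoting or escaping is needed because
# each argument is its own list element.
args = ["python", r"C:\My Documents\script.py", r"C:\My Documents\input file.txt"]

try:
    # capture_output collects stdout and stderr separately; timeout
    # kills the child if it hangs, so it can't linger in the background.
    result = subprocess.run(args, capture_output=True, text=True, timeout=30)
    if result.returncode != 0:
        # A non-zero exit code is the closest process-level analogue to
        # an exception; stderr usually says what went wrong.
        print("Child failed:", result.stderr)
    else:
        print("Child output:", result.stdout)
except subprocess.TimeoutExpired:
    print("Child hung and was killed after 30 seconds")

This doesn't solve the return-data problem, but keeping debug chatter on stderr and real output on stdout at least lets the parent tell the two apart.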

    Read the article

  • Windows Azure Service Bus Splitter and Aggregator

    - by Alan Smith
This article will cover basic implementations of the Splitter and Aggregator patterns using the Windows Azure Service Bus. The content will be included in the next release of the “Windows Azure Service Bus Developer Guide”, along with some other patterns I am working on. I’ve taken the pattern descriptions from the book “Enterprise Integration Patterns” by Gregor Hohpe. I bought a copy of the book in 2004, and recently dusted it off when I started to look at implementing the patterns on the Windows Azure Service Bus. Gregor also presented a session in 2011, “Enterprise Integration Patterns: Past, Present and Future”, which is well worth a look. I’ll be covering more patterns in the coming weeks; I’m currently working on Wire-Tap and Scatter-Gather. There will no doubt be a section on implementing these patterns in my “SOA, Connectivity and Integration using the Windows Azure Service Bus” course.

There are a number of scenarios where a message needs to be divided into a number of sub messages, and also where a number of sub messages need to be combined to form one message. The splitter and aggregator patterns provide a definition of how this can be achieved. This section will focus on the implementation of basic splitter and aggregator patterns using the Windows Azure Service Bus direct programming model. In BizTalk Server, receive pipelines are typically used to implement the splitter pattern, with sequential convoy orchestrations often used to aggregate messages. In the current release of the Service Bus, there is no functionality in the direct programming model that implements these patterns, so it is up to the developer to implement them in the applications that send and receive messages.

Splitter

A message splitter takes a message and splits it into a number of sub messages. As there are different scenarios for how a message can be split into sub messages, message splitters are implemented using different algorithms. The Enterprise Integration Patterns book describes the splitter pattern as follows:

How can we process a message if it contains multiple elements, each of which may have to be processed in a different way? Use a Splitter to break out the composite message into a series of individual messages, each containing data related to one item.

The Enterprise Integration Patterns website provides a description of the Splitter pattern here. In some scenarios a batch message could be split into the sub messages that are contained in the batch. The splitting of a message could be based on the message type of the sub message, or on the trading partner that the sub message is to be sent to.

Aggregator

An aggregator takes a stream of related messages and combines them together to form one message. The Enterprise Integration Patterns book describes the aggregator pattern as follows:

How do we combine the results of individual, but related messages so that they can be processed as a whole? Use a stateful filter, an Aggregator, to collect and store individual messages until a complete set of related messages has been received. Then, the Aggregator publishes a single message distilled from the individual messages.

The Enterprise Integration Patterns website provides a description of the Aggregator pattern here. A common example of the need for an aggregator is in scenarios where a stream of messages needs to be combined into a daily batch to be sent to a legacy line-of-business application.
The BizTalk Server EDI functionality provides support for batching messages in this way using a sequential convoy orchestration.

Scenario

The scenario for this implementation of the splitter and aggregator patterns is the sending and receiving of large messages using a Service Bus queue. In the current release, the Windows Azure Service Bus supports a maximum message size of 256 KB, with a maximum header size of 64 KB. This leaves a safe maximum body size of 192 KB. The BrokeredMessage class will support messages larger than 256 KB; in fact the Size property is of type long, implying that very large messages may be supported at some point in the future. The 256 KB size restriction is set in the service bus components that are deployed in the Windows Azure data centers. One way of working around this size restriction is to split large messages into a sequence of smaller sub messages in the sending application, send them via a queue, and then reassemble them in the receiving application. This scenario will be used to demonstrate the pattern implementations.

Implementation

The splitter and aggregator will be used to provide functionality to send and receive large messages over the Windows Azure Service Bus. In order to make the implementations generic and reusable, they will be implemented as a class library. The splitter will be implemented in the LargeMessageSender class and the aggregator in the LargeMessageReceiver class. A class diagram showing the two classes is shown below.

Implementing the Splitter

The splitter will take a large brokered message and split it into a sequence of smaller sub messages that can be transmitted over the service bus messaging entities. The LargeMessageSender class provides a Send method that takes a large brokered message as a parameter. The implementation of the class is shown below; console output has been added to provide details of the splitting operation.

public class LargeMessageSender
{
    private static int SubMessageBodySize = 192 * 1024;
    private QueueClient m_QueueClient;

    public LargeMessageSender(QueueClient queueClient)
    {
        m_QueueClient = queueClient;
    }

    public void Send(BrokeredMessage message)
    {
        // Calculate the number of sub messages required.
        long messageBodySize = message.Size;
        int nrSubMessages = (int)(messageBodySize / SubMessageBodySize);
        if (messageBodySize % SubMessageBodySize != 0)
        {
            nrSubMessages++;
        }

        // Create a unique session Id.
        string sessionId = Guid.NewGuid().ToString();
        Console.WriteLine("Message session Id: " + sessionId);
        Console.Write("Sending {0} sub-messages", nrSubMessages);

        Stream bodyStream = message.GetBody<Stream>();
        for (int streamOffset = 0; streamOffset < messageBodySize;
            streamOffset += SubMessageBodySize)
        {
            // Get the stream chunk from the large message
            long arraySize = (messageBodySize - streamOffset) > SubMessageBodySize
                ? SubMessageBodySize : messageBodySize - streamOffset;
            byte[] subMessageBytes = new byte[arraySize];
            int result = bodyStream.Read(subMessageBytes, 0, (int)arraySize);
            MemoryStream subMessageStream = new MemoryStream(subMessageBytes);

            // Create a new message
            BrokeredMessage subMessage = new BrokeredMessage(subMessageStream, true);
            subMessage.SessionId = sessionId;

            // Send the message
            m_QueueClient.Send(subMessage);
            Console.Write(".");
        }
        Console.WriteLine("Done!");
    }
}

The LargeMessageSender class is initialized with a QueueClient that is created by the sending application. When the large message is sent, the number of sub messages is calculated based on the size of the body of the large message. A unique session Id is created to allow the sub messages to be sent as a message session; this session Id will be used for correlation in the aggregator. A for loop is then used to create the sequence of sub messages by creating chunks of data from the stream of the large message. The sub messages are then sent to the queue using the QueueClient. As sessions are used to correlate the messages, the queue used for message exchange must be created with the RequiresSession property set to true.

Implementing the Aggregator

The aggregator will receive the sub messages in the message session that was created by the splitter, and combine them to form a single, large message. The aggregator is implemented in the LargeMessageReceiver class, with a Receive method that returns a BrokeredMessage. The implementation of the class is shown below; console output has been added to provide details of the aggregation operation.

public class LargeMessageReceiver
{
    private QueueClient m_QueueClient;

    public LargeMessageReceiver(QueueClient queueClient)
    {
        m_QueueClient = queueClient;
    }

    public BrokeredMessage Receive()
    {
        // Create a memory stream to store the large message body.
        MemoryStream largeMessageStream = new MemoryStream();

        // Accept a message session from the queue.
        MessageSession session = m_QueueClient.AcceptMessageSession();
        Console.WriteLine("Message session Id: " + session.SessionId);
        Console.Write("Receiving sub messages");

        while (true)
        {
            // Receive a sub message
            BrokeredMessage subMessage = session.Receive(TimeSpan.FromSeconds(5));

            if (subMessage != null)
            {
                // Copy the sub message body to the large message stream.
                Stream subMessageStream = subMessage.GetBody<Stream>();
                subMessageStream.CopyTo(largeMessageStream);

                // Mark the message as complete.
                subMessage.Complete();
                Console.Write(".");
            }
            else
            {
                // The last message in the sequence is our completeness criteria.
                Console.WriteLine("Done!");
                break;
            }
        }

        // Create an aggregated message from the large message stream.
        BrokeredMessage largeMessage = new BrokeredMessage(largeMessageStream, true);
        return largeMessage;
    }
}

The LargeMessageReceiver is initialized using a QueueClient that is created by the receiving application. The Receive method creates a memory stream that will be used to aggregate the large message body.
The AcceptMessageSession method on the QueueClient is then called, which will wait for the first message in a message session to become available on the queue. As AcceptMessageSession can throw a timeout exception if no message is available on the queue after 60 seconds, a real-world implementation should handle this accordingly. Once the message session is accepted, the sub messages in the session are received, and their message body streams are copied to the memory stream. Once all the messages have been received, the memory stream is used to create a large message that is then returned to the receiving application.

Testing the Implementation

The splitter and aggregator are tested by creating a message sender and a message receiver application. The payload for the large message will be one of the webcast video files from http://www.cloudcasts.net/; the file size is 9,697 KB, well over the 256 KB threshold imposed by the Service Bus. As the splitter and aggregator are implemented in a separate class library, the code used in the sender and receiver consoles is fairly basic. The implementation of the main method of the sending application is shown below.

static void Main(string[] args)
{
    // Create a token provider with the relevant credentials.
    TokenProvider credentials =
        TokenProvider.CreateSharedSecretTokenProvider
        (AccountDetails.Name, AccountDetails.Key);

    // Create a URI for the service bus.
    Uri serviceBusUri = ServiceBusEnvironment.CreateServiceUri
        ("sb", AccountDetails.Namespace, string.Empty);

    // Create the MessagingFactory
    MessagingFactory factory = MessagingFactory.Create(serviceBusUri, credentials);

    // Use the MessagingFactory to create a queue client
    QueueClient queueClient = factory.CreateQueueClient(AccountDetails.QueueName);

    // Open the input file.
    FileStream fileStream = new FileStream(AccountDetails.TestFile, FileMode.Open);

    // Create a BrokeredMessage for the file.
    BrokeredMessage largeMessage = new BrokeredMessage(fileStream, true);

    Console.WriteLine("Sending: " + AccountDetails.TestFile);
    Console.WriteLine("Message body size: " + largeMessage.Size);
    Console.WriteLine();

    // Send the message with a LargeMessageSender
    LargeMessageSender sender = new LargeMessageSender(queueClient);
    sender.Send(largeMessage);

    // Close the messaging factory.
    factory.Close();
}

The implementation of the main method of the receiving application is shown below.

static void Main(string[] args)
{
    // Create a token provider with the relevant credentials.
    TokenProvider credentials =
        TokenProvider.CreateSharedSecretTokenProvider
        (AccountDetails.Name, AccountDetails.Key);

    // Create a URI for the service bus.
    Uri serviceBusUri = ServiceBusEnvironment.CreateServiceUri
        ("sb", AccountDetails.Namespace, string.Empty);

    // Create the MessagingFactory
    MessagingFactory factory = MessagingFactory.Create(serviceBusUri, credentials);

    // Use the MessagingFactory to create a queue client
    QueueClient queueClient = factory.CreateQueueClient(AccountDetails.QueueName);

    // Create a LargeMessageReceiver and receive the message.
    LargeMessageReceiver receiver = new LargeMessageReceiver(queueClient);
    BrokeredMessage largeMessage = receiver.Receive();

    Console.WriteLine("Received message");
    Console.WriteLine("Message body size: " + largeMessage.Size);

    string testFile = AccountDetails.TestFile.Replace(@"\In\", @"\Out\");
    Console.WriteLine("Saving file: " + testFile);

    // Save the message body as a file.
    Stream largeMessageStream = largeMessage.GetBody<Stream>();
    largeMessageStream.Seek(0, SeekOrigin.Begin);
    FileStream fileOut = new FileStream(testFile, FileMode.Create);
    largeMessageStream.CopyTo(fileOut);
    fileOut.Close();

    Console.WriteLine("Done!");
}

In order to test the application, the sending application is executed, which will use the LargeMessageSender class to split the message and place it on the queue. The output of the sender console is shown below. The console shows that the body size of the large message was 9,929,365 bytes, and the message was sent as a sequence of 51 sub messages. When the receiving application is executed the results are shown below. The console application shows that the aggregator has received the 51 messages from the message sequence that was created in the sending application. The messages have been aggregated to form a message with a body of 9,929,365 bytes, which is the same as the original large message. The message body is then saved as a file.

Improvements to the Implementation

The splitter and aggregator patterns in this implementation were created in order to show the usage of the patterns in a demo, which they do quite well. When implementing these patterns in a real-world scenario there are a number of improvements that could be made to the design.

Copying Message Header Properties

When sending a large message using these classes, it would be great if the message header properties in the message that was received were copied from the message that was sent. The sending application may well add information to the message context that will be required in the receiving application. When the sub messages are created in the splitter, the header properties in the first message could be set to the values in the original large message. The aggregator could then use the values from this first sub message to set the properties in the message header of the large message during the aggregation process.

Using Asynchronous Methods

The current implementation uses the synchronous send and receive methods of the QueueClient class. It would be much more performant to use the asynchronous methods; however, doing so may well affect the sequence in which the sub messages are enqueued, which would require the implementation of a resequencer in the aggregator to restore the correct message sequence.

Handling Exceptions

In order to keep the code readable, no exception handling was added to the implementations. In a real-world scenario exceptions should be handled accordingly; a rough sketch of this and the header-property improvement follows.
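
As an illustration of how the first and last improvements might look (my own sketch, not part of the article's code), a small helper could copy the custom header properties between messages, and the AcceptMessageSession call could be wrapped in a try/catch. The CopyProperties helper and the 30-second server wait time are assumptions.

// Hypothetical helper: copy the custom header properties from one
// brokered message to another. The splitter would call this with the
// original large message and the first sub message; the aggregator
// would call it with the first sub message received and the
// re-assembled large message.
private static void CopyProperties(BrokeredMessage source, BrokeredMessage target)
{
    foreach (KeyValuePair<string, object> property in source.Properties)
    {
        target.Properties[property.Key] = property.Value;
    }
}

// Hypothetical handling of the AcceptMessageSession timeout in
// LargeMessageReceiver.Receive; the wait time is an arbitrary choice.
MessageSession session = null;
try
{
    session = m_QueueClient.AcceptMessageSession(TimeSpan.FromSeconds(30));
}
catch (TimeoutException)
{
    // No message session became available; log and retry, or surface
    // the error to the receiving application.
}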

    Read the article

  • Windows Azure Service Bus Scatter-Gather Implementation

    - by Alan Smith
One of the more challenging enterprise integration patterns that developers may wish to implement is the Scatter-Gather pattern. In this article I will show a basic implementation of the scatter-gather pattern using the topic-subscription model of the Windows Azure Service Bus. I’ll be using the implementation in demos, and also as a lab in my training courses, and the pattern will also be included in the next release of my free e-book, the “Windows Azure Service Bus Developer Guide”.

The Scatter-Gather pattern answers the following scenario:

How do you maintain the overall message flow when a message needs to be sent to multiple recipients, each of which may send a reply? Use a Scatter-Gather that broadcasts a message to multiple recipients and re-aggregates the responses back into a single message.

The Enterprise Integration Patterns website provides a description of the Scatter-Gather pattern here. The scatter-gather pattern uses a composite of the publish-subscribe channel pattern and the aggregator pattern. The publish-subscribe channel is used to broadcast messages to a number of receivers, and the aggregator is used to gather the response messages and aggregate them together to form a single message.

Scatter-Gather Scenario

The scenario for this scatter-gather implementation is an application that allows users to answer questions in a poll-based voting scenario. A poll manager application will be used to broadcast questions to users; the users will use a voting application that will receive and display the questions and send the votes back to the poll manager. The poll manager application will receive the users’ votes and aggregate them together to display the results. The scenario should be able to scale to support a large number of users.

Scatter-Gather Implementation

The diagram below shows the overall architecture for the scatter-gather implementation.

Messaging Entities

Looking at the scatter-gather pattern diagram it can be seen that the topic-subscription architecture is well suited to broadcasting a message to a number of subscribers. The poll manager application can send the question messages to a topic, and each voting application can receive the question message on its own subscription. The static limit of 2,000 subscriptions per topic in the current release means that 2,000 voting applications can receive question messages and take part in voting. The vote messages can then be sent to the poll manager application using a queue. The voting applications will send their vote messages to the queue, and the poll manager will receive and process the vote messages. The questions topic and votes queue are created using the Windows Azure Developer Portal (a programmatic alternative is sketched at the end of this article). Each instance of the voting application will create its own subscription in the questions topic when it starts, allowing the question messages to be broadcast to all subscribing voting applications.

Data Contracts

Two simple data contracts will be used to serialize the questions and votes as brokered messages. The code for these is shown below.

[DataContract]
public class Question
{
    [DataMember]
    public string QuestionText { get; set; }
}

To keep the implementation of the voting functionality simple and focus on the pattern implementation, the users can only vote yes or no to the questions.
[DataContract]
public class Vote
{
    [DataMember]
    public string QuestionText { get; set; }

    [DataMember]
    public bool IsYes { get; set; }
}

Poll Manager Application

The poll manager application has been implemented as a simple WPF application; the user interface is shown below. A question can be entered in the text box and sent to the topic by clicking the Add button. The topic and subscriptions used for broadcasting the messages are shown in a TreeView control. The questions that have been broadcast and the resulting votes are shown in a ListView control.

When the application is started, any existing subscriptions are cleared from the topic. Clients are then created for the questions topic and votes queue, along with background workers for receiving and processing the vote messages and for updating the display of subscriptions.

public MainWindow()
{
    InitializeComponent();

    // Create a new results list and data bind it.
    Results = new ObservableCollection<Result>();
    lsvResults.ItemsSource = Results;

    // Create a token provider with the relevant credentials.
    TokenProvider credentials =
        TokenProvider.CreateSharedSecretTokenProvider
        (AccountDetails.Name, AccountDetails.Key);

    // Create a URI for the service bus.
    Uri serviceBusUri = ServiceBusEnvironment.CreateServiceUri
        ("sb", AccountDetails.Namespace, string.Empty);

    // Clear out any old subscriptions.
    NamespaceManager = new NamespaceManager(serviceBusUri, credentials);
    IEnumerable<SubscriptionDescription> subs =
        NamespaceManager.GetSubscriptions(AccountDetails.ScatterGatherTopic);
    foreach (SubscriptionDescription sub in subs)
    {
        NamespaceManager.DeleteSubscription(sub.TopicPath, sub.Name);
    }

    // Create the MessagingFactory
    MessagingFactory factory = MessagingFactory.Create(serviceBusUri, credentials);

    // Create the topic and queue clients.
    ScatterGatherTopicClient =
        factory.CreateTopicClient(AccountDetails.ScatterGatherTopic);
    ScatterGatherQueueClient =
        factory.CreateQueueClient(AccountDetails.ScatterGatherQueue);

    // Start the background worker threads.
    VotesBackgroundWorker = new BackgroundWorker();
    VotesBackgroundWorker.DoWork += new DoWorkEventHandler(ReceiveMessages);
    VotesBackgroundWorker.RunWorkerAsync();

    SubscriptionsBackgroundWorker = new BackgroundWorker();
    SubscriptionsBackgroundWorker.DoWork += new DoWorkEventHandler(UpdateSubscriptions);
    SubscriptionsBackgroundWorker.RunWorkerAsync();
}

When the poll manager user enters a question in the text box and clicks the Add button, a question message is created and sent to the topic. This message will be broadcast to all the subscribing voting applications. An instance of the Result class is also created to keep track of the votes cast; this is then added to an observable collection named Results, which is data-bound to the ListView control.

private void btnAddQuestion_Click(object sender, RoutedEventArgs e)
{
    // Create a new result for recording votes.
    Result result = new Result()
    {
        Question = txtQuestion.Text
    };
    Results.Add(result);

    // Send the question to the topic
    Question question = new Question()
    {
        QuestionText = result.Question
    };
    BrokeredMessage msg = new BrokeredMessage(question);
    ScatterGatherTopicClient.Send(msg);

    txtQuestion.Text = "";
}

The Result class is implemented as follows.
public class Result : INotifyPropertyChanged
{
    public string Question { get; set; }

    private int m_YesVotes;
    private int m_NoVotes;

    public event PropertyChangedEventHandler PropertyChanged;

    public int YesVotes
    {
        get { return m_YesVotes; }
        set
        {
            m_YesVotes = value;
            NotifyPropertyChanged("YesVotes");
        }
    }

    public int NoVotes
    {
        get { return m_NoVotes; }
        set
        {
            m_NoVotes = value;
            NotifyPropertyChanged("NoVotes");
        }
    }

    private void NotifyPropertyChanged(string prop)
    {
        if (PropertyChanged != null)
        {
            PropertyChanged(this, new PropertyChangedEventArgs(prop));
        }
    }
}

The INotifyPropertyChanged interface is implemented so that changes to the number of yes and no votes will be updated in the ListView control.

Receiving the vote messages from the voting applications is done asynchronously, using a background worker thread.

// This runs on a background worker.
private void ReceiveMessages(object sender, DoWorkEventArgs e)
{
    while (true)
    {
        // Receive a vote message from the queue
        BrokeredMessage msg = ScatterGatherQueueClient.Receive();
        if (msg != null)
        {
            // Deserialize the message.
            Vote vote = msg.GetBody<Vote>();

            // Update the results.
            foreach (Result result in Results)
            {
                if (result.Question.Equals(vote.QuestionText))
                {
                    if (vote.IsYes)
                    {
                        result.YesVotes++;
                    }
                    else
                    {
                        result.NoVotes++;
                    }
                    break;
                }
            }

            // Mark the message as complete.
            msg.Complete();
        }
    }
}

When a vote message is received, the result that matches the vote question is updated with the vote from the user. The message is then marked as complete.

A second background thread is used to update the display of subscriptions in the TreeView, with a dispatcher used to update the user interface.

// This runs on a background worker.
private void UpdateSubscriptions(object sender, DoWorkEventArgs e)
{
    while (true)
    {
        // Get a list of subscriptions.
        IEnumerable<SubscriptionDescription> subscriptions =
            NamespaceManager.GetSubscriptions(AccountDetails.ScatterGatherTopic);

        // Update the user interface.
        SimpleDelegate setQuestion = delegate()
        {
            trvSubscriptions.Items.Clear();
            TreeViewItem topicItem = new TreeViewItem()
            {
                Header = AccountDetails.ScatterGatherTopic
            };

            foreach (SubscriptionDescription subscription in subscriptions)
            {
                TreeViewItem subscriptionItem = new TreeViewItem()
                {
                    Header = subscription.Name
                };
                topicItem.Items.Add(subscriptionItem);
            }
            trvSubscriptions.Items.Add(topicItem);

            topicItem.ExpandSubtree();
        };
        this.Dispatcher.BeginInvoke(DispatcherPriority.Send, setQuestion);

        Thread.Sleep(3000);
    }
}

Voting Application

The voting application is implemented as another WPF application.
This one is more basic, and allows the user to vote “Yes” or “No” for the questions sent by the poll manager application. The user interface for the application is shown below.

When an instance of the voting application is created, it will create a subscription in the questions topic using a GUID as the subscription name. The application can then receive copies of every question message that is sent to the topic. Clients for the new subscription and the votes queue are created, along with a background worker to receive the question messages. The voting application is set to receiving mode, meaning it is ready to receive a question message from the subscription.

public MainWindow()
{
    InitializeComponent();

    // Set the mode to receiving.
    IsReceiving = true;

    // Create a token provider with the relevant credentials.
    TokenProvider credentials =
        TokenProvider.CreateSharedSecretTokenProvider
        (AccountDetails.Name, AccountDetails.Key);

    // Create a URI for the service bus.
    Uri serviceBusUri = ServiceBusEnvironment.CreateServiceUri
        ("sb", AccountDetails.Namespace, string.Empty);

    // Create the MessagingFactory
    MessagingFactory factory = MessagingFactory.Create(serviceBusUri, credentials);

    // Create a subscription for this instance
    NamespaceManager mgr = new NamespaceManager(serviceBusUri, credentials);
    string subscriptionName = Guid.NewGuid().ToString();
    mgr.CreateSubscription(AccountDetails.ScatterGatherTopic, subscriptionName);

    // Create the subscription and queue clients.
    ScatterGatherSubscriptionClient = factory.CreateSubscriptionClient
        (AccountDetails.ScatterGatherTopic, subscriptionName);
    ScatterGatherQueueClient =
        factory.CreateQueueClient(AccountDetails.ScatterGatherQueue);

    // Start the background worker thread.
    BackgroundWorker = new BackgroundWorker();
    BackgroundWorker.DoWork += new DoWorkEventHandler(ReceiveMessages);
    BackgroundWorker.RunWorkerAsync();
}

I took the inspiration for creating the subscriptions in the voting application from the chat application that uses topics and subscriptions blogged by Ovais Akhter here.

The method that receives the question messages runs on a background thread. If the application is in receive mode, a question message will be received from the subscription, the question will be displayed in the user interface, the voting buttons enabled, and IsReceiving set to false to prevent more questions from being received before the current one is answered.

// This runs on a background worker.
private void ReceiveMessages(object sender, DoWorkEventArgs e)
{
    while (true)
    {
        if (IsReceiving)
        {
            // Receive a question message from the topic.
            BrokeredMessage msg = ScatterGatherSubscriptionClient.Receive();
            if (msg != null)
            {
                // Deserialize the message.
                Question question = msg.GetBody<Question>();

                // Update the user interface.
                SimpleDelegate setQuestion = delegate()
                {
                    lblQuestion.Content = question.QuestionText;
                    btnYes.IsEnabled = true;
                    btnNo.IsEnabled = true;
                };
                this.Dispatcher.BeginInvoke(DispatcherPriority.Send, setQuestion);
                IsReceiving = false;

                // Mark the message as complete.
                msg.Complete();
            }
        }
        else
        {
            Thread.Sleep(1000);
        }
    }
}

When the user clicks on the Yes or No button, the btnVote_Click method is called. This will create a new Vote data contract with the appropriate question and answer and send the message to the poll manager application using the votes queue. The voting buttons are then disabled, the question text cleared, and the IsReceiving flag set to true to allow a new message to be received.

private void btnVote_Click(object sender, RoutedEventArgs e)
{
    // Create a new vote.
    Vote vote = new Vote()
    {
        QuestionText = (string)lblQuestion.Content,
        IsYes = ((sender as Button).Content as string).Equals("Yes")
    };

    // Send the vote message.
    BrokeredMessage msg = new BrokeredMessage(vote);
    ScatterGatherQueueClient.Send(msg);

    // Update the user interface.
    lblQuestion.Content = "";
    btnYes.IsEnabled = false;
    btnNo.IsEnabled = false;
    IsReceiving = true;
}

Testing the Application

In order to test the application, an instance of the poll manager application is started; the user interface is shown below. As no instances of the voting application have been created, there are no subscriptions present in the topic. When an instance of the voting application is created, the subscription will be displayed in the poll manager. Now that a voting application is subscribing, a question can be sent from the poll manager application. When the message is sent to the topic, the voting application will receive the message and display the question. The voter can then answer the question by clicking on the appropriate button. The results of the vote are updated in the poll manager application.

When two more instances of the voting application are created, the poll manager will display the new subscriptions. More questions can then be broadcast to the voting applications. As the question messages are queued up in the subscription for each voting application, the users can answer the questions in their own time. The vote messages will be received by the poll manager application and aggregated to display the results. The screenshots of the applications part way through voting are shown below. The messages for each voting application are queued up in sequence on the voting application subscriptions, allowing the questions to be answered at different speeds by the voters.
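
As noted earlier, the topic and queue here were created in the Windows Azure Developer Portal, but they could also be created programmatically with the NamespaceManager class. The sketch below is my own illustration, not code from the article; it assumes the serviceBusUri and credentials variables created in the constructors above and the entity names from the AccountDetails class.

// Hypothetical programmatic creation of the messaging entities, as an
// alternative to the Windows Azure Developer Portal.
NamespaceManager namespaceManager = new NamespaceManager(serviceBusUri, credentials);

if (!namespaceManager.TopicExists(AccountDetails.ScatterGatherTopic))
{
    namespaceManager.CreateTopic(AccountDetails.ScatterGatherTopic);
}

if (!namespaceManager.QueueExists(AccountDetails.ScatterGatherQueue))
{
    namespaceManager.CreateQueue(AccountDetails.ScatterGatherQueue);
}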

    Read the article
