Search Results

Search found 151 results on 7 pages for 'boston'.


  • Internet slowed down because of SQUID Server setup

    - by Ranjith Kumar
    Recently I set up a Squid server for our office. I have a computer (A) with two Ethernet cards, one for the internet and the second one for the local network. It has Ubuntu Server OS with squid-server and dhcp3-server installed. I have added a few iptables rules so it works like a router and redirects all HTTP traffic to port 3128 (this link is my reference). Everything worked fine for 2 days. All of a sudden the internet speed went down drastically. When I connected the internet cable to my laptop to test the internet speed, it was fine. When I reconnected it back to computer A, everything was normal again. This happened 4 times in a week. Could anyone here please help me understand why the internet speed keeps dropping and why it returns to normal when I reconnect the cable? EDIT: Rebooting the system (computer A) didn't make a difference. I have changed iptables so that HTTP traffic is no longer redirected to port 3128, and there is still no change in the internet speed, so I think the problem is not with Squid but with something else. Here are my iptables rules: SQUID_SERVER="10.1.1.1" INTERNET="eth1" LAN_IN="eth0" SQUID_PORT="3128" PROXYSERVERS=(Atlanta Baltimore Boston Chicago Dallas Denver Houston KansasCity LosAngeles Miami NewYork Philadelphia Phoenix SanAntonio SanDiego SanJose Seattle Washington) SERVERLEN=${#PROXYSERVERS[*]} I=0 iptables -F iptables -X iptables -t nat -F iptables -t nat -X iptables -t mangle -F iptables -t mangle -X modprobe ip_conntrack modprobe ip_conntrack_ftp echo 1 > /proc/sys/net/ipv4/ip_forward iptables -P INPUT DROP iptables -P OUTPUT ACCEPT iptables -A INPUT -i lo -j ACCEPT iptables -A OUTPUT -o lo -j ACCEPT iptables -A INPUT -i $INTERNET -m state --state ESTABLISHED,RELATED -j ACCEPT iptables --table nat --append POSTROUTING --out-interface $INTERNET -j MASQUERADE iptables --append FORWARD --in-interface $LAN_IN -j ACCEPT iptables -A INPUT -i $LAN_IN -j ACCEPT iptables -A OUTPUT -o $LAN_IN -j ACCEPT while [ $I -lt $SERVERLEN ]; do iptables -t nat -A PREROUTING -i $LAN_IN -p tcp -d ${PROXYSERVERS[$I]}.wonderproxy.com --dport 80 -j ACCEPT let I++ done iptables -t nat -A PREROUTING -i $LAN_IN -p tcp --dport 80 -j DNAT --to $SQUID_SERVER:$SQUID_PORT iptables -A INPUT --protocol tcp --dport 80 -j ACCEPT iptables -A INPUT --protocol tcp --dport 443 -j ACCEPT iptables -A INPUT --protocol tcp --dport 22 -j ACCEPT iptables -A INPUT -j LOG iptables -A INPUT -j DROP
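
    For readers skimming the rule set, the transparent-proxy core reduces to a handful of lines. The sketch below is distilled from the script above (same address, interfaces, and port as in the question) and is not a drop-in replacement for the full firewall:

      #!/bin/bash
      # Condensed sketch of the transparent-proxy core from the rules above.
      # Values come from the question (10.1.1.1, eth0/eth1, port 3128); adjust for your box.
      SQUID_SERVER="10.1.1.1"
      INTERNET="eth1"     # WAN side
      LAN_IN="eth0"       # LAN side
      SQUID_PORT="3128"

      # Enable IPv4 forwarding.
      echo 1 > /proc/sys/net/ipv4/ip_forward

      # NAT outbound traffic, forward the LAN, and push LAN HTTP through Squid.
      iptables -t nat -A POSTROUTING -o "$INTERNET" -j MASQUERADE
      iptables -A FORWARD -i "$LAN_IN" -j ACCEPT
      iptables -t nat -A PREROUTING -i "$LAN_IN" -p tcp --dport 80 \
               -j DNAT --to "$SQUID_SERVER:$SQUID_PORT"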

    Read the article

  • Would anyone tell me how to fetch the media:thumb element's attribute from a json feed?

    - by ash
    I made a yahoo pipe that pulls up the atoms as json format; however, I can fetch and display all the elements in my html page except for the element's attribute. Would anyone tell me how to fetch the media:thumb element's attribute from a json feed? I am pasting the html page's code with javascript. If you save the html page and then view it in browser, you will see that all the necessary elements get output at html page except for the media:thumb as I cannot display the attribute of media:thumb when the feed is formatted as json. I am also pasting the some portion of the json feed so that you can have an idea what i am talking about. Please tell me how to retrieve attribute from media:thumb element of a json feed by using plain javascript but no server side code or javascript library. Thank you. function getFeed(feed){ var newScript = document.createElement('script'); newScript.type = 'text/javascript'; newScript.src = 'http://pipes.yahoo.com/pipes/pipe.run?_id=40616620df99780bceb3fe923cecd216&_render=json&_callback=piper'; document.getElementsByTagName("head")[0].appendChild(newScript); } function piper(feed){ var tmp=''; for (var i=0; i'; tmp+=feed.value.items[i].title+''; tmp+=feed.value.items[i].author.name+''; tmp+=feed.value.items[i].published+''; if (feed.value.items[i].description) { tmp+=feed.value.items[i].description+''; } tmp+='<hr>'; } document.getElementById('rssLayer').innerHTML=tmp; } </script> bchnbc .............................................................. Some portion of the json feed that gets generated by yahoo pipe .............................................................. piper({"count":2,"value":{"title":"myPipe","description":"Pipes Output","link":"http:\/\/pipes.yahoo.com\/pipes\/pipe.info?_id=f7f4175d493cf1171aecbd3268fea5ee","pubDate":"Fri, 02 Apr 2010 17:59:22 -0700","generator":"http:\/\/pipes.yahoo.com\/pipes\/","callback":"piper", "items": [{ "rights":"Attribution - Noncommercial - No Derivative Works", "link":"http:\/\/vodo.net\/mixtape1", "y:id":{"value":null,"permalink":"true"}, "content":{"content":"We're proud to be releasing this first VODO MIXTAPE. Actual tape might be a thing of the past, but before P2P, mixtapes were the most popular way of sharing popular culture the world had known -- and once called the 'most widely practiced American art form'. We want to resuscitate the spirit of the mixtape for this VODO MIXTAPE series: compilations of our favourite shorts, the weird, the wild and the wonky, all brought together in a temporary and uncomfortable company.","type":"text"}, "author": {"name":"Various"}, "description":"We're proud to be releasing this first VODO MIXTAPE. Actual tape might be a thing of the past, but before P2P, mixtapes were the most popular way of sharing popular culture the world had known -- and once called the 'most widely practiced American art form'. 
We want to resuscitate the spirit of the mixtape for this VODO MIXTAPE series: compilations of our favourite shorts, the weird, the wild and the wonky, all brought together in a temporary and uncomfortable company.", "media:thumbnail": { "url":"http:\/\/vodo.net\/\/thumbnails\/Mixtape1.jpg" }, "published":"2010-03-08-09:20:20 PM", "format": { "audio_bitrate":null, "width":"608", "xmlns":"http:\/\/xmlns.transmission.cc\/FileFormat", "channels":"2", "samplerate":"44100.0", "duration":"3092.36", "height":"352", "size":"733925376.0", "framerate":"25.0", "audio_codec":"mp3", "video_bitrate":"1898.0", "video_codec":"XVID", "pixel_aspect_ratio":"16:9" }, "y:title":"Mixtape #1: VODO's favourite short films", "title":"Mixtape #1: VODO's favourite short films", "id":null, "pubDate":"2010-03-08-09:20:20 PM", "y:published":{"hour":"3","timezone":"UTC","second":"0","month":"4","minute":"10","utime":"1270264200","day":"3","day_of_week":"6","year":"2010" }}, {"rights":"Attribution - Noncommercial - No Derivative Works","link":"http:\/\/vodo.net\/gilbert","y:id":{"value":"cd6584e06ea4ce7fcd34172f4bbd919e295f8680","permalink":"true"},"content":{"content":"A documentary short about Gilbert, the Beacon Hill \"town crier.\" For the last 9 years, since losing his job and becoming homeless, Gilbert has delivered the weather, sports, and breaking headlines from his spot on the Boston Common. Music (used with permission) in this piece is called \"Blue Bicycle\" by Dusseldorf-based pianist \/ composer Volker Bertelmann also known as Hauschka. Artistic Statement: This is the first in a series of profiles of people who I think are interesting, and who I see on almost a daily basis. I don't want to limit the series to people who live \"on the fringe,\" but it would be appropriate to say that most of the people I interview are eclectic, eccentric, and just a little bit unique. The art is in the viewing - but I hope to turn my lens on individuals that don't always color in the lines, whether they can help it or not.","type":"text"},"author":{"name":"Nathaniel Hansen"},"description":"A documentary short about Gilbert, the Beacon Hill \"town crier.\" For the last 9 years, since losing his job and becoming homeless, Gilbert has delivered the weather, sports, and breaking headlines from his spot on the Boston Common. Music (used with permission) in this piece is called \"Blue Bicycle\" by Dusseldorf-based pianist \/ composer Volker Bertelmann also known as Hauschka. Artistic Statement: This is the first in a series of profiles of people who I think are interesting, and who I see on almost a daily basis. I don't want to limit the series to people who live \"on the fringe,\" but it would be appropriate to say that most of the people I interview are eclectic, eccentric, and just a little bit unique. 
The art is in the viewing - but I hope to turn my lens on individuals that don't always color in the lines, whether they can help it or not.","media:thumbnail":{"url":"http:\/\/vodo.net\/\/thumbnails\/gilbert.jpeg"},"published":"2010-03-03-10:37:05 AM","format":{"audio_bitrate":null,"width":"624","xmlns":"http:\/\/xmlns.transmission.cc\/FileFormat","channels":"2","samplerate":null,"duration":"373.673","height":"352","size":"123321266.0","framerate":null,"audio_codec":"mp3","video_bitrate":null,"video_codec":"XVID","pixel_aspect_ratio":"16:9"},"y:title":"Gilbert","title":"Gilbert","id":"cd6584e06ea4ce7fcd34172f4bbd919e295f8680","pubDate":"2010-03-03-10:37:05 AM","y:published":{"hour":"3","timezone":"UTC","second":"0","month":"4","minute":"10","utime":"1270264200","day":"3","day_of_week":"6","year":"2010" }} ] }})
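
    Since the key "media:thumbnail" contains a colon, it cannot be read with dot notation in JavaScript; bracket notation works. A minimal sketch of the piper() callback against the feed shown above (the <img> markup is illustrative):

      function piper(feed) {
          var tmp = '';
          for (var i = 0; i < feed.value.items.length; i++) {
              var item = feed.value.items[i];
              tmp += item.title;
              // "media:thumbnail" is not a valid identifier, so use bracket notation.
              if (item["media:thumbnail"]) {
                  tmp += '<img src="' + item["media:thumbnail"].url + '" alt="">';
              }
              tmp += '<hr>';
          }
          document.getElementById('rssLayer').innerHTML = tmp;
      }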

    Read the article

  • Common business drivers that lead to creating and sustaining a project

    Common business drivers that lead to creating and sustaining a project include, but are not limited to: cost reduction, increased return on investment (ROI), reduced time to market, increased speed and efficiency, increased security, and increased interoperability. These drivers primarily focus on streamlining operations and reducing cost to make a company more profitable with less overhead. According to Answers.com, cost reduction is defined as reducing costs to improve profitability, and may be undertaken when a company is having financial problems or to prevent them. ROI is defined as the amount of value received relative to the amount of money invested, according to PayPerClickList.com. With the ever-increasing demands on businesses to compete in today's market, companies are constantly striving to reduce the time it takes for a concept to become a product and be sold within the global marketplace. In business, some people say time is money, so if a project can reduce the time a business process takes, it in effect saves the company money, which is always good for the bottom line. The Social Security Administration states that data security is the protection of data from accidental or intentional but unauthorized modification or destruction. Interoperability is the capability of a system or subsystem to interact with other systems or subsystems. In my personal opinion, these drivers do not really differ for a profit-based organization compared to a non-profit organization. Both kinds of entities strive to reduce cost and keep operating budgets low. However, the reasoning behind why they want to achieve this does contrast. Typically, profit-based organizations strive to increase revenue and market share so that the business can grow. Alternatively, not-for-profit organizations are more interested in increasing their reach within communities, whether to increase annual donations or to invest in the lives of others. Success or failure of a project can be determined by one or more of these drivers, based on the scope of the project and the company's priorities associated with each driver. In addition, if a project attempts to incorporate multiple drivers and is only partially successful, it might still be considered a success depending on how close it came to meeting each of the priorities. Continuous evaluation of the project could lead to a decision to abort it because it is expected to fail before completion. Evaluations should be executed after the completion of every software development process stage. Pfleeger notes that the software development process stages include: Requirements Analysis and Definition, System Design, Program Design, Program Implementation, Unit Testing, Integration Testing, System Delivery, and Maintenance. Each evaluation at every stage should consider all the business drivers included in the scope of the project and how close each is expected to come to meeting expectations. In addition, minimum acceptance requirements should be included in the scope of the project and should be re-evaluated as the project progresses, to ensure that continuing the project still makes good economic sense. If the project falls below these benchmarks, it should be put on hold until it does make sense, or it should be aborted because it does not meet the business driver requirements. References: Cost Reduction Program. (n.d.). Dictionary of Accounting Terms. 
Retrieved July 19, 2009, from Answers.com Web site: http://www.answers.com/topic/cost-reduction-program Government Information Exchange. (n.d.). Government Information Exchange Glossary. Retrieved July 19, 2009, from SSA.gov Web site: http://www.ssa.gov/gix/definitions.html PayPerClickList.com. (n.d.). Glossary Term R - Pay Per Click List. Retrieved July 19, 2009, from PayPerClickList.com Web site: http://www.payperclicklist.com/glossary/termr.html Pfleeger, S & Atlee, J.(2009). Software Engineering: Theory and Practice. Boston:Prentice Hall Veluchamy, Thiyagarajan. (n.d.). Glossary « Thiyagarajan Veluchamy’s Blog. Retrieved July 19, 2009, from Thiyagarajan.WordPress.com Web site: http://thiyagarajan.wordpress.com/glossary/

    Read the article

  • 5.1 surround sound on Acer Aspire 5738ZG with Ubuntu 11.10

    - by kbargais_LV
    I got a problem with sound. I tried everything but no results. :( I got 3 sound ports. my daemon: # This file is part of PulseAudio. # # PulseAudio is free software; you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation; either version 2 of the License, or # (at your option) any later version. # # PulseAudio is distributed in the hope that it will be useful, but # WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU # General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with PulseAudio; if not, write to the Free Software # Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 # USA. ## Configuration file for the PulseAudio daemon. See pulse-daemon.conf(5) for ## more information. Default values are commented out. Use either ; or # for ## commenting. ; daemonize = no ; fail = yes ; allow-module-loading = yes ; allow-exit = yes ; use-pid-file = yes ; system-instance = no ; local-server-type = user ; enable-shm = yes ; shm-size-bytes = 0 # setting this 0 will use the system-default, usually 64 MiB ; lock-memory = no ; cpu-limit = no ; high-priority = yes ; nice-level = -11 ; realtime-scheduling = yes ; realtime-priority = 5 ; exit-idle-time = 20 ; scache-idle-time = 20 ; dl-search-path = (depends on architecture) ; load-default-script-file = yes ; default-script-file = /etc/pulse/default.pa ; log-target = auto ; log-level = notice ; log-meta = no ; log-time = no ; log-backtrace = 0 resample-method = speex-float-1 ; enable-remixing = yes ; enable-lfe-remixing = no flat-volumes = no ; rlimit-fsize = -1 ; rlimit-data = -1 ; rlimit-stack = -1 ; rlimit-core = -1 ; rlimit-as = -1 ; rlimit-rss = -1 ; rlimit-nproc = -1 ; rlimit-nofile = 256 ; rlimit-memlock = -1 ; rlimit-locks = -1 ; rlimit-sigpending = -1 ; rlimit-msgqueue = -1 ; rlimit-nice = 31 ; rlimit-rtprio = 9 ; rlimit-rttime = 1000000 ; default-sample-format = s16le ; default-sample-rate = 44100 ; default-sample-channels = 6 ; default-channel-map = front-left,front-right default-fragments = 8 default-fragment-size-msec = 10 ; enable-deferred-volume = yes ; deferred-volume-safety-margin-usec = 8000 ; deferred-volume-extra-delay-usec = 0
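
    One thing worth checking first: the 5.1 defaults in the daemon.conf shown above are still commented out. A hedged sketch of the two lines to uncomment and set (whether the Aspire 5738ZG codec actually exposes six channels depends on the hardware and driver):

      ; /etc/pulse/daemon.conf -- remove the leading ';' so the values take effect
      default-sample-channels = 6
      default-channel-map = front-left,front-right,rear-left,rear-right,front-center,lfe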

    Read the article

  • How do you report out user research results?

    - by user12277104
    A couple weeks ago, one of my mentees asked to meet, because she wanted my advice on how to report out user research results. She had just conducted her first usability test for her new employer, and was getting to the point where she wanted to put together some slides, but she didn't want them to be boring. She wanted to talk with me about what to present and how best to present results to stakeholders. While I couldn't meet for another week, thanks to slideshare, I could quickly point her in the direction that my in-person advice would have led her. First, I'd put together a panel for the February 2012 New Hampshire UPA monthly meeting that we then repeated for the 2012 Boston UPA annual conference. In this panel, I described my reporting techniques, as did six of my colleagues -- two of whom work for companies smaller than mine, and four of whom are independent consultants. Before taking questions, we each presented for 3 to 5 minutes on how we presented research results. The differences were really interesting. For example, when do you really NEED a long, written report (as opposed to an email, spreadsheet, or slide deck with callouts)? When you are reporting your test results to the FDA -- that makes sense. in this presentation, I describe two modes of reporting results that I use.  Second, I'd been a participant in the CUE-9 study. CUE stands for Comparative Usability Evaluation, and this was the 9th of these studies that Rolf Molich had designed. Originally, the studies were designed to show the variability in evaluation methods practitioners use to evaluate websites and applications. Of course, using methods and tasks of their own choosing, the results were wildly different. However, in this 9th study, the tasks were the same, the participants were the same, and the problem severity scale was the same, so how would the results of the 19 practitioners compare? Still wildly variable. But for the purposes of this discussion, it gave me a work product that was not proprietary to the company I work for -- a usability test report that I could share publicly. This was the way I'd been reporting results since 2005, and pretty much what I still do, when time allows.  That said, I have been continuing to evolve my methods and reporting techniques, and sometimes, there is no time to create that kind of report -- the team can't wait the days that it takes to take screen shots, go through my notes, refer back to recordings, and write it all up. So in those cases, I use bullet points in email, talk through the findings with stakeholders in a 1-hour meeting, and then post the take-aways on a wiki page. There are other requirements for that kind of reporting to work -- for example, the stakeholders need to attend each of the sessions, and the sessions can't take more than a day to complete, but you get the idea: there is no one "right" way to report out results. If the method of reporting you are using is giving your stakeholders the information they need, in a time frame in which it is useful, and in a format that meets their needs (FDA report or bullet points on a wiki), then that's the "right" way to report your results. 

    Read the article

  • PASS: International Travels

    - by Bill Graziano
    Nihao! One of the largest changes PASS is going through is the expansion outside the US and Canada. We've had international chapters and events in Europe since the early 2000s, but nothing on the scale we're seeing now. Since January 1st there have been 18 SQL Saturday events outside North America and 19 events in North America. We hope to have three international SQLRally events outside the US in FY13 (budget willing). I don't know the exact percentage of chapters outside the US, but it has got to be 50% or higher. We recently started an effort to remake the Board to better reflect the growing global face of PASS. This involves assigning some Board seats to geographic regions. You can ask questions about this in our feedback forum, participate in a Twitter chat, or ask questions directly of Board members. You can email me if you'd like to ask a question directly. We're doing this very slowly and deliberately in hopes that a long communication cycle gives us a chance to address all the issues that our members will raise. After the Summit we passed a budget exception allocating an extra $20,000 for Board members to travel to local events. I think it's important for Board members to visit new areas and talk to more of our members. I sent out an email asking where people had attended events outside their home city. Here's the list I got back: Albuquerque, Amsterdam, Boston, Brisbane, Chicago, Colorado Springs, Columbus, Dallas, Houston, Jacksonville, Las Vegas, London, Louisville, Minneapolis, New York City, Orange County, Orlando, Pensacola, Perth, Philadelphia, Phoenix, Redmond, Seattle, Silicon Valley, Sydney, Tampa Bay, Vancouver, Washington DC and Wellington. (Disclaimer: Some of this travel was paid for by employers or Board members themselves. Some of this travel may have been completed before the Summit. That's still one heck of a list!) The last SQL Saturday event this fiscal year is SQL Saturday Shanghai, and that's one I'm attending. This is our first event in China and is being put on in cooperation with the local Microsoft office. Hopefully this event will be the start of a growing community in China that includes chapters, SQL Saturdays, and maybe a SQLRally or two in the future. I'm excited to speak with people who are just starting down this path and to watch this community grow. I encourage you to visit the PASS Global Growth site and read through the material there. This is the biggest change we've made to our governance since I've been on the Board. You need to understand how it affects you and how it affects the organization. And wish me luck on the 15-hour flight to Shanghai on Friday afternoon. Rob Farley flies from Australia to the US for PASS events multiple times per year and I don't know how he does it so often. I think one of these is going to wipe me out. (And Nihao (knee-how) is Chinese for Hello.)

    Read the article

  • NFJS Central Iowa Software Symposium Des Moines Trip Report

    - by reza_rahman
    As some of you may be aware, I recently joined the well-respected US-based No Fluff Just Stuff (NFJS) Tour. If you work in the US and still don't know what the No Fluff Just Stuff (NFJS) Tour is, you are doing yourself a very serious disservice. NFJS is by far the cheapest and most effective way to stay up to date through some world-class speakers and talks. Following the US cultural tradition of old-fashioned roadshows, NFJS is basically a set program of speakers and topics offered at major US cities year-round. The NFJS Central Iowa Software Symposium was held August 8 - 10 in Des Moines. The attendance at the event and my sessions was moderate by comparison to some of the other shows. It is one of the few events of its kind that takes place in this part of the country, so it is extremely important. I had five talks total over two days, more or less back-to-back. The first one was my JavaScript + Java EE 7 talk titled "Using JavaScript/HTML5 Rich Clients with Java EE 7". This talk is basically about aligning EE 7 with the emerging JavaScript ecosystem (specifically AngularJS). The slide deck for the talk is here: JavaScript/HTML5 Rich Clients Using Java EE 7 from Reza Rahman. The demo application code is posted on GitHub. The code should be a helpful resource if this development model is something that interests you. Do let me know if you need help with it, but the instructions should be fairly self-explanatory. I am delivering this material at JavaOne 2014 as a two-hour tutorial, which should give me a little more bandwidth to dig a little deeper, especially on the JavaScript end. The second talk (on the second day) was our flagship Java EE 7/8 talk. Currently the talk is basically about Java EE 7, but I'm slowly evolving it into a Java EE 8 talk as we move forward. The following is the slide deck for the talk: JavaEE.Next(): Java EE 7, 8, and Beyond from Reza Rahman. The third talk I delivered was my Cargo Tracker/Java EE + DDD talk, which overviews DDD and describes how DDD maps to Java EE using code examples/demos from the Cargo Tracker Java EE Blue Prints project: Applied Domain-Driven Design Blue Prints for Java EE from Reza Rahman. The fourth was my talk titled "Using NoSQL with ~JPA, EclipseLink and Java EE". The talk covers an interesting gap on which there is surprisingly little material out there. The talk has three parts -- a bird's-eye view of the NoSQL landscape, how to use NoSQL via a JPA-centric facade using EclipseLink NoSQL, Hibernate OGM, DataNucleus, Kundera, Easy-Cassandra, etc., and how to use native NoSQL APIs in Java EE via CDI. The slides for the talk are here: Using NoSQL with ~JPA, EclipseLink and Java EE from Reza Rahman. The JPA-based demo is available here, while the CDI-based demo is available here. Both demos use MongoDB as the data store. Do let me know if you need help getting the demos up and running. I finished off the event with a talk titled Building Java HTML5/WebSocket Applications with JSR 356. The talk introduces HTML5 WebSocket, overviews JSR 356, tours the API, and ends with a small WebSocket demo on GlassFish 4. The slide deck for the talk is posted below: Building Java HTML5/WebSocket Applications with JSR 356 from Reza Rahman. The demo code is posted on GitHub: https://github.com/m-reza-rahman/hello-websocket. My next NFJS show is the Greater Atlanta Software Symposium on September 12 - 14. Here's my tour schedule so far; I'll keep you up to date as the tour goes forward: September 12 - 14, Atlanta. September 19 - 21, Boston. 
    October 17 - 19, Seattle. I hope you'll take this opportunity to get some updates on Java EE, as well as the other useful content on the tour.
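
    For readers who have not seen JSR 356 yet, the API the closing talk demos is small; a minimal server endpoint sketch (class and path names are illustrative, not taken from the demo repository):

      // Minimal JSR 356 echo endpoint; deployable on any Java EE 7 container such as GlassFish 4.
      import javax.websocket.OnMessage;
      import javax.websocket.server.ServerEndpoint;

      @ServerEndpoint("/hello")
      public class HelloEndpoint {

          @OnMessage
          public String onMessage(String message) {
              // Whatever the client sends is echoed straight back over the same WebSocket.
              return "Hello, " + message;
          }
      }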

    Read the article

  • How do I get 5.1 surround sound working on an Acer Aspire 5738ZG?

    - by kbargais_LV
    I got a problem with sound. I tried everything but no results. :( I got 3 sound ports. my daemon: # This file is part of PulseAudio. # # PulseAudio is free software; you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation; either version 2 of the License, or # (at your option) any later version. # # PulseAudio is distributed in the hope that it will be useful, but # WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU # General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with PulseAudio; if not, write to the Free Software # Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 # USA. ## Configuration file for the PulseAudio daemon. See pulse-daemon.conf(5) for ## more information. Default values are commented out. Use either ; or # for ## commenting. ; daemonize = no ; fail = yes ; allow-module-loading = yes ; allow-exit = yes ; use-pid-file = yes ; system-instance = no ; local-server-type = user ; enable-shm = yes ; shm-size-bytes = 0 # setting this 0 will use the system-default, usually 64 MiB ; lock-memory = no ; cpu-limit = no ; high-priority = yes ; nice-level = -11 ; realtime-scheduling = yes ; realtime-priority = 5 ; exit-idle-time = 20 ; scache-idle-time = 20 ; dl-search-path = (depends on architecture) ; load-default-script-file = yes ; default-script-file = /etc/pulse/default.pa ; log-target = auto ; log-level = notice ; log-meta = no ; log-time = no ; log-backtrace = 0 resample-method = speex-float-1 ; enable-remixing = yes ; enable-lfe-remixing = no flat-volumes = no ; rlimit-fsize = -1 ; rlimit-data = -1 ; rlimit-stack = -1 ; rlimit-core = -1 ; rlimit-as = -1 ; rlimit-rss = -1 ; rlimit-nproc = -1 ; rlimit-nofile = 256 ; rlimit-memlock = -1 ; rlimit-locks = -1 ; rlimit-sigpending = -1 ; rlimit-msgqueue = -1 ; rlimit-nice = 31 ; rlimit-rtprio = 9 ; rlimit-rttime = 1000000 ; default-sample-format = s16le ; default-sample-rate = 44100 ; default-sample-channels = 6 ; default-channel-map = front-left,front-right default-fragments = 8 default-fragment-size-msec = 10 ; enable-deferred-volume = yes ; deferred-volume-safety-margin-usec = 8000 ; deferred-volume-extra-delay-usec = 0

    Read the article

  • perl Client-SSL-Warning: Peer certificate not verified

    - by Jeremey
    I am having trouble with a perl screenscraper to an HTTPS site. In debugging, I ran the following: print $res->headers_as_string; and in the output, I have the following line: Client-SSL-Warning: Peer certificate not verified Is there a way I can auto-accept this certificate, or is that not the problem? #!/usr/bin/perl use LWP::UserAgent; use Crypt::SSLeay::CTX; use Crypt::SSLeay::Conn; use Crypt::SSLeay::X509; use LWP::Simple qw(get); my $ua = LWP::UserAgent->new; my $req = HTTP::Request->new(GET => 'https://vzw-cat.sun4.lightsurf.net/vzwcampaignadmin/'); my $res = $ua->request($req); print $res->headers_as_string; output: Cache-Control: no-cache Connection: close Date: Tue, 01 Jun 2010 19:28:08 GMT Pragma: No-cache Server: Apache Content-Type: text/html Expires: Wed, 31 Dec 1969 16:00:00 PST Client-Date: Tue, 01 Jun 2010 19:28:09 GMT Client-Peer: 64.152.68.114:443 Client-Response-Num: 1 Client-SSL-Cert-Issuer: /O=VeriSign Trust Network/OU=VeriSign, Inc./OU=VeriSign International Server CA - Class 3/OU=www.verisign.com/CPS Incorp.by Ref. LIABILITY LTD.(c)97 VeriSign Client-SSL-Cert-Subject: /C=US/ST=Massachusetts/L=Boston/O=verizon wireless/OU=TERMS OF USE AT WWW.VERISIGN.COM/RPA (C)00/CN=PSMSADMIN.VZW.COM Client-SSL-Cipher: DHE-RSA-AES256-SHA Client-SSL-Warning: Peer certificate not verified Client-Transfer-Encoding: chunked Link: <css/vtext_style.css>; rel="stylesheet"; type="text/css" Set-Cookie: JSESSIONID=DE6C99EA2F3DD1D4DF31456B94F16C90.vz3; Path=/vzwcampaignadmin; Secure Title: Verizon Wireless - Campaign Administrator
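
    The warning itself is informational with Crypt::SSLeay, but if you want certificate verification handled explicitly, LWP::UserAgent 6.x exposes ssl_opts. A hedged sketch (only valid on LWP 6 or later; on older Crypt::SSLeay setups the HTTPS_CA_FILE environment variable plays a similar role):

      use strict;
      use warnings;
      use LWP::UserAgent;

      # Assumes LWP::UserAgent >= 6.00 (IO::Socket::SSL backend).
      my $ua = LWP::UserAgent->new(
          ssl_opts => {
              # Preferred: point at a CA bundle so the peer certificate verifies.
              SSL_ca_file => '/etc/ssl/certs/ca-certificates.crt',
              # Debugging only: uncomment to silence verification entirely.
              # verify_hostname => 0,
          },
      );

      my $res = $ua->get('https://vzw-cat.sun4.lightsurf.net/vzwcampaignadmin/');
      print $res->headers_as_string;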

    Read the article

  • Zendx JQuery Autocomplete

    - by emeraldjava
    I've been trying to get the Zend Jquery autocomplete function working, when i noticed this section in the Zend documentation. The following UI widgets are available as form view helpers. Make sure you use the correct version of jQuery UI library to be able to use them. The Google CDN only offers jQuery UI up to version 1.5.2. Some other components are only available from jQuery UI SVN, since they have been removed from the announced 1.6 release. autoComplete($id, $value, $params, $attribs): The AutoComplete View helper will be included in a future jQuery UI version (currently only via jQuery SVN) and creates a text field and registeres it to have auto complete functionality. The completion data source has to be given as jQuery related parameters 'url' or 'data' as described in the jQuery UI manual. Does anybody know which svn url tag or branch i need to download to get a javascript file with the autocomplete functions available in it? At the moment, my Bootstrap.php has $view->addHelperPath('ZendX/JQuery/View/Helper/', 'ZendX_JQuery_View_Helper'); $view->jQuery()->enable(); $view->jQuery()->uiEnable(); Zend_Controller_Action_HelperBroker::addHelper( new ZendX_JQuery_Controller_Action_Helper_AutoComplete() ); // Add it to the ViewRenderer $viewRenderer = new Zend_Controller_Action_Helper_ViewRenderer(); $viewRenderer->setView($view); Zend_Controller_Action_HelperBroker::addHelper($viewRenderer); In my layout, i define the jquery ui version i want <?php echo $this->jQuery() ->setUiVersion('1.7.2');?> Finally my index.phtml has the autocomplete widget <p><?php $data = array('New York', 'Tokyo', 'Berlin', 'London', 'Sydney', 'Bern', 'Boston', 'Baltimore'); ?> <?php echo $this->autocomplete("ac1", "", array('data' => $data));?></p> I'm using Zend 1.8.3 atm.

    Read the article

  • Can I perform a search on mail server in Java?

    - by twofivesevenzero
    I am trying to perform a search of my gmail using Java. With JavaMail I can do a message by message search like so: Properties props = System.getProperties(); props.setProperty("mail.store.protocol", "imaps"); Session session = Session.getDefaultInstance(props, null); Store store = session.getStore("imaps"); store.connect("imap.gmail.com", "myUsername", "myPassword"); Folder inbox = store.getFolder("Inbox"); inbox.open(Folder.READ_ONLY); SearchTerm term = new SearchTerm() { @Override public boolean match(Message mess) { try { return mess.getContent().toString().toLowerCase().indexOf("boston") != -1; } catch (IOException ex) { Logger.getLogger(JavaMailTest.class.getName()).log(Level.SEVERE, null, ex); } catch (MessagingException ex) { Logger.getLogger(JavaMailTest.class.getName()).log(Level.SEVERE, null, ex); } return false; } }; Message[] searchResults = inbox.search(term); for(Message m:searchResults) System.out.println("MATCHED: " + m.getFrom()[0]); But this requires downloading each message. Of course I can cache all the results, but this becomes a storage concern with large gmail boxes and also would be very slow (I can only imagine how long it would take to search through gigabytes of text...). So my question is, is there a way of searching through mail on the server, a la gmail's search field? Maybe through Microsoft Exchange? Hours of Googling has turned up nothing.
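
    The trick is to hand JavaMail a standard SearchTerm instead of a custom match() override: terms like BodyTerm are translated into an IMAP SEARCH command and evaluated on the server, so messages are not downloaded just to filter them. A sketch using the same connection details as above:

      import java.util.Properties;
      import javax.mail.Folder;
      import javax.mail.Message;
      import javax.mail.Session;
      import javax.mail.Store;
      import javax.mail.search.BodyTerm;
      import javax.mail.search.OrTerm;
      import javax.mail.search.SubjectTerm;

      public class ServerSideSearch {
          public static void main(String[] args) throws Exception {
              Properties props = System.getProperties();
              props.setProperty("mail.store.protocol", "imaps");
              Session session = Session.getDefaultInstance(props, null);
              Store store = session.getStore("imaps");
              store.connect("imap.gmail.com", "myUsername", "myPassword");
              Folder inbox = store.getFolder("Inbox");
              inbox.open(Folder.READ_ONLY);

              // Matches "boston" in the subject or body; the IMAP server does the work.
              Message[] hits = inbox.search(new OrTerm(new SubjectTerm("boston"), new BodyTerm("boston")));
              for (Message m : hits) {
                  System.out.println("MATCHED: " + m.getFrom()[0]);
              }
          }
      }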

    Read the article

  • Oracle WebCenter at the Enterprise 2.0 Conference

    - by Brian Dirking
    We had a great week at the E20 Conference, presenting in four sessions – Andy MacMillan gave a session titled Today’s Successful Enterprises are Social Enterprises and was on a panel that Tony Byrne moderated; Christian Finn spoke on a panel on Unified Communications Unified Communications + Social Computing = Best of Both Worlds?, Mark Bennett spoke on a panel on The Evolution of Talent Management. The key areas of focus this year were sentiment analysis, adoption and community building, the benefits of failure, and social’s role in process applications. Sentiment analysis. This was focused not on external audiences but more on employee sentiment. Tim Young showed his internal "NikoNiko" project, where employees use smilies to report their current mood. The result was a dashboard that showed the company mood by department. Since the goal is to improve productivity, people can see which departments are running into issues and try and address them. A company might otherwise wait until the end of the quarter financials to find out that there was a problem and product didn’t ship. This is a way to identify issues immediately. Tim is great – he had the crowd laughing as soon as he hit the stage, with his proposed hastag for his session: by making it 138 characters long, people couldn’t say much behind his back. And as I tweeted during his session, I loved his comment that complexity diffuses energy - it sounds like something Sun Tzu would say. Another example of employee sentiment analysis was CubeVibe. Founder and CEO Aaron Aycock, in his 3 minute pitch or die session talked about how engaged employees perform better. It was too bad he got gonged, he was just picking up speed, but CubeVibe did win the vote – congratulations to them. Internal adoption, community building, and involvement. On this topic I spoke to Terri Griffith, and she said there is some good work going on at University of Indiana regarding this, and hinted that she might be blogging about it in the near future. This area holds lots of interest for me. Amongst our customers, - CPAC stands out as an organization that has successfully built a community. So, I wonder - what are the building blocks? A strong leader? A common or unifying purpose? A certain level of engagement? I imagine someone has created an equation that says “for a community to grow at 30% per month, there must be an engagement level x to the square root of y, where x equals current community size, and y equals the expected growth rate, and the result is how many engagements the average user must contribute to maintain that growth.” Does anyone have a framework like that? The net result of everyone’s experience is that there is nothing to do but start early and fail often. Kevin Jones made this the focus of his keynote. He talked about the types of failure and what they mean. And he showed his famous kids at work video: Kevin’s blog also has this post: Social Business Failure #8: Workflow Integration. This is something that we’ve been working on at Oracle. Since so much of business is based in enterprise applications such as ERP and CRM (and since Oracle offers e-Business Suite, Siebel, PeopleSoft, and JD Edwards, as well as Fusion Applications), it makes sense that the social capabilities of Oracle WebCenter is built right into these applications. There are two types of social collaboration – ad-hoc, and exception handling. 
When you are in a business process and encounter an exception, you immediately look for 1) the document that tells you how to handle it, or 2) the person who can tell you how to handle it. With WebCenter built into these processes, people either search their content management system, or engage in expertise location and conversation. The great thing is, THEY DON’T HAVE TO LEAVE THE APPLICATION TO DO IT. Oracle has built the social capabilities right into the applications and business processes. I don’t think enough folks were able to see that at the event, but I expect that over the next six months folks will become very aware of it. WebCenter also provides the ability to have ad-hoc collaboration, search, and expertise location that folks need when they are innovating or collaborating. We demonstrated Oracle Social Network. It’s built on our Oracle WebCenter product to provide social collaboration inside and outside of your company. When we showed it to people, there were a number of areas that they commented on that were different from the other products being shown at the conference: Screenshots from within the product Many authors working on documents simultaneously Flagging people for follow up Direct ability to call out to people Ability to see presence not just if someone is online, but which conversation they are actively in Great stuff, the conference was full of smart people that that we enjoy spending time with. We’ll keep up in the meantime, but we look forward to seeing you in Boston.

    Read the article

  • Add Widget via Action in Toolbar

    - by Geertjan
    The question of the day comes from Vadim, who asks on the NetBeans Platform mailing list: "Looking for example showing how to add Widget to Scene, e.g. by toolbar button click." Well, the solution is very similar to this blog entry, where you see a solution provided by Jesse Glick for VisiTrend in Boston: https://blogs.oracle.com/geertjan/entry/zoom_capability Other relevant articles to read are as follows: http://netbeans.dzone.com/news/which-netbeans-platform-action http://netbeans.dzone.com/how-to-make-context-sensitive-actions Let's go through it step by step, with this result in the end, a solution involving 4 classes split (optionally, since a central feature of the NetBeans Platform is modularity) across multiple modules: The Customer object has a "name" String and the Droppable capability has a method "doDrop" which takes a Customer object: public interface Droppable {    void doDrop(Customer c);} In the TopComponent, we use "TopComponent.associateLookup" to publish an instance of "Droppable", which creates a new LabelWidget and adds it to the Scene in the TopComponent. Here's the TopComponent constructor: public CustomerCanvasTopComponent() {    initComponents();    setName(Bundle.CTL_CustomerCanvasTopComponent());    setToolTipText(Bundle.HINT_CustomerCanvasTopComponent());    final Scene scene = new Scene();    final LayerWidget layerWidget = new LayerWidget(scene);    Droppable d = new Droppable(){        @Override        public void doDrop(Customer c) {            LabelWidget customerWidget = new LabelWidget(scene, c.getTitle());            customerWidget.getActions().addAction(ActionFactory.createMoveAction());            layerWidget.addChild(customerWidget);            scene.validate();        }    };    scene.addChild(layerWidget);    jScrollPane1.setViewportView(scene.createView());    associateLookup(Lookups.singleton(d));} The Action is displayed in the toolbar and is enabled only if a Droppable is currently in the Lookup: @ActionID(        category = "Tools",        id = "org.customer.controler.AddCustomerAction")@ActionRegistration(        iconBase = "org/customer/controler/icon.png",        displayName = "#AddCustomerAction")@ActionReferences({    @ActionReference(path = "Toolbars/File", position = 300)})@NbBundle.Messages("AddCustomerAction=Add Customer")public final class AddCustomerAction implements ActionListener {    private final Droppable context;    public AddCustomerAction(Droppable droppable) {        this.context = droppable;    }    @Override    public void actionPerformed(ActionEvent ev) {        NotifyDescriptor.InputLine inputLine = new NotifyDescriptor.InputLine("Name:", "Data Entry");        Object result = DialogDisplayer.getDefault().notify(inputLine);        if (result == NotifyDescriptor.OK_OPTION) {            Customer customer = new Customer(inputLine.getInputText());            context.doDrop(customer);        }    }} Therefore, when the Properties window, for example, is selected, the Action will be disabled. (See the Zoomable example referred to in the link above for another example of this.) As you can see above, when the Action is invoked, a Droppable must be available (otherwise the Action would not have been enabled). The Droppable is obtained in the Action and a new Customer object is passed to its "doDrop" method. 
The above in pictures, take note of the enablement of the toolbar button with the red dot, on the extreme left of the toolbar in the screenshots below: The above shows the JButton is only enabled if the relevant TopComponent is active and, when the Action is invoked, the user can enter a name, after which a new LabelWidget is created in the Scene. The source code of the above is here: http://java.net/projects/nb-api-samples/sources/api-samples/show/versions/7.3/misc/WidgetCreationFromAction Note: Showing this as an MVC example is slightly misleading because, depending on which model object ("Customer" and "Droppable") you're looking at, the V and the C are different. From the point of view of "Customer", the TopComponent is the View, while the Action is the Controler, since it determines when the M is displayed. However, from the point of view of "Droppable", the TopComponent is the Controler, since it determines when the Action, i.e., which is in this case the View, displays the presence of the M.

    Read the article

  • scrollTo (jQuery) won't work in firefox

    - by William
    For some reason, firefox seems to ignore my scrollTo function even though it works in chrome and safari. Here's an example link: http://blog.rainbird.me/post/2358248459/blowholes-are-awesome Chrome and Safari will automatically scroll to the top of the image (with an offset of 20 pixels) It doesn't work in firefox. I'm baffled! code: $(document).ready(function() { $(".photoShell img").lazyload({ placeholder: "http://william.rainbird.me/boston-polaroid/white.gif", threshold: 200 }); window.viewport = { height: function() { return $(window).height(); }, width: function() { return $(window).width(); }, scrollTop: function() { return $(window).scrollTop(); }, scrollLeft: function() { return $(window).scrollLeft(); } }; $(".photoShell img").hide(); $(".photoShell .caption").hide(); $(".photoShell img").load(function() { var maxWidth = viewport.width() - 40; // Max width for the image if(maxWidth > 960){ maxWidth = 960; } var maxHeight = viewport.height() - 50; // Max height for the image var ratio = 0; // Used for aspect ratio var width = $(this).width(); // Current image width var height = $(this).height(); // Current image height // Check if the current width is larger than the max if(width > maxWidth){ ratio = maxWidth / width; // get ratio for scaling image $(this).css("width", maxWidth); // Set new width $(this).css("height", height * ratio); // Scale height based on ratio height = height * ratio; // Reset height to match scaled image width = width * ratio; // Reset width to match scaled image } // Check if current height is larger than max if(height > maxHeight){ ratio = maxHeight / height; // get ratio for scaling image $(this).css("height", maxHeight); // Set new height $(this).css("width", width * ratio); // Scale width based on ratio width = width * ratio; // Reset width to match scaled image } $(this).parents('div.photoShell').css("width", $(this).width() + 22); $(this).parents('div.photoShell').addClass('loaded'); $(this).next(".caption").show(); var scrollNum = $(this).parents('div.photoShell').offset().top; $.scrollTo(scrollNum - 20, {duration: 700, axis:"y"}); $(this).fadeIn("slow"); }).each(function() { // trigger the load event in case the image has been cached by the browser if(this.complete) $(this).trigger('load'); });
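
    If $.scrollTo keeps being ignored in Firefox, a common workaround is to skip the plugin for this one call and animate scrollTop on both html and body, since Firefox and WebKit disagree about which element scrolls the page. A sketch of what could replace the $.scrollTo line:

      // Inside the img load handler, in place of the $.scrollTo(...) call:
      var scrollNum = $(this).parents('div.photoShell').offset().top;
      // Firefox scrolls <html>, older WebKit scrolls <body>, so animate both.
      $('html, body').animate({ scrollTop: scrollNum - 20 }, 700);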

    Read the article

  • CodePlex Daily Summary for Friday, February 19, 2010

    CodePlex Daily Summary for Friday, February 19, 2010New ProjectsApplication Management Library: Application Management makes your application life easier. It will automatic do memory management, handle and log unhandled exceptions, profiling y...Audio Service - Play Wave Files From Windows Service: This is a windows service that Check a registry key, when the key is updated with a new wave file path the service plays the wave file.Aviamodels: 3d drawing AviamodelsControl of payment proofs program for Greek citizens: This is a program that is used for Greek citizens who want to keep track of their payment proofs.Cover Creator: Cover Creator gives you the possibility to create and print CD covers. Content of CD is taken from http://www.freedb.org/ or can be added/modyfied ...DevBoard: DevBoard is a webbased scrum tool that helps developers/team get a clear overview of the project progress. It's developed in C# and silverlight.Flex AdventureWorks: The is mostly a skunk-works application to help me get acclimated to CodePlex. The long term goal is to integrate a Flex UI with the AdventureWor...GRE Wordlist: An intuitive and customizable word list for GRE aspirants. Developed in Java using a word list similar to Barron's.Indexer: A desktop file Index and Search tool which allows you to choose a list of folders to index, and then search on later. It is based on Lucene.net an...Project Management Office (PMO) for SharePoint: Sample web part for the Code Mastery event in Boston, February 11, 2010.Restart SQL Audit Policy and Job: Resolve SQL 2008 Audit Network Connectivity Issue.Rounded Corners / DIV Container: The RoundedDiv round corners container is a skin-able, CSS compliant UI control. Select which corners should be rounded, collapse and expand the c...Silverlight Google Search Application: The Silverlight Google Search Application uses Google Search API and behaves like Internet Search Application with option to preview desired page i...Weather Forecast Control: MyWeather forecast control pulls up to date weather forecast information from The Weather Channel for your website.New ReleasesApplication Management Library: ApplicationManagement v1.0: First ReleaseAudio Service - Play Wave Files From Windows Service: Audio Service v1.0: This is a working version of the Audio Service. Please use as you need to.AutoMapper: 1.0.1 for Silverlight 3.0 Alpha: AutoMapper for Silverlight 3.0. Features not supported: IDataReader mapping IListSource mapping All other features are supported.Buzz Dot Net: Buzz Dot Net v.1.10219: Buzz Dot Net Library (Parser & Objects) + WPF Example (using MVVM & Threading)Canvas VSDOC Intellisense: V 1.0.0.0a: This release contains two JavaScript files: canvas-utils.js (can be referenced in both runtime and development environment) canvas-vsdoc.js (must ...Control of payment proofs program for Greek citizens: Payment Proofs: source codeCourier: Beta 2: Added Rx Framework support and re-factored how message registration and un-registration works Blog post explaining the updates and re-factoring c...Cover Creator: Initial release: This is initial stable release. For now only in Polish language.Employee Scheduler: Employee Scheduler 2.2: Small Bug found. Small total hour calculation bug. 
See http://employeescheduler.codeplex.com/WorkItem/View.aspx?WorkItemId=6059 Extract the files...EnhSim: Release v1.9.7.1: Release v1.9.7.1Implemented Dislodged Foreign Object trinket Whispering Fanged Skull now also procs off Flame shock dots You can toggle bloodlust o...Extend SmallBasic: Teaching Extensions v.007: added SimpleSquareTest added Tortoise.Approve() for virtual proctor how to use virtual proctor: change the path in the proctor.txt file (located i...FolderSize: FolderSize.Win32.1.0.1.0: FolderSize.Win32.1.0.1.0 A simple utility intended to be used to scan harddrives for the folders that take most place and display this to the user...GLB Virtual Player Builder: v0.4.0 Beta: Allows for user to import and use archetypes for building players. The archetypes are contained in the file "archetypes.xml". This file is editab...Google Map WebPart from SharePoint List: GMap Stable Release: GMap Stable ReleaseHenge3D Physics Library for XNA: Henge3D Source (2010-02 R2): Fixed a build error related to an assembly attribute in XBOX 360 builds. Tweaked the controls in the sample when targeting the 360. Reduced the...Indexer: Beta Release 1: Just the initial/rough cut.NukeCS: NukeCS 5.2.3 Source Code: update version to 5.2.3ODOS: ODOS STABLE 1.5.0: Thank you for your patience while we develop this version. Not that much has been added, though. Just doing some sub-conscious stuff to make life...PoshBoard: PoshBoard 3.0 Beta 1: Welcome to the first beta release of PoshBoard 3.0 ! IMPORTANT WARNING : this release is absolutly not feature complete and is error-prone. Okay, ...Restart SQL Audit Policy and Job: Restart SQL 2008 Audit Policy and Job: This folder contains three pieces of source code: Server Audit Status (Started).xml - Import this on-schedule policy into your server's Policy-Ba...SAL- Self Artificial Learning: Artificial Learning 2AQV Working Proof Of Concept: This is the Simulation proof of concept version that comes after the 1aq version. AQ stands for Anwering Questions.SharePoint 2010 Word Automation: SP 2010 Word Automation - Workflow Actions 1.1: This release includes two new custom workflow activities for SharePoint designer Convert Folder Convert Library More information about these new...SharePoint Outlook Connector: Version 1.0.1.1: Exception Logging has been improved.Sharpy: Sharpy 1.2 Alpha: This is the third Sharpy release. A change has been made to allow overriding the master page from the controller. The release contains the single ...Silverlight Google Search Application: SL Google Search App Alpha: This is just a first alpha version of the application, as it looks like when I uploaded it to CodePlex. The application works, requires Silverlight...Starter Kit Mytrip.Mvc.Entity: Mytrip.Mvc.Entity 1.0 RC: EF Membership UserManager FileManager Localization Captcha ClientValidation Theme CrossBrowser VS 2010 RC MVC 2 RC db MSSQL2008thinktecture WSCF.blue: WSCF.blue V1 Update (1.0.6): This release is an update for WSCF.blue V1. Below are the bug fixes made since the V1.0.5 release: The data contract type filter was not including...TS3QueryLib.Net: TS3QueryLib.Net Version 0.18.13.0: Changelog Added overloads to all methods of QueryRunenr class handling permission tasks to allow passing of permission name instead of permissionid...Umbraco CMS: Umbraco 4.1 Beta 2: This is the second beta of Umbraco 4.1. Umbraco 4.1 is more advanced than ever, yet faster, lighter and simpler to use than ever. 
We, on behalf of...VCC: Latest build, v2.1.30218.0: Automatic drop of latest buildZack's Fiasco - Code Generated DAL: v1.2.4: Enhancements: SQL Server CRUD Stored Procedures added option for USE <db> added option to create or not create INSERT sprocs added option to cr...Most Popular ProjectsRawrWBFS ManagerAJAX Control ToolkitMicrosoft SQL Server Product Samples: DatabaseSilverlight ToolkitWindows Presentation Foundation (WPF)Image Resizer Powertoy Clone for WindowsASP.NETMicrosoft SQL Server Community & SamplesDotNetNuke® Community EditionMost Active ProjectsRawrSharpyDinnerNow.netBlogEngine.NETjQuery Library for SharePoint Web ServicesNB_Store - Free DotNetNuke Ecommerce Catalog Modulepatterns & practices – Enterprise LibraryPHPExcelFacebook Developer ToolkitFluent Ribbon Control Suite

    Read the article

  • How I Work: Staying Productive Whilst Traveling

    - by BuckWoody
    I travel a lot. Not like some folks that are gone every week, mind you, although in the last month I’ve been to: Cambridge, UK; Anchorage, AK; San Jose, CA; Copenhagen, DK, Boston, MA; and I’m currently en-route to Anaheim, CA.  While this many places in a month is a bit unusual for me, I would say I travel frequently. I’ve travelled most of my 28+ years in IT, and at one time was a consultant traveling weekly.   With that much time away from my primary work location, I have to find ways to stay productive. Some might say “just rest – take a nap!” – but I’m not able to do that. For one thing, I’m a very light sleeper and I’ve never slept on a plane - even a 30+ hour trip to New Zealand in Business Class - so that just isn’t option. I also am not always in the plane, of course. There’s the hotel, the taxi/bus/train, the airport and then all that over again when I arrive. Since my regular jobs have many demands, I have to get work done.   Note: No, I’m not always focused on work. I need downtime just like everyone else. Sometimes I just think, watch a movie or listen to tunes – and I give myself permission to do that anytime – sometimes the whole trip. I have too fewheartbeats left in life to only focus on work – it’s just not that important, and neither am I. Some of these tasks are letters to friends and family, or other personal things. What I’m talking about here is a plan, not some task list I have to follow. When I get to the location I’m traveling to, I always build in as much time as I can to ensure I enjoy those sights and the people I’m with. I would find traveling to be a waste if not for that.   The Unrealistic Expectation As I would evaluate the trip I was taking – say a 6-8 hour flight – I would expect to get 10-12 hours of work done. After all, there’s the time at the airport, the taxi and so on, and then of course the time in the air with all of the room, power, internet and everything else I needed to get my work done. I would pile up tasks at home, pack my bags, and head happily to the magical land of the TSA.   Right. On return from the trip, I had accomplished little, had more e-mails and other work that had piled up, and I was tired, hungry, and unorganized. This had to change. So, I decided to do three things: Segment my work Set realistic expectations Plan accordingly  Segmenting By Available Resources The first task was to decide what kind of work I could do in each location – if any. I found that I was dependent on a few things to get work done, such as power, the Internet, and a place to sit down. Before I fly, I take some time at home to get all of the work I’d like to accomplish while away segmented into these areas, and print that out on paper, which goes in my suit-coat pocket along with a mechanical pencil. I print my tickets, and I’m all set for the adventure ahead. Then I simply do each kind of work whenever I’m in that situation. No power There are certain times when I don’t have power available. But not only that, I might not even be able to use most of my electronics. So I now schedule as many phone calls as I can for the taxi/bus/train ride and the airports as I can. I have a paper notebook (Moleskine, of course) and a pencil and I print out any notes or numbers I need prior to the trip. Once I’m airborne or at the airport, I work on my laptop. I check and respond to e-mails, create slides, write code, do architecture, whatever I can.  If I can’t use any electronics, or once the power runs out, I schedule time for reading. 
I can read at the airport or anywhere, actually, even in-flight or any other transport. I “read with a pencil”, meaning I take a lot of notes, which I liketo put in OneNote, but since in most cases I don’t have power, I use the Moleskine to do that. Speaking of which, sometimes as I’m thinking I come up with new topics, ideas, blog posts, or things to teach in my classes. Once again I take out the notebook and write it down. All of these notes get a check-mark when I get back to the office and transfer the writing to OneNote. I’ve tried those “smart pens” and so on to automate this, but it just never works out. Pencil and paper are just fine. As I mentioned, sometime I just need to think. I’ll do nothing, and let my mind wander, thinking of nothing in particular, or some math problem or science question I’m interested in. My only issue with this is that I communicate tothink, and I don’t want to drive people crazy by being that guy that won’t shut up, so I think in a different way. Power, but no Internet or Phone If I have power but no Internet or phone, I focus on the laptop and the tablet as before, and I also recharge my other gadgets. Power, Internet, Phone and a Place to Work At first I thought that when I arrived at the hotel or event I could get the same amount of work done that I do at the office. Not so. There’s simply too many distractions, things you need, or other issues that allow this. Of course, Ican work on any device, read, think, write or whatever, but I am simply not as productive as I am in my home office. So I plan for about 25-50% as much work getting done in this environment as I think I could really do. I’ve done some measurements, and this holds out to be true almost every time. The key is that I re-set my expectations (and my co-worker’s expectations as well) that this is the case. I use the Out-Of-Office notices to let people know that I’m just not going to be 100% at this time – it’s hard for everyone, but it’s more honest and realistic, and I’d rather they know that – and that I realize that – than to let them think I’m totally available. Because I’m not – I’m traveling. I don’t tend to put too much detail, because after all I don’t necessarily want to let people know when I’m not home :) but I do think it’s important to let people that depend on my know that I’ll get back with them later. I hope this helps you think through your own methodology of staying productive when you travel. Or perhaps you just go offline, and don’t worry about any of this – good for you! That’s completely valid as well.   (Oh, and yes, I wrote this at 35K feet, on Alaska Airlines on a trip. :)  Practice what you preach, Buck.)

    Read the article

  • Right-Time Retail Part 1

    - by David Dorf
    This is the first in a three-part series.

Right-Time Revolution
Technology enables some amazing feats in retail. I can order flowers for my wife while flying 30,000 feet in the air. I can order my groceries in the subway and have them delivered later that day. I can even see how clothes look on me without setting foot in a store. Who knew that a TV, diamond necklace, or even a car would someday be as easy to purchase as a candy bar? Can technology make a mattress an impulse item? Wake up and your back is hurting, so you roll over and grab your iPad, and a new mattress is delivered the next day. Behind the scenes, many processes are being choreographed to make the sale happen. This includes moving data between systems with the least amount of friction, which in some cases is near real-time. But real-time isn’t appropriate for all the integrations. Think about what a completely real-time retailer would look like. A consumer grabs toothpaste off the shelf, and all systems are immediately notified so that the backroom clerk comes running out and pushes the consumer aside so he can replace the toothpaste on the shelf. Such a system is not only cost prohibitive, but it’s also very inefficient and ineffectual. Retailers must balance the realities of people, processes, and systems to find the right speed of execution. That’s what “right-time retail” means. Retailers used to sell during the day and count the money and restock at night, but global expansion and the Web have complicated that simplistic viewpoint. Our 24-hour society demands not only access but also speed, which constantly pushes the boundaries of our IT systems. In the last twenty years, there have been three major technology advancements that have moved us closer to real-time systems. Networking is the first technology that drove the real-time trend. As systems became connected, it became easier to move data between them. In retail we no longer had to mail the daily business report back to corporate each day, as the dial-up modem could transfer the data. That was soon replaced with trickle-polling, when sale transactions were occasionally sent from stores to corporate throughout the day, often through VSAT. Then we got terrestrial networks like DSL and Ethernet that allowed a constant stream of data between stores and corporate. When corporate could see the sales transactions coming from stores, it could better plan for replenishment and promotions. That drove the need for speed into the supply chain and merchandising, but for many years those systems were stymied by the huge volumes of data. 
Nordstrom has 150 million SKU/Store combinations when planning (RPAS); The Gap generates 110 million price changes during end-of-season (RPM); Argos does 1.78 billion calculations executed each day for replenishment planning (AIP). These areas are now being alleviated by the second technology, storage. The typical laptop disk drive runs at 5,400rpm with PCs stepping up to 7,200rpm and servers hitting 15,000rpm. But the platters can only spin so fast, so to squeeze more performance we’ve had to rely on things like disk striping. Then solid state drives (SSDs) were introduced and prices continue to drop. (Augmenting your harddrive with a SSD is the single best PC upgrade these days.) RAM continues to be expensive, but compressing data in memory has allowed more efficient use. So a few years back, Oracle decided to build a box that incorporated all these advancements to move us closer to real-time. This family of products, often categorized as engineered systems, combines the hardware and software so that they work together to provide better performance. How much better? If Exadata powered a 747, you’d go from New York to Paris in 42 minutes, and it would carry 5,000 passengers. If Exadata powered baseball, games would last only 18 minutes and Boston’s Fenway would hold 370,000 fans. The Exa-family enables processing more data in less time. So with faster networks and storage, that brings us to the third and final ingredient. If we continue to process data in traditional ways, we won’t be able to take advantage of the faster networks and storage. Enter what Harvard calls “The Sexiest Job of the 21st Century” – the data scientist. New technologies like the Hadoop-powered Oracle Big Data Appliance, Oracle Advanced Analytics, and Oracle Endeca Information Discovery change the way in which we organize data. These technologies allow us to extract actionable information from raw data at incredible speeds, often ad-hoc. So the foundation to support the real-time enterprise exists, but how does a retailer begin to take advantage? The most visible way is through real-time marketing, but I’ll save that for part 3 and instead begin with improved integrations for the assets you already have in part 2.

    Read the article

  • First toe in the water with Object Databases : DB4O

    - by REA_ANDREW
    I have been wanting to have a play with Object Databases for a while now, and today I have done just that. One of the obvious choices I had to make was which one to use. My criteria for choosing one today were simple: I wanted one which I could literally whack in and start using, which means I wanted one which either had a .NET API or was designed/ported to .NET. My decision was between two: db4o and MongoDb. I went for db4o for the single reason that it looked like I could get it running and integrated the quickest. I am making a Blogging application and front end as a project with which I can test and learn with these object databases. Another requirement which I thought I would mention is that I also want to be able to use the said database in a shared hosting environment, where I cannot install, run and maintain a server instance of said object database. I can do exactly this with db4o. I have not tried to do this with MongoDb at the time of writing. There are quite a few in the industry now, and you can read an interesting post about different ones and how they are used with some of the heavyweights in the industry here: http://blog.marcua.net/post/442594842/notes-from-nosql-live-boston-2010

In the example which I am building I am using StructureMap as my IOC. To inject the object for db4o I went with a Singleton instance scope, as I am using a single file and I need this to be available to any thread in the process, as opposed to using the server implementation where I could open and close client connections with the server handling each one respectively. Again I want to point out that I have chosen to stick with the non-server implementation of db4o as I wanted to use this in a shared hosting environment where I cannot have such servers installed and run.

public static class Bootstrapper
{
    public static void ConfigureStructureMap()
    {
        ObjectFactory.Initialize(x => x.AddRegistry(new MyApplicationRegistry()));
    }
}

public class MyApplicationRegistry : Registry
{
    public const string DB4O_FILENAME = "blog123";

    public string DbPath
    {
        get
        {
            return Path.Combine(Path.GetDirectoryName(Assembly.GetAssembly(typeof(IBlogRepository)).Location), DB4O_FILENAME);
        }
    }

    public MyApplicationRegistry()
    {
        For<IObjectContainer>().Singleton().Use(
            () => Db4oEmbedded.OpenFile(Db4oEmbedded.NewConfiguration(), DbPath));

        Scan(assemblyScanner =>
        {
            assemblyScanner.TheCallingAssembly();
            assemblyScanner.WithDefaultConventions();
        });
    }
}

So my code above is the StructureMap plumbing which I use for the application. I am doing this simply as a quick scratch pad to play around with different things, so I am simply segregating logical layers with folder structure as opposed to different assemblies. It will be easy if I want to do this with any segment, but for the purposes of example I have literally just whacked everything into the one assembly. You can see an example file structure I have on the right.
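For reference, here is a minimal sketch of what the IBlogRepository contract could look like; the exact members are an assumption, inferred from the BlogRepository methods shown further down rather than taken from the original code.

// Hypothetical contract (not shown above); members inferred from the
// BlogRepository implementation that appears later.
public interface IBlogRepository
{
    void Put(DomainObject obj);    // store a new or changed object
    void Delete(DomainObject obj); // remove an object from the container
    Post GetByKey(object key);     // fetch a single Post by its key
}

With WithDefaultConventions in the Scan call above, StructureMap maps IBlogRepository to a concrete class named BlogRepository automatically, so no explicit For<IBlogRepository>() registration is needed.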
I am planning on testing out a few implementations of the object databases out there so I can program to an interface of IBlogRepository. One of the things which I was unsure about was how it performed in a multi-threaded environment, which is how it will undoubtedly be used 9 times out of 10, and because I am using the db context as a singleton, I assumed that the library was of course thread safe, but I did not know, as I have not read anywhere in the documentation; again this is probably me not reading things correctly. In short, though, I threw together a simple test where I simply iterate to a limit, each time kicking a common task off with a thread from a thread pool. This task simply created a random Post and added it to the storage. I put the execution of the threads inside the Setup of the test and then simply ensure the number of posts committed to the database is equal to the number of iterations I made; here is the code I used to do the multi-thread jobs:

[TestInitialize]
public void Setup()
{
    var sw = new System.Diagnostics.Stopwatch();
    sw.Start();
    var resetEvent = new ManualResetEvent(false);
    ThreadPool.SetMaxThreads(20, 20);
    for (var i = 0; i < MAX_ITERATIONS; i++)
    {
        ThreadPool.QueueUserWorkItem(delegate(object state)
        {
            var eventToReset = (ManualResetEvent)state;
            var post = new Post { Author = MockUser, Content = "Mock Content", Title = "Title" };
            Repository.Put(post);
            var counter = Interlocked.Decrement(ref _threadCounter);
            if (counter == 0) eventToReset.Set();
        }, resetEvent);
    }
    WaitHandle.WaitAll(new[] { resetEvent });
    sw.Stop();
    Console.WriteLine("{0:00}.{1:00} seconds", sw.Elapsed.Seconds, sw.Elapsed.Milliseconds);
}

I was not doing this to test out the speed performance of db4o, but while I was doing this I could not help but put in a Stopwatch and see, out of sheer interest, how quickly it could insert a number of Posts. I tested it out in this case with 10000 inserts of a small, simple POCO and it resulted in an average of 899.36 object inserts / second. Again this is just a simple, crude test which came out of my curiosity at how it performed under many threads when using the non-server implementation of db4o. The spec summary of the computer I used is as follows:

With regards to the actual Repository implementation itself, it really is quite straightforward, and I have to say I am very surprised at how easy it was to integrate and get up and running. One thing I have noticed in the exposure I have had so far is that the Query returns IList<T> as opposed to IQueryable<T>, but again I have not looked into this in depth; this could be there already, and if not, they have provided everything one needs to make their own repository. An example of a couple of methods from my db4o implementation of the BlogRepository is below:

public class BlogRepository : IBlogRepository
{
    private readonly IObjectContainer _db;

    public BlogRepository(IObjectContainer db)
    {
        _db = db;
    }

    public void Put(DomainObject obj)
    {
        _db.Store(obj);
    }

    public void Delete(DomainObject obj)
    {
        _db.Delete(obj);
    }

    public Post GetByKey(object key)
    {
        return _db.Query<Post>(post => post.Key == key).FirstOrDefault();
    }
    …

Anyways I hope to get a few more implementations going of the object databases and literally just get familiarized with them and the concept of NoSQL databases. Cheers for now, Andrew

    Read the article

  • With NHibernate, how can I add a child object when updating a parent object?

    - by BMZ
    I have a simple Parent/Child relationship between a Person object and an Address object. The Person object exists in the DB. After doing a Get on the Person, I add a new Address object to the Address sub-object list of the parent, and do some other updates to the Person object. Finally, I do an Update on the Person object. With a SQL trace window, I can see the update to the Person object to the Person table and the Insert of the Address record to the Address table. The issue is that, after the update is performed, the AddressId (primary key on the Address object) is still set to 0, which is what it defaults to when you first initialize the Address object. I have verified that when I do an Add, this value is set correctly. Is this a known issue when trying to add sub-objects as part of an NHibernate UPDATE? Sample code and mapping files are below Thanks <hibernate-mapping xmlns="urn:nhibernate-mapping-2.2"> <class name="BusinessEntities.Wellness.Person,BusinessEntities.Wellness" table="Person" lazy="true" dynamic-insert="true" dynamic-update="false"> <id name="Personid" column="PersonID" type="int"> <generator class="native" /> </id> <version type="binary" generated="always" name="RecordVersion" column="`RecordVersion`"/> <property type="int" not-null="true" name="Customerid" column="`CustomerID`" /> <property type="AnsiString" not-null="true" length="9" name="Ssn" column="`SSN`" /> <property type="AnsiString" not-null="true" length="30" name="FirstName" column="`FirstName`" /> <property type="AnsiString" not-null="true" length="35" name="LastName" column="`LastName`" /> <property type="AnsiString" length="1" name="MiddleInitial" column="`MiddleInitial`" /> <property type="DateTime" name="DateOfBirth" column="`DateOfBirth`" /> <bag name="PersonAddresses" inverse="true" lazy="true" cascade="all"> <key column="PersonID" /> <one-to-many class="BusinessEntities.Wellness.PersonAddress,BusinessEntities.Wellness" / </bag> </class> </hibernate-mapping> <hibernate-mapping xmlns="urn:nhibernate-mapping-2.2"> <class name="BusinessEntities.Wellness.PersonAddress,BusinessEntities.Wellness" table="PersonAddress" lazy="true" dynamic-insert="true" dynamic-update="false"> <id name="PersonAddressId" column="PersonAddressID" type="int"> <generator class="native" /> </id> <version type="binary" generated="always" name="RecordVersion" column="`RecordVersion`" /> <property type="AnsiString" not-null="true" length="1" name="AddressTypeid" column="`AddressTypeID`" /> <property type="AnsiString" not-null="true" length="60" name="AddressLine1" column="`AddressLine1`" /> <property type="AnsiString" length="60" name="AddressLine2" column="`AddressLine2`" /> <property type="AnsiString" length="60" name="City" column="`City`" /> <property type="AnsiString" length="2" name="UsStateId" column="`USStateID`" /> <property type="AnsiString" length="5" name="UsPostalCodeId" column="`USPostalCodeID`" /> <many-to-one name="Person" cascade="none" column="PersonID" /> </class> </hibernate-mapping> Person newPerson = new Person(); newPerson.PersonName = "John Doe"; newPerson.SSN = "111111111"; newPerson.CreatedBy = "RJC"; newPerson.CreatedDate = DateTime.Today; personDao.AddPerson(newPerson); Person updatePerson = personDao.GetPerson(newPerson.PersonId); updatePerson.PersonAddresses = new List<PersonAddress>(); PersonAddress addr = new PersonAddress(); addr.AddressLine1 = "1 Main St"; addr.City = "Boston"; addr.State = "MA"; addr.Zip = "12345"; updatePerson.PersonAddresses.Add(addr); personDao.UpdatePerson(updatePerson); int 
addressID = updatePerson.PersonAddresses[0].AddressId;
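Worth noting about the snippet above: because the PersonAddresses bag is mapped with inverse="true", NHibernate expects the PersonAddress side of the association to be set as well, and the sample never assigns addr.Person. Below is a minimal sketch of the same update with the back-reference set (property names follow the mappings above; whether this also resolves the AddressId symptom depends on when the session is flushed):

PersonAddress addr = new PersonAddress();
addr.AddressLine1 = "1 Main St";
addr.City = "Boston";
addr.Person = updatePerson;               // child owns the FK when inverse="true"
updatePerson.PersonAddresses.Add(addr);

personDao.UpdatePerson(updatePerson);     // cascade="all" inserts the new address
// A native (identity) key is only assigned when the INSERT is flushed, so read
// the generated id after UpdatePerson has flushed/committed the session.
int addressId = updatePerson.PersonAddresses[0].AddressId;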

    Read the article

  • mine phrases (up to 3 words) from a given text

    - by DS_web_developer
    I asked before for a simple solution to my problem (using sphinx search service) but I got nowhere... someone has kindly provided me with this code <?php /** * $Project: GeoGraph $ * $Id$ * * GeoGraph geographic photo archive project * This file copyright (C) 2005 Barry Hunter ([email protected]) * * This program is free software; you can redistribute it and/or * modify it under the terms of the GNU General Public License * as published by the Free Software Foundation; either version 2 * of the License, or (at your option) any later version. * * This program is distributed in the hope that it will be useful, * but WITHOUT ANY WARRANTY; without even the implied warranty of * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the * GNU General Public License for more details. * * You should have received a copy of the GNU General Public License * along with this program; if not, write to the Free Software * Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. */ /** * Provides the methods for updating the worknet tables * * @package Geograph * @author Barry Hunter <[email protected]> * @version $Revision$ */ function addTwoLetterPhrase($phrase) { global $w2; $w2[$phrase] = (isset($w2[$phrase]))?($w2[$phrase]+1):1; } function addThreeLetterPhrase($phrase) { global $w3; $w3[$phrase] = (isset($w3[$phrase]))?($w3[$phrase]+1):1; } function updateWordnet(&$db,$text,$field,$id) { global $w1,$w2,$w3; $alltext = strtolower(preg_replace('/\W+/',' ',str_replace("'",'',$text))); if (strlen($text)< 1) return; $words = preg_split('/ /',$alltext); $w1 = array(); $w2 = array(); $w3 = array(); //build a list of one word phrases foreach ($words as $word) { $w1[$word] = (isset($w1[$word]))?($w1[$word]+1):1; } //build a list of two word phrases $text = $alltext; $text = preg_replace('/(\w+) (\w+)/e','addTwoLetterPhrase("$1 $2")',$text); $text = $alltext; $text = preg_replace('/(\w+)/','',$text,1); $text = preg_replace('/(\w+) (\w+)/e','addTwoLetterPhrase("$1 $2")',$text); //build a list of three word phrases $text = $alltext; $text = preg_replace('/(\w+) (\w+) (\w+)/e','addThreeLetterPhrase("$1 $2 $3")',$text); $text = $alltext; $text = preg_replace('/(\w+)/','',$text,1); $text = preg_replace('/(\w+) (\w+) (\w+)/e','addThreeLetterPhrase("$1 $2 $3")',$text); $text = $alltext; $text = preg_replace('/(\w+) (\w+)/','',$text,1); $text = preg_replace('/(\w+) (\w+) (\w+)/e','addThreeLetterPhrase("$1 $2 $3")',$text); foreach ($w1 as $word=>$count) { $db->Execute("insert into wordnet1 set gid = $id,words = '$word',$field = $count");// ON DUPLICATE KEY UPDATE $field=$field+$count"); } foreach ($w2 as $word=>$count) { $db->Execute("insert into wordnet2 set gid = $id,words = '$word',$field = $count"); } foreach ($w3 as $word=>$count) { $db->Execute("insert into wordnet3 set gid = $id,words = '$word',$field = $count"); } } ?> It works fine and does almost exactly what I need....... except.... it is not utf8 friendly... I mean... it splits whole words into parts (on special chars) where it shouldn't! so my guess is I should use multibyte functions instead of regular preg_replace... I tried to replace preg_replace with mb_ereg_replace but it is not working as it should... at least not for 2 and 3 words phrases any ideas?

    Read the article

  • Is your team a high-performing team?

    As a child I can remember looking out of the car window as my father drove along the Interstate in Florida while seeing prisoners wearing bright orange jump suits and prison guards keeping a watchful eye on them. The prisoners were taking part in a prison road gang. These road gangs were formed to help the state maintain the state highway infrastructure. The prisoner’s primary responsibilities are to pick up trash and debris from the roadway. This is a prime example of a work group or working group used by most prison systems in the United States. Work groups or working groups can be defined as a collection of individuals or entities working together to achieve a specific goal or accomplish a specific set of tasks. Typically these groups are only established for a short period of time and are dissolved once the desired outcome has been achieved. More often than not group members usually feel as though they are expendable to the group and some even dread that they are even in the group. "A team is a small number of people with complementary skills who are committed to a common purpose, performance goals, and approach for which they are mutually accountable." (Katzenbach and Smith, 1993) So how do you determine that a team is a high-performing team?  This can be determined by three base line criteria that include: consistently high quality output, the promotion of personal growth and well being of all team members, and most importantly the ability to learn and grow as a unit. Initially, a team can successfully create high-performing output without meeting all three criteria, however this will erode over time because team members will feel detached from the group or that they are not growing then the quality of the output will decline. High performing teams are similar to work groups because they both utilize a collection of individuals or entities to accomplish tasks. What distinguish a high-performing team from a work group are its characteristics. High-performing teams contain five core characteristics. These characteristics are what separate a group from a team. The five characteristics of a high-performing team include: Purpose, Performance Measures, People with Tasks and Relationship Skills, Process, and Preparation and Practice. A high-performing team is much more than a work group, and typically has a life cycle that can vary from team to team. The standard team lifecycle consists of five states and is comparable to a human life cycle. The five states of a high-performing team lifecycle include: Formulating, Storming, Normalizing, Performing, and Adjourning. The Formulating State of a team is first realized when the team members are first defined and roles are assigned to all members. This initial stage is very important because it can set the tone for the team and can ultimately determine its success or failure. In addition, this stage requires the team to have a strong leader because team members are normally unclear about specific roles, specific obstacles and goals that my lay ahead of them.  Finally, this stage is where most team members initially meet one another prior to working as a team unless the team members already know each other. The Storming State normally arrives directly after the formulation of a new team because there are still a lot of unknowns amongst the newly formed assembly. 
As a general rule most of the parties involved in the team are still getting used to the workload, pace of work, deadlines and the validity of various tasks that need to be performed by the group.  In this state everything is questioned because there are so many unknowns. Items commonly questioned include the credentials of others on the team, the actual validity of a project, and the leadership abilities of the team leader.  This can be exemplified by looking at the interactions between animals when they first meet.  If we look at a scenario where two people are walking directly toward each other with their dogs. The dogs will automatically enter the Storming State because they do not know the other dog. Typically in this situation, they attempt to define which is more dominating via play or fighting depending on how the dogs interact with each other. Once dominance has been defined and accepted by both dogs then they will either want to play or leave depending on how the dogs interacted and other environmental variables. Once the Storming State has been realized then the Normalizing State takes over. This state is entered by a team once all the questions of the Storming State have been answered and the team has been tested by a few tasks or projects.  Typically, participants in the team are filled with energy, and comradery, and a strong alliance with team goals and objectives.  A high school football team is a perfect example of the Normalizing State when they start their season.  The player positions have been assigned, the depth chart has been filled and everyone is focused on winning each game. All of the players encourage and expect each other to perform at the best of their abilities and are united by competition from other teams. The Performing State is achieved by a team when its history, working habits, and culture solidify the team as one working unit. In this state team members can anticipate specific behaviors, attitudes, reactions, and challenges are seen as opportunities and not problems. Additionally, each team member knows their role in the team’s success, and the roles of others. This is the most productive state of a group and is where all the time invested working together really pays off. If you look at an Olympic figure skating team skate you can easily see how the time spent working together benefits their performance. They skate as one unit even though it is comprised of two skaters. Each skater has their routine completely memorized as well as their partners. This allows them to anticipate each other’s moves on the ice makes their skating look effortless. The final state of a team is the Adjourning State. This state is where accomplishments by the team and each individual team member are recognized. Additionally, this state also allows for reflection of the interactions between team members, work accomplished and challenges that were faced. Finally, the team celebrates the challenges they have faced and overcome as a unit. Currently in the workplace teams are divided into two different types: Co-located and Distributed Teams. Co-located teams defined as the traditional group of people working together in an office, according to Andy Singleton of Assembla. This traditional type of a team has dominated business in the past due to inadequate technology, which forced workers to primarily interact with one another via face to face meetings.  Team meetings are primarily lead by the person with the highest status in the company. 
Having personally, participated in meetings of this type, usually a select few of the team members dominate the flow of communication which reduces the input of others in group discussions. Since discussions are dominated by a select few individuals the discussions and group discussion are skewed in favor of the individuals who communicate the most in meetings. In addition, Team members might not give their full opinions on a topic of discussion in part not to offend or create controversy amongst the team and can alter decision made in meetings towards those of the opinions of the dominating team members. Distributed teams are by definition spread across an area or subdivided into separate sections. That is exactly what distributed teams when compared to a more traditional team. It is common place for distributed teams to have team members across town, in the next state, across the country and even with the advances in technology over the last 20 year across the world. These teams allow for more diversity compared to the other type of teams because they allow for more flexibility regarding location. A team could consist of a 30 year old male Italian project manager from New York, a 50 year old female Hispanic from California and a collection of programmers from India because technology allows them to communicate as if they were standing next to one another.  In addition, distributed team members consult with more team members prior to making decisions compared to traditional teams, and take longer to come to decisions due to the changes in time zones and cultural events. However, team members feel more empowered to speak out when they do not agree with the team and to notify others of potential issues regarding the work that the team is doing. Virtual teams which are a subset of the distributed team type is changing organizational strategies due to the fact that a team can now in essence be working 24 hrs a day because of utilizing employees in various time zones and locations.  A primary example of this is with customer services departments, a company can have multiple call centers spread across multiple time zones allowing them to appear to be open 24 hours a day while all a employees work from 9AM to 5 PM every day. Virtual teams also allow human resources departments to go after the best talent for the company regardless of where the potential employee works because they will be a part of a virtual team all that is need is the proper technology to be setup to allow everyone to communicate. In addition to allowing employees to work from home, the company can save space and resources by not having to provide a desk for every team member. In fact, those team members that randomly come into the office can actually share one desk amongst multiple people. This is definitely a cost cutting plus given the current state of the economy. One thing that can turn a team into a high-performing team is leadership. High-performing team leaders need to focus on investing in ongoing personal development, provide team members with direction, structure, and resources needed to accomplish their work, make the right interventions at the right time, and help the team manage boundaries between the team and various external parties involved in the teams work. A team leader needs to invest in ongoing personal development in order to effectively manage their team. People have said that attitude is everything; this is very true about leaders and leadership. 
A team takes on the attitudes and behaviors of its leaders. This can potentially harm the team and the team’s output. Leaders must concentrate on self-awareness, and understanding their team’s group dynamics to fully understand how to lead them. In addition, always learning new leadership techniques from other effective leaders is also very beneficial. Providing team members with direction, structure, and resources that they need to accomplish their work collectively sounds easy, but it is not.  Leaders need to be able to effectively communicate with their team on how their work helps the company reach for its organizational vision. Conversely, the leader needs to allow his team to work autonomously within specific guidelines to turn the company’s vision into a reality.  This being said the team must be appropriately staffed according to the size of the team’s tasks and their complexity. These tasks should be clear, and be meaningful to the company’s objectives and allow for feedback to be exchanged with the leader and the team member and the leader and upper management. Now if the team is properly staffed, and has a clear and full understanding of what is to be done; the company also must supply the workers with the proper tools to achieve the tasks that they are asked to do. No one should be asked to dig a hole without being given a shovel.  Finally, leaders must reward their team members for accomplishments that they achieve. Awards could range from just a simple congratulatory email, a party to close the completion of a large project, or other monetary rewards. Managing boundaries is very important for team leaders because it can alter attitudes of team members and can add undue stress to the team which will force them to loose focus on the tasks at hand for the group. Team leaders should promote communication between team members so that burdens are shared amongst the team and solutions can be derived from hearing the opinions of multiple sources. This also reinforces team camaraderie and working as a unit. Team leaders must manage the type and timing of interventions as to not create an even bigger mess within the team. Poorly timed interventions can really deflate team members and make them question themselves. This could really increase further and undue interventions by the team leader. Typically, the best time for interventions is when the team is just starting to form so that all unproductive behaviors are removed from the team and that it can retain focus on its agenda. If an intervention is effectively executed the team will feel energized about the work that they are doing, promote communication and interaction amongst the group and improve moral overall. High-performing teams are very import to organizations because they consistently produce high quality output and develop a collective purpose for their work. This drive to succeed allows team members to utilize specific talents allowing for growth in these areas.  In addition, these team members usually take on a sense of ownership with their projects and feel that the other team members are irreplaceable. References: http://blog.assembla.com/assemblablog/tabid/12618/bid/3127/Three-ways-to-organize-your-team-co-located-outsourced-or-global.aspx Katzenbach, J.R. & Smith, D.K. (1993). The Wisdom of Teams: Creating the High-performance Organization. Boston: Harvard Business School.

    Read the article

  • You should NOT be writing jQuery in SharePoint if…

    - by Mark Rackley
    Yes… another one of these posts. What can I say? I’m a pot stirrer.. a rabble rouser *rabble rabble* jQuery in SharePoint seems to be a fairly polarizing issue with one side thinking it is the most awesome thing since Princess Leia as the slave girl in Return of the Jedi and the other half thinking it is the worst idea since Mannequin 2: On the Move. The correct answer is OF COURSE “it depends”. But what are those deciding factors that make jQuery an awesome fit or leave a bad taste in your mouth? Let’s see if I can drive the discussion here with some polarizing comments of my own… I know some of you are getting ready to leave your comments even now before reading the rest of the blog, which is great! Iron sharpens iron… These discussions hopefully open us up to understanding the entire process better and think about things in a different way. You should not be writing jQuery in SharePoint if you are not a developer… Let’s start off with my most polarizing and rant filled portion of the blog post. If you don’t know what you are doing or you don’t have a background that helps you understand the implications of what you are writing then you should not be writing jQuery in SharePoint! I truly believe that one of the biggest reasons for the jQuery haters is because of all the bad jQuery out there. If you don’t know what you are doing you can do some NASTY things! One of the best stories I’ve heard about this is from my good friend John Ferringer (@ferringer). John tells this story during our Mythbusters session we do together. One of his clients was undergoing a Denial of Service attack and they couldn’t figure out what was going on! After much searching they found that some genius jQuery developer wrote some code for an image rotator, but did not take into account what happens when there are no images to load! The code just kept hitting the servers over and over and over again which prevented anything else from getting done! Now, I’m NOT saying that I have not done the same sort of thing in the past or am immune from such mistakes. My point is that if you don’t know what you are doing, there are very REAL consequences that can have a major impact on your organization AND they will be hard to track down.  Think how happy your boss will be after you copy and pasted some jQuery from a blog without understanding what it does, it brings down the farm, AND it takes them 3 days to track it back to you.  :/ Good times will not be had. Like it or not JavaScript/jQuery is a programming language. While you .NET people sit on your high horses because your code is compiled and “runs faster” (also debatable), the rest of us will be actually getting work done and delivering solutions while you are trying to figure out why your widget won’t deploy. I can pick at that scab because I write .NET code too and speak from experience. I can do both, and do both well. So, I am not speaking from ignorance here. In JavaScript/jQuery you have variables, loops, conditionals, functions, arrays, events, and built in methods. If you are not a developer you just aren’t going to take advantage of all of that and use it correctly. Ahhh.. but there is hope! There is a lot of jQuery resources out there to help you learn and learn well! There are many experts on the subject that will gladly tell you when you are smoking crack. I just this minute saw a tweet from @cquick with a link to: “jQuery Fundamentals”. I just glanced through it and this may be a great primer for you aspiring jQuery devs. 
Take advantage of all the resources and become a developer! Hey, it will look awesome on your resume right? You should not be writing jQuery in SharePoint if it depends too much on client resources for a good user experience I’ve said it once and I’ll say it over and over until you understand. jQuery is executed on the client’s computer. Got it? If you are looping through hundreds of rows of data, searching through an enormous DOM, or performing many calculations it is going to take some time! AND if your user happens to be sitting on some old PC somewhere that they picked up at a garage sale their experience will be that much worse! If you can’t give the user a good experience they will not use the site. So, if jQuery is causing the user to have a bad experience, don’t use it. I sometimes go as far to say that you should NOT go to jQuery as a first option for external facing web sites because you have ZERO control over what the end user’s computer will be. You just can’t guarantee an awesome user experience all of the time. Ahhh… but you have no choice? (where have I heard that before?). Well… if you really have no choice, here are some tips to help improve the experience: Avoid screen scraping This is not 1999 and SharePoint is not an old green screen from a mainframe… so why are you treating it like it is? Screen scraping is time consuming and client intensive. Take advantage of tools like SPServices to do your data retrieval when possible. Fine tune your DOM searches A lot of time can be eaten up just searching the DOM and ignoring table rows that you don’t need. Write better jQuery to only loop through tables rows that you need, or only access specific elements you need. Take advantage of Element ID’s to return the one element you are looking for instead of looping through all the DOM over and over again. Write better jQuery Remember this is development. Think about how you can write cleaner, faster jQuery. This directly relates to the previous point of improving your DOM searches, but also when using arrays, variables and loops. Do you REALLY need to loop through that array 3 times? How can you knock it down to 2 times or even 1? When you have lots of calculations and data that you are manipulating every operation adds up. Think about how you can streamline it. Back in the old days before RAM was abundant, Cores were plentiful and dinosaurs roamed the earth, us developers had to take performance into account in everything we did. It’s a lost art that really needs to be used here. You should not be writing jQuery in SharePoint if you are sending a lot of data over the wire… Developer:  “Awesome… you can easily call SharePoint’s web services to retrieve and write data using SPServices!” Administrator: “Crap! you can easily call SharePoint’s web services to retrieve and write data using SPServices!” SPServices may indeed be the best thing that happened to SharePoint since the invention of SharePoint Saturdays by Godfather Lotter… BUT you HAVE to use it wisely! (I REFUSE to make the Spiderman reference). If you do not know what you are doing your code will bring back EVERY field and EVERY row from a list and push that over the internet with all that lovely XML wrapped around it. That can be a HUGE amount of data and will GREATLY impact performance! Calling several web service methods at the same time can cause the same problem and can negatively impact your SharePoint servers. 
These problems, thankfully, are not difficult to rectify if you are careful: Limit list data retrieved Use CAML to reduce the number of rows returned and limit the fields returned using ViewFields.  You should definitely be doing this regardless. If you aren’t I hope your admin thumps you upside the head. Batch large list updates You may or may not have noticed that if you try to do large updates (hundreds of rows) that the performance is either completely abysmal or it fails over half the time. You can greatly improve performance and avoid timeouts by breaking up your updates into several smaller updates. I don’t know if there is a magic number for best performance, it really depends on how much data you are sending back more than the number of rows. However, I have found that 200 rows generally works well.  Play around and find the right number for your situation. Delay Web Service calls when possible One of the cool things about jQuery and SPServices is that you can delay queries to the server until they are actually needed instead of doing them all at once. This can lead to performance improvements over DataViewWebParts and even .NET code in the right situations. So, don’t load the data until it’s needed. In some instances you may not need to retrieve the data at all, so why retrieve it ALL the time? You should not be writing jQuery in SharePoint if there is a better solution… jQuery is NOT the silver bullet in SharePoint, it is not the answer to every question, it is just another tool in the developers toolkit. I urge all developers to know what options exist out there and choose the right one! Sometimes it will be jQuery, sometimes it will be .NET,  sometimes it will be XSL, and sometimes it will be some other choice… So, when is there a better solution to jQuery? When you can’t get away from performance problems Sometimes jQuery will just give you horrible performance regardless of what you do because of unavoidable obstacles. In these situations you are going to have to figure out an alternative. Can I do it with a DVWP or do I have to crack open Visual Studio? When you need to do something that jQuery can’t do There are lots of things you can’t do in jQuery like elevate privileges, event handlers, workflows, or interact with back end systems that have no web service interface. It just can’t do everything. When it can be done faster and more efficiently another way Why are you spending time to write jQuery to do a DataViewWebPart that would take 5 minutes? Or why are you trying to implement complicated logic that would be simple to do in .NET? If your answer is that you don’t have the option, okay. BUT if you do have the option don’t reinvent the wheel! Take advantage of the other tools. The answer is not always jQuery… sorry… the kool-aid tastes good, but sweet tea is pretty awesome too. You should not be using jQuery in SharePoint if you are a moron… Let’s finish up the blog on a high note… Yes.. it’s true, I sometimes type things just to get a reaction… guess this section title might be a good example, but it feels good sometimes just to type the words that a lot of us think… So.. don’t be that guy! Another good buddy of mine that works for Microsoft told me. “I loved jQuery in SharePoint…. until I had to support it.”. He went on to explain that some user was making several web service calls on a page using jQuery and then was calling Microsoft and COMPLAINING because the page took so long to load… DUH! 
What do you expect to happen when you are pushing that much data over the wire and are making that many web service calls at once!! It’s one thing to write that kind of code and accept it’s just going to take a while, it’s COMPLETELY another issue to do that and then complain when it’s not lightning fast!  Someone’s gene pool needs some chlorine. So, I think this is a nice summary of the blog… DON’T be that guy… don’t be a moron. How can you stop yourself from being a moron? Ah.. glad you asked, here are some tips: Think Is jQuery the right solution to my problem? Is there a better approach? What are the implications and pitfalls of using jQuery in this situation? Search What are others doing? Does someone have a better solution? Is there a third party library that does the same thing I need? Plan Write good jQuery. Limit calculations and data sent over the wire and don’t reinvent the wheel when possible. Test Okay, it works well on your machine. Try it on others ESPECIALLY if this is for an external site. Test with empty data. Test with hundreds of rows of data. Test as many scenarios as possible. Monitor those server resources to see the impact there as well. Ask the experts As smart as you are, there are people smarter than you. Even the experts talk to each other to make sure they aren't doing something stupid. And for the MOST part they are pretty nice guys. Marc Anderson and Christophe Humbert are two guys who regularly keep me in line. Make sure you aren’t doing something stupid. Repeat So, when you think you have the best solution possible, repeat the steps above just to be safe.  Conclusion jQuery is an awesome tool and has come in handy on many occasions. I’m even teaching a 1/2 day SharePoint & jQuery workshop at the upcoming SPTechCon in Boston if you want to berate me in person. However, it’s only as awesome as the developer behind the keyboard. It IS development and has its pitfalls. Knowledge and experience are invaluable to giving the user the best experience possible.  Let’s face it, in the end, no matter our opinions, prejudices, or ego providing our clients, customers, and users with the best solution possible is what counts. Period… end of sentence…

    Read the article

  • Scrum in 5 Minutes

    - by Stephen.Walther
    The goal of this blog entry is to explain the basic concepts of Scrum in less than five minutes. You learn how Scrum can help a team of developers to successfully complete a complex software project. Product Backlog and the Product Owner Imagine that you are part of a team which needs to create a new website – for example, an e-commerce website. You have an overwhelming amount of work to do. You need to build (or possibly buy) a shopping cart, install an SSL certificate, create a product catalog, create a Facebook page, and at least a hundred other things that you have not thought of yet. According to Scrum, the first thing you should do is create a list. Place the highest priority items at the top of the list and the lower priority items lower in the list. For example, creating the shopping cart and buying the domain name might be high priority items and creating a Facebook page might be a lower priority item. In Scrum, this list is called the Product Backlog. How do you prioritize the items in the Product Backlog? Different stakeholders in the project might have different priorities. Gary, your division VP, thinks that it is crucial that the e-commerce site has a mobile app. Sally, your direct manager, thinks taking advantage of new HTML5 features is much more important. Multiple people are pulling you in different directions. According to Scrum, it is important that you always designate one person, and only one person, as the Product Owner. The Product Owner is the person who decides what items should be added to the Product Backlog and the priority of the items in the Product Backlog. The Product Owner could be the customer who is paying the bills, the project manager who is responsible for delivering the project, or a customer representative. The critical point is that the Product Owner must always be a single person and that single person has absolute authority over the Product Backlog. Sprints and the Sprint Backlog So now the developer team has a prioritized list of items and they can start work. The team starts implementing the first item in the Backlog — the shopping cart — and the team is making good progress. Unfortunately, however, half-way through the work of implementing the shopping cart, the Product Owner changes his mind. The Product Owner decides that it is much more important to create the product catalog before the shopping cart. With some frustration, the team switches their developmental efforts to focus on implementing the product catalog. However, part way through completing this work, once again the Product Owner changes his mind about the highest priority item. Getting work done when priorities are constantly shifting is frustrating for the developer team and it results in lower productivity. At the same time, however, the Product Owner needs to have absolute authority over the priority of the items which need to get done. Scrum solves this conflict with the concept of Sprints. In Scrum, a developer team works in Sprints. At the beginning of a Sprint the developers and the Product Owner agree on the items from the backlog which they will complete during the Sprint. This subset of items from the Product Backlog becomes the Sprint Backlog. During the Sprint, the Product Owner is not allowed to change the items in the Sprint Backlog. In other words, the Product Owner cannot shift priorities on the developer team during the Sprint. Different teams use Sprints of different lengths such as one month Sprints, two-week Sprints, and one week Sprints. 
For high-stress, time critical projects, teams typically choose shorter sprints such as one week sprints. For more mature projects, longer one month sprints might be more appropriate. A team can pick whatever Sprint length makes sense for them just as long as the team is consistent. You should pick a Sprint length and stick with it. Daily Scrum During a Sprint, the developer team needs to have meetings to coordinate their work on completing the items in the Sprint Backlog. For example, the team needs to discuss who is working on what and whether any blocking issues have been discovered. Developers hate meetings (well, sane developers hate meetings). Meetings take developers away from their work of actually implementing stuff as opposed to talking about implementing stuff. However, a developer team which never has meetings and never coordinates their work also has problems. For example, Fred might get stuck on a programming problem for days and never reach out for help even though Tom (who sits in the cubicle next to him) has already solved the very same problem. Or, both Ted and Fred might have started working on the same item from the Sprint Backlog at the same time. In Scrum, these conflicting needs – limiting meetings but enabling team coordination – are resolved with the idea of the Daily Scrum. The Daily Scrum is a meeting for coordinating the work of the developer team which happens once a day. To keep the meeting short, each developer answers only the following three questions: 1. What have you done since yesterday? 2. What do you plan to do today? 3. Any impediments in your way? During the Daily Scrum, developers are not allowed to talk about issues with their cat, do demos of their latest work, or tell heroic stories of programming problems overcome. The meeting must be kept short — typically about 15 minutes. Issues which come up during the Daily Scrum should be discussed in separate meetings which do not involve the whole developer team. Stories and Tasks Items in the Product or Sprint Backlog – such as building a shopping cart or creating a Facebook page – are often referred to as User Stories or Stories. The Stories are created by the Product Owner and should represent some business need. Unlike the Product Owner, the developer team needs to think about how a Story should be implemented. At the beginning of a Sprint, the developer team takes the Stories from the Sprint Backlog and breaks the stories into tasks. For example, the developer team might take the Create a Shopping Cart story and break it into the following tasks: · Enable users to add and remote items from shopping cart · Persist the shopping cart to database between visits · Redirect user to checkout page when Checkout button is clicked During the Daily Scrum, members of the developer team volunteer to complete the tasks required to implement the next Story in the Sprint Backlog. When a developer talks about what he did yesterday or plans to do tomorrow then the developer should be referring to a task. Stories are owned by the Product Owner and a story is all about business value. In contrast, the tasks are owned by the developer team and a task is all about implementation details. A story might take several days or weeks to complete. A task is something which a developer can complete in less than a day. Some teams get lazy about breaking stories into tasks. 
Neglecting to break stories into tasks can lead to “Never Ending Stories” If you don’t break a story into tasks, then you can’t know how much of a story has actually been completed because you don’t have a clear idea about the implementation steps required to complete the story. Scrumboard During the Daily Scrum, the developer team uses a Scrumboard to coordinate their work. A Scrumboard contains a list of the stories for the current Sprint, the tasks associated with each Story, and the state of each task. The developer team uses the Scrumboard so everyone on the team can see, at a glance, what everyone is working on. As a developer works on a task, the task moves from state to state and the state of the task is updated on the Scrumboard. Common task states are ToDo, In Progress, and Done. Some teams include additional task states such as Needs Review or Needs Testing. Some teams use a physical Scrumboard. In that case, you use index cards to represent the stories and the tasks and you tack the index cards onto a physical board. Using a physical Scrumboard has several disadvantages. A physical Scrumboard does not work well with a distributed team – for example, it is hard to share the same physical Scrumboard between Boston and Seattle. Also, generating reports from a physical Scrumboard is more difficult than generating reports from an online Scrumboard. Estimating Stories and Tasks Stakeholders in a project, the people investing in a project, need to have an idea of how a project is progressing and when the project will be completed. For example, if you are investing in creating an e-commerce site, you need to know when the site can be launched. It is not enough to just say that “the project will be done when it is done” because the stakeholders almost certainly have a limited budget to devote to the project. The people investing in the project cannot determine the business value of the project unless they can have an estimate of how long it will take to complete the project. Developers hate to give estimates. The reason that developers hate to give estimates is that the estimates are almost always completely made up. For example, you really don’t know how long it takes to build a shopping cart until you finish building a shopping cart, and at that point, the estimate is no longer useful. The problem is that writing code is much more like Finding a Cure for Cancer than Building a Brick Wall. Building a brick wall is very straightforward. After you learn how to add one brick to a wall, you understand everything that is involved in adding a brick to a wall. There is no additional research required and no surprises. If, on the other hand, I assembled a team of scientists and asked them to find a cure for cancer, and estimate exactly how long it will take, they would have no idea. The problem is that there are too many unknowns. I don’t know how to cure cancer, I need to do a lot of research here, so I cannot even begin to estimate how long it will take. So developers hate to provide estimates, but the Product Owner and other product stakeholders, have a legitimate need for estimates. Scrum resolves this conflict by using the idea of Story Points. Different teams use different units to represent Story Points. For example, some teams use shirt sizes such as Small, Medium, Large, and X-Large. Some teams prefer to use Coffee Cup sizes such as Tall, Short, and Grande. Finally, some teams like to use numbers from the Fibonacci series. 
These alternative units are converted into a Story Point value. Regardless of the type of unit which you use to represent Story Points, the goal is the same. Instead of attempting to estimate a Story in hours (which is doomed to failure), you use a much less fine-grained measure of work. A developer team is much more likely to be able to estimate that a Story is Small or X-Large than the exact number of hours required to complete the story. So you can think of Story Points as a compromise between the needs of the Product Owner and the developer team. When a Sprint starts, the developer team devotes more time to thinking about the Stories in a Sprint and the developer team breaks the Stories into Tasks. In Scrum, you estimate the work required to complete a Story by using Story Points and you estimate the work required to complete a task by using hours. The difference between Stories and Tasks is that you don’t create a task until you are just about ready to start working on a task. A task is something that you should be able to create within a day, so you have a much better chance of providing an accurate estimate of the work required to complete a task than a story. Burndown Charts In Scrum, you use Burndown charts to represent the remaining work on a project. You use Release Burndown charts to represent the overall remaining work for a project and you use Sprint Burndown charts to represent the overall remaining work for a particular Sprint. You create a Release Burndown chart by calculating the remaining number of uncompleted Story Points for the entire Product Backlog every day. The vertical axis represents Story Points and the horizontal axis represents time. A Sprint Burndown chart is similar to a Release Burndown chart, but it focuses on the remaining work for a particular Sprint. There are two different types of Sprint Burndown charts. You can either represent the remaining work in a Sprint with Story Points or with task hours (the following image, taken from Wikipedia, uses hours). When each Product Backlog Story is completed, the Release Burndown chart slopes down. When each Story or task is completed, the Sprint Burndown chart slopes down. Burndown charts typically do not always slope down over time. As new work is added to the Product Backlog, the Release Burndown chart slopes up. If new tasks are discovered during a Sprint, the Sprint Burndown chart will also slope up. The purpose of a Burndown chart is to give you a way to track team progress over time. If, halfway through a Sprint, the Sprint Burndown chart is still climbing a hill then you know that you are in trouble. Team Velocity Stakeholders in a project always want more work done faster. For example, the Product Owner for the e-commerce site wants the website to launch before tomorrow. Developers tend to be overly optimistic. Rarely do developers acknowledge the physical limitations of reality. So Project stakeholders and the developer team often collude to delude themselves about how much work can be done and how quickly. Too many software projects begin in a state of optimism and end in frustration as deadlines zoom by. In Scrum, this problem is overcome by calculating a number called the Team Velocity. The Team Velocity is a measure of the average number of Story Points which a team has completed in previous Sprints. 
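To make the arithmetic concrete, a velocity calculation is nothing more than an average over completed Sprints; the sprint numbers below are invented purely for illustration.

// Hypothetical sprint history: Story Points the team finished in its last three Sprints.
int[] completedStoryPoints = { 21, 18, 24 };

// Team Velocity is the average of those totals: (21 + 18 + 24) / 3 = 21.
// Average() comes from System.Linq.
double teamVelocity = completedStoryPoints.Average();

// With a velocity of about 21, committing to 30 Story Points in the next Sprint
// would be overcommitting.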
Knowing the Team Velocity is important during the Sprint Planning meeting, when the Product Owner and the developer team work together to determine the number of stories which can be completed in the next Sprint. If you know the Team Velocity, then you can avoid committing to more work than the team has been able to accomplish in the past, and your team is much more likely to complete all of the work required for the next Sprint.

Scrum Master

There are three roles in Scrum: the Product Owner, the developer team, and the Scrum Master. I've already discussed the Product Owner: the one and only person who maintains the Product Backlog and prioritizes the stories. I've also described the role of the developer team: the members of the developer team do the work of implementing the stories by breaking the stories into tasks. The final role, which I have not discussed, is the role of the Scrum Master.

The Scrum Master is responsible for ensuring that the team is following the Scrum process. For example, the Scrum Master is responsible for making sure that there is a Daily Scrum meeting and that everyone answers the standard three questions. The Scrum Master is also responsible for removing (non-technical) impediments which the team might encounter. For example, if the team cannot start work until everyone installs the latest version of Microsoft Visual Studio, then the Scrum Master has the responsibility of working with management to get it as quickly as possible. The Scrum Master can be a member of the developer team, and different people can take on the role of the Scrum Master over time. The Scrum Master, however, cannot be the same person as the Product Owner.

Using SonicAgile

SonicAgile (SonicAgile.com) is an online tool which you can use to manage your projects using Scrum. You can use the SonicAgile Product Backlog to create a prioritized list of stories, and you can estimate the size of the Stories using different Story Point units such as shirt sizes and coffee cup sizes. You can use SonicAgile during the Sprint Planning meeting to select the Stories that you want to complete during a particular Sprint, and you can configure Sprints to be any length of time. SonicAgile calculates Team Velocity automatically and displays a warning when you add too many stories to a Sprint; in other words, it warns you when it thinks you are overcommitting in a Sprint. SonicAgile also includes a Scrumboard which displays the list of Stories selected for a Sprint and the tasks associated with each story; you can drag tasks from one task state to another. Finally, SonicAgile enables you to generate Release Burndown and Sprint Burndown charts, which you can use to view the progress of your team. To learn more about SonicAgile, visit SonicAgile.com.

Summary

In this post, I described many of the basic concepts of Scrum. You learned how a Product Owner uses a Product Backlog to create a prioritized list of stories. I explained why work is completed in Sprints so the developer team can be more productive. I also explained how a developer team uses the Daily Scrum to coordinate their work. You learned how the developer team uses a Scrumboard to see, at a glance, who is working on what and the state of each task. I also discussed Burndown charts. You learned how you can use both Release and Sprint Burndown charts to track team progress in completing a project.
Finally, I described the crucial role of the Scrum Master – the person who is responsible for ensuring that the rules of Scrum are being followed. My goal was not to describe all of the concepts of Scrum. This post was intended to be an introductory overview. For a comprehensive explanation of Scrum, I recommend reading Ken Schwaber’s book Agile Project Management with Scrum: http://www.amazon.com/Agile-Project-Management-Microsoft-Professional/dp/073561993X/ref=la_B001H6ODMC_1_1?ie=UTF8&qid=1345224000&sr=1-1

    Read the article

  • Delphi - Proper way to page through data.

    - by Brad
    I have a string list (TStrings) that has a couple thousand items in it. I need to process them in groups of 100. I basically want to know what the best way to do the loop is in Delphi. I'm hitting a brick wall when I'm trying to figure it out. Thanks. unit Unit2; interface uses Windows, Messages, SysUtils, Variants, Classes, Graphics, Controls, Forms, Dialogs, StdCtrls; type TForm2 = class(TForm) Memo1: TMemo; Memo2: TMemo; Button1: TButton; procedure Button1Click(Sender: TObject); private { Private declarations } public { Public declarations } end; var Form2: TForm2; implementation Uses math; {$R *.dfm} procedure TForm2.Button1Click(Sender: TObject); var I:Integer; pages:Integer; str:string; begin pages:= ceil(memo1.Lines.Count/100) ; memo2.Lines.add('Total Pages: '+inttostr(pages)); memo2.Lines.add('Total Items: '+inttostr(memo1.Lines.Count)); // Should just do in batches of 100 VS entire list for I := 0 to memo1.lines.Count - 1 do begin if str <> '' then str:= str+#10+ memo1.Lines.Strings[i] else str:= memo1.Lines.Strings[i]; end; //I need to stop here every 100 items, then process the items. memo2.Lines.Add(str); end; end. Example form object Form2: TForm2 Left = 0 Top = 0 Caption = 'Form2' ClientHeight = 245 ClientWidth = 527 Color = clBtnFace Font.Charset = DEFAULT_CHARSET Font.Color = clWindowText Font.Height = -11 Font.Name = 'Tahoma' Font.Style = [] OldCreateOrder = False PixelsPerInch = 96 TextHeight = 13 object Memo1: TMemo Left = 16 Top = 8 Width = 209 Height = 175 Lines.Strings = ( '4xlt columbia thunder storm jacket' '5 things about thunder storms' 'a thunder storm with a lot of thunder ' 'and lighting sccreensaver' 'a thunder storm with a lot of thunder ' 'and lighting screensaver with no nag ' 'screens' 'all about thunder storms' 'all about thunderstorms for kids' 'amazing tornado videos and ' 'thunderstorm videos' 'are thunder storms louder in ohio?' 
'bad thunder storms' 'bathing in thunder storm' 'best thunderstorm pictures' 'cartoon thunder storms' 'celtic thunder storm' 'central valley thunder storm' 'chicago thunderstorm pictures' 'cool thunderstorm pictures' 'current thunderstorm warnings' 'does thunder storms in december mean ' 'snow will be coming' 'facts about thunderstorms for kids' 'facts on thunderstorms for kids' 'fedex thunderstorm video' 'florida thunderstorms facts' 'free relaxing thunderstorm music' 'free soothing thunderstorm sounds ' 'online' 'free thunderstorm mp3' 'free thunderstorm mp3 download' 'free thunderstorm mp3 downloads' 'free thunderstorm mp3s' 'free thunderstorm music' 'free thunderstorm pictures' 'free thunderstorm sound effects' 'free thunderstorm sounds' 'free thunderstorm sounds cd' 'free thunderstorm sounds mp3' 'free thunderstorm sounds online' 'free thunderstorm soundscape' 'free thunderstorm video' 'free thunderstorm video download' 'free thunderstorm videos' 'god of storm and thunder' 'horses storm thunder rain' 'how do thunder storms form' 'how far away is a thunder storm' 'how long do thunder storms last' 'ice cube in a thunder storm' 'indoor thunderstorm safety tips' 'information about thunderstorms for kids' 'interesting thunderstorm facts' 'is it dangerous to shower during thunder ' 'storm' 'is there frequently thunder during snow ' 'storms' 'isolated thunderstorms' 'it'#39's just a thunder storm baby there is ' 'nothing you should fear lyrics' 'lightning & thunder storm safety' 'lightning and thunderstorm facts' 'lightning and thunderstorms facts' 'lightning and thunderstorms for kids' 'listen to thunderstorm sounds online' 'mississauga thunder storm' 'nature sounds free mp3 thunder storm' 'only about thunderstorms facts' 'original storm deep thunderstick' 'phone use during thunder storms' 'pictures of thunderstorms' 'pocono thunder storm' 'posters of thunder storms' 'power rangers ninja storm' 'power rangers thunder storm' 'power rangers thunder storm cast' 'power rangers thunder storm games' 'power rangers thunder storm morphers' 'power rangers thunder storm part 1' 'power rangers thunder storm part 2' 'power rangers thunderstorm' 'power rangers thunderstorm cannon' 'power rangers thunderstorm deluxe ' 'megazord' 'power rangers thunderstorm games' 'power rangers thunderstorm megazord' 'power rangers thunderstorm part 2' 'power rangers thunderstorm pictures' 'power rnager ninja storm thunder staff' 'powerful thunder and lightning storms' 'precambrian thunder storms' 'rain thunderstorm mp3' 'rain thunderstorm pictures' 'relaxing thunderstorm music' 'reminds me of ohio river thunder lighten ' 'storms' 'sacramento thunder storm' 'safety tips for when your caught in a ' 'thunder storm' 'scattered thunderstorms' 'schemer puts his head in the thunder ' 'storm' 'sedative thunder storm' 'server thunder storms' 'severe supercell thunderstorm pictures' 'severe thunder storm pictures' 'severe thunder storms' 'severe thunderstorm facts' 'severe thunderstorm pictures' 'severe thunderstorm pictures hail' 'severe thunderstorm pictures in alberta' 'severe thunderstorm pictures tornado' 'severe thunderstorm safety' 'severe thunderstorm safety tips' 'severe thunderstorm videos' 'severe thunderstorm warning' 'severe thunderstorm warning los ' 'angeles' 'severe thunderstorm warning signs' 'severe thunderstorm warnings' 'severe thunderstorms' 'severe thunderstorms facts' 'shakespeare use thunder storm for ' 'cosmic disorder julius caesar' 'soothing thunderstorm sounds online' 'sound effects of severe thunder 
storm' 'sound of rain storm finger snapping ' 'thunder chorus' 'split thunder storm' 'storm 3d thunder power' 'storm dark thunder' 'storm dark thunder bowling ball' 'storm dark thunder bowling ball sale' 'storm dark thunder for sale' 'storm dark thunder pearl' 'storm dark thunder pearl bowling ball' 'storm dark thunder review' 'storm dark thunder shirt' 'storm dark thunderball' 'storm deep thunder' 'storm deep thunder 11' 'storm deep thunder 15' 'storm deep thunder 15 lure' 'storm deep thunder 2' 'storm deep thunder lures' 'storm deep thunderstick' 'storm deep thunderstick crankbaits' 'storm deep thunderstick dts09' 'storm deep thunderstick jr' 'storm deep thunderstick lures' 'storm deep thundersticks' 'storm rolling thunder 3 ball roller' 'storm rolling thunder bowling bag' 'storm rolling thunder three ball bowling ' 'bag' 'storm shallow thunder' 'storm shallow thunder 15' 'storm thunder claw' 'storm thunder craw' 'storm watches thunder' 'storms with constant lightning and ' 'thunder non-stop' 'supercell thunder storms' 'supercell thunderstorm pictures' 'supercell thunderstorms' 'swimming pools thunder storms' 'tampa + lightning strikes + thunder ' 'storms' 'texas thunderstorm pictures' 'texas thunderstorm warnings' 'thunder and lightning storm' 'thunder and lighting storms' 'thunder and lightning storms' 'thunder bay snow storm video' 'thunder storm' 'thunder storm and windmill' 'thunder storm cd' 'thunder storm cloud' 'thunder storm clouds' 'thunder storm dog peppermint oil' 'thunder storm in winter' 'thunder storm in winter and weather ' 'prediction' 'thunder storm lx-3 & road blaster psx ' 'download' 'thunder storm occurances' 'thunder storm photos' 'thunder storm poems' 'thunder storm safety' 'thunder storm sign' 'thunder storm sounds' 'thunder storms' 'thunder storms and deaths' 'thunder storms and ilghting' 'thunder storms and lighting' 'thunder storms cd' 'thunder storms in the arctic arctic ' 'weather' 'thunder storms in winter' 'thunder storms on you tub' 'thunder storms pics' 'thunder storms with rain' 'thunderstorm' 'thunderstorm backgrounds' 'thunderstorm capital' 'thunderstorm capital 2008 dorfman' 'thunderstorm capital in boston' 'thunderstorm capital llc' 'thunderstorm capital of canada' 'thunderstorm capital of the us' 'thunderstorm capital of the world' 'thunderstorm facts' 'thunderstorm facts for kids' 'thunderstorm facts hail' 'thunderstorm facts tornadoes' 'thunderstorm mp3' 'thunderstorm mp3 download' 'thunderstorm mp3 download free' 'thunderstorm mp3 downloads' 'thunderstorm mp3 downloads free' 'thunderstorm mp3 files' 'thunderstorm mp3 free' 'thunderstorm mp3 free download' 'thunderstorm mp3 free downloads' 'thunderstorm mp3 torrent' 'thunderstorm mp3s' 'thunderstorm music' 'thunderstorm music cd' 'thunderstorm music downloads' 'thunderstorm music free' 'thunderstorm music playlists' 'thunderstorm music rain' 'thunderstorm pics' 'thunderstorm pictures' 'thunderstorm pictures for kids' 'thunderstorm safety' 'thunderstorm safety for kids' 'thunderstorm safety precautions' 'thunderstorm safety procedures' 'thunderstorm safety rules' 'thunderstorm safety tips' 'thunderstorm safety tips for kids' 'thunderstorm safety tips shelter' 'thunderstorm safety tips trees' 'thunderstorm sound effects' 'thunderstorm sound effects cd' 'thunderstorm sound effects download' 'thunderstorm sound effects free' 'thunderstorm sound effects free ' 'download' 'thunderstorm sound effects free music ' 'feature audio' 'thunderstorm sound effects mp3' 'thunderstorm sound effects rain' 
'thunderstorm sounds' 'thunderstorm sounds cd' 'thunderstorm sounds download' 'thunderstorm sounds for sleep' 'thunderstorm sounds for sleeping' 'thunderstorm sounds free' 'thunderstorm sounds free download' 'thunderstorm sounds free downloads' 'thunderstorm sounds mp3' 'thunderstorm sounds mp3 download' 'thunderstorm sounds mp3 free' 'thunderstorm sounds online' 'thunderstorm sounds online for free' 'thunderstorm sounds online free' 'thunderstorm sounds sleep' 'thunderstorm sounds streaming' 'thunderstorm sounds torrent' 'thunderstorm soundscape' 'thunderstorm soundscapes' 'thunderstorm video' 'thunderstorm video clips' 'thunderstorm video download' 'thunderstorm video downloads' 'thunderstorm videos' 'thunderstorm videos for kids' 'thunderstorm videos lightning' 'thunderstorm videos online' 'thunderstorm wallpaper' 'thunderstorm warning' 'thunderstorm warning brisbane' 'thunderstorm warning definition' 'thunderstorm warning los angeles' 'thunderstorm warning san diego' 'thunderstorm warning san mateo county' 'thunderstorm warning santa barbara' 'thunderstorm warning santa clara' 'thunderstorm warning santa clara ' 'county' 'thunderstorm warning signal' 'thunderstorm warning signs' 'thunderstorm warning vs watch' 'thunderstorm warnings' 'thunderstorm warnings and watches' 'thunderstorm warnings for nj' 'thunderstorm warnings qld' 'thunderstorms' 'thunderstorms facts' 'thunderstorms facts for kids' 'thunderstorms for kids' 'tornados and thunder storms animated' 'understanding thunderstorms for kids' 'watch thunderstorm videos' 'weather underground forecast ' 'thunderstorms' 'what causes thunder storms' 'what is a thunder storm' 'where d thunder storms occur') TabOrder = 0 end object Memo2: TMemo Left = 240 Top = 8 Width = 265 Height = 129 Lines.Strings = ( 'Memo2') TabOrder = 1 end object Button1: TButton Left = 384 Top = 184 Width = 75 Height = 25 Caption = 'Button1' TabOrder = 2 OnClick = Button1Click end end
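    Stripping away the Delphi specifics, the question is how to walk a list in fixed-size batches. The following is only a hedged sketch of that pattern, written in Java rather than Delphi for illustration (names are hypothetical): an outer loop advances by the batch size, and Math.min guards the final partial batch.

import java.util.ArrayList;
import java.util.List;

public class BatchDemo {
    // Process the items in groups of batchSize (100 in the question).
    static void processInBatches(List<String> items, int batchSize) {
        for (int start = 0; start < items.size(); start += batchSize) {
            int end = Math.min(start + batchSize, items.size());
            List<String> batch = items.subList(start, end);
            // "Process" the batch: here we just join and print it,
            // the way the question joins lines with #10 into one string.
            System.out.println(String.join("\n", batch));
        }
    }

    public static void main(String[] args) {
        List<String> items = new ArrayList<>();
        for (int i = 1; i <= 250; i++) {
            items.add("item " + i);
        }
        processInBatches(items, 100);   // prints three batches: 100, 100, 50
    }
}

    Translated back to the Delphi code above, the same idea would be to process and clear str whenever (I + 1) mod 100 = 0 inside the loop, and once more after the loop for the last partial batch.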

    Read the article

  • Need Help Customizing a Grammar Checking Replace Rule in Java

    - by user567785
    Hello, I am currently adding the Khmer (Cambodian) language to LanguageTool, an opensource grammar checker for OpenOffice (http://www.languagetool.org). I don't know enough Java to customize one of the scripts and wanted to make a request here asking if anyone would be willing to customize it for me (I can put link to your website at http://www.sbbic.org/lang/en-us/volunteer/ if you help). Here is the script that needs customization KhmerWordCoherencyRule.java: /* LanguageTool, a natural language style checker * Copyright (C) 2005 Daniel Naber (http://www.danielnaber.de) * * This library is free software; you can redistribute it and/or * modify it under the terms of the GNU Lesser General Public * License as published by the Free Software Foundation; either * version 2.1 of the License, or (at your option) any later version. * * This library is distributed in the hope that it will be useful, * but WITHOUT ANY WARRANTY; without even the implied warranty of * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU * Lesser General Public License for more details. * * You should have received a copy of the GNU Lesser General Public * License along with this library; if not, write to the Free Software * Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 * USA */ package de.danielnaber.languagetool.rules.km; import java.io.BufferedReader; import java.io.IOException; import java.io.InputStream; import java.io.InputStreamReader; import java.util.ArrayList; import java.util.HashMap; import java.util.List; import java.util.Locale; import java.util.Map; import java.util.ResourceBundle; import de.danielnaber.languagetool.AnalyzedSentence; import de.danielnaber.languagetool.AnalyzedToken; import de.danielnaber.languagetool.AnalyzedTokenReadings; import de.danielnaber.languagetool.JLanguageTool; import de.danielnaber.languagetool.tools.StringTools; import de.danielnaber.languagetool.rules.Category; import de.danielnaber.languagetool.rules.RuleMatch; /** * A Khmer rule that matches words or phrases which should not be used and suggests * correct ones instead. Loads the relevant words from * <code>rules/km/coherency.txt</code>, where km is a code of the language. * * @author Andriy Rysin */ public abstract class KhmerWordCoherencyRule extends KhmerRule { private static final String FILE_ENCODING = "utf-8"; private Map<String, String> wrongWords; // e.g. "????? -> "?????" private static final String FILE_NAME = "/km/coherency.txt"; public abstract String getFileName(); public String getEncoding() { return FILE_ENCODING; } /** * Indicates if the rule is case-sensitive. Default value is <code>true</code>. * @return true if the rule is case-sensitive, false otherwise. */ //in Khmer there is no case public boolean isCaseSensitive() { return false; } /** * @return the locale used for case conversion when {@link #isCaseSensitive()} is set to <code>false</code>. 
*/ public Locale getLocale() { return Locale.getDefault(); } public KhmerWordCoherencyRule(final ResourceBundle messages) throws IOException { if (messages != null) { super.setCategory(new Category(messages.getString("category_misc"))); } wrongWords = loadWords(JLanguageTool.getDataBroker().getFromRulesDirAsStream(getFileName())); } public String getId() { return "KM_WORD_COHERENCY"; } public String getDescription() { return "Checks for wrong words/phrases"; } public String getSuggestion() { return " does not match your previous spelling of the word, use "; } public String getShort() { return "Use a consistant spelling throughout"; } public final RuleMatch[] match(final AnalyzedSentence text) { final List<RuleMatch> ruleMatches = new ArrayList<RuleMatch>(); final AnalyzedTokenReadings[] tokens = text.getTokensWithoutWhitespace(); for (int i = 1; i < tokens.length; i++) { final String token = tokens[i].getToken(); final String origToken = token; final String replacement = isCaseSensitive()?wrongWords.get(token):wrongWords.get(token.toLowerCase(getLocale())); if (replacement != null) { final String msg = token + getSuggestion() + replacement; final int pos = tokens[i].getStartPos(); final RuleMatch potentialRuleMatch = new RuleMatch(this, pos, pos + origToken.length(), msg, getShort()); if (!isCaseSensitive() && StringTools.startsWithUppercase(token)) { potentialRuleMatch.setSuggestedReplacement(StringTools.uppercaseFirstChar(replacement)); } else { potentialRuleMatch.setSuggestedReplacement(replacement); } ruleMatches.add(potentialRuleMatch); } } return toRuleMatchArray(ruleMatches); } private Map<String, String> loadWords(final InputStream file) throws IOException { final Map<String, String> map = new HashMap<String, String>(); InputStreamReader isr = null; BufferedReader br = null; try { isr = new InputStreamReader(file, getEncoding()); br = new BufferedReader(isr); String line; while ((line = br.readLine()) != null) { line = line.trim(); if (line.length() < 1) { continue; } if (line.charAt(0) == '#') { // ignore comments continue; } final String[] parts = line.split(";"); if (parts.length != 2) { throw new IOException("Format error in file " + JLanguageTool.getDataBroker().getFromRulesDirAsUrl(getFileName()) + ", line: " + line); } map.put(parts[0], parts[1]); } } finally { if (br != null) { br.close(); } if (isr != null) { isr.close(); } } return map; } public void reset() { } } Here is what I need the SimpleReplaceRule.java to do: 1 - Be able to have more than two spelling variations in the coherency.txt file (right now it can only be Word1;Word2). 2 - Find the first use of ANY of the spelling variations in a document that are found in coherency.txt and then make sure only that spelling is used throughout the document (ex. in the coherency.txt I have Word1;Word2;Word3 then in my document on the first line I write Word2. then on next line I write Word1 and Word 3 - then the grammar checker will flag Word1 and Word3 saying that I should use the spelling "Word2" instead...etc.). If anyone can help I would be grateful! Thanks for your time, Nathan
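    The two requested changes boil down to (1) parsing each coherency.txt line into a whole group of variants instead of exactly two, and (2) remembering which variant of a group appears first in the document and flagging every other variant after that. Here is a minimal standalone sketch of that bookkeeping, with hypothetical class and method names; it is not a drop-in replacement for KhmerWordCoherencyRule, only an illustration of the logic.

import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class CoherencySketch {
    // Maps every variant to the list of all variants in its group,
    // built from lines such as "Word1;Word2;Word3".
    private final Map<String, List<String>> groups = new HashMap<>();
    // Remembers, per group, the first spelling actually used in the document.
    private final Map<List<String>, String> firstSeen = new HashMap<>();

    void loadLine(String line) {
        String[] parts = line.trim().split(";");
        if (parts.length < 2) {
            return;                       // need at least two variants per line
        }
        List<String> group = List.of(parts);
        for (String variant : parts) {
            groups.put(variant, group);
        }
    }

    // Returns the preferred (first-seen) spelling if the token is an
    // inconsistent variant, or null if the token is fine.
    String check(String token) {
        List<String> group = groups.get(token);
        if (group == null) {
            return null;                  // not a word we track
        }
        String preferred = firstSeen.putIfAbsent(group, token);
        if (preferred == null || preferred.equals(token)) {
            return null;                  // first use, or consistent use
        }
        return preferred;                 // flag: suggest the first-seen form
    }

    public static void main(String[] args) {
        CoherencySketch sketch = new CoherencySketch();
        sketch.loadLine("Word1;Word2;Word3");
        System.out.println(sketch.check("Word2")); // null (first use)
        System.out.println(sketch.check("Word1")); // Word2
        System.out.println(sketch.check("Word3")); // Word2
    }
}

    In the real rule, the map of first-seen spellings would also need to be cleared in reset() so that coherency state does not leak from one checked text into the next, and the parts.length != 2 check in loadWords would presumably relax to parts.length < 2.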

    Read the article

< Previous Page | 2 3 4 5 6 7  | Next Page >