Search Results


  • some HTTPS sites getting blocked on one machine on the network

    - by shadowfoxmi
    I have a few computers connected to the internet via a router. I'm having trouble with one Windows 7 desktop. I can browse most sites without any trouble, but on some sites where the sign-in page switches to a secure connection (HTTPS), the page does not load. It isn't all HTTPS sites, though: I'm able to sign into Gmail and a few other services that I know use HTTPS. The sites I'm having trouble with are Yahoo's sign-in page and the one I've been using to test across different systems, http://iforgot.apple.com (which switches to HTTPS); I can access that particular site from other computers on the network and from my phone. I only have Windows Firewall and AVG running. I even tried stopping Windows Firewall, but it did not help. Everything was fine last week. All I have installed in the past week is VoIP software, namely Skype, ooVoo and Windows Live Messenger. How do I find out what is being blocked, why, and how to unblock it? Any suggestions would be greatly appreciated.

    Read the article

  • Graphics Question: How do I restrict the mouse cursor to within a circle?

    - by Dan
    I'm playing with XNA. When I click the left mouse button, I record the X,Y co-ordinates. Keeping the mouse button held down, moving the mouse draws a line from this origin to the current mouse position. I've offset this into the middle of the window. Now, what I'd like to do is restrict the mouse cursor to within a circle (with a radius of N, centred on the middle of the screen). Restricting the mouse to a rectangular region is easy enough (by adjusting the origin by the difference of the mouse position and the size of the region), but I haven't a clue on how to start doing it for a circular region. Can anyone explain how to do this? Any advice on where to start would be helpful.
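
    One way to approach this (a hedged sketch, not the asker's code: the vector math carries straight over to XNA's Vector2, but the illustration below is TypeScript and the names are mine): take the offset from the circle's centre to the current mouse position, and if its length exceeds N, scale it back so its length is exactly N before repositioning the cursor.

    ```typescript
    // Minimal sketch: clamp a point to a circle of radius N around a centre.
    // In XNA you would apply the same math to Vector2 values and then call
    // Mouse.SetPosition with the clamped coordinates.
    interface Point { x: number; y: number; }

    function clampToCircle(p: Point, centre: Point, radius: number): Point {
      const dx = p.x - centre.x;
      const dy = p.y - centre.y;
      const dist = Math.sqrt(dx * dx + dy * dy);
      if (dist <= radius) {
        return p;                          // already inside (or on) the circle
      }
      const scale = radius / dist;         // shrink the offset vector to length `radius`
      return { x: centre.x + dx * scale, y: centre.y + dy * scale };
    }

    // Example: centre of a 1280x720 window, radius 200.
    console.log(clampToCircle({ x: 950, y: 300 }, { x: 640, y: 360 }, 200));
    ```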

    Read the article

  • response from server

    - by john
    When I create a request to the server: <script language="javascript" type="text/javascript"> function ajaxFunction() { var ajaxRequest; try { ajaxRequest = new XMLHttpRequest(); } catch (e) { try { ajaxRequest = new ActiveXObject("Microsoft.XMLHTTP"); } catch (e) { alert("Your browser broke!"); return false; } } ajaxRequest.onreadystatechange = function() { if (ajaxRequest.readyState == 4) { document.write(ajaxRequest.responseText); document.myForm.time.value = ajaxRequest.responseText; } }; ajaxRequest.open("GET", "http://www.bbc.co.uk", true); ajaxRequest.send(null); } </script> Why is the response empty? Why isn't the response the HTML source of that website?
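
    A hedged note (my explanation, not part of the question): browsers enforce the same-origin policy, so an XMLHttpRequest from a page to a different origin such as http://www.bbc.co.uk is blocked unless that server opts in with CORS headers; the request then completes with status 0 and an empty responseText, which matches what is described above. A minimal sketch of a request that does return data, assuming the resource lives on the same origin as the page (the /time.txt path is purely illustrative):

    ```typescript
    // Same-origin request: responseText is populated once readyState reaches DONE.
    const xhr = new XMLHttpRequest();
    xhr.onreadystatechange = () => {
      if (xhr.readyState === XMLHttpRequest.DONE) {
        if (xhr.status === 200) {
          console.log(xhr.responseText);   // the response body is available here
        } else {
          // Cross-origin requests rejected by the same-origin policy typically
          // surface here with status 0 and an empty responseText.
          console.log("Request failed or was blocked, status:", xhr.status);
        }
      }
    };
    xhr.open("GET", "/time.txt", true);    // relative URL = same origin as the page
    xhr.send(null);
    ```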

    Read the article

  • Skeleton framework spacing

    - by user1745014
    I tried to ask this question late last night, but I was so sleepy I typed it completely wrong. I'm looking to float my navigation about 200px further to the right of its current position; there is room, but it won't move over. You can view the live code here - www.xronn.co.uk/hosting - and here is an image to explain my issue a little more: http://i.stack.imgur.com/JtL0C.png. The purple lines mark the 960px width of the site, the blue line shows the space free for the navigation to go, and the red arrows show which direction I want the navigation to move (to the right). Does anyone have any clue why, when I push it further to the right, the list items start to sit under each other?

    Read the article

  • sync android application with website?

    - by Pranav
    I launched the first version of my flash card application here: https://play.google.com/store/apps/details?id=in.co.discoverit.my_FlashCards. I am working on this Flash Card application and I use an SQLite database to store my cards and data. Now I want to synchronize my database with the website's database. How would I do this for my application? Could anyone tell me how I should start, and the possible ways to do this on both the device side and the website side? It's urgent for my application. Can anyone help me out? Regards, Pranav
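
    One common pattern for this kind of sync (a hedged sketch, not from the question: the /sync endpoint, the Card fields and the updatedAt column are all hypothetical) is to timestamp every row, have the device upload the cards it changed since its last successful sync, and return the server-side changes made after that moment in the same response. Sketched below in TypeScript for the website side; the Android side would issue the equivalent HTTP call and apply the returned changes to its SQLite database.

    ```typescript
    // Hedged sketch of a timestamp-based sync endpoint (all names are illustrative).
    import express from "express";

    interface Card { id: string; front: string; back: string; updatedAt: number; deleted: boolean; }

    const app = express();
    app.use(express.json());

    const serverCards = new Map<string, Card>();   // stand-in for the website's database

    app.post("/sync", (req, res) => {
      const { lastSyncedAt, changedCards } = req.body as { lastSyncedAt: number; changedCards: Card[] };

      // Apply the device's changes, letting the newest updatedAt win on conflict.
      for (const card of changedCards) {
        const existing = serverCards.get(card.id);
        if (!existing || card.updatedAt > existing.updatedAt) {
          serverCards.set(card.id, card);
        }
      }

      // Return everything the server has changed since the device last synced.
      const serverChanges = [...serverCards.values()].filter(c => c.updatedAt > lastSyncedAt);
      res.json({ now: Date.now(), serverChanges });
    });

    app.listen(3000);
    ```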

    Read the article

  • 4D - is it any good?

    - by Pies
    Recently I found out that the company a friend of mine co-owns uses 4D, which I've never heard of before. They swear by it, but they're non-technical and what they say about it sounds like memorized marketing blurb. Unfortunately the 4D website also seems devoid of any actual information and is filled with words like "comprehensive", "solution", "platform" and "integrated" instead. Since that thing is rather expensive and uses a custom language that I don't have much inclination to learn just for one project, I'm cautious about it and I'm wondering if anyone had any experience with it? Would you recommend it? What is it good for? What competitive advantage would I gain by learning it as a programmer, or using it as a company?

    Read the article

  • Why is the page shifting to the top with a container that has overflow:hidden?

    - by Maher4Ever
    I'm facing a really strange problem, and it happens in every browser. Everything works correctly until you try to go to a section using a hash (like #contactUs on my page). Try this URL: http://mahersalam.co.cc/projects/2011/#contactUs - you will see that the page shifts 10px to the top. If you remove the hash, it works again. I have a wrapper on the page (#container) with overflow:hidden, which I added to make sure no scroll bars appear if the resolution changes. If you remove the overflow property it also works. My guess is that the shift is related to where the scroll bar would be, but because it's hidden only its place remains. Does anyone know how to fix this problem?
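
    A hedged workaround (my suggestion, not from the question): when the browser jumps to a #hash target inside an overflow:hidden wrapper, it scrolls that wrapper itself to bring the anchor into view, even though no scroll bar is shown; resetting the wrapper's scroll offsets after the hash is applied undoes the shift. A minimal TypeScript/JavaScript sketch, reusing the #container id mentioned above:

    ```typescript
    // Undo the invisible scroll applied to the overflow:hidden wrapper on hash navigation.
    function resetWrapperScroll(): void {
      const wrapper = document.getElementById("container");
      if (wrapper) {
        wrapper.scrollTop = 0;
        wrapper.scrollLeft = 0;
      }
    }

    window.addEventListener("load", resetWrapperScroll);
    window.addEventListener("hashchange", resetWrapperScroll);
    ```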

    Read the article

  • View not updating after scope update

    - by bsparacino
    Here is a very simple example of what I am trying to do: Athlete.save(athlete, function(result) { $scope.athlete = result.athlete; }); The issue is that the $scope.athlete variable does not update in my view. In order to get it to update I have to do this: Athlete.save(athlete, function(result) { $scope.athlete.fname = result.athlete.fname; $scope.athlete.lname = result.athlete.lname; $scope.athlete.gender = result.athlete.gender; .... }); This gets annoying very quickly. With the first block of code it's as if Angular does not know that my $scope.athlete variable has been updated. This function is triggered from an ng-click, not some jQuery call, so I am doing it the Angular way as far as I know. Here is a simpler case I made: http://plnkr.co/edit/lMCPbzqDOoa5K4GXGhCp
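
    A hedged suggestion (not taken from the question or the plunker): AngularJS's angular.copy(source, destination) copies the saved values into the existing object, so the view's binding to the original $scope.athlete keeps working without assigning each field by hand. A minimal TypeScript sketch with an illustrative Athlete shape:

    ```typescript
    // Copy the saved values into the object the view is already bound to,
    // instead of replacing the reference with a brand-new object.
    declare const angular: { copy<T>(source: T, destination?: T): T };   // AngularJS 1.x global

    interface Athlete { fname: string; lname: string; gender: string; }

    function onSaved($scope: { athlete: Athlete }, result: { athlete: Athlete }): void {
      // Empties $scope.athlete and copies every property of result.athlete into it;
      // existing bindings still point at the same object, so the view refreshes.
      angular.copy(result.athlete, $scope.athlete);
    }
    ```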

    Read the article

  • Is the checkdnsrr() function good enough to establish domain (in)availability?

    - by Stipe
    I want to create a simple script to check domain availability. Can anybody tell me whether this function is enough to check availability before a user registers a domain: <?php $recordexists = checkdnsrr("www.google.com", "ANY"); if ($recordexists) echo "The domain name has been taken. Sorry!"; else echo "The domain name is available!"; ?> Or should I go with some other whois script, like http://www.mrscripts.co.uk/index.php?op=lite ?
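
    A hedged note (my addition): a DNS lookup only shows whether the name currently resolves; a domain can be registered yet publish no DNS records, so checkdnsrr() can wrongly report it as available. A WHOIS or registrar API query is the authoritative check. For illustration only, an analogous DNS-presence check in Node/TypeScript (the helper name is mine, and it is not a substitute for WHOIS):

    ```typescript
    // NS records present => the domain is certainly taken.
    // No records         => availability is still unknown; fall back to WHOIS.
    import { promises as dns } from "node:dns";

    async function hasDnsRecords(domain: string): Promise<boolean> {
      try {
        const nameservers = await dns.resolveNs(domain);   // e.g. ["ns1.google.com", ...]
        return nameservers.length > 0;
      } catch {
        return false;   // ENOTFOUND / ENODATA
      }
    }

    hasDnsRecords("google.com").then(taken =>
      console.log(taken ? "Definitely registered" : "No DNS records - check WHOIS to be sure"));
    ```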

    Read the article

  • Why do I disconnect every few seconds when using a USB wireless adapter?

    - by Rev3rse
    i know it's for ubuntu questions..but mint and ubuntu are very similiar and i had the same problem with linux ubuntu too..so i think this is the right place for my question anyway i don't have experience with drivers and other things,after installing Linux on my machine( i did dist-upgrade btw) everything seem to be great because i didn't have to install any driver, after a while i realized that my connection stop after few minutes(actually it shows that I'm connected but it's not) so i have to reconnect and after few minutes it disconnect again. I'm using Alfa USB wireless adapter AWS036H, and my Linux version is 11 i think the driver i'm using is Realtek i searched in the Internet and i found nothing. these are some outputs of few things people usually ask for: Note: I'm NOT using a laptop. dmsg: [19445.604448] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=2.174.220.77 DST=192.168.1.6 LEN=52 TOS=0x00 PREC=0x00 TTL=104 ID=10466 DF PROTO=TCP SPT=55150 DPT=6881 WINDOW=8192 RES=0x00 SYN URGP=0 [19448.164050] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=192.168.1.254 DST=192.168.1.6 LEN=56 TOS=0x00 PREC=0x00 TTL=255 ID=41982 PROTO=ICMP TYPE=3 CODE=0 [SRC=192.168.1.6 DST=91.189.88.33 LEN=52 TOS=0x00 PREC=0x00 TTL=63 ID=7566 DF PROTO=TCP INCOMPLETE [8 bytes] ] [19465.079565] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=80.128.216.31 DST=192.168.1.6 LEN=48 TOS=0x00 PREC=0x00 TTL=113 ID=5100 DF PROTO=TCP SPT=50169 DPT=6881 WINDOW=8192 RES=0x00 SYN URGP=0 [19486.270328] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=90.130.13.122 DST=192.168.1.6 LEN=48 TOS=0x00 PREC=0x00 TTL=109 ID=22207 PROTO=UDP SPT=6881 DPT=6881 LEN=28 [19497.480522] wlan0: deauthenticating from 00:24:c8:4b:46:e0 by local choice (reason=3) [19497.593276] cfg80211: All devices are disconnected, going to restore regulatory settings [19497.593282] cfg80211: Restoring regulatory settings [19497.593346] cfg80211: Calling CRDA to update world regulatory domain [19497.638740] cfg80211: Updating information on frequency 2412 MHz for a 20 MHz width channel with regulatory rule: [19497.638745] cfg80211: 2402000 KHz - 2472000 KHz @ KHz), (300 mBi, 2000 mBm) [19497.638749] cfg80211: Updating information on frequency 2417 MHz for a 20 MHz width channel with regulatory rule: [19497.638753] cfg80211: 2402000 KHz - 2472000 KHz @ KHz), (300 mBi, 2000 mBm) [19497.638756] cfg80211: Updating information on frequency 2422 MHz for a 20 MHz width channel with regulatory rule: [19497.638760] cfg80211: 2402000 KHz - 2472000 KHz @ KHz), (300 mBi, 2000 mBm) [19497.638763] cfg80211: Updating information on frequency 2427 MHz for a 20 MHz width channel with regulatory rule: [19497.638766] cfg80211: 2402000 KHz - 2472000 KHz @ KHz), (300 mBi, 2000 mBm) [19497.638770] cfg80211: Updating information on frequency 2432 MHz for a 20 MHz width channel with regulatory rule: [19497.638773] cfg80211: 2402000 KHz - 2472000 KHz @ KHz), (300 mBi, 2000 mBm) [19497.638776] cfg80211: Updating information on frequency 2437 MHz for a 20 MHz width channel with regulatory rule: [19497.638780] cfg80211: 2402000 KHz - 2472000 KHz @ KHz), (300 mBi, 2000 mBm) [19497.638783] cfg80211: Updating information on frequency 2442 MHz for a 20 MHz width channel with regulatory rule: [19497.638787] cfg80211: 2402000 KHz - 2472000 KHz @ KHz), (300 mBi, 2000 mBm) [19497.638790] cfg80211: Updating information on frequency 2447 MHz for a 20 MHz width channel with 
regulatory rule: [19497.638794] cfg80211: 2402000 KHz - 2472000 KHz @ KHz), (300 mBi, 2000 mBm) [19497.638797] cfg80211: Updating information on frequency 2452 MHz for a 20 MHz width channel with regulatory rule: [19497.638801] cfg80211: 2402000 KHz - 2472000 KHz @ KHz), (300 mBi, 2000 mBm) [19497.638804] cfg80211: Updating information on frequency 2457 MHz for a 20 MHz width channel with regulatory rule: [19497.638807] cfg80211: 2402000 KHz - 2472000 KHz @ KHz), (300 mBi, 2000 mBm) [19497.638811] cfg80211: Updating information on frequency 2462 MHz for a 20 MHz width channel with regulatory rule: [19497.638814] cfg80211: 2402000 KHz - 2472000 KHz @ KHz), (300 mBi, 2000 mBm) [19497.638817] cfg80211: Updating information on frequency 2467 MHz for a 20 MHz width channel with regulatory rule: [19497.638821] cfg80211: 2457000 KHz - 2482000 KHz @ KHz), (300 mBi, 2000 mBm) [19497.638824] cfg80211: Updating information on frequency 2472 MHz for a 20 MHz width channel with regulatory rule: [19497.638828] cfg80211: 2457000 KHz - 2482000 KHz @ KHz), (300 mBi, 2000 mBm) [19497.638831] cfg80211: Updating information on frequency 2484 MHz for a 20 MHz width channel with regulatory rule: [19497.638835] cfg80211: 2474000 KHz - 2494000 KHz @ KHz), (300 mBi, 2000 mBm) [19497.638838] cfg80211: World regulatory domain updated: [19497.638841] cfg80211: (start_freq - end_freq @ bandwidth), (max_antenna_gain, max_eirp) [19497.638845] cfg80211: (2402000 KHz - 2472000 KHz @ 40000 KHz), (300 mBi, 2000 mBm) [19497.638848] cfg80211: (2457000 KHz - 2482000 KHz @ 20000 KHz), (300 mBi, 2000 mBm) [19497.638852] cfg80211: (2474000 KHz - 2494000 KHz @ 20000 KHz), (300 mBi, 2000 mBm) [19497.638855] cfg80211: (5170000 KHz - 5250000 KHz @ 40000 KHz), (300 mBi, 2000 mBm) [19497.638859] cfg80211: (5735000 KHz - 5835000 KHz @ 40000 KHz), (300 mBi, 2000 mBm) [19513.145150] wlan0: authenticate with 00:24:c8:4b:46:e0 (try 1) [19513.146910] wlan0: authenticated [19513.252775] wlan0: associate with 00:24:c8:4b:46:e0 (try 1) [19513.255149] wlan0: RX AssocResp from 00:24:c8:4b:46:e0 (capab=0x411 status=0 aid=2) [19513.255154] wlan0: associated [19515.675091] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=91.79.8.40 DST=192.168.1.6 LEN=48 TOS=0x00 PREC=0x20 TTL=110 ID=42720 DF PROTO=TCP SPT=1945 DPT=6881 WINDOW=65535 RES=0x00 SYN URGP=0 [19525.684312] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=78.13.80.169 DST=192.168.1.6 LEN=48 TOS=0x00 PREC=0x00 TTL=109 ID=49890 DF PROTO=TCP SPT=53401 DPT=6881 WINDOW=16384 RES=0x00 SYN URGP=0 [19551.856766] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=85.228.39.93 DST=192.168.1.6 LEN=48 TOS=0x00 PREC=0x00 TTL=103 ID=1162 PROTO=UDP SPT=6881 DPT=6881 LEN=28 [19564.623005] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=90.202.21.238 DST=192.168.1.6 LEN=48 TOS=0x00 PREC=0x00 TTL=114 ID=17881 PROTO=UDP SPT=6881 DPT=6881 LEN=28 [19584.855364] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=2.49.151.87 DST=192.168.1.6 LEN=48 TOS=0x00 PREC=0x00 TTL=117 ID=31716 PROTO=UDP SPT=6881 DPT=6881 LEN=28 [19604.688647] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=109.225.124.155 DST=192.168.1.6 LEN=48 TOS=0x00 PREC=0x00 TTL=112 ID=6656 PROTO=UDP SPT=6881 DPT=6881 LEN=28 [19626.362529] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=81.184.50.41 DST=192.168.1.6 LEN=48 TOS=0x00 PREC=0x00 TTL=114 ID=23241 DF 
PROTO=TCP SPT=1416 DPT=6881 WINDOW=65535 RES=0x00 SYN URGP=0 [19645.040906] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=92.250.245.244 DST=192.168.1.6 LEN=48 TOS=0x00 PREC=0x00 TTL=51 ID=0 DF PROTO=TCP SPT=50061 DPT=6881 WINDOW=16384 RES=0x00 SYN URGP=0 [19665.212659] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=87.183.3.18 DST=192.168.1.6 LEN=52 TOS=0x00 PREC=0x00 TTL=111 ID=1689 DF PROTO=TCP SPT=62817 DPT=6881 WINDOW=8192 RES=0x00 SYN URGP=0 [19685.036415] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=78.13.80.169 DST=192.168.1.6 LEN=48 TOS=0x00 PREC=0x00 TTL=109 ID=50638 DF PROTO=TCP SPT=49624 DPT=6881 WINDOW=16384 RES=0x00 SYN URGP=0 [19705.487915] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=217.122.17.82 DST=192.168.1.6 LEN=56 TOS=0x00 PREC=0x00 TTL=112 ID=19070 DF PROTO=TCP SPT=54795 DPT=6881 WINDOW=8192 RES=0x00 SYN URGP=0 [19726.779185] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=80.88.116.239 DST=192.168.1.6 LEN=48 TOS=0x00 PREC=0x00 TTL=109 ID=32168 DF PROTO=TCP SPT=57330 DPT=6881 WINDOW=8192 RES=0x00 SYN URGP=0 [19744.755673] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=109.124.5.43 DST=192.168.1.6 LEN=48 TOS=0x00 PREC=0x00 TTL=113 ID=2288 DF PROTO=TCP SPT=6475 DPT=6881 WINDOW=65535 RES=0x00 SYN URGP=0 [19764.449183] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=79.216.35.19 DST=192.168.1.6 LEN=48 TOS=0x00 PREC=0x00 TTL=113 ID=4281 PROTO=UDP SPT=6881 DPT=6881 LEN=28 [19784.456189] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=81.82.25.149 DST=192.168.1.6 LEN=52 TOS=0x00 PREC=0x00 TTL=114 ID=1866 DF PROTO=TCP SPT=59507 DPT=6881 WINDOW=8192 RES=0x00 SYN URGP=0 [19804.836687] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=81.56.199.3 DST=192.168.1.6 LEN=48 TOS=0x00 PREC=0x00 TTL=108 ID=14749 PROTO=UDP SPT=6881 DPT=6881 LEN=28 [19824.812685] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=186.28.7.159 DST=192.168.1.6 LEN=48 TOS=0x00 PREC=0x00 TTL=107 ID=44686 PROTO=UDP SPT=23418 DPT=6881 LEN=28 [19847.683314] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=78.13.80.169 DST=192.168.1.6 LEN=48 TOS=0x00 PREC=0x00 TTL=108 ID=63046 DF PROTO=TCP SPT=52192 DPT=6881 WINDOW=16384 RES=0x00 SYN URGP=0 [19884.711455] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=84.146.24.238 DST=192.168.1.6 LEN=48 TOS=0x00 PREC=0x00 TTL=113 ID=27914 PROTO=UDP SPT=6881 DPT=6881 LEN=28 [19884.983589] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=2.107.130.61 DST=192.168.1.6 LEN=48 TOS=0x00 PREC=0x00 TTL=112 ID=7742 PROTO=UDP SPT=6881 DPT=6881 LEN=28 [19905.681078] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=95.21.11.121 DST=192.168.1.6 LEN=48 TOS=0x00 PREC=0x00 TTL=114 ID=31775 PROTO=UDP SPT=6881 DPT=6881 LEN=28 [19926.035707] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=109.76.132.55 DST=192.168.1.6 LEN=48 TOS=0x00 PREC=0x00 TTL=113 ID=28140 DF PROTO=TCP SPT=51905 DPT=6881 WINDOW=8192 RES=0x00 SYN URGP=0 [19945.668326] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=188.92.0.197 DST=192.168.1.6 LEN=48 TOS=0x00 PREC=0x00 TTL=113 ID=7865 PROTO=UDP SPT=6881 DPT=6881 LEN=28 [19967.200339] [UFW BLOCK] IN=wlan0 OUT= 
MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=83.252.102.172 DST=192.168.1.6 LEN=52 TOS=0x00 PREC=0x00 TTL=105 ID=28408 DF PROTO=TCP SPT=63505 DPT=6881 WINDOW=8192 RES=0x00 SYN URGP=0 [19999.752732] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=79.166.171.200 DST=192.168.1.6 LEN=48 TOS=0x00 PREC=0x00 TTL=110 ID=36405 PROTO=UDP SPT=6881 DPT=6881 LEN=28 [20007.928719] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=79.235.59.16 DST=192.168.1.6 LEN=48 TOS=0x00 PREC=0x00 TTL=112 ID=46415 DF PROTO=TCP SPT=4537 DPT=6881 WINDOW=16384 RES=0x00 SYN URGP=0 [20026.181726] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=81.182.169.36 DST=192.168.1.6 LEN=48 TOS=0x00 PREC=0x00 TTL=106 ID=25126 PROTO=UDP SPT=6881 DPT=6881 LEN=28 [20048.845358] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=87.66.118.104 DST=192.168.1.6 LEN=48 TOS=0x00 PREC=0x00 TTL=111 ID=18068 DF PROTO=TCP SPT=49928 DPT=6881 WINDOW=8192 RES=0x00 SYN URGP=0 [20064.341857] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=77.2.63.153 DST=192.168.1.6 LEN=48 TOS=0x00 PREC=0x00 TTL=107 ID=7242 PROTO=UDP SPT=6881 DPT=6881 LEN=28 [20090.093490] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=93.16.17.210 DST=192.168.1.6 LEN=48 TOS=0x00 PREC=0x00 TTL=108 ID=894 PROTO=UDP SPT=6881 DPT=6881 LEN=28 [20104.443995] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=89.83.235.99 DST=192.168.1.6 LEN=52 TOS=0x00 PREC=0x00 TTL=114 ID=17295 DF PROTO=TCP SPT=58979 DPT=6881 WINDOW=8192 RES=0x00 SYN URGP=0 [20128.625374] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=81.62.91.79 DST=192.168.1.6 LEN=48 TOS=0x00 PREC=0x00 TTL=107 ID=21793 DF PROTO=TCP SPT=51446 DPT=6881 WINDOW=8192 RES=0x00 SYN URGP=0 [20151.055506] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=84.135.217.213 DST=192.168.1.6 LEN=52 TOS=0x00 PREC=0x00 TTL=112 ID=32452 DF PROTO=TCP SPT=55136 DPT=6881 WINDOW=8192 RES=0x00 SYN URGP=0 [20164.618874] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=91.79.8.40 DST=192.168.1.6 LEN=48 TOS=0x00 PREC=0x20 TTL=110 ID=47784 DF PROTO=TCP SPT=2422 DPT=6881 WINDOW=65535 RES=0x00 SYN URGP=0 [20184.337745] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=83.252.212.71 DST=192.168.1.6 LEN=48 TOS=0x00 PREC=0x00 TTL=107 ID=14544 PROTO=UDP SPT=6881 DPT=6881 LEN=28 [20205.007512] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=91.62.158.247 DST=192.168.1.6 LEN=48 TOS=0x00 PREC=0x00 TTL=110 ID=21562 DF PROTO=TCP SPT=3933 DPT=6881 WINDOW=65535 RES=0x00 SYN URGP=0 [20225.204018] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=84.146.24.238 DST=192.168.1.6 LEN=52 TOS=0x00 PREC=0x00 TTL=113 ID=15045 DF PROTO=TCP SPT=49630 DPT=6881 WINDOW=8192 RES=0x00 SYN URGP=0 [20244.842290] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=82.82.190.168 DST=192.168.1.6 LEN=48 TOS=0x00 PREC=0x00 TTL=112 ID=23741 DF PROTO=TCP SPT=50766 DPT=6881 WINDOW=8192 RES=0x00 SYN URGP=0 [20266.701649] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=88.153.108.124 DST=192.168.1.6 LEN=48 TOS=0x02 PREC=0x00 TTL=111 ID=206 DF PROTO=TCP SPT=2451 DPT=6881 WINDOW=65535 RES=0x00 SYN URGP=0 [20286.305414] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=78.240.86.73 
DST=192.168.1.6 LEN=52 TOS=0x00 PREC=0x00 TTL=107 ID=325 DF PROTO=TCP SPT=65184 DPT=6881 WINDOW=8192 RES=0x00 SYN URGP=0 [20294.293989] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=192.168.1.254 DST=192.168.1.6 LEN=56 TOS=0x00 PREC=0x00 TTL=255 ID=43133 PROTO=ICMP TYPE=3 CODE=0 [SRC=192.168.1.6 DST=91.189.88.33 LEN=52 TOS=0x00 PREC=0x00 TTL=63 ID=56899 DF PROTO=TCP INCOMPLETE [8 bytes] ] [20294.297015] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=192.168.1.254 DST=192.168.1.6 LEN=56 TOS=0x00 PREC=0x00 TTL=255 ID=43134 PROTO=ICMP TYPE=3 CODE=0 [SRC=192.168.1.6 DST=91.189.88.40 LEN=52 TOS=0x00 PREC=0x00 TTL=63 ID=12080 DF PROTO=TCP INCOMPLETE [8 bytes] ] [20294.297242] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=192.168.1.254 DST=192.168.1.6 LEN=56 TOS=0x00 PREC=0x00 TTL=255 ID=43135 PROTO=ICMP TYPE=3 CODE=0 [SRC=192.168.1.6 DST=91.189.88.33 LEN=52 TOS=0x00 PREC=0x00 TTL=63 ID=25195 DF PROTO=TCP INCOMPLETE [8 bytes] ] [20295.478338] wlan0: deauthenticating from 00:24:c8:4b:46:e0 by local choice (reason=3) [20295.552735] cfg80211: All devices are disconnected, going to restore regulatory settings [20295.552742] cfg80211: Restoring regulatory settings [20295.552748] cfg80211: Calling CRDA to update world regulatory domain [20295.680635] cfg80211: Updating information on frequency 2412 MHz for a 20 MHz width channel with regulatory rule: [20295.680641] cfg80211: 2402000 KHz - 2472000 KHz @ KHz), (300 mBi, 2000 mBm) [20295.680644] cfg80211: Updating information on frequency 2417 MHz for a 20 MHz width channel with regulatory rule: [20295.680648] cfg80211: 2402000 KHz - 2472000 KHz @ KHz), (300 mBi, 2000 mBm) [20295.680652] cfg80211: Updating information on frequency 2422 MHz for a 20 MHz width channel with regulatory rule: [20295.680655] cfg80211: 2402000 KHz - 2472000 KHz @ KHz), (300 mBi, 2000 mBm) [20295.680658] cfg80211: Updating information on frequency 2427 MHz for a 20 MHz width channel with regulatory rule: [20295.680662] cfg80211: 2402000 KHz - 2472000 KHz @ KHz), (300 mBi, 2000 mBm) [20295.680665] cfg80211: Updating information on frequency 2432 MHz for a 20 MHz width channel with regulatory rule: [20295.680669] cfg80211: 2402000 KHz - 2472000 KHz @ KHz), (300 mBi, 2000 mBm) [20295.680672] cfg80211: Updating information on frequency 2437 MHz for a 20 MHz width channel with regulatory rule: [20295.680676] cfg80211: 2402000 KHz - 2472000 KHz @ KHz), (300 mBi, 2000 mBm) [20295.680679] cfg80211: Updating information on frequency 2442 MHz for a 20 MHz width channel with regulatory rule: [20295.680683] cfg80211: 2402000 KHz - 2472000 KHz @ KHz), (300 mBi, 2000 mBm) [20295.680687] cfg80211: Updating information on frequency 2447 MHz for a 20 MHz width channel with regulatory rule: [20295.680690] cfg80211: 2402000 KHz - 2472000 KHz @ KHz), (300 mBi, 2000 mBm) [20295.680693] cfg80211: Updating information on frequency 2452 MHz for a 20 MHz width channel with regulatory rule: [20295.680697] cfg80211: 2402000 KHz - 2472000 KHz @ KHz), (300 mBi, 2000 mBm) [20295.680700] cfg80211: Updating information on frequency 2457 MHz for a 20 MHz width channel with regulatory rule: [20295.680704] cfg80211: 2402000 KHz - 2472000 KHz @ KHz), (300 mBi, 2000 mBm) [20295.680708] cfg80211: Updating information on frequency 2462 MHz for a 20 MHz width channel with regulatory rule: [20295.680711] cfg80211: 2402000 KHz - 2472000 KHz @ KHz), (300 mBi, 2000 mBm) [20295.680715] cfg80211: Updating information on frequency 2467 
MHz for a 20 MHz width channel with regulatory rule: [20295.680718] cfg80211: 2457000 KHz - 2482000 KHz @ KHz), (300 mBi, 2000 mBm) [20295.680722] cfg80211: Updating information on frequency 2472 MHz for a 20 MHz width channel with regulatory rule: [20295.680725] cfg80211: 2457000 KHz - 2482000 KHz @ KHz), (300 mBi, 2000 mBm) [20295.680728] cfg80211: Updating information on frequency 2484 MHz for a 20 MHz width channel with regulatory rule: [20295.680732] cfg80211: 2474000 KHz - 2494000 KHz @ KHz), (300 mBi, 2000 mBm) [20295.680736] cfg80211: World regulatory domain updated: [20295.680738] cfg80211: (start_freq - end_freq @ bandwidth), (max_antenna_gain, max_eirp) [20295.680742] cfg80211: (2402000 KHz - 2472000 KHz @ 40000 KHz), (300 mBi, 2000 mBm) [20295.680745] cfg80211: (2457000 KHz - 2482000 KHz @ 20000 KHz), (300 mBi, 2000 mBm) [20295.680749] cfg80211: (2474000 KHz - 2494000 KHz @ 20000 KHz), (300 mBi, 2000 mBm) [20295.680752] cfg80211: (5170000 KHz - 5250000 KHz @ 40000 KHz), (300 mBi, 2000 mBm) [20295.680756] cfg80211: (5735000 KHz - 5835000 KHz @ 40000 KHz), (300 mBi, 2000 mBm) [20306.009341] wlan0: authenticate with 00:24:c8:4b:46:e0 (try 1) [20306.011225] wlan0: authenticated [20306.118095] wlan0: associate with 00:24:c8:4b:46:e0 (try 1) [20306.120963] wlan0: RX AssocResp from 00:24:c8:4b:46:e0 (capab=0x411 status=0 aid=2) [20306.120967] wlan0: associated [20307.364427] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=87.91.101.130 DST=192.168.1.6 LEN=64 TOS=0x00 PREC=0x00 TTL=49 ID=36839 DF PROTO=TCP SPT=62492 DPT=6881 WINDOW=65535 RES=0x00 SYN URGP=0 [20310.914290] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=192.168.1.254 DST=192.168.1.6 LEN=56 TOS=0x00 PREC=0x00 TTL=255 ID=43180 PROTO=ICMP TYPE=3 CODE=0 [SRC=192.168.1.6 DST=91.189.88.33 LEN=52 TOS=0x00 PREC=0x00 TTL=63 ID=56900 DF PROTO=TCP INCOMPLETE [8 bytes] ] [20310.936634] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=192.168.1.254 DST=192.168.1.6 LEN=56 TOS=0x00 PREC=0x00 TTL=255 ID=43181 PROTO=ICMP TYPE=3 CODE=0 [SRC=192.168.1.6 DST=91.189.88.40 LEN=52 TOS=0x00 PREC=0x00 TTL=63 ID=12081 DF PROTO=TCP INCOMPLETE [8 bytes] ] [20310.939017] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=192.168.1.254 DST=192.168.1.6 LEN=56 TOS=0x00 PREC=0x00 TTL=255 ID=43182 PROTO=ICMP TYPE=3 CODE=0 [SRC=192.168.1.6 DST=91.189.88.33 LEN=52 TOS=0x00 PREC=0x00 TTL=63 ID=25196 DF PROTO=TCP INCOMPLETE [8 bytes] ] [20325.941050] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=217.118.78.99 DST=192.168.1.6 LEN=48 TOS=0x00 PREC=0x00 TTL=113 ID=4407 PROTO=UDP SPT=2970 DPT=6881 LEN=28 [20328.801724] [UFW BLOCK] IN=wlan0 OUT= MAC=00:c0:ca:44:62:d1:00:24:c8:4b:46:e0:08:00 SRC=192.168.1.254 DST=192.168.1.6 LEN=56 TOS=0x00 PREC=0x00 TTL=255 ID=43196 PROTO=ICMP TYPE=3 CODE=0 [SRC=192.168.1.6 DST=91.189.88.33 LEN=52 TOS=0x00 PREC=0x00 TTL=63 ID=56901 DF PROTO=TCP INCOMPLETE [8 bytes] ] ... inxi -N Network: Card-1 Realtek RTL8101E/RTL8102E PCI Express Fast Ethernet controller driver r8169 Card-2 Realtek RTL-8139/8139C/8139C+ driver 8139too /usr/lib/linuxmint/mintWifi/mintWifi.py ------------------------- * I. scanning WIFI PCI devices... ------------------------- * II. querying ndiswrapper... ------------------------- * III. querying iwconfig... lo no wireless extensions. eth0 no wireless extensions. eth1 no wireless extensions. 
wlan0 IEEE 802.11bg ESSID:"Home" Mode:Managed Frequency:2.437 GHz Access Point: 00:24:C8:4B:46:E0 Bit Rate=54 Mb/s Tx-Power=20 dBm Retry long limit:7 RTS thr:off Fragment thr:off Power Management:off Link Quality=68/70 Signal level=-42 dBm Rx invalid nwid:0 Rx invalid crypt:0 Rx invalid frag:0 Tx excessive retries:0 Invalid misc:1132 Missed beacon:0 ------------------------- * IV. querying ifconfig... eth0 Link encap:Ethernet HWaddr 00:1f:d0:c9:b8:8e UP BROADCAST MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B) Interrupt:43 Base address:0x4000 eth1 Link encap:Ethernet HWaddr 00:0e:2e:77:88:16 UP BROADCAST MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B) Interrupt:19 Base address:0xd000 lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:10696 errors:0 dropped:0 overruns:0 frame:0 TX packets:10696 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:3823011 (3.8 MB) TX bytes:3823011 (3.8 MB) wlan0 Link encap:Ethernet HWaddr 00:c0:ca:44:62:d1 inet addr:192.168.1.6 Bcast:255.255.255.255 Mask:255.255.255.0 inet6 addr: fe80::2c0:caff:fe44:62d1/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:90424 errors:0 dropped:0 overruns:0 frame:0 TX packets:65201 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:98024465 (98.0 MB) TX bytes:10345450 (10.3 MB) ------------------------- * V. querying DHCP... lspci 00:00.0 Host bridge: Intel Corporation 82G33/G31/P35/P31 Express DRAM Controller (rev 10) 00:01.0 PCI bridge: Intel Corporation 82G33/G31/P35/P31 Express PCI Express Root Port (rev 10) 00:1b.0 Audio device: Intel Corporation N10/ICH 7 Family High Definition Audio Controller (rev 01) 00:1c.0 PCI bridge: Intel Corporation N10/ICH 7 Family PCI Express Port 1 (rev 01) 00:1c.1 PCI bridge: Intel Corporation N10/ICH 7 Family PCI Express Port 2 (rev 01) 00:1d.0 USB Controller: Intel Corporation N10/ICH 7 Family USB UHCI Controller #1 (rev 01) 00:1d.1 USB Controller: Intel Corporation N10/ICH 7 Family USB UHCI Controller #2 (rev 01) 00:1d.2 USB Controller: Intel Corporation N10/ICH 7 Family USB UHCI Controller #3 (rev 01) 00:1d.3 USB Controller: Intel Corporation N10/ICH 7 Family USB UHCI Controller #4 (rev 01) 00:1d.7 USB Controller: Intel Corporation N10/ICH 7 Family USB2 EHCI Controller (rev 01) 00:1e.0 PCI bridge: Intel Corporation 82801 PCI Bridge (rev e1) 00:1f.0 ISA bridge: Intel Corporation 82801GB/GR (ICH7 Family) LPC Interface Bridge (rev 01) 00:1f.2 IDE interface: Intel Corporation N10/ICH7 Family SATA IDE Controller (rev 01) 00:1f.3 SMBus: Intel Corporation N10/ICH 7 Family SMBus Controller (rev 01) 01:00.0 VGA compatible controller: nVidia Corporation G96 [GeForce 9400 GT] (rev a1) 03:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8101E/RTL8102E PCI Express Fast Ethernet controller (rev 02) 04:01.0 Ethernet controller: Realtek Semiconductor Co., Ltd. 
RTL-8139/8139C/8139C+ (rev 10) lsmod Module Size Used by ipt_REJECT 12512 1 ipt_LOG 12784 5 xt_limit 12541 7 xt_tcpudp 12531 8 ipt_addrtype 12535 4 xt_state 12514 7 ip6table_filter 12711 1 ip6_tables 22545 1 ip6table_filter nf_nat_irc 12542 0 nf_conntrack_irc 13138 1 nf_nat_irc nf_nat_ftp 12548 0 nf_nat 24827 2 nf_nat_irc,nf_nat_ftp nf_conntrack_ipv4 19024 9 nf_nat nf_defrag_ipv4 12649 1 nf_conntrack_ipv4 nf_conntrack_ftp 13106 1 nf_nat_ftp nf_conntrack 69744 7 xt_state,nf_nat_irc,nf_conntrack_irc,nf_nat_ftp,nf_nat,nf_conntrack_ipv4,nf_conntrack_ftp iptable_filter 12706 1 ip_tables 18125 1 iptable_filter x_tables 21907 10 ipt_REJECT,ipt_LOG,xt_limit,xt_tcpudp,ipt_addrtype,xt_state,ip6table_filter,ip6_tables,iptable_filter,ip_tables nls_utf8 12493 10 udf 83795 1 crc_itu_t 12627 1 udf usb_storage 43946 1 uas 17676 0 snd_seq_dummy 12686 0 cryptd 19801 0 aes_i586 16956 1 aes_generic 38023 1 aes_i586 binfmt_misc 13213 1 dm_crypt 22463 0 vesafb 13449 1 nvidia 9766978 44 arc4 12473 2 rtl8187 56206 0 mac80211 257001 1 rtl8187 cfg80211 156212 2 rtl8187,mac80211 ppdev 12849 0 snd_hda_codec_realtek 255882 1 parport_pc 32111 1 psmouse 73312 0 eeprom_93cx6 12653 1 rtl8187 snd_hda_intel 24113 5 snd_hda_codec 90901 2 snd_hda_codec_realtek,snd_hda_intel snd_hwdep 13274 1 snd_hda_codec snd_pcm 80042 3 snd_hda_intel,snd_hda_codec snd_seq_midi 13132 0 snd_rawmidi 25269 1 snd_seq_midi snd_seq_midi_event 14475 1 snd_seq_midi snd_seq 51291 3 snd_seq_dummy,snd_seq_midi,snd_seq_midi_event snd_timer 28659 2 snd_pcm,snd_seq snd_seq_device 14110 4 snd_seq_dummy,snd_seq_midi,snd_rawmidi,snd_seq joydev 17322 0 snd 55295 18 snd_hda_codec_realtek,snd_hda_intel,snd_hda_codec,snd_hwdep,snd_pcm,snd_rawmidi,snd_seq,snd_timer,snd_seq_device serio_raw 12990 0 soundcore 12600 1 snd snd_page_alloc 14073 2 snd_hda_intel,snd_pcm lp 13349 0 parport 36746 3 ppdev,parport_pc,lp usbhid 41704 0 hid 77084 1 usbhid dm_raid45 88410 0 xor 21860 1 dm_raid45 btrfs 527388 0 zlib_deflate 26594 1 btrfs libcrc32c 12543 1 btrfs 8139too 23208 0 8139cp 22497 0 r8169 42534 0 floppy 60032 0

    Read the article

  • Oracle Insurance Unveils Next Generation of Enterprise Document Automation: Oracle Documaker Enterprise Edition

    - by helen.pitts(at)oracle.com
    Oracle today announced the introduction of Oracle Documaker Enterprise Edition, the next generation of the company's market-leading Enterprise Document Automation (EDA) solution for dynamically creating, managing and delivering adaptive enterprise communications across multiple channels. "Insurers and other organizations need enterprise document automation that puts the power to manage the complete document lifecycle in the hands of the business user," said Srini Venkatasanthanam, vice president, Product Strategy, Oracle Insurance, in the press release. "Built with features such as rules-based configurability and interactive processing, Oracle Documaker Enterprise Edition makes possible an adaptive approach to enterprise document automation - documents when, where and in the form they're needed." Key enhancements in Oracle Documaker Enterprise Edition include: Documaker Interactive, the newly renamed and redesigned Web-based iDocumaker module. Documaker Interactive enables users to quickly and interactively create and assemble compliant communications such as policy and claims correspondence directly from their desktops. Users benefit from built-in accelerators and rules-based configurability, pre-configured content, and embedded workflow leveraging Oracle BPEL Process Manager. Documaker Factory, which helps enterprises reduce cost and improve operational efficiency through better management of their enterprise publishing operations. Dashboards, analytics, reporting and an administrative console provide insurers with greater insight and centralized control over document production, allowing them to better adapt their resources based on business demands. Other enhancements include: enhanced business user empowerment; additional multi-language localization capabilities; and benefits from the use of powerful Oracle technologies such as the Oracle Application Development Framework for all interfaces and Oracle Universal Content Management (Oracle UCM) for enterprise content management. Drive Competitive Advantage and Growth: Deb Smallwood, founder of SMA Strategy Meets Action, a leading insurance industry analyst and consulting firm, and co-author of 3CM in Insurance: Customer Communications and Content Management published last month, noted in the press release that "maximum value can be gained from investments when Enterprise Document Automation (EDA) is viewed holistically and all forms of communication and all types of information are integrated across the entire enterprise." "Insurers that choose an approach that takes all communications, both structured and unstructured data, coming into the company from a wide range of channels, and then create seamless flows of information will have a real competitive advantage," Smallwood said. "This capability will soon become essential for selling, servicing, and ultimately driving growth through new business and retention." Learn More: Click here to watch a short Flash demo that demonstrates the real business value offered by Oracle Documaker Enterprise Edition. You can also see how an insurance company can use Oracle Documaker Enterprise Edition to dynamically create, manage and publish adaptive enterprise content throughout the insurance business lifecycle for delivery across multiple channels by visiting Alamere Insurance, a fictional model insurance company created by Oracle to showcase how Oracle applications can be leveraged within the insurance enterprise.
Meet Our Newest Oracle Insurance Blogger: I'm pleased to introduce our newest Oracle Insurance blogger, Susanne Hale. Susanne, who manages product marketing for Oracle Insurance EDA solutions, will be sharing insights about this topic along with examples of how our customers are transforming their enterprise communications using Oracle Documaker Enterprise Edition in future Oracle Insurance blog entries. Helen Pitts is senior product marketing manager for Oracle Insurance.

    Read the article

  • Developer’s Life – Every Developer is a Captain America

    - by Pinal Dave
    Captain America was first created as a comic book character in the 1940s as a way to boost morale during World War II. Aimed at a children's audience, his legacy faded away when the war ended. However, he has recently had a major reboot to become a popular movie character who deals with modern issues. When Captain America was first written, there was no such thing as a developer, programmer or a computer (the way we think of them, anyway). Despite these limitations, I think there are still a lot of ways that modern Captain America is like modern developers. So how are developers like Captain America? Well, read on for my list of reasons. Take on Big Projects Captain America isn't afraid to take on big projects – and takes responsibility when the project is co-opted by the evil organization HYDRA. Developers may not have super villains out there corrupting their work, but they know to keep on top of their projects and own what they do. Elderly Wisdom Steve Rogers, Captain America's alter ego, was frozen in ice for decades, and brought back to life to solve problems. Developers can learn from this by respecting the opinions of their elders – technology is an ever-changing market, but the old-timers still have a few tricks up their sleeves! Don't be Afraid of Change Don't be afraid of change. Captain America woke up to find the world he was accustomed to was now completely different. He might even have felt his skills were no longer necessary. He, and developers, know that everyone has their place in a team, though. If you try your best, you will make it work. Fight Your Own Battle Sometimes you have to make it on your own. Captain America is an integral part of the Avengers, but in his own movies, the other superheroes aren't around to back him up. Developers, too, must learn to work both within and without a team. Solid Integrity One of Captain America's greatest qualities is his integrity. His determination to do what is right, keep his word, and act honestly earns him mockery from some of the less-savory characters – even "good guys" like Iron Man. Developers, and everyone else, need to develop the strength of character to keep their integrity. No matter your walk of life, there will be tempting obstacles. Think of Captain America, and say "no." There is a lot for all of us to learn from Captain America, to take away in our own lives, and to admire in those who display it – I am specifically thinking of developers. If you are enjoying this series as much as I am, please let me know who else you would like to see featured. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL Tagged: Developer, Superhero

    Read the article

  • Improving Partitioned Table Join Performance

    - by Paul White
    The query optimizer does not always choose an optimal strategy when joining partitioned tables. This post looks at an example, showing how a manual rewrite of the query can almost double performance, while reducing the memory grant to almost nothing. Test Data The two tables in this example use a common partitioning partition scheme. The partition function uses 41 equal-size partitions: CREATE PARTITION FUNCTION PFT (integer) AS RANGE RIGHT FOR VALUES ( 125000, 250000, 375000, 500000, 625000, 750000, 875000, 1000000, 1125000, 1250000, 1375000, 1500000, 1625000, 1750000, 1875000, 2000000, 2125000, 2250000, 2375000, 2500000, 2625000, 2750000, 2875000, 3000000, 3125000, 3250000, 3375000, 3500000, 3625000, 3750000, 3875000, 4000000, 4125000, 4250000, 4375000, 4500000, 4625000, 4750000, 4875000, 5000000 ); GO CREATE PARTITION SCHEME PST AS PARTITION PFT ALL TO ([PRIMARY]); There two tables are: CREATE TABLE dbo.T1 ( TID integer NOT NULL IDENTITY(0,1), Column1 integer NOT NULL, Padding binary(100) NOT NULL DEFAULT 0x,   CONSTRAINT PK_T1 PRIMARY KEY CLUSTERED (TID) ON PST (TID) );   CREATE TABLE dbo.T2 ( TID integer NOT NULL, Column1 integer NOT NULL, Padding binary(100) NOT NULL DEFAULT 0x,   CONSTRAINT PK_T2 PRIMARY KEY CLUSTERED (TID, Column1) ON PST (TID) ); The next script loads 5 million rows into T1 with a pseudo-random value between 1 and 5 for Column1. The table is partitioned on the IDENTITY column TID: INSERT dbo.T1 WITH (TABLOCKX) (Column1) SELECT (ABS(CHECKSUM(NEWID())) % 5) + 1 FROM dbo.Numbers AS N WHERE n BETWEEN 1 AND 5000000; In case you don’t already have an auxiliary table of numbers lying around, here’s a script to create one with 10 million rows: CREATE TABLE dbo.Numbers (n bigint PRIMARY KEY);   WITH L0 AS(SELECT 1 AS c UNION ALL SELECT 1), L1 AS(SELECT 1 AS c FROM L0 AS A CROSS JOIN L0 AS B), L2 AS(SELECT 1 AS c FROM L1 AS A CROSS JOIN L1 AS B), L3 AS(SELECT 1 AS c FROM L2 AS A CROSS JOIN L2 AS B), L4 AS(SELECT 1 AS c FROM L3 AS A CROSS JOIN L3 AS B), L5 AS(SELECT 1 AS c FROM L4 AS A CROSS JOIN L4 AS B), Nums AS(SELECT ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS n FROM L5) INSERT dbo.Numbers WITH (TABLOCKX) SELECT TOP (10000000) n FROM Nums ORDER BY n OPTION (MAXDOP 1); Table T1 contains data like this: Next we load data into table T2. The relationship between the two tables is that table 2 contains ‘n’ rows for each row in table 1, where ‘n’ is determined by the value in Column1 of table T1. There is nothing particularly special about the data or distribution, by the way. INSERT dbo.T2 WITH (TABLOCKX) (TID, Column1) SELECT T.TID, N.n FROM dbo.T1 AS T JOIN dbo.Numbers AS N ON N.n >= 1 AND N.n <= T.Column1; Table T2 ends up containing about 15 million rows: The primary key for table T2 is a combination of TID and Column1. The data is partitioned according to the value in column TID alone. Partition Distribution The following query shows the number of rows in each partition of table T1: SELECT PartitionID = CA1.P, NumRows = COUNT_BIG(*) FROM dbo.T1 AS T CROSS APPLY (VALUES ($PARTITION.PFT(TID))) AS CA1 (P) GROUP BY CA1.P ORDER BY CA1.P; There are 40 partitions containing 125,000 rows (40 * 125k = 5m rows). The rightmost partition remains empty. The next query shows the distribution for table 2: SELECT PartitionID = CA1.P, NumRows = COUNT_BIG(*) FROM dbo.T2 AS T CROSS APPLY (VALUES ($PARTITION.PFT(TID))) AS CA1 (P) GROUP BY CA1.P ORDER BY CA1.P; There are roughly 375,000 rows in each partition (the rightmost partition is also empty): Ok, that’s the test data done. 
Test Query and Execution Plan The task is to count the rows resulting from joining tables 1 and 2 on the TID column: SET STATISTICS IO ON; DECLARE @s datetime2 = SYSUTCDATETIME();   SELECT COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID;   SELECT DATEDIFF(Millisecond, @s, SYSUTCDATETIME()); SET STATISTICS IO OFF; The optimizer chooses a plan using parallel hash join, and partial aggregation: The Plan Explorer plan tree view shows accurate cardinality estimates and an even distribution of rows across threads (click to enlarge the image): With a warm data cache, the STATISTICS IO output shows that no physical I/O was needed, and all 41 partitions were touched: Running the query without actual execution plan or STATISTICS IO information for maximum performance, the query returns in around 2600ms. Execution Plan Analysis The first step toward improving on the execution plan produced by the query optimizer is to understand how it works, at least in outline. The two parallel Clustered Index Scans use multiple threads to read rows from tables T1 and T2. Parallel scan uses a demand-based scheme where threads are given page(s) to scan from the table as needed. This arrangement has certain important advantages, but does result in an unpredictable distribution of rows amongst threads. The point is that multiple threads cooperate to scan the whole table, but it is impossible to predict which rows end up on which threads. For correct results from the parallel hash join, the execution plan has to ensure that rows from T1 and T2 that might join are processed on the same thread. For example, if a row from T1 with join key value ‘1234’ is placed in thread 5’s hash table, the execution plan must guarantee that any rows from T2 that also have join key value ‘1234’ probe thread 5’s hash table for matches. The way this guarantee is enforced in this parallel hash join plan is by repartitioning rows to threads after each parallel scan. The two repartitioning exchanges route rows to threads using a hash function over the hash join keys. The two repartitioning exchanges use the same hash function so rows from T1 and T2 with the same join key must end up on the same hash join thread. Expensive Exchanges This business of repartitioning rows between threads can be very expensive, especially if a large number of rows is involved. The execution plan selected by the optimizer moves 5 million rows through one repartitioning exchange and around 15 million across the other. As a first step toward removing these exchanges, consider the execution plan selected by the optimizer if we join just one partition from each table, disallowing parallelism: SELECT COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID WHERE $PARTITION.PFT(T1.TID) = 1 AND $PARTITION.PFT(T2.TID) = 1 OPTION (MAXDOP 1); The optimizer has chosen a (one-to-many) merge join instead of a hash join. The single-partition query completes in around 100ms. If everything scaled linearly, we would expect that extending this strategy to all 40 populated partitions would result in an execution time around 4000ms. Using parallelism could reduce that further, perhaps to be competitive with the parallel hash join chosen by the optimizer. This raises a question. If the most efficient way to join one partition from each of the tables is to use a merge join, why does the optimizer not choose a merge join for the full query? 
Forcing a Merge Join Let’s force the optimizer to use a merge join on the test query using a hint: SELECT COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID OPTION (MERGE JOIN); This is the execution plan selected by the optimizer: This plan results in the same number of logical reads reported previously, but instead of 2600ms the query takes 5000ms. The natural explanation for this drop in performance is that the merge join plan is only using a single thread, whereas the parallel hash join plan could use multiple threads. Parallel Merge Join We can get a parallel merge join plan using the same query hint as before, and adding trace flag 8649: SELECT COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID OPTION (MERGE JOIN, QUERYTRACEON 8649); The execution plan is: This looks promising. It uses a similar strategy to distribute work across threads as seen for the parallel hash join. In practice though, performance is disappointing. On a typical run, the parallel merge plan runs for around 8400ms; slower than the single-threaded merge join plan (5000ms) and much worse than the 2600ms for the parallel hash join. We seem to be going backwards! The logical reads for the parallel merge are still exactly the same as before, with no physical IOs. The cardinality estimates and thread distribution are also still very good (click to enlarge): A big clue to the reason for the poor performance is shown in the wait statistics (captured by Plan Explorer Pro): CXPACKET waits require careful interpretation, and are most often benign, but in this case excessive waiting occurs at the repartitioning exchanges. Unlike the parallel hash join, the repartitioning exchanges in this plan are order-preserving ‘merging’ exchanges (because merge join requires ordered inputs): Parallelism works best when threads can just grab any available unit of work and get on with processing it. Preserving order introduces inter-thread dependencies that can easily lead to significant waits occurring. In extreme cases, these dependencies can result in an intra-query deadlock, though the details of that will have to wait for another time to explore in detail. The potential for waits and deadlocks leads the query optimizer to cost parallel merge join relatively highly, especially as the degree of parallelism (DOP) increases. This high costing resulted in the optimizer choosing a serial merge join rather than parallel in this case. The test results certainly confirm its reasoning. Collocated Joins In SQL Server 2008 and later, the optimizer has another available strategy when joining tables that share a common partition scheme. This strategy is a collocated join, also known as as a per-partition join. It can be applied in both serial and parallel execution plans, though it is limited to 2-way joins in the current optimizer. Whether the optimizer chooses a collocated join or not depends on cost estimation. The primary benefits of a collocated join are that it eliminates an exchange and requires less memory, as we will see next. Costing and Plan Selection The query optimizer did consider a collocated join for our original query, but it was rejected on cost grounds. The parallel hash join with repartitioning exchanges appeared to be a cheaper option. There is no query hint to force a collocated join, so we have to mess with the costing framework to produce one for our test query. 
Pretending that IOs cost 50 times more than usual is enough to convince the optimizer to use collocated join with our test query: -- Pretend IOs are 50x cost temporarily DBCC SETIOWEIGHT(50);   -- Co-located hash join SELECT COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID OPTION (RECOMPILE);   -- Reset IO costing DBCC SETIOWEIGHT(1); Collocated Join Plan The estimated execution plan for the collocated join is: The Constant Scan contains one row for each partition of the shared partitioning scheme, from 1 to 41. The hash repartitioning exchanges seen previously are replaced by a single Distribute Streams exchange using Demand partitioning. Demand partitioning means that the next partition id is given to the next parallel thread that asks for one. My test machine has eight logical processors, and all are available for SQL Server to use. As a result, there are eight threads in the single parallel branch in this plan, each processing one partition from each table at a time. Once a thread finishes processing a partition, it grabs a new partition number from the Distribute Streams exchange…and so on until all partitions have been processed. It is important to understand that the parallel scans in this plan are different from the parallel hash join plan. Although the scans have the same parallelism icon, tables T1 and T2 are not being co-operatively scanned by multiple threads in the same way. Each thread reads a single partition of T1 and performs a hash match join with the same partition from table T2. The properties of the two Clustered Index Scans show a Seek Predicate (unusual for a scan!) limiting the rows to a single partition: The crucial point is that the join between T1 and T2 is on TID, and TID is the partitioning column for both tables. A thread that processes partition ‘n’ is guaranteed to see all rows that can possibly join on TID for that partition. In addition, no other thread will see rows from that partition, so this removes the need for repartitioning exchanges. CPU and Memory Efficiency Improvements The collocated join has removed two expensive repartitioning exchanges and added a single exchange processing 41 rows (one for each partition id). Remember, the parallel hash join plan exchanges had to process 5 million and 15 million rows. The amount of processor time spent on exchanges will be much lower in the collocated join plan. In addition, the collocated join plan has a maximum of 8 threads processing single partitions at any one time. The 41 partitions will all be processed eventually, but a new partition is not started until a thread asks for it. Threads can reuse hash table memory for the new partition. The parallel hash join plan also had 8 hash tables, but with all 5,000,000 build rows loaded at the same time. The collocated plan needs memory for only 8 * 125,000 = 1,000,000 rows at any one time. Collocated Hash Join Performance The collated join plan has disappointing performance in this case. The query runs for around 25,300ms despite the same IO statistics as usual. This is much the worst result so far, so what went wrong? It turns out that cardinality estimation for the single partition scans of table T1 is slightly low. The properties of the Clustered Index Scan of T1 (graphic immediately above) show the estimation was for 121,951 rows. 
This is a small shortfall compared with the 125,000 rows actually encountered, but it was enough to cause the hash join to spill to physical tempdb: A level 1 spill doesn’t sound too bad, until you realize that the spill to tempdb probably occurs for each of the 41 partitions. As a side note, the cardinality estimation error is a little surprising because the system tables accurately show there are 125,000 rows in every partition of T1. Unfortunately, the optimizer uses regular column and index statistics to derive cardinality estimates here rather than system table information (e.g. sys.partitions). Collocated Merge Join We will never know how well the collocated parallel hash join plan might have worked without the cardinality estimation error (and the resulting 41 spills to tempdb) but we do know: Merge join does not require a memory grant; and Merge join was the optimizer’s preferred join option for a single partition join Putting this all together, what we would really like to see is the same collocated join strategy, but using merge join instead of hash join. Unfortunately, the current query optimizer cannot produce a collocated merge join; it only knows how to do collocated hash join. So where does this leave us? CROSS APPLY sys.partitions We can try to write our own collocated join query. We can use sys.partitions to find the partition numbers, and CROSS APPLY to get a count per partition, with a final step to sum the partial counts. The following query implements this idea: SELECT row_count = SUM(Subtotals.cnt) FROM ( -- Partition numbers SELECT p.partition_number FROM sys.partitions AS p WHERE p.[object_id] = OBJECT_ID(N'T1', N'U') AND p.index_id = 1 ) AS P CROSS APPLY ( -- Count per collocated join SELECT cnt = COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID WHERE $PARTITION.PFT(T1.TID) = p.partition_number AND $PARTITION.PFT(T2.TID) = p.partition_number ) AS SubTotals; The estimated plan is: The cardinality estimates aren’t all that good here, especially the estimate for the scan of the system table underlying the sys.partitions view. Nevertheless, the plan shape is heading toward where we would like to be. Each partition number from the system table results in a per-partition scan of T1 and T2, a one-to-many Merge Join, and a Stream Aggregate to compute the partial counts. The final Stream Aggregate just sums the partial counts. Execution time for this query is around 3,500ms, with the same IO statistics as always. This compares favourably with 5,000ms for the serial plan produced by the optimizer with the OPTION (MERGE JOIN) hint. This is another case of the sum of the parts being less than the whole – summing 41 partial counts from 41 single-partition merge joins is faster than a single merge join and count over all partitions. Even so, this single-threaded collocated merge join is not as quick as the original parallel hash join plan, which executed in 2,600ms. On the positive side, our collocated merge join uses only one logical processor and requires no memory grant. The parallel hash join plan used 16 threads and reserved 569 MB of memory:   Using a Temporary Table Our collocated merge join plan should benefit from parallelism. The reason parallelism is not being used is that the query references a system table. 
We can work around that by writing the partition numbers to a temporary table (or table variable):

SET STATISTICS IO ON;
DECLARE @s datetime2 = SYSUTCDATETIME();

CREATE TABLE #P
(
    partition_number integer PRIMARY KEY
);

INSERT #P (partition_number)
SELECT p.partition_number
FROM sys.partitions AS p
WHERE p.[object_id] = OBJECT_ID(N'T1', N'U')
AND p.index_id = 1;

SELECT row_count = SUM(Subtotals.cnt)
FROM #P AS p
CROSS APPLY
(
    SELECT cnt = COUNT_BIG(*)
    FROM dbo.T1 AS T1
    JOIN dbo.T2 AS T2
        ON T2.TID = T1.TID
    WHERE $PARTITION.PFT(T1.TID) = p.partition_number
    AND $PARTITION.PFT(T2.TID) = p.partition_number
) AS SubTotals;

DROP TABLE #P;

SELECT DATEDIFF(Millisecond, @s, SYSUTCDATETIME());
SET STATISTICS IO OFF;

Using the temporary table adds a few logical reads, but the overall execution time is still around 3,500ms, indistinguishable from the same query without the temporary table. The problem is that the query optimizer still doesn’t choose a parallel plan for this query, though the removal of the system table reference means that it could if it chose to:

In fact the optimizer did enter the parallel plan phase of query optimization (running search 1 for a second time):

Unfortunately, the parallel plan found seemed to be more expensive than the serial plan. This is a crazy result, caused by the optimizer’s cost model not reducing operator CPU costs on the inner side of a nested loops join. Don’t get me started on that, we’ll be here all night. In this plan, everything expensive happens on the inner side of a nested loops join. Without a CPU cost reduction to compensate for the added cost of exchange operators, candidate parallel plans always look more expensive to the optimizer than the equivalent serial plan.

Parallel Collocated Merge Join

We can produce the desired parallel plan using trace flag 8649 again:

SELECT row_count = SUM(Subtotals.cnt)
FROM #P AS p
CROSS APPLY
(
    SELECT cnt = COUNT_BIG(*)
    FROM dbo.T1 AS T1
    JOIN dbo.T2 AS T2
        ON T2.TID = T1.TID
    WHERE $PARTITION.PFT(T1.TID) = p.partition_number
    AND $PARTITION.PFT(T2.TID) = p.partition_number
) AS SubTotals
OPTION (QUERYTRACEON 8649);

The actual execution plan is:

One difference between this plan and the collocated hash join plan is that a Repartition Streams exchange operator is used instead of Distribute Streams. The effect is similar, though not quite identical. The Repartition uses round-robin partitioning, meaning the next partition id is pushed to the next thread in sequence. The Distribute Streams exchange seen earlier used Demand partitioning, meaning the next partition id is pulled across the exchange by the next thread that is ready for more work. There are subtle performance implications for each partitioning option, but going into that would again take us too far off the main point of this post.

Performance

The important thing is the performance of this parallel collocated merge join – just 1,350ms on a typical run. The list below shows all the alternatives from this post (all timings include creation, population, and deletion of the temporary table where appropriate) from quickest to slowest:

- Collocated parallel merge join: 1,350ms
- Parallel hash join: 2,600ms
- Collocated serial merge join: 3,500ms
- Serial merge join: 5,000ms
- Parallel merge join: 8,400ms
- Collocated parallel hash join: 25,300ms (hash spill per partition)

The parallel collocated merge join requires no memory grant (aside from a paltry 1.2MB used for exchange buffers).
This plan uses 16 threads at DOP 8, but 8 of those are (rather pointlessly) allocated to the parallel scan of the temporary table. These are minor concerns, but it turns out there is a way to address them if it bothers you.

Parallel Collocated Merge Join with Demand Partitioning

This final tweak replaces the temporary table with a hard-coded list of partition ids (dynamic SQL could be used to generate this query from sys.partitions; a hedged sketch of that idea appears at the end of this article):

SELECT row_count = SUM(Subtotals.cnt)
FROM
(
    VALUES
        (1),(2),(3),(4),(5),(6),(7),(8),(9),(10),
        (11),(12),(13),(14),(15),(16),(17),(18),(19),(20),
        (21),(22),(23),(24),(25),(26),(27),(28),(29),(30),
        (31),(32),(33),(34),(35),(36),(37),(38),(39),(40),(41)
) AS P (partition_number)
CROSS APPLY
(
    SELECT cnt = COUNT_BIG(*)
    FROM dbo.T1 AS T1
    JOIN dbo.T2 AS T2
        ON T2.TID = T1.TID
    WHERE $PARTITION.PFT(T1.TID) = p.partition_number
    AND $PARTITION.PFT(T2.TID) = p.partition_number
) AS SubTotals
OPTION (QUERYTRACEON 8649);

The actual execution plan is:

The parallel collocated hash join plan is reproduced below for comparison:

The manual rewrite has another advantage that has not been mentioned so far: the partial counts (per partition) can be computed earlier than the partial counts (per thread) in the optimizer’s collocated join plan. The earlier aggregation is performed by the extra Stream Aggregate under the nested loops join. The performance of the parallel collocated merge join is unchanged at around 1,350ms.

Final Words

It is a shame that the current query optimizer does not consider a collocated merge join (Connect item closed as Won’t Fix). The example used in this post showed an improvement in execution time from 2,600ms to 1,350ms using a modestly-sized data set and limited parallelism. In addition, the memory requirement for the query was almost completely eliminated – down from 569MB to 1.2MB.

The problem with the parallel hash join selected by the optimizer is that it attempts to process the full data set all at once (albeit using eight threads). It requires a large memory grant to hold all 5 million rows from table T1 across the eight hash tables, and does not take advantage of the divide-and-conquer opportunity offered by the common partitioning. The great thing about the collocated join strategies is that each parallel thread works on a single partition from both tables, reading rows, performing the join, and computing a per-partition subtotal, before moving on to a new partition.

From a thread’s point of view…

If you have trouble visualizing what is happening from just looking at the parallel collocated merge join execution plan, let’s look at it again, but from the point of view of just one thread operating between the two Parallelism (exchange) operators. Our thread picks up a single partition id from the Distribute Streams exchange, and starts a merge join using ordered rows from partition 1 of table T1 and partition 1 of table T2. By definition, this is all happening on a single thread. As rows join, they are added to a (per-partition) count in the Stream Aggregate immediately above the Merge Join. Eventually, either T1 (partition 1) or T2 (partition 1) runs out of rows and the merge join stops. The per-partition count from the aggregate passes on through the Nested Loops join to another Stream Aggregate, which is maintaining a per-thread subtotal. Our same thread now picks up a new partition id from the exchange (say it gets id 9 this time).
The count in the per-partition aggregate is reset to zero, and the processing of partition 9 of both tables proceeds just as it did for partition 1, and on the same thread. Each thread picks up a single partition id and processes all the data for that partition, completely independently from other threads working on other partitions. One thread might eventually process partitions (1, 9, 17, 25, 33, 41) while another is concurrently processing partitions (2, 10, 18, 26, 34) and so on for the other six threads at DOP 8. The point is that all 8 threads can execute independently and concurrently, continuing to process new partitions until the wider job (of which the thread has no knowledge!) is done. This divide-and-conquer technique can be much more efficient than simply splitting the entire workload across eight threads all at once.

Related Reading

Understanding and Using Parallelism in SQL Server
Parallel Execution Plans Suck

© 2013 Paul White – All Rights Reserved
Twitter: @SQL_Kiwi
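Addendum: as noted in the Demand Partitioning section, the hard-coded VALUES list can be generated with dynamic SQL from sys.partitions. The sketch below is one way to do that; it is not taken from the original article, and it assumes the same dbo.T1 and dbo.T2 tables, the PFT partition function, and the undocumented trace flag 8649 used throughout the post:

-- Hedged sketch: build "(1),(2),...,(41)" from the partition metadata of T1's clustered index
DECLARE @partition_list nvarchar(max);

SELECT @partition_list = STUFF(
(
    SELECT N',(' + CONVERT(nvarchar(10), p.partition_number) + N')'
    FROM sys.partitions AS p
    WHERE p.[object_id] = OBJECT_ID(N'T1', N'U')
    AND p.index_id = 1
    ORDER BY p.partition_number
    FOR XML PATH('')
), 1, 1, N'');

-- Assemble and run the same collocated-join query shape as the hand-written version
DECLARE @sql nvarchar(max) =
N'SELECT row_count = SUM(Subtotals.cnt)
FROM (VALUES ' + @partition_list + N') AS P (partition_number)
CROSS APPLY
(
    SELECT cnt = COUNT_BIG(*)
    FROM dbo.T1 AS T1
    JOIN dbo.T2 AS T2
        ON T2.TID = T1.TID
    WHERE $PARTITION.PFT(T1.TID) = P.partition_number
    AND $PARTITION.PFT(T2.TID) = P.partition_number
) AS SubTotals
OPTION (QUERYTRACEON 8649);';

EXECUTE sys.sp_executesql @sql;

The generated statement has the same shape as the hand-written version above, so the plan (and the roughly 1,350ms runtime) should be unchanged; the only practical difference is that the partition list tracks sys.partitions automatically if the partition function is later split or merged.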

    Read the article

  • Windows Azure: Backup Services Release, Hyper-V Recovery Manager, VM Enhancements, Enhanced Enterprise Management Support

    - by ScottGu
    This morning we released a huge set of updates to Windows Azure.  These new capabilities include: Backup Services: General Availability of Windows Azure Backup Services Hyper-V Recovery Manager: Public preview of Windows Azure Hyper-V Recovery Manager Virtual Machines: Delete Attached Disks, Availability Set Warnings, SQL AlwaysOn Configuration Active Directory: Securely manage hundreds of SaaS applications Enterprise Management: Use Active Directory to Better Manage Windows Azure Windows Azure SDK 2.2: A massive update of our SDK + Visual Studio tooling support All of these improvements are now available to use immediately.  Below are more details about them. Backup Service: General Availability Release of Windows Azure Backup Today we are releasing Windows Azure Backup Service as a general availability service.  This release is now live in production, backed by an enterprise SLA, supported by Microsoft Support, and is ready to use for production scenarios. Windows Azure Backup is a cloud based backup solution for Windows Server which allows files and folders to be backed up and recovered from the cloud, and provides off-site protection against data loss. The service provides IT administrators and developers with the option to back up and protect critical data in an easily recoverable way from any location with no upfront hardware cost. Windows Azure Backup is built on the Windows Azure platform and uses Windows Azure blob storage for storing customer data. Windows Server uses the downloadable Windows Azure Backup Agent to transfer file and folder data securely and efficiently to the Windows Azure Backup Service. Along with providing cloud backup for Windows Server, Windows Azure Backup Service also provides capability to backup data from System Center Data Protection Manager and Windows Server Essentials, to the cloud. All data is encrypted onsite before it is sent to the cloud, and customers retain and manage the encryption key (meaning the data is stored entirely secured and can’t be decrypted by anyone but yourself). Getting Started To get started with the Windows Azure Backup Service, create a new Backup Vault within the Windows Azure Management Portal.  Click New->Data Services->Recovery Services->Backup Vault to do this: Once the backup vault is created you’ll be presented with a simple tutorial that will help guide you on how to register your Windows Servers with it: Once the servers you want to backup are registered, you can use the appropriate local management interface (such as the Microsoft Management Console snap-in, System Center Data Protection Manager Console, or Windows Server Essentials Dashboard) to configure the scheduled backups and to optionally initiate recoveries. You can follow these tutorials to learn more about how to do this: Tutorial: Schedule Backups Using the Windows Azure Backup Agent This tutorial helps you with setting up a backup schedule for your registered Windows Servers. Additionally, it also explains how to use Windows PowerShell cmdlets to set up a custom backup schedule. Tutorial: Recover Files and Folders Using the Windows Azure Backup Agent This tutorial helps you with recovering data from a backup. Additionally, it also explains how to use Windows PowerShell cmdlets to do the same tasks. Below are some of the key benefits the Windows Azure Backup Service provides: Simple configuration and management. 
Windows Azure Backup Service integrates with the familiar Windows Server Backup utility in Windows Server, the Data Protection Manager component in System Center, and Windows Server Essentials, in order to provide a seamless backup and recovery experience to a local disk, or to the cloud.

Block level incremental backups. The Windows Azure Backup Agent performs incremental backups by tracking file and block level changes and only transferring the changed blocks, hence reducing the storage and bandwidth utilization. Different point-in-time versions of the backups use storage efficiently by only storing the changed blocks between these versions.

Data compression, encryption and throttling. The Windows Azure Backup Agent ensures that data is compressed and encrypted on the server before being sent to the Windows Azure Backup Service over the network. As a result, the Windows Azure Backup Service only stores encrypted data in the cloud storage. The encryption key is not available to the Windows Azure Backup Service, and as a result the data is never decrypted in the service. Also, users can set up throttling and configure how the Windows Azure Backup Service utilizes the network bandwidth when backing up or restoring information.

Data integrity is verified in the cloud. In addition to the secure backups, the backed up data is also automatically checked for integrity once the backup is done. As a result, any corruption that may arise due to data transfer can be easily identified and fixed automatically.

Configurable retention policies for storing data in the cloud. The Windows Azure Backup Service accepts and implements retention policies to recycle backups that exceed the desired retention range, thereby meeting business policies and managing backup costs.

Hyper-V Recovery Manager: Now Available in Public Preview

I'm excited to also announce the public preview of a new Windows Azure Service – the Windows Azure Hyper-V Recovery Manager (HRM). Windows Azure Hyper-V Recovery Manager helps protect your business critical services by coordinating the replication and recovery of System Center Virtual Machine Manager 2012 SP1 and System Center Virtual Machine Manager 2012 R2 private clouds at a secondary location. With automated protection, asynchronous ongoing replication, and orderly recovery, the Hyper-V Recovery Manager service can help you implement Disaster Recovery and restore important services accurately, consistently, and with minimal downtime. Application data in Hyper-V Recovery Manager scenarios always travels on your on-premises replication channel. Only metadata (such as names of logical clouds, virtual machines, networks etc.) that is needed for orchestration is sent to Azure. All traffic sent to/from Azure is encrypted. You can begin using Windows Azure Hyper-V Recovery Manager today by clicking New->Data Services->Recovery Services->Hyper-V Recovery Manager within the Windows Azure Management Portal.  You can read more about Windows Azure Hyper-V Recovery Manager in Brad Anderson's 9-part series, Transform the datacenter. To learn more about setting up Hyper-V Recovery Manager, follow our detailed step-by-step guide.

Virtual Machines: Delete Attached Disks, Availability Set Warnings, SQL AlwaysOn

Today's Windows Azure release includes a number of nice updates to Windows Azure Virtual Machines.
These improvements include: Ability to Delete both VM Instances + Attached Disks in One Operation Prior to today’s release, when you deleted VMs within Windows Azure we would delete the VM instance – but not delete the drives attached to the VM.  You had to manually delete these yourself from the storage account.  With today’s update we’ve added a convenience option that now allows you to either retain or delete the attached disks when you delete the VM:   We’ve also added the ability to delete a cloud service, its deployments, and its role instances with a single action. This can either be a cloud service that has production and staging deployments with web and worker roles, or a cloud service that contains virtual machines.  To do this, simply select the Cloud Service within the Windows Azure Management Portal and click the “Delete” button: Warnings on Availability Sets with Only One Virtual Machine In Them One of the nice features that Windows Azure Virtual Machines supports is the concept of “Availability Sets”.  An “availability set” allows you to define a tier/role (e.g. webfrontends, databaseservers, etc) that you can map Virtual Machines into – and when you do this Windows Azure separates them across fault domains and ensures that at least one of them is always available during servicing operations.  This enables you to deploy applications in a high availability way. One issue we’ve seen some customers run into is where they define an availability set, but then forget to map more than one VM into it (which defeats the purpose of having an availability set).  With today’s release we now display a warning in the Windows Azure Management Portal if you have only one virtual machine deployed in an availability set to help highlight this: You can learn more about configuring the availability of your virtual machines here. Configuring SQL Server Always On SQL Server Always On is a great feature that you can use with Windows Azure to enable high availability and DR scenarios with SQL Server. Today’s Windows Azure release makes it even easier to configure SQL Server Always On by enabling “Direct Server Return” endpoints to be configured and managed within the Windows Azure Management Portal.  Previously, setting this up required using PowerShell to complete the endpoint configuration.  Starting today you can enable this simply by checking the “Direct Server Return” checkbox: You can learn more about how to use direct server return for SQL Server AlwaysOn availability groups here. Active Directory: Application Access Enhancements This summer we released our initial preview of our Application Access Enhancements for Windows Azure Active Directory.  This service enables you to securely implement single-sign-on (SSO) support against SaaS applications (including Office 365, SalesForce, Workday, Box, Google Apps, GitHub, etc) as well as LOB based applications (including ones built with the new Windows Azure AD support we shipped last week with ASP.NET and VS 2013). Since the initial preview we’ve enhanced our SAML federation capabilities, integrated our new password vaulting system, and shipped multi-factor authentication support. We've also turned on our outbound identity provisioning system and have it working with hundreds of additional SaaS Applications: Earlier this month we published an update on dates and pricing for when the service will be released in general availability form.  
In this blog post we announced our intention to release the service in general availability form by the end of the year.  We also announced that the features below would be available in a free tier with it:

SSO to every SaaS app we integrate with – Users can Single Sign On to any app we are integrated with at no charge. This includes all the top SaaS apps and every app in our application gallery, whether they use federation or password vaulting.

Application access assignment and removal – IT Admins can assign access privileges to web applications to the users in their active directory, assuring that every employee has access to the SaaS apps they need. And when a user leaves the company or changes jobs, the admin can just as easily remove their access privileges, assuring data security and minimizing IP loss.

User provisioning (and de-provisioning) – IT admins will be able to automatically provision users in 3rd party SaaS applications like Box, Salesforce.com, GoToMeeting, DropBox and others. We are working with key partners in the ecosystem to establish these connections, meaning you no longer have to continually update user records in multiple systems.

Security and auditing reports – Security is a key priority for us. With the free version of these enhancements you'll get access to our standard set of access reports, giving you visibility into which users are using which applications, when they were using them and where they are using them from. In addition, we'll alert you to unusual usage patterns, for instance when a user logs in from multiple locations at the same time.

Our Application Access Panel – Users are logging in from every type of device, including Windows, iOS, & Android. Not all of these devices handle authentication in the same manner, but the user doesn't care. They need to access their apps from the devices they love. Our Application Access Panel will support the ability for users to access and launch their apps from any device, anywhere.

You can learn more about our plans for application management with Windows Azure Active Directory here.  Try out the preview and start using it today.

Enterprise Management: Use Active Directory to Better Manage Windows Azure

Windows Azure Active Directory provides the ability to manage your organization in a directory which is hosted entirely in the cloud, or alternatively kept in sync with an on-premises Windows Server Active Directory solution (allowing you to seamlessly integrate with the directory you already have).  With today's Windows Azure release we are integrating Windows Azure Active Directory even more within the core Windows Azure management experience, and enabling an even richer enterprise security offering.  Specifically:

1) All Windows Azure accounts now have a default Windows Azure Active Directory created for them.  You can create and map any users you want into this directory, and grant administrative rights to manage resources in Windows Azure to these users.

2) You can keep this directory entirely hosted in the cloud – or optionally sync it with your on-premises Windows Server Active Directory.  Both options are free.  The latter approach is ideal for companies that wish to use their corporate user identities to sign in to and manage Windows Azure resources.  It also ensures that if an employee leaves an organization, his or her access control rights to the company's Windows Azure resources are immediately revoked.
3) The Windows Azure Service Management APIs have been updated to support using Windows Azure Active Directory credentials to sign in and perform management operations.  Prior to today's release customers had to download and use management certificates (which were not scoped to individual users) to perform management operations.  We still support this management certificate approach (don't worry – nothing will stop working).  But we think the new Windows Azure Active Directory authentication support enables an even easier and more secure way for customers to manage resources going forward.

4) The Windows Azure SDK 2.2 release (which is also shipping today) includes built-in support for the new Service Management APIs that authenticate with Windows Azure Active Directory, and now allows you to create and manage Windows Azure applications and resources directly within Visual Studio using your Active Directory credentials.  This, combined with updated PowerShell scripts that also support Active Directory, enables an end-to-end enterprise authentication story with Windows Azure.

Below are some details on how all of this works:

Subscriptions within a Directory

As part of today's update, we have associated all existing Windows Azure accounts with a Windows Azure Active Directory (and created one for you if you don't already have one). When you log in to the Windows Azure Management Portal you'll now see the directory name in the URI of the browser.  For example, in the screen-shot below you can see that I have a "scottgu" directory that my subscriptions are hosted within:

Note that you can continue to use Microsoft Accounts (formerly known as Microsoft Live IDs) to sign in to Windows Azure.  These map just fine to a Windows Azure Active Directory – so there is no need to create new usernames that are specific to a directory if you don't want to.  In the scenario above I'm actually logged in using my @hotmail.com based Microsoft ID, which is now mapped to a "scottgu" active directory that was created for me.  By default everything will continue to work just like it did before.

Manage your Directory

You can manage an Active Directory (including the one we now create for you by default) by clicking the "Active Directory" tab on the left-hand side of the portal.  This will list all of the directories in your account.  Clicking one for the first time will display a getting started page that provides documentation and links to perform common tasks with it:

You can use the built-in directory management support within the Windows Azure Management Portal to add/remove/manage users within the directory, enable multi-factor authentication, associate a custom domain (e.g. mycompanyname.com) with the directory, and/or rename the directory to whatever friendly name you want (just click the configure tab to do this).  You can also set up the directory to automatically sync with an on-premises Active Directory using the "Directory Integration" tab.

Note that users within a directory do not, by default, have admin rights to log in to or manage Windows Azure based resources.  You still need to explicitly grant them co-admin permissions on a subscription for them to log in or manage resources in Windows Azure.  You can do this by clicking the Settings tab on the left-hand side of the portal and then clicking the administrators tab within it.
Sign-In Integration within Visual Studio

If you install the new Windows Azure SDK 2.2 release, you can now connect to Windows Azure from directly inside Visual Studio without having to download any management certificates.  You can now just right-click on the "Windows Azure" icon within the Server Explorer and choose the "Connect to Windows Azure" context menu option to do so:

Doing this will prompt you to enter the email address of the username you wish to sign in with (make sure this account is a user in your directory with co-admin rights on a subscription):

You can use either a Microsoft Account (e.g. Windows Live ID) or an Active Directory based Organizational account as the email.  The dialog will update with an appropriate login prompt depending on which type of email address you enter:

Once you sign in, you'll see the Windows Azure resources that you have permissions to manage show up automatically within the Visual Studio Server Explorer and be available to start using:

No downloading of management certificates required.  All of the authentication was handled using your Windows Azure Active Directory!

Manage Subscriptions across Multiple Directories

If you already have multiple directories and multiple subscriptions within your Windows Azure account, we have done our best to create a good default mapping of your subscriptions->directories as part of today's update.  If you don't like the default subscription-to-directory mapping we have done, you can click the Settings tab in the left-hand navigation of the Windows Azure Management Portal and browse to the Subscriptions tab within it:

If you want to map a subscription under a different directory in your account, simply select the subscription from the list, and then click the "Edit Directory" button to choose which directory to map it to.  Mapping a subscription to a different directory takes only seconds and will not cause any of the resources within the subscription to recycle or stop working.  We've made the directory->subscription mapping process self-service so that you always have complete control and can map things however you want.

Filtering By Directory and Subscription

Within the Windows Azure Management Portal you can filter resources in the portal by subscription (allowing you to show/hide different subscriptions).  If you have subscriptions mapped to multiple directory tenants, we also now have a filter drop-down that allows you to filter the subscription list by directory tenant.  This filter is only available if you have multiple subscriptions mapped to multiple directories within your Windows Azure Account:

Windows Azure SDK 2.2

Today we are also releasing a major update of our Windows Azure SDK.  The Windows Azure SDK 2.2 release adds some great new features including:

- Visual Studio 2013 Support
- Integrated Windows Azure Sign-In support within Visual Studio
- Remote Debugging Cloud Services with Visual Studio
- Firewall Management support within Visual Studio for SQL Databases
- Visual Studio 2013 RTM VM Images for MSDN Subscribers
- Windows Azure Management Libraries for .NET
- Updated Windows Azure PowerShell Cmdlets and ScriptCenter

I'll post a follow-up blog shortly with more details about all of the above.
Additional Updates In addition to the above enhancements, today’s release also includes a number of additional improvements: AutoScale: Richer time and date based scheduling support (set different rules on different dates) AutoScale: Ability to Scale to Zero Virtual Machines (very useful for Dev/Test scenarios) AutoScale: Support for time-based scheduling of Mobile Service AutoScale rules Operation Logs: Auditing support for Service Bus management operations Today we also shipped a major update to the Windows Azure SDK – Windows Azure SDK 2.2.  It has so much goodness in it that I have a whole second blog post coming shortly on it! :-) Summary Today’s Windows Azure release enables a bunch of great new scenarios, and enables a much richer enterprise authentication offering. If you don’t already have a Windows Azure account, you can sign-up for a free trial and start using all of the above features today.  Then visit the Windows Azure Developer Center to learn more about how to build apps with it. Hope this helps, Scott P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • Post-Purchase Social Media

    - by David Dorf
When you make a particularly good purchase, the natural tendency is to share the experience with friends. You show them your cool new toy or garment, then explain how you discovered such a great deal, all the while implying you are the world's most savvy shopper. My wife does it with clothes, housewares, and books, and I do it with wiz-bang techie stuff. Post-purchase euphoria or buyer's remorse is associated with most purchases beyond day-to-day needs. So now let's add social media to the mix. Haul videos are a YouTube phenomenon where a shopper describes their latest haul on video. Blair Fowler, aka juicystar07, is an excellent example. She and her older sister's haul videos have been viewed 75,000,000 times, at times causing particular items to sell out after being showcased. If you're not already on this bandwagon, check out Blair's haul video from her trip to Forever21. There are a couple of good articles on this trend from ABC's GMA, Slate, and NPR. Some retailers are already sending free products to these fashionistas in the hopes they'll be reviewed on camera. For those less willing to exert themselves, there's Blippy, a service that automatically tweets your purchases. Similar to Twitter, your purchases are tweeted so your friends can see what you've purchased and your network can make comments. In the example to the right, co-founder Philip Kaplan purchased a gift for his wife from the store Does Your Mother Know, proving the point that the need for privacy is overblown. Blippy has partnerships with selected merchants like Apple, Amazon, and Netflix and can also get purchases from the credit cards you've registered. When you register, you can configure whether to automatically tweet each purchase, or approve them first. No sense in broadcasting my need for Rogaine, right? This is a good thing for retailers, as it helps spread the word about purchases and gives other people ideas. Rick just bought an ooma from Amazon. What the heck is ooma? Oh, it's like Vonage but with no monthly bills. I'm there.

    Read the article

  • Oracle Service Registry 11gR1 Support for Oracle Fusion Middleware/SOA Suite 11g PatchSet 2

    - by Dave Berry
    As you might be aware, a few days back we released Patchset 2 (PS2) for several products in the Oracle Fusion Middleware 11g Release 1 stack including WebLogic Server and SOA Suite. Though there was no patchset released for Oracle Service Registry (OSR) 11g, being an integral part of Fusion Middleware & SOA, OSR 11g R1 ( 11.1.1.2 ) is fully certified with this release. Below is some recommended reading before installing OSR 11g with the new PS2 : OSR 11g R1 & SOA Suite 11g PS2 in a Shared WebLogic Domain If you intend to deploy OSR 11g in the same domain as the SOA Suite 11g, the primary recommendation is to install OSR 11g in its own Managed Server within the same Weblogic Domain as the SOA Suite, as the following diagram depicts : An important pre-requisite for this setup is to apply Patch 9499508, after installation. It basically replaces a registry library - wasp.jar - in the registry application deployed on your server, so as to enable co-deployment of OSR 11g & SOA Suite 11g in the same WLS Domain. The patch fixes a java.lang.LinkageError: loader constraint violation that appears in your OSR system log and is now available for download. The second, equally important, pre-requisite is to modify the setDomainEnv.sh/.cmd file for your WebLogic Domain to conditionally set the CLASSPATH so that the oracle.soa.fabric.jar library is not included in it for the Managed Server(s) hosting OSR 11g. Both these pre-requisites and other OSR 11g Topology Best Practices are covered in detail in the new Knowledge Base article Oracle Service Registry 11g Topology : Best Practices. Architecting an OSR 11g High Availability Setup Typically you would want to create a High Availability (HA) OSR 11g setup, especially on your production system. The following illustrates the recommended topology. The article, Hands-on Guide to Creating an Oracle Service Registry 11g High-Availability Setup on Oracle WebLogic Server 11g on OTN provides step-by-step instructions for creating such an active-active HA setup of multiple OSR 11g nodes with a Load Balancer in an Oracle WebLogic Server cluster environment. Additional Info The OSR Home Page on OTN is the hub for OSR and is regularly updated with latest information, articles, white papers etc. For further reading, this FAQ answers some common questions on OSR. The OSR Certification Matrix lists the Application Servers, Databases, Artifact Storage Tools, Web Browsers, IDEs, etc... that OSR 11g is certified against. If you hit any problems during OSR 11g installation, design time or runtime, the first place to look into is the logs. To find more details about which logs to check when & where, take a look at Where to find Oracle Service Registry Logs? Finally, if you have any questions or problems, there are various ways to reach us - on the SOA Governance forum on OTN, on the Community Forums or by contacting Oracle Support. Yogesh Sontakke and Dave Berry

    Read the article

  • Open Data, Government and Transparency

    - by Tori Wieldt
A new track at TDC (The Developer's Conference in Sao Paulo, Brazil) is titled Open Data. It deals with open data, government and transparency. Saturday will be a "transparency hacker day" where developers are invited to create applications using open data from the Brazilian government.  Alexandre Gomes, co-lead of the track, says "I want to inspire developers to become 'civic hackers': developers who create apps to make society better." It is a chance for developers to do well and do good. There are many opportunities for developers, including monitoring government expenditures and getting citizens involved via social networks. The open data movement is growing worldwide. One initiative, the Open Government Partnership, is working to make government data easier to find and access. Making this data easily available means that with the right applications, it will be easier for people to make decisions and suggestions about government policies based on detailed information. Last April, the Open Government Partnership held its annual meeting in Brasilia, the capital of Brazil. It was a great success, showcasing the innovative work being done in open data by governments, civil societies and individuals around the world. For example, Bulgaria now publishes daily data on budget spending for all public institutions.

Alexandre Gomes Explains Open Data

At TDC, the Open Data track will include a presentation of examples of successful open data projects, an introduction to the semantic web, how to handle big data sets, techniques of data visualization, and how to design APIs. The other track lead is Christian Moryah Miranda, a systems analyst for the Brazilian Government's Ministry of Planning. "The Brazilian government wholeheartedly supports this effort. In order to make our data available to the public, it forces us to be more consistent with our data across ministries, and that's a good step forward for us," he said. He explained the government knows they cannot achieve everything they would like without help from the public. "It is not the government versus the people, rather citizens are partners with the government, and together we can achieve great things!" Miranda exclaimed. Saturday at TDC will be a "transparency hacker day" where developers will be invited to create applications using open data from the Brazilian government. Attendees are invited to pitch their ideas, work in small groups, and present their project at the end of the conference. "For example," Gomes said, "the Brazilian government just released the salaries of all government employees and I can't wait to see what developers can do with that."

Resources
Open Government Partnership
U.S. Government Open Data Project
Brazilian Government Open Data Project
U.K. Government Open Data Project
2012 International Open Government Data Conference

    Read the article

  • SQLAuthority News – Virtual Launch Event for Office 2010 – Contest – Win MS Office License

    - by pinaldave
Office products are an integral part of any PC. I accept that without Office suites, I cannot survive or make enough of a living. I am a blogger and use Word to create my blogs. I am a SQL Server trainer and I use PowerPoint as my presentation tool. I am a SQL Server consultant and I use Excel to keep my work log. I cannot see my life without Office tools. Just like any other Microsoft product, there is a strong community following Office tools. Please count me in. The same community is hosting a Virtual Launch Event for Office 2010 on May 25th and 26th. The webcasts are FREE to attend and people can take part either online or by going to the nearest available center. The sessions will be delivered by MVPs. To register please visit: http://www.meraoffice.com. In June, a limited number of cities will be hosting Community Launch Events for Office 2010. At the launch events, attendees will get to see Office 2010 in action and learn how to do their work better with Office 2010.  The details are available on http://office.merawindows.com. To support one of the largest communities, I am announcing a contest. It is very easy to take part in the contest. You just have to answer one very simple question.

Contest: Choose the best option: With which Microsoft Office product is PowerPivot associated?
Options: 1) PowerPoint 2) Excel 3) Word
Hint: http://search.sqlauthority.com

Rules: The winner will be awarded one copy of Office 2007 Home and Student. This will be freely upgradeable to Office 2010 once it releases in June. The winners will be sent emails and they will redeem their awards via microsoftstore.co.in. The prizes can only be shipped to India, and only Indian residents are eligible. The winner will be selected by community leaders and MVPs at their sole discretion. The winner will be informed about the award by email. The most creative and informative comment will win the contest. Please spread the word about this contest. SQLAuthority.com will also send a SQL Server book to the person who generates the most traffic to this blog post using Twitter, Facebook and other social media. This competition is also open to Indian residents only. I will measure the traffic using my wordpress.com stats plugin. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority News, T SQL, Technology Tagged: Office

    Read the article

  • Innovation, Adaptability and Agility Emerge As Common Themes at ACORD LOMA Insurance Forum

    - by [email protected]
Helen Pitts, senior product marketing manager for Oracle Insurance, is blogging from the show floor of the ACORD LOMA Insurance Forum this week. Sessions at the ACORD LOMA Insurance Forum this week highlighted the need for insurance companies to think creatively and be innovative with their technology in order to adapt to continuously shifting market dynamics and drive business efficiency and agility.  LOMA President & CEO Robert Kerzner kicked off the day on Tuesday, citing how the recent downturn and recovery has impacted the insurance industry and the ways that companies are doing business.  He encouraged carriers to look for new ways to deliver solutions and offer a better service experience for consumers.  ACORD President & CEO Gregory Maciag reinforced Kerzner's remarks, noting how the industry's approach to technology and development of industry standards has evolved over the association's 40-year history, and cited how the continued rise of mobile computing will change the way many carriers are doing business today and in the future. Drawing from his own experiences, popular keynote speaker and Apple Co-Founder Steve Wozniak continued this theme, delving into ways that insurers can unite business with technology.  "iWoz" encouraged insurers to foster an entrepreneurial mindset in a corporate environment to create a culture of creativity and innovation.  He noted that true innovation in business comes from those who have a passion for what they do.  Innovation was also a common theme in several sessions throughout the day, with topics ranging from modernization of core systems to automated underwriting, distribution management, CRM and customer communications management.  It was evident that insurers have begun to move past the "old school" processes and systems that constrain agility, implementing new process models and modern technology to become nimble and more adaptive to the market.   Oracle Insurance executives shared a few examples of how insurers are achieving innovation during our Platinum Sponsor session, "Adaptive System Transformation:  Making Agility More Than a Buzzword." Oracle Insurance Senior Vice President and General Manager Don Russo was joined by Chuck Johnston, vice president, global strategy and alliances, and Srini Venkatasantham, vice president of product strategy.  The three shared how the key pillars of an adaptive system (configurable applications, accessible information, extensible content and flexible process) in Oracle's adaptive solutions for insurance have helped insurers respond rapidly, perform effectively and win more business. Insurers looking to innovate their business with adaptive insurance solutions, including policy administration, business intelligence, enterprise document automation, rating and underwriting, claims, CRM and more, stopped by the Oracle Insurance booth on the exhibit floor.  It was a premier destination for many participating in the exhibit hall tours conducted throughout the day. Finally, red was definitely the color of the evening at the Oracle Insurance "Red Hot" customer celebration at the House of Blues. The event provided a great opportunity for our customers to come together and network with the Oracle Insurance team and their peers in the industry.  We look forward to visiting more with our customers and making new connections today. Helen Pitts is senior product marketing manager for Oracle Insurance.

    Read the article

  • Pro ASP.NET MVC Framework Review

    - by Ben Griswold
Early in my career, when I wanted to learn a new technology, I'd sit in the bookstore aisle and I'd work my way through each of the available books on the given subject.  Put in enough time in a bookstore and you can learn just about anything. I used to really enjoy my time in the bookstore – but times have certainly changed.  Whereas books used to be the only place I could find solutions to my problems, now they may be the very last place I look.  I have been working with the ASP.NET MVC Framework for more than a year.  I have a few projects and a couple of major deployments under my belt and I was able to get up to speed with the framework without reading a single book*.  With so many resources at our fingertips (podcasts, screencasts, blogs, stackoverflow, open source projects, www.asp.net, you name it) why bother with a book? Well, I flipped through Steven Sanderson's Pro ASP.NET MVC Framework a few months ago. And since it is prominently displayed in my co-worker's office, I tend to pick it up as a reference from time to time.  Last week, I'm not sure why, I decided to read it cover to cover.  Man, did I eat this book up.  Granted, a lot of what I read was review, but it was only review because I had already learned the lessons by piecing the puzzle together for myself via various sources. If I were starting with ASP.NET MVC (or ASP.NET web deployment in general) today, the first thing I would do is buy Steven Sanderson's Pro ASP.NET MVC Framework and read it cover to cover. Steven Sanderson did such a great job with this book! As much as I appreciated the in-depth model, view, and controller talk, I was completely impressed with all the extra bits which were included.  There was a nice overview of BDD, view engine comparisons, a chapter dedicated to security and vulnerabilities, IoC, TDD and Mocking (of course), IIS deployment options and a nice overview of what the .NET platform and C# offer.  Heck, Sanderson even included bits about webforms! The book is fantastic and I highly recommend it – even if you think you've already got your head around ASP.NET MVC.  By the way, procrastinators may be in luck.  The ASP.NET MVC V2 Framework can be pre-ordered.  You might want to jump right into the second edition and find out what Sanderson has to say about MVC 2. * Actually, I did read through the free bits of Professional ASP.NET MVC 1.0.  But it was just a chapter – albeit a really long chapter.

    Read the article

  • Why it may be good to be confused: Mary Lo Verde’s Motivational Discussion at Oracle

    - by user769227
Why it may be good to be confused: Mary Lo Verde's Motivational Discussion at Oracle
by Olivia O'Connell

Last week, we were treated to a call with Mary LoVerde, a renowned life-balance and motivational speaker. This was one of many events organized by Oracle Women's Leadership (OWL). Mary made some major changes to her life when she decided to free herself of material possessions and take each day as it came. Her life-balance strategies have led her from working with NASA to appearing on Oprah. Mary's MO is "cold turkey is better than dead duck!", in other words, knowing when to quit. It is a surprising concept that flies in the face of the "winners don't quit" notion and focuses on how we limit our capabilities and satisfaction levels by doing something that we don't feel passionately about. Her arguments about quitting were based on the idea that '"it" is in the way of you getting what you really want' and that 'quitting makes things easier in the long run'. Of course, it is often difficult to quit, and though we know that things would be better if we did quit certain negative things in our lives, we are often ashamed to do so. A second topic centred on the concept of Confusion Endurance. Confusion Endurance is based around the idea that it is often good not to know exactly what you are doing, and that it is okay to admit you don't know something when others ask you; essentially, that humility can be a good thing. This concept was supposedly attributed to Leonardo Da Vinci, because he apparently found liberation in not knowing. Mary says this allows us to "thrive in the tension of not knowing to unleash our creative potential". An anecdote about an interviewee at NASA was used to portray how admitting you don't know can be a positive thing. When NASA asked the candidate a question with no obvious answer and he replied "I don't know", the candidate thought he had failed the interview; actually, the interviewers were impressed with his ability to admit he did not know. If the interviewee had guessed the answer in a real-life situation, it could have cost the lives of fellow astronauts. The highlight of the webinar for me? Mary told how she had a conversation with Capt. Chesley B. "Sully" Sullenberger, who recalled the US Airways Flight 1549 / Miracle on the Hudson incident. After making its descent and finally coming to rest in the Hudson after falling 3,060 feet in 90 seconds, Sully and his co-pilot both turned to each other and said "well...that wasn't as bad as we thought". Confusion Endurance at its finest! Her discussion certainly gave food for thought, although personally, I was inclined to take some of it with a pinch of salt. Mary Lo Verde is the author of The Invitation, and you can visit her website and view her other publications at www.maryloverde.com. For details on the Professional Business Women of California visit: http://www.pbwc.org/

    Read the article

  • So it comes to PASS…

    - by Tony Davis
    How does your company gauge the benefit of attending a technical conference? What's the best change you made as a direct result of attendance? It's time again for the PASS Summit and I, like most people go with a set of general goals for enhancing technical knowledge; to learn more about PowerShell, to drill into SQL Server performance tuning techniques, and so on. Most will write up a brief report on the event for the rest of the team. Ideally, however, it will go a bit further than that; each conference should result in a specific improvement to one of your systems, or in the way you do your job. As co-editor of Simple-talk.com, and responsible for the majority of our SQL books, my “high level” goals don't vary much from conference to conference. I'm always on the lookout for good new authors. I target interesting new technologies and tools and try to learn more. I return with a list of actions, new articles to commission, and potential new authors. Three years ago, however, I started setting myself the goal of implementing “one new thing” after each conference. After one, I adopted Kanban for managing my workload, a technique that places strict limits on “work in progress” and makes the overall workload, and backlog, highly visible. After another I trialled a community book project. At PASS 2010, one of my general goals was to delve deeper into SQL Server transaction log mechanics, but on top of that, I set a specific goal of writing something useful on the topic. I started a Stairway series and, ultimately, it's turned into a book! If you're attending the PASS Summit this year, take some time to consider what specific improvement or change you'll implement as a result. Also, try to drop by the Red Gate booth (#101). During the Vendor event on Wednesday evening, Gail Shaw and I will be there to discuss, and hand out copies of the book. Cheers, Tony.  

    Read the article

  • Have you really fixed that problem?

    - by DavidWimbush
The day before yesterday I saw our main live server's CPU go up to a constant 100% with just the occasional short drop to a lower level. The exact opposite of what you'd want to see. We're log shipping every 15 minutes and part of that involves calling WinRAR to compress the log backups before copying them over. (We're on SQL2005 so there's no native compression and we have bandwidth issues with the connection to our remote site.) I realised the log shipping jobs were taking about 10 minutes and that most of that was spent shipping a 'live' reporting database that is completely rebuilt every 20 minutes. (I'm just trying to keep this stuff alive until I can improve it.) We can rebuild this database in minutes if we have to fail over, so I disabled log shipping of that database. The log shipping went down to less than 2 minutes and I went off to the SQL Social evening in London feeling quite pleased with myself. It was a great evening - fun, educational and thought-provoking. Thanks to Simon Sabin & co for laying that on, and thanks too to the guests for making the effort when they must have been pretty worn out after doing DevWeek all day first. The next morning I came down to earth with a bump: CPU still at 100%. WTF? I looked in the activity monitor but it was confusing because some sessions had been running for a long time, so it's not a good guide to what's using the CPU now. I tried the standard reports showing queries by CPU (average and total) but they only show the top 10, so they just show my big overnight archiving and data cleaning stuff. But the Profiler showed it was four queries used by our new website usage tracking system. Four simple indexes later the CPU was back where it should be: about 20% with occasional short spikes. So the moral is: even when you're convinced you've found the cause and fixed the problem, you HAVE to go back and confirm that the problem has gone. And, yes, I have checked the CPU again today and it's still looking sweet.
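For situations like this, the plan cache can also be queried directly rather than relying on the top-10 standard reports. The query below is only a sketch of that approach, using the standard sys.dm_exec_query_stats and sys.dm_exec_sql_text DMVs available from SQL Server 2005 onwards (not something described in the post above):

-- Top 20 statements by total CPU consumed since they were cached
SELECT TOP (20)
    total_cpu_ms   = qs.total_worker_time / 1000,                         -- total_worker_time is in microseconds
    avg_cpu_ms     = (qs.total_worker_time / qs.execution_count) / 1000,
    executions     = qs.execution_count,
    statement_text =
        SUBSTRING(
            st.text,
            (qs.statement_start_offset / 2) + 1,
            ((CASE qs.statement_end_offset
                WHEN -1 THEN DATALENGTH(st.text)
                ELSE qs.statement_end_offset
              END - qs.statement_start_offset) / 2) + 1)
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_worker_time DESC;

Bear in mind this only covers plans still in cache, so statements that were recently recompiled or evicted will not show up.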

    Read the article

  • How to install Huawei Mobile broadband EC306?

    - by serviteur
How do I install Huawei Mobile Broadband EC306 EVDO Rev.B in Ubuntu 12.04 LTS 64-bit? Best regards. Excuse me for my bad English. When I connect the modem in Ubuntu, it fails to mount the filesystem and, furthermore, it is not recognized as a CD-ROM. I do not have Windows installed on my computer, but I tried to open the modem under Windows on a friend's PC; there is no script file called "Linux", only Windows files.

lsusb:
serviteur@creation:~$ lsusb
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 003 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
Bus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
Bus 005 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
Bus 006 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
Bus 007 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
Bus 008 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
Bus 007 Device 002: ID 15d9:0a4c Trust International B.V. USB+PS/2 Optical Mouse
Bus 001 Device 007: ID 12d1:1506 Huawei Technologies Co., Ltd. E398 LTE/UMTS/GSM Modem/Networkcard

dmesg:
Q: 0 ANSI: 2
[16619.060771] sr1: scsi-1 drive
[16619.060955] sr 13:0:0:0: Attached scsi CD-ROM sr1
[16619.061099] sr 13:0:0:0: Attached scsi generic sg3 type 5
[16619.061358] sd 14:0:0:0: Attached scsi generic sg4 type 0
[16619.063654] sd 14:0:0:0: [sdc] Attached SCSI removable disk
[16634.224923] usb 1-6: USB disconnect, device number 6
[16638.468041] usb 1-6: new high-speed USB device number 7 using ehci_hcd
[16638.586210] option 1-6:1.0: GSM modem (1-port) converter detected
[16638.586316] usb 1-6: GSM modem (1-port) converter now attached to ttyUSB0
[16638.586435] option 1-6:1.1: GSM modem (1-port) converter detected
[16638.586517] usb 1-6: GSM modem (1-port) converter now attached to ttyUSB1
[16638.586607] option 1-6:1.2: GSM modem (1-port) converter detected
[16638.586676] usb 1-6: GSM modem (1-port) converter now attached to ttyUSB2
[16638.586752] option 1-6:1.3: GSM modem (1-port) converter detected
[16638.586828] usb 1-6: GSM modem (1-port) converter now attached to ttyUSB3
[16638.586929] option 1-6:1.4: GSM modem (1-port) converter detected
[16638.586997] usb 1-6: GSM modem (1-port) converter now attached to ttyUSB4
[16638.587114] option 1-6:1.5: GSM modem (1-port) converter detected
[16638.587187] usb 1-6: GSM modem (1-port) converter now attached to ttyUSB5
[16638.646686] option1 ttyUSB5: GSM modem (1-port) converter now disconnected from ttyUSB5
[16638.646706] option 1-6:1.5: device disconnected
[16638.660755] scsi15 : usb-storage 1-6:1.5
[16638.663284] option1 ttyUSB4: GSM modem (1-port) converter now disconnected from ttyUSB4
[16638.663301] option 1-6:1.4: device disconnected
[16638.689043] scsi16 : usb-storage 1-6:1.4

    Read the article

  • Defining Social Media Terms

    - by David Dorf
    As I talk about social in the context of retail, I sometimes get tripped up on different terms. I know what I mean, but the audience may have something else in mind. So I decided to see if I could find some well accepted definitions for common terms. While there are definitions on the Internet, I'm not sure they are well accepted. After reviewing several, here's what I came up with: Social Network: a structure of individuals and groups connected together by commonality. That seems pretty straightforward. A group of friends, co-workers, music fans, etc. The key here is that they have something in common that connects them. Social Media: Internet channels that support the collaborative publishing of information by and for social networks. The key here is to differentiate between traditional one-way media, and conversational social media. When its social its two-way, allowing both the publishing and consuming of information. Examples are blogs, wikis, Twitter, Facebook, etc. Social Marketing: the use of social media for marketing, public relations, and customer service. Wikipedia actually includes "selling" here but I think that's separate from marketing, as you'll see further down below. Most people look at social media as entertainment, but the marketing angle adds business value. This is where retailers discover and engage customers to build a relationship. Social Merchandising: the integration of social media and product discovery. Whereas marketing is focused more on brand image, customer engagement, and promotions, merchandising is more directly trying to convert browsers into purchases. This includes deciding what customers want, often by asking the social network, and deciding how to position products to the social network. Social Selling: the incorporation of e-commerce into social media. While on a social media site, social selling enables the purchasing of goods/services in the user's context, without leaving the social media channel. If a user clicks on an advertisement and is taken to an e-commerce site, then that's really just web advertising and not social selling. Well, do these terms and definitions make sense? Let me know what you think.

    Read the article
