Search Results

Search found 3096 results on 124 pages for 'scope creep'.

  • SSIS - user variable used in derived column transform is not available - in some cases

    - by soo
    Unfortunately I don't have a repro for my issue, but I thought I would try to describe it in case it sounds familiar to someone. I am using SSIS 2005, SP2. My package has a package-scope user variable - let's call it user_var.
    The first step in the control flow is an Execute SQL task which runs a stored procedure. All that SP does is insert a record into a SQL table (with an identity column) and then go back and get the max ID value. The Execute SQL task saves this output into user_var. The control flow then has a Data Flow task - it gets some source data, has a derived column which sets a column called run_id to user_var, and saves the data to a SQL destination.
    In most cases (this template is used for many packages, running every day) this all works great: all of the destination records get a correct run_id. However, in some cases, a set of the destination data does not get run_id equal to user_var, but instead gets a value of 0 (0 is the default value for user_var). I have 2 instances where this has happened, but I can't make it happen on demand. In both cases, it was just less than 10,000 records that have run_id = 0. Since SSIS writes data out in 10,000-record blocks, this really makes me think that, for the first block of data written out, user_var was not yet set; then, after that first block, run_id is set to a correct value for the rest of the data.
    But control passed on to my data flow from the Execute SQL task - it seemed reasonable to me that it wouldn't go on until the SP had completed and user_var was set. Maybe it just runs the SP but doesn't wait for it to complete? In both cases where this has happened, a few packages seemed to be hitting the table to get a new user_var at about the same time, and in both cases lots of data was written (40 million rows, 60 million rows) - my thinking is that means the writes were happening for a while. Sorry to be both long-winded AND vague. A winning combination! Does this sound familiar to anyone? Thanks.
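    For reference, a minimal T-SQL sketch of the stored-procedure and variable hand-off pattern described above (the table, column and procedure names are made up for illustration); the Execute SQL task would map the single-row result set to User::user_var:

        -- Hypothetical procedure matching the pattern in the question (names are placeholders)
        CREATE PROCEDURE dbo.usp_GetNewRunId
        AS
        BEGIN
            SET NOCOUNT ON;  -- suppress extra "rows affected" messages in the result
            INSERT INTO dbo.RunLog (CreatedAt) VALUES (GETDATE());
            -- return the identity just generated as a single-row result set
            SELECT CAST(SCOPE_IDENTITY() AS int) AS run_id;
        END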

    Read the article

  • Cannot ping localhost so I can't shut down Tomcat

    - by gav
    Hi, I installed Tomcat 6 using the tarball via wget. Startup of the server is fine, but on shutdown I get a timeout exception.

        root@88:/usr/local/tomcat/logs# /usr/local/tomcat/bin/shutdown.sh
        Using CATALINA_BASE:   /usr/local/tomcat
        Using CATALINA_HOME:   /usr/local/tomcat
        Using CATALINA_TMPDIR: /usr/local/tomcat/temp
        Using JRE_HOME:        /usr
        Using CLASSPATH:       /usr/local/tomcat/bin/bootstrap.jar
        30-Mar-2010 17:33:41 org.apache.catalina.startup.Catalina stopServer
        SEVERE: Catalina.stop:
        java.net.ConnectException: Connection timed out
            at java.net.PlainSocketImpl.socketConnect(Native Method)
            at java.net.PlainSocketImpl.doConnect(PlainSocketImpl.java:333)
            at java.net.PlainSocketImpl.connectToAddress(PlainSocketImpl.java:195)
            at java.net.PlainSocketImpl.connect(PlainSocketImpl.java:182)
            at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:366)
            ...

    I read that this might be because I have a firewall blocking incoming connections on the shutdown port (8005). I have a default Ubuntu 9.04 installation running on a VPS with no rules in my iptables. How can I tell if that port is blocked? How can I check that the server is listening for connections on 8005? Bizarrely, pinging localhost or the IP of my server fails from the server itself, whereas pinging the IP of my server from another machine succeeds.

    EDIT (in reply to Davey): Thanks for all the tips and suggestions!

        netstat -nlp
        Active Internet connections (only servers)
        Proto Recv-Q Send-Q Local Address    Foreign Address  State   PID/Program name
        tcp   0      0      127.0.0.1:8005   0.0.0.0:*        LISTEN  9611/java
        tcp   0      0      127.0.0.1:3306   0.0.0.0:*        LISTEN  28505/mysqld
        tcp   0      0      0.0.0.0:8080     0.0.0.0:*        LISTEN  9611/java
        tcp   0      0      0.0.0.0:22       0.0.0.0:*        LISTEN  ...

    So we can see that Tomcat is listening, I just don't seem to be able to reach it. Trying to telnet to the port hangs indefinitely:

        root@88:/usr/local/tomcat# telnet localhost 8005
        Trying 127.0.0.1...

    I have no rules in my iptables, so I don't think it's a firewall thing.

        root@88:/usr/local/tomcat# iptables --list
        Chain INPUT (policy ACCEPT)
        target  prot opt source  destination
        Chain FORWARD (policy ACCEPT)
        target  prot opt source  destination
        Chain OUTPUT (policy ACCEPT)
        target  prot opt source  destination

    This is the contents of /etc/hosts:

        127.0.0.1 localhost.localdomain localhost
        # Auto-generated hostname. Please do not remove this comment.
        88.198.31.14 88.198.31.14 88 88

    But I still can't ping localhost... do I need to check that a loopback device is enabled properly or something? (I'm unsure how to do that if you do say yes :)).

        root@88:/usr/local/tomcat# ping localhost
        PING localhost (127.0.0.1) 56(84) bytes of data.
        --- localhost ping statistics ---
        7 packets transmitted, 0 received, 100% packet loss, time 5999ms

    Trying to find out what the loopback is configured as:

        root@88:~# ifconfig lo
        lo  Link encap:Local Loopback
            LOOPBACK  MTU:16436  Metric:1
            RX packets:0 errors:0 dropped:0 overruns:0 frame:0
            TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
            collisions:0 txqueuelen:0
            RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

    SOLUTION (thanks to Davey): I needed to bring up the interface (not sure why it wasn't running); ifconfig lo up did the trick.

        root@88:~# ifconfig lo up
        root@88:~# ifconfig lo
        lo  Link encap:Local Loopback
            inet addr:127.0.0.1  Mask:255.0.0.0
            inet6 addr: ::1/128 Scope:Host
            UP LOOPBACK RUNNING  MTU:16436  Metric:1
            RX packets:0 errors:0 dropped:0 overruns:0 frame:0
            TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
            collisions:0 txqueuelen:0
            RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
        root@88:~# ping localhost
        PING localhost.localdomain (127.0.0.1) 56(84) bytes of data.
        64 bytes from localhost.localdomain (127.0.0.1): icmp_seq=1 ttl=64 time=0.025 ms

    Thanks again, Gav
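    As a follow-up to the fix above, a minimal sketch (assuming the stock Debian/Ubuntu ifupdown setup) of the /etc/network/interfaces stanza that brings the loopback interface up automatically at boot; if this stanza is missing, ifconfig lo up has to be re-run after every reboot:

        # /etc/network/interfaces - loopback stanza (normally present by default)
        auto lo
        iface lo inet loopback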

    Read the article

  • Apache mod_jk Setting for Tomcat - workers.properties

    - by sissonb
    I am trying to direct files with .jsp extensions to Tomcat; otherwise I want Apache to serve the file directly (no Tomcat). Currently I have a test.jsp which is supposed to create an HTML page with the current date in the body. Instead, when I go to that .jsp I see the JK Status Manager. The mod_jk log only shows:

        init_jk::mod_jk.c (3365): mod_jk/1.2.35 initialized

    I have Tomcat and Apache set up on my server. Apache runs on 80 and Tomcat runs on 8080; localhost:8080 shows the Tomcat welcome page. I downloaded tomcat-connectors-1.2.35-windows-i386-httpd-2.2.x, copied the mod_jk.so to C:\apache\modules, and then added "LoadModule jk_module modules/mod_jk.so" to my httpd.conf. I restart Apache and the module loads just fine.

    Next I downloaded the mod_jk source to get the workers.properties file and copied workers.properties to C:\apache\conf. Then I added this worker:

        workers.tomcat_home="C:/Program Files/Apache Software Foundation/Tomcat 7.0"
        workers.java_home="C:/Program Files/Java/jdk1.7.0_03"
        worker.list=ajp13
        worker.ajp13.port=8080
        worker.ajp13.host=localhost
        worker.ajp13.type=ajp13
        worker.ajp13.socket_timeout=10

    When I try to use the ajp13 worker in my httpd.conf I get the following errors in my mod_jk.log:

        [Wed Mar 28 13:08:51 2012] [2196:4100] [info] ajp_connection_tcp_get_message::jk_ajp_common.c (1258): (ajp13) can't receive the response header message from tomcat, network problems or tomcat (127.0.0.1:8080) is down (errno=60)
        [Wed Mar 28 13:08:51 2012] [2196:4100] [error] ajp_get_reply::jk_ajp_common.c (2117): (ajp13) Tomcat is down or refused connection. No response has been sent to the client (yet)
        [Wed Mar 28 13:08:51 2012] [2196:4100] [info] ajp_service::jk_ajp_common.c (2614): (ajp13) sending request to tomcat failed (recoverable), (attempt=1)

    Next I updated my httpd.conf with:

        JkWorkersFile C:/apache/conf/workers.properties
        JkLogFile C:/apache/logs/mod_jk.log
        JkLogLevel info
        JkLogStampFormat "[%a %b %d %H:%M:%S %Y] "

    I also added JkMount /*.jsp jk-status to my virtual host like this:

        <VirtualHost 192.168.5.250:80>
            JkMount /*.jsp jk-status
            #JkMount /*.jsp ajp13
            ServerName bgsisson.com
            ServerAlias www.bgsisson.com
            DocumentRoot C:/www/resume
        </VirtualHost>

    I think I need to include a uriworkermap.properties file, but this is where I am getting stuck. I have put up a test .jsp at bgsisson.com/test.jsp. It shows the JK Status Manager when I use JkMount /*.jsp jk-status, and 502 Bad Gateway when I use JkMount /*.jsp ajp13.

    test.jsp:

        <%-- use the 'taglib' directive to make the JSTL 1.0 core tags available;
             use the uri "http://java.sun.com/jsp/jstl/core" for JSTL 1.1 --%>
        <%@ taglib uri="http://java.sun.com/jstl/core" prefix="c" %>
        <%-- use the 'jsp:useBean' standard action to create the Date object;
             the object is set as an attribute in page scope --%>
        <jsp:useBean id="date" class="java.util.Date" />
        <html>
        <head><title>First JSP</title></head>
        <body>
          <h2>Here is today's date</h2>
          <c:out value="${date}" />
        </body>
        </html>
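    For comparison, a hedged sketch of the worker and mount configuration this setup appears to be aiming for. Note that mod_jk talks to Tomcat's AJP connector, which normally listens on 8009 rather than the 8080 HTTP port, and the JkMount should reference the ajp13 worker rather than the status worker; the values below are assumptions based on the question, not a verified configuration:

        # workers.properties (sketch)
        worker.list=ajp13
        worker.ajp13.type=ajp13
        worker.ajp13.host=localhost
        worker.ajp13.port=8009

        # httpd.conf / VirtualHost (sketch)
        JkMount /*.jsp ajp13

        <!-- Tomcat conf/server.xml: the AJP connector must be enabled -->
        <Connector port="8009" protocol="AJP/1.3" redirectPort="8443" />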

    Read the article

  • Unable to make the session state request to the session state server

    - by Angry_IT_Guru
    For about 4-5 months now, I seem to be having this sporadic issue - mainly during our busiest time of the day, between 10:30-11:45 AM - where all my Windows 2003 web servers in a Microsoft NLB cluster start throwing session state server errors. A sample error is below.

        System.Web.HttpException: Unable to make the session state request to the session state server. Please ensure that the ASP.NET State service is started and that the client and server ports are the same. If the server is on a remote machine, please ensure that it accepts remote requests by checking the value of HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\aspnet_state\Parameters\AllowRemoteConnection. If the server is on the local machine, and if the before mentioned registry value does not exist or is set to 0, then the state server connection string must use either 'localhost' or '127.0.0.1' as the server name.
           at System.Web.SessionState.OutOfProcSessionStateStore.MakeRequest(StateProtocolVerb verb, String id, StateProtocolExclusive exclusiveAccess, Int32 extraFlags, Int32 timeout, Int32 lockCookie, Byte[] buf, Int32 cb, Int32 networkTimeout, SessionNDMakeRequestResults& results)
           at System.Web.SessionState.OutOfProcSessionStateStore.SetAndReleaseItemExclusive(HttpContext context, String id, SessionStateStoreData item, Object lockId, Boolean newItem)
           at System.Web.SessionState.SessionStateModule.OnReleaseState(Object source, EventArgs eventArgs)
           at System.Web.HttpApplication.SyncEventExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute()
           at System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously)

    Now I'm using the ASP.NET State service on a centralized back-end Windows 2003 server that all servers communicate with. I was originally using SQL Server session state for a couple of years prior to having this issue. The problem with SQL was that when the issue occurred, it created a blocking situation which essentially impacted all users across all servers. The product company recommended that I use the standard ASP.NET State service, as that was what they technically supported. Why this would make a difference is beyond me - but I had no choice but to try it!

    I have attempted creating multiple application pools, adding additional servers, changing the TCP/IP timeout from 20 to 30 seconds, and even calling Microsoft ASP.NET product support, with very little success. I even recommended that they review whether they are using read-only session state instead of read/write per page request - as I understand it, read/write basically causes every page to make round-trips to the state server even if state isn't being used on the page.

    Unfortunately, the application is developed by our product company and they insist that it is something with my environment, because other clients do not have these sorts of issues. However, I've talked to other clients, and they tell me that when they've seen issues like this, they've basically had to create another web farm. This issue almost seems like I've simply reached some architectural limit within the application...

    Microsoft's position on the issue is that the session state needs to be reduced and that the return code being reported back from the state server indicates its buffers are full. To better understand the scope of the issue (rather than wait for customers to call and complain), I installed ELMAH and configured it to send me e-mails when unhandled exceptions occur. I basically get 500-1000 e-mails during the period of high activity!
    If anyone has any other ideas I could try, or better ways to troubleshoot, I'd appreciate it.
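    For reference, a minimal sketch of the out-of-process session state wiring being described (the state server host name and the timeout values below are placeholders, not the poster's actual settings):

        <!-- web.config (sketch) -->
        <system.web>
          <sessionState mode="StateServer"
                        stateConnectionString="tcpip=stateserver01:42424"
                        stateNetworkTimeout="30"
                        timeout="20" />
        </system.web>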

    Read the article

  • register_globals error in PHP

    - by user145862
    I was stuck with the error "Directive 'register_globals' is no longer available in PHP in Unknown on line 0" when I tried to check the PHP version using "php -v" after enabling register_globals in the php.ini file. I do not get any PHP version info by doing so; instead it throws the above-mentioned error. After turning this option off, php -v works quite well. It is very essential for me to have register_globals turned on. How can I have this corrected? My php.ini is as follows:

        ; Default Value: None
        ; Development Value: "GP"
        ; Production Value: "GP"
        ; http://php.net/request-order
        request_order = "GP"

        ; Whether or not to register the EGPCS variables as global variables. You may
        ; want to turn this off if you don't want to clutter your scripts' global scope
        ; with user data.
        ; You should do your best to write your scripts so that they do not require
        ; register_globals to be on; Using form variables as globals can easily lead
        ; to possible security problems, if the code is not very well thought of.
        ; register_globals = On

        ; Determines whether the deprecated long $HTTP_*_VARS type predefined variables
        ; are registered by PHP or not. As they are deprecated, we obviously don't
        ; recommend you use them. They are on by default for compatibility reasons but
        ; they are not recommended on production servers.
        ; Default Value: On
        ; Development Value: Off
        ; Production Value: Off
        ; register_long_arrays = Off

        ; This directive determines whether PHP registers $argv & $argc each time it
        ; runs. $argv contains an array of all the arguments passed to PHP when a script
        ; is invoked. $argc contains an integer representing the number of arguments
        ; that were passed when the script was invoked. These arrays are extremely
        ; useful when running scripts from the command line. When this directive is
        ; enabled, registering these variables consumes CPU cycles and memory each time
        ; a script is executed. For performance reasons, this feature should be disabled
        ; on production servers.
        ; Note: This directive is hardcoded to On for the CLI SAPI
        ; Default Value: On
        ; Development Value: Off
        ; Production Value: Off
        ; register_argc_argv = Off

        ; When enabled, the SERVER and ENV variables are created when they're first
        ; used (Just In Time) instead of when the script starts. If these variables
        ; are not used within a script, having this directive on will result in a
        ; performance gain. The PHP directives register_globals, register_long_arrays,
        ; and register_argc_argv must be disabled for this directive to have any affect.
        ; auto_globals_jit = On
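    For context, register_globals was removed entirely in PHP 5.4, so it cannot be re-enabled from php.ini; the error above is the engine rejecting the directive. If legacy scripts really depend on it, a rough compatibility shim along these lines is the usual stopgap while the code is updated to read $_GET/$_POST directly (a sketch only; it carries the same security risks the php.ini comment warns about):

        <?php
        // register_globals-style shim (sketch): import request keys as globals,
        // skipping any that would overwrite an existing variable.
        foreach ($_REQUEST as $key => $value) {
            if (!isset($GLOBALS[$key])) {
                $GLOBALS[$key] = $value;
            }
        }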

    Read the article

  • Unable to connect to OpenVPN server

    - by Incognito
    I'm trying to get a working setup of OpenVPN on my VM and authenticate into it from a client. I'm not sure, but it looks to me like it's socket related, as it's not set to LISTEN, and localhost seems wrong. I've never set up a VPN before.

        # netstat -tulpn | grep vpn
        Proto Recv-Q Send-Q Local Address    Foreign Address  State  PID/Program name
        udp   0      0      127.0.0.1:1194   0.0.0.0:*               24059/openvpn

    I don't think this is set up correctly. Here's some detail on what I've done. I have a VPS from MediaTemple; these are my interfaces before starting OpenVPN:

        lo        Link encap:Local Loopback
                  inet addr:127.0.0.1  Mask:255.0.0.0
                  inet6 addr: ::1/128 Scope:Host
                  UP LOOPBACK RUNNING  MTU:16436  Metric:1
                  RX packets:39482 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:39482 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:3237452 (3.2 MB)  TX bytes:3237452 (3.2 MB)

        venet0    Link encap:UNSPEC  HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00
                  inet addr:127.0.0.1  P-t-P:127.0.0.1  Bcast:0.0.0.0  Mask:255.255.255.255
                  UP BROADCAST POINTOPOINT RUNNING NOARP  MTU:1500  Metric:1
                  RX packets:4885284 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:4679884 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:835278537 (835.2 MB)  TX bytes:1989289617 (1.9 GB)

        venet0:0  Link encap:UNSPEC  HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00
                  inet addr:205.[redacted]  P-t-P:205.186.148.82  Bcast:0.0.0.0  Mask:255.255.255.255
                  UP BROADCAST POINTOPOINT RUNNING NOARP  MTU:1500  Metric:1

    I've followed this guide on setting up a basic server and getting a .p12 file; however, I was receiving an error that stated /dev/net/tun was missing, so I created it:

        mkdir -p /dev/net
        mknod /dev/net/tun c 10 200
        chmod 600 /dev/net/tun

    This resolved the error preventing the service from launching; however, I am unable to connect. On the server I've set up the myserver.conf file (as per the tutorial) to indicate "local 127.0.0.1" (I've also attempted it with the public IP address - perhaps I don't understand what they mean by local IP?). The server launches without error; this is what the log looks like when it starts:

        Sun Apr 1 17:21:27 2012 OpenVPN 2.1.3 x86_64-pc-linux-gnu [SSL] [LZO2] [EPOLL] [PKCS11] [MH] [PF_INET6] [eurephia] built on Mar 11 2011
        Sun Apr 1 17:21:27 2012 IMPORTANT: OpenVPN's default port number is now 1194, based on an official port number assignment by IANA. OpenVPN 2.0-beta16 and earlier used 5000 as the default port.
        Sun Apr 1 17:21:27 2012 NOTE: the current --script-security setting may allow this configuration to call user-defined scripts
        Sun Apr 1 17:21:27 2012 /usr/bin/openssl-vulnkey -q -b 1024 -m <modulus omitted>
        Sun Apr 1 17:21:27 2012 TUN/TAP device tun0 opened
        Sun Apr 1 17:21:27 2012 /sbin/ifconfig tun0 10.8.0.1 pointopoint 10.8.0.2 mtu 1500
        Sun Apr 1 17:21:27 2012 GID set to openvpn
        Sun Apr 1 17:21:27 2012 UID set to openvpn
        Sun Apr 1 17:21:27 2012 UDPv4 link local (bound): [AF_INET]127.0.0.1:1194
        Sun Apr 1 17:21:27 2012 UDPv4 link remote: [undef]
        Sun Apr 1 17:21:27 2012 Initialization Sequence Completed

    This creates a tun0 interface that looks like this:

        tun0      Link encap:UNSPEC  HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00
                  inet addr:10.8.0.1  P-t-P:10.8.0.2  Mask:255.255.255.255
                  UP POINTOPOINT RUNNING NOARP MULTICAST  MTU:1500  Metric:1
                  RX packets:0 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:100
                  RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

    And the netstat command still indicates the state is not set to LISTEN.
    On the client side I've installed the .p12 certs onto two devices (one is an Android tablet, the other an Ubuntu desktop). I don't see port 1194 as open either. Both clients install the cert files and then ask me for the L2TP secret (which was set on the file), but then they oddly ask me for a username and a password, which I don't know where I could possibly get from. I attempted all of my logins, and some wacky guesses that were frantically pulling at straws. If there's any more information I could provide, let me know.
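    For reference, a minimal sketch of the server-side directives relevant to the binding issue described above. A UDP server never shows LISTEN in netstat, and "local 127.0.0.1" binds the daemon to the loopback only, so the local directive would normally name the public address or be omitted entirely; the values below are illustrative assumptions, not the poster's actual config:

        # myserver.conf (sketch of the relevant lines)
        # local 205.186.x.x   # the VPS's public address, or leave 'local' out to bind to all interfaces
        port 1194
        proto udp
        dev tun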

    Read the article

  • What is the best mid/high-end audio card for music creation?

    - by Chris
    Hello, I have a computer shop myself, and I repair computers. But one of the things I really don't know (yet) is the performance of audio cards for music creation with MIDI. I have searched and searched and came up with some good reviews, but after browsing for a couple of hours I couldn't see the forest for the trees :-D (it's a Dutch expression). At one moment I thought the M-Audio Delta 1010LT would be a good PCIe card; later on I read that this card was released years ago (but that could be false information). Also, any personal experience would be great, but not necessary. I have searched a few cards, and I hope someone can help me make a choice for a friend of mine. His budget is between $100 and $350; I know there are audio cards from $500 to $1850, but that is just too expensive. The following specs are crucial: ASIO, MIDI, mic in, minimal 5.1 (7.1 recommended). It's not for airplay, but just to compose music at home, using Ableton and a MIDI keyboard.

    1. M-Audio Delta 1010LT:
        8 x 8 analog I/O
        2 mic preamps or line inputs
        S/PDIF digital I/O (coaxial) with 2-channel PCM SCMS copy protection control
        digital I/O supports surround-encoded AC-3 and DTS pass-through
        1 x 1 MIDI I/O
        directly drive up to 7.1 surround (bass management software included)
        software controlled 36-bit internal DSP digital mixing/routing
        +4dBu/-10dBV operation individually switched in software
        word clock I/O for sample accurate device synchronization

    2. RME HDSP 9632 (specs translated from German):
        Stereo analog input and output, balanced*, 24-bit/192 kHz, > 110 dB SNR
        Optional expansion boards with 4 balanced inputs and outputs each
        All analog I/Os fully 192 kHz capable, so no reduction in channel count
        1 x ADAT digital in/out, 96 kHz capable (S/MUX)
        1 x SPDIF digital in/out, 192 kHz capable
        1 x breakout cable for coaxial SPDIF operation*
        So up to 16 inputs and outputs usable simultaneously!
        1 x stereo headphone output, in parallel with the analog output but with its own level adjustment
        1 x MIDI I/O for 16 channels of hi-speed MIDI via breakout cable
        DIGICheck, RME's unique metering and analysis tool with Spectral Analyser, professional 2/8/16-channel level meters, Vector Audio Scope and various further analysis functions
        HDSP Meter Bridge: freely scalable level meters with peak and RMS calculation in hardware
        TotalMix: 512-channel mixer with 40-bit internal resolution

    3. EMU 1212M (1212 M) PCIe (specs translated from Dutch):
        Top-quality 24-bit/192 kHz converters
        Hardware-driven effects
        DSP zero-latency hardware mixing and monitoring
        Analog and digital I/O plus MIDI
        E-MU Production Tools software bundle - Cakewalk SONAR, Steinberg Cubase LE, Ableton Live E-MU Edition
        EMU 1212M PCI-e inputs/outputs:
        2 balanced jack inputs
        2 balanced jack outputs
        24-bit/192kHz ADAT I/O
        24-bit/192kHz coaxial S/PDIF I/O switchable to AES/EBU
        MIDI I/O

    4. M-Audio Audiophile 192:
        Up to 24-bit/192kHz audio
        2 balanced analog inputs (1/4" TRS)
        2 balanced analog outputs (1/4" TRS)
        S/PDIF digital I/O (coaxial RCA connectors) with 2-channel PCM SCMS copy protection control
        Digital I/O supports surround-encoded AC-3 and DTS pass-through
        Direct hardware input monitoring via separate balanced 1/4" TRS monitor outputs
        Software routing of inputs and outputs
        Digital I/O can be routed to/from external effects
        16-channel MIDI I/O
        ASIO, WDM, GSIF 2 and Core Audio driver support for compatibility with most applications
        64-bit driver support for Windows
        PCI 2.2 compatibility
        Apple G5 compatible - Incompatible exceptions
        Includes Ableton Live Lite music production software, so you can make music right away
        Works with other Delta cards
        Technical Specifications - Compatibility: ASIO, WDM, GSIF 2, Core Audio

    Read the article

  • Vim: Context sensitive code completion for PHP

    - by eddy147
    Vim gives me too many options when I use code completion. In a class, typing $class- gives me about a zillion options: not only from the class itself but also from PHP and all globals ever created - in short, a mess. I only want the options from the class itself (or the parent class it extends from), i.e. context- or scope-sensitive code completion, just like NetBeans for example. How can I do that?

    My current configuration is this: I am using ctags, and created one ctags file for our (big) application in the root. This is the .ctags file I used to create the tags file:

        -R
        -h ".php"
        --exclude=.svn
        --languages=+PHP,-JavaScript
        --tag-relative=yes
        --regex-PHP=/abstract\s+class\s+([^ ]+)/\1/c/
        --regex-PHP=/interface\s+([^ ]+)/\1/c/
        --regex-PHP=/(public\s+|static\s+|protected\s+|private\s+)\$([^ \t=]+)/\2/p/
        --regex-PHP=/const\s+([^ \t=]+)/\1/d/
        --regex-PHP=/final\s+(public\s+|static\s+|abstract\s+|protected\s+|private\s+)function\s+\&?\s*([^ (]+)/\2/f/
        --PHP-kinds=+cdf
        --fields=+iaS

    This is the .vimrc file:

        " autocomplete funcs and identifiers for languages
        autocmd FileType php set omnifunc=phpcomplete#CompletePHP
        autocmd FileType python set omnifunc=pythoncomplete#Complete
        autocmd FileType javascript set omnifunc=javascriptcomplete#CompleteJS
        autocmd FileType html set omnifunc=htmlcomplete#CompleteTags
        autocmd FileType css set omnifunc=csscomplete#CompleteCSS
        autocmd FileType xml set omnifunc=xmlcomplete#CompleteTags
        autocmd FileType php set omnifunc=phpcomplete#CompletePHP
        autocmd FileType c set omnifunc=ccomplete#Complete

        " exuberant ctags
        " the magic is the ';' at end. it will make vim tags file search go up from current directory until it finds one.
        set tags=projectrootdir/tags;
        map <F8> :!ctags

        " TagList
        " :tag getUser => Jump to getUser method
        " :tn (or tnext) => go to next search result
        " :tp (or tprev) => go to previous search result
        " :ts (or tselect) => List the current tags
        " => Go back to last tag location
        " +Left click => Go to definition of a method
        " More info:
        " http://vimdoc.sourceforge.net/htmldoc/tagsrch.html (official documentation)
        " http://www.vim.org/tips/tip.php?tip_id=94 (a vim tip)
        let Tlist_Ctags_Cmd = "~/bin/ctags"
        let Tlist_WinWidth = 50
        map <F4> :TlistToggle<cr>

        " see http://vim.wikia.com/wiki/Make_Vim_completion_popup_menu_work_just_like_in_an_IDE
        " will change the 'completeopt' option so that Vim's popup menu doesn't select the first completion item, but rather just inserts the longest common text of all matches
        :set completeopt=longest,menuone
        " will change the behavior of the <Enter> key when the popup menu is visible. In that case the Enter key will simply select the highlighted menu item, just as <C-Y> does
        :inoremap <expr> <CR> pumvisible() ? "\<C-y>" : "\<C-g>u\<CR>"
        " inoremap <expr> <C-n> pumvisible() ? '<C-n>' :
        "   \ '<C-n><C-r>=pumvisible() ? "\<lt>Down>" : ""<CR>'
        inoremap <expr> <M-,> pumvisible() ? '<C-n>' :
          \ '<C-x><C-o><C-n><C-p><C-r>=pumvisible() ? "\<lt>Down>" : ""<CR>'

    Read the article

  • IPv6: Should I have private addresses?

    - by AlReece45
    Right now, we have a rack of servers. Every server currently has at least 2 IP addresses: one for the public interface, another for the private. The servers that have SSL websites on them have more IP addresses. We also have virtual servers that are configured similarly.

    Private network: The private range is currently just used for backups and monitoring. It's a gigabit port, and the interface usage does not usually get very high. There are other technologies we're considering that would use this port: iSCSI (implementations usually recommend dedicating an interface to it, which would be yet another IP network), VPN to get access to the private range (something I'd rather avoid), dedicated database servers, LDAP, centralized configuration (like Puppet), and centralized logging. We don't have any private addresses in our DNS records (only public addresses). For our servers to use the correct IP address for the right interface (and not hard-code the IP address) probably requires setting up a private DNS server (so now we add 2 different DNS entries to 2 different systems).

    Public network: Our public range has a variety of services including web, email, and FTP. There is a hardware firewall between our network and the "public" network. We have a (relatively secure) method to instruct the firewall to open and close administrative access (web interfaces, ssh, etc.) for our current IP address. With either solution discussed, the host-based firewalls will be configured as well. The public network currently runs on a dedicated 20 Mbps link. There are a couple of legacy servers with fast-ethernet ports, but they are scheduled for decommissioning. All of the other production boxes have at least 2 Gigabit Ethernet ports, and the more traffic-heavy servers have 4-6 available (none is using more than the 2 Gigabit ports right now).

    IPv6: I want to get an IPv6 prefix from our ISP, so at least every "server" has at least one IPv6 interface. We'll still need to keep the IPv4 addresses up and available for legacy clients (web servers and email at the very least). We have two IP networks right now; adding the public IPv6 addresses would make it three.

    Just use IPv6? I'm thinking about just dumping the private IPv4 range and using the IPv6 range as the primary means of all communications. If an interface starts reaching its capacity, I'd use the newly freed interfaces to create a trunk. This has the advantage that either the public or private traffic can exceed 1 Gbps if needed. The traffic for each interface is already analyzed on a regular basis to predict future bandwidth use. In the rare instances where bandwidth unexpectedly peaks, QoS can ensure traffic (like our limited SSH access) is prioritized correctly so the problem can be corrected (if possible; our WAN is the bottleneck right now). It also has the advantage of not needing to make an entry for every private address. We may still have private DNS (or just LDAP), but it'll be much more limited in scope, with fewer entries to duplicate.

    Summary: I'm trying to make this network as "simple" as possible. At the same time, I want to make sure it's reliable, upgradeable, scalable, and (eventually) redundant. Having one IPv6 network and a legacy IPv4 network seems to be the best solution to me. Regarding using assigned IPv6 addresses for both networks, sharing the available bandwidth on one interface (trunked if needed):
    Are there any technical disadvantages (limitations, buffers, scalability)? Are there any other security considerations (aside from the firewalls mentioned above)? Are there regulations or other security requirements (like PCI-DSS) that this doesn't meet? Is there typical software for setting up a Linux network that doesn't have IPv6 support yet (logging, LDAP, Puppet)? Anything else I haven't considered?
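    For illustration, a minimal sketch of what "one IPv6 interface per server" amounts to on a Debian-style host (the 2001:db8::/64 prefix is a documentation placeholder, not an assigned range):

        # /etc/network/interfaces (sketch)
        auto eth0
        iface eth0 inet6 static
            address 2001:db8:0:1::10
            netmask 64
            gateway 2001:db8:0:1::1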

    Read the article

  • How to validate referral support implemented for an Active Directory server?

    - by user146560
    Please suggest a utility or application I can use to test the referral settings I have made. I want to test cross-forest referrals between two domains, say:

        1. firstDNS.com - user ([email protected])
        2. SecondDNS.com - user ([email protected])

    The Java code below was written to test the Active Directory server settings.

        public void authenticateUser(String user, String password, String domain)
                throws AuthenticationException, NamingException {
            List<String> ldapServers = findLDAPServersInWindowsDomain("first.com");
            if (ldapServers.isEmpty())
                throw new NamingException("Can't locate an LDAP server (try nslookup type=SRV _ldap._tcp." + "first.com" + ")");
            Hashtable<String, String> props = new Hashtable<String, String>();
            String principalName = "testUserFirst" + "@" + "First.com";
            props.put(Context.SECURITY_PRINCIPAL, principalName);
            props.put(Context.SECURITY_CREDENTIALS, password);
            props.put(Context.REFERRAL, "follow");
            //props.put(Context.SECURITY_AUTHENTICATION, "anonymous");
            Integer count = 0;
            for (String ldapServer : ldapServers) {
                try {
                    count++;
                    DirContext ctx = LdapCtxFactory.getLdapCtxInstance("ldap://" + ldapServer, props);
                    SearchControls searchCtls = new SearchControls();
                    // Specify the attributes to return
                    String returnedAtts[] = {"sn", "givenName", "mail"};
                    searchCtls.setReturningAttributes(returnedAtts);
                    // Specify the search scope
                    searchCtls.setSearchScope(SearchControls.SUBTREE_SCOPE);
                    // Specify the LDAP search filter (rewritten as a single valid string literal)
                    String searchFilter = "(&(objectClass=user)(sAMAccountName=testUserSecond)(userPassword=usertest@3))";
                    // Specify the base for the search
                    String searchBase = "DC=second,DC=com";
                    // initialize counter to total the results
                    int totalResults = 0;
                    // Search for objects using the filter
                    NamingEnumeration<SearchResult> answer = ctx.search(searchBase, searchFilter, searchCtls);
                    return;
                } catch (CommunicationException e) {
                    // this is what'll happen if one of the domain controllers is unreachable
                    if (count.equals(ldapServers.size())) {
                        // we've got no more servers to try, so throw the CommunicationException
                        // to indicate that we failed to reach an LDAP server
                        throw e;
                    }
                }
            }
        }

        private List<String> findLDAPServersInWindowsDomain(String domain) throws NamingException {
            List<String> servers = new ArrayList<String>();
            Hashtable<String, String> env = new Hashtable<String, String>();
            env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.dns.DnsContextFactory");
            env.put("java.naming.provider.url", "dns://");
            DirContext ctx = new InitialDirContext(env);
            // that's how Windows domain controllers are registered in DNS
            Attributes attributes = ctx.getAttributes("_ldap._tcp." + domain, new String[] { "SRV" });
            Attribute a = attributes.get("SRV");
            for (int i = 0; i < a.size(); i++) {
                String srvRecord = a.get(i).toString();
                // each SRV record is in the format "0 100 389 dc1.company.com."
                // priority weight port server (space separated)
                servers.add(srvRecord.split(" ")[3]);
            }
            ctx.close();
            return servers;
        }
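    As a quick sanity check outside Java, referral chasing can also be exercised with OpenLDAP's ldapsearch (a hedged sketch; the bind DN, password, base and filter below are placeholders standing in for the redacted accounts):

        # -C asks ldapsearch to chase any referrals returned by the first server
        ldapsearch -H ldap://firstDNS.com -C \
            -D "testUserFirst@firstDNS.com" -w 'password-here' \
            -b "DC=second,DC=com" "(sAMAccountName=testUserSecond)"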

    Read the article

  • Installing EclipseFP on Mac OS X

    - by Dom Kennedy
    I am trying to install EclipseFP. I'm running OS X Mavericks. I've tried following both the official installation instructions and the advice in this answer on SU, but I'm still having the same problem. I can get the plugin itself installed painlessly using Help -> Install New Software..., but when I restart and switch to the Haskell perspective, things start to go wrong.

    The installation instructions tell me that I should receive a prompt to install BuildWrapper and Scion Browser. I do not receive this prompt. Furthermore, if I create a new Haskell project, my code has no syntax highlighting, and the Hoogle search feature does not appear to do anything. It's clear that the plugin is not set up correctly yet. I've tried running cabal update in Terminal, but this does not change anything.

    After several attempts going round in circles with this on Eclipse Juno, I uninstalled Eclipse and the Haskell Platform and performed a clean install of Eclipse Luna and the latest Haskell Platform. However, the problems are persisting.

    I've tried going into Preferences to see if I could sort any of this out manually. I should initially point out that my GHC installation seems to be correctly referenced under Preferences -> Haskell Implementations. Under Haskell -> Helper executables, there are areas for configuring the options of both BuildWrapper and Scion Browser. At present, both are blank. I tried clicking the "Install from Hackage..." button beside each of them with no success; I receive an error message saying:

        Expected executable <workspace>/.metadata/.plugins/net.sf.eclipsefp.haskell.ui/sandbox/.cabal-sandbox/bin/buildwrapper not found!

    (replace buildwrapper with scion-browser and the message is the same). The Eclipse console displays the following exception after doing the above with BuildWrapper:

        src/Language/Haskell/BuildWrapper/GHCStorage.hs:313:32:
            Not in scope: data constructor 'MatchGroup'
        cabal.real: Error: some packages failed to install:
        buildwrapper-0.7.4 failed during the building phase. The exception was:
        ExitFailure 1

    and after doing it for Scion Browser:

        zip-archive-0.2.3.4 (reinstall) changes: text-1.1.0.0 -> 0.11.3.1
        pandoc-1.12.3.3 (latest: 1.13) -http-conduit (new version)
        Graphalyze-0.14.1.0 (reinstall) changes: pandoc-1.12.4.2 -> 1.12.3.3, text-1.1.0.0 -> 0.11.3.1
        cabal.real: The following packages are likely to be broken by the reinstalls:
        pandoc-1.12.4.2
        unordered-containers-0.2.4.0
        aeson-0.7.0.4
        scientific-0.2.0.2
        case-insensitive-1.1.0.3
        HTTP-4000.2.10
        Use --force-reinstalls if you want to install anyway.

    After receiving similar results on previous attempts, I've tried using --force-reinstalls and ended up at more dead ends. I am at a loss as to what is wrong and how to solve this. I should point out that my GHC installation appears to be correctly configured under Preferences -> Haskell -> Haskell Implementations. Apologies if any of this information is irrelevant; I'm just not really sure what is important and what isn't at this point. Any help anyone could provide would be greatly appreciated.

    Read the article

  • Setting the source address to use the tun device does not work (Debian Squeeze)

    - by A. Donda
    There have been similar questions on Stack Exchange, but none of the answers helped me, so I'll try a question of my own.

    I have a VPN connection via OpenVPN. By default, all traffic is redirected through the tunnel using OpenVPN's "two more specific routes" trick, but I disabled that. My routing table is like this:

        198.144.156.141 192.168.2.1     255.255.255.255 UGH   0 0 0 eth0
        10.30.92.5      0.0.0.0         255.255.255.255 UH    0 0 0 tun1
        10.30.92.1      10.30.92.5      255.255.255.255 UGH   0 0 0 tun1
        192.168.2.0     0.0.0.0         255.255.255.0   U     0 0 0 eth0
        0.0.0.0         10.30.92.5      0.0.0.0         UG    0 0 0 tun1
        0.0.0.0         192.168.2.1     0.0.0.0         UG    0 0 0 eth0

    And the interface configuration is like this:

        # ifconfig
        eth0      Link encap:Ethernet  HWaddr XX-XX-
                  inet addr:192.168.2.100  Bcast:192.168.2.255  Mask:255.255.255.0
                  inet6 addr: fe80::211:9ff:fe8d:acbd/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:394869 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:293489 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:388519578 (370.5 MiB)  TX bytes:148817487 (141.9 MiB)
                  Interrupt:20 Base address:0x6f00

        tun1      Link encap:UNSPEC  HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00
                  inet addr:10.30.92.6  P-t-P:10.30.92.5  Mask:255.255.255.255
                  UP POINTOPOINT RUNNING NOARP MULTICAST  MTU:1500  Metric:1
                  RX packets:64 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:67 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:100
                  RX bytes:9885 (9.6 KiB)  TX bytes:4380 (4.2 KiB)

    plus the lo device. The routing table has two default routes: one via eth0 through my local network router (DSL modem) at 192.168.2.1, and another via tun1 through the VPN's gateway. With this configuration, if I connect to a site, the direct route is chosen (because it has fewer hops?):

        # traceroute 8.8.8.8 -n
        traceroute to 8.8.8.8 (8.8.8.8), 30 hops max, 60 byte packets
         1  192.168.2.1  0.427 ms  0.491 ms  0.610 ms
         2  213.191.89.13  17.981 ms  20.137 ms  22.141 ms
         3  62.109.108.48  23.681 ms  25.009 ms  26.401 ms
         ...

    This is fine, because my goal is to send only traffic from specific applications through the tunnel (esp. transmission, using its -i / bind-address-ipv4 option). To test whether this can work at all, I check it first with traceroute's -s option:

        # traceroute 8.8.8.8 -n -s 10.30.92.6
        traceroute to 8.8.8.8 (8.8.8.8), 30 hops max, 60 byte packets
         1  * * *
         2  * * *
         3  * * *
         ...

    This I take to mean that connecting using the tunnel's local address as the source is not possible. What is possible (though only as root) is to specify the source interface:

        # traceroute 8.8.8.8 -n -i tun1
        traceroute to 8.8.8.8 (8.8.8.8), 30 hops max, 60 byte packets
         1  10.30.92.1  129.337 ms  297.758 ms  297.725 ms
         2  * * *
         3  198.144.152.17  297.653 ms  297.652 ms  297.650 ms
         ...

    So apparently the tun1 interface is working and it is possible to send packets through it. But selecting the source interface is not implemented in my actual target application (transmission), so I would like to get source address selection to work. What am I doing wrong?
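    For context, the usual way to make a source-address-bound socket leave via the tunnel is source-based policy routing with iproute2. A minimal hedged sketch (the table name and number are arbitrary choices; the addresses are taken from the question):

        # one-time: create a named routing table (the number 100 is arbitrary)
        echo "100 vpn" >> /etc/iproute2/rt_tables

        # route everything whose source address is the tun1 address through the tunnel
        ip rule add from 10.30.92.6 table vpn
        ip route add default via 10.30.92.5 dev tun1 table vpn
        ip route flush cache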

    Read the article

  • Embedding ADF UI Components into OAF regions

    - by Juan Camilo Ruiz
    Having finished the two webcasts on ADF integration with Oracle E-Business Suite, Sara Woodhull, Principal Product Manager on the Oracle E-Business Suite Applications Technology team, and I are going to continue adding entries to the series on this topic, trying to cover as many use cases as possible. In this entry, Sara created an overview of how Oracle ADF pages can be embedded into an Oracle Application Framework region. This is a very interesting approach that will enable those of you who are exploring ADF as a technology stack to enhance some of the Oracle E-Business Suite flows and leverage your skills in Oracle Application Framework (OAF). In upcoming entries we will start unveiling the internals needed to achieve session sharing between the regions. Stay tuned for more entries and enjoy this new post.

    Document Scope: This document only covers information that is specific to embedding an Oracle ADF page in an Oracle Application Framework-based page. It assumes knowledge of Oracle ADF and Oracle Application Framework development. It also assumes knowledge of the material in My Oracle Support Note 974949.1, "Oracle E-Business Suite SDK for Java", and My Oracle Support Note 1296491.1, "FAQ for Integration of Oracle E-Business Suite and Oracle Application Development Framework (ADF) Applications".

    Prerequisite Patch: Download Patch 12726556:R12.FND.B from My Oracle Support and install it. The implementation described below requires Patch 12726556:R12.FND.B to provide the accessors for the ADF page. This patch is required in addition to the Oracle E-Business Suite SDK for Java patch described in My Oracle Support Note 974949.1.

    Development Environments: You need two different JDeveloper environments: Oracle ADF and OA Framework.

    Oracle ADF Development Environment: You build your Oracle ADF page using JDeveloper 11g. You should use JDeveloper 11g R1 (the latest is 11.1.1.6.0) if you need to use other products in the Oracle Fusion Middleware stack, such as Oracle WebCenter, Oracle SOA Suite, or BI. You should use JDeveloper 11g R2 (the latest is 11.1.2.3.0) if you do not need other Oracle Fusion Middleware products. JDeveloper 11g R2 is an Oracle ADF-specific release that supports the latest Java EE standards and has various core improvements.

    Oracle Application Framework Development Environment: Build your OA Framework page using a development environment corresponding to your Oracle E-Business Suite version. You must use Release 12.1.2 or later because the rich content container was introduced in Release 12.1.2. See "OA Framework - How to find the correct version of JDeveloper to use with eBusiness Suite 11i or Release 12.x" (My Oracle Support Doc ID 416708.1).

    Building your Oracle ADF Page: Typically you build your ADF page using the session management feature of the Oracle E-Business Suite SDK for Java as described in My Oracle Support Note 974949.1. Also see My Oracle Support Note 1296491.1, "FAQ for Integration of Oracle E-Business Suite and Oracle Application Development Framework (ADF) Applications".

    Building an ADF Page with the Hierarchy Viewer: If you are using the ADF hierarchy viewer, you should set up the structure and settings of the ADF page as follows, or the hierarchy viewer may not fill the entire area it is supposed to fill (this is especially a problem in Firefox). Create a stretchable component as the parent component for the hierarchy viewer, such as af:panelStretchLayout (underneath the af:form component in the structure). Use af:panelStretchLayout for Oracle ADF 11.1.1.6 and earlier; for later versions of Oracle ADF, use af:panelGridLayout. Create your hierarchy viewer component inside the stretchable component.

    Create Function in Oracle E-Business Suite Instance: In your Oracle E-Business Suite instance, create a function for your ADF page with the following parameters. You can use either the Functions window in the System Administrator responsibility or the Functions page in the Functional Administrator responsibility.

        Function Name
        Type=External ADF Function (ADFX)
        HTML Call=GWY.jsp?targetPage=faces/<your ADF page>

    You must also add your function to an Oracle E-Business Suite menu or permission set and set up function security or role-based access control (RBAC) so that the user has authorization to access the function. If you do not want the function to appear on the navigation menu, add the function without a menu prompt. See the Oracle E-Business Suite System Administrator's Guide Documentation Set for more information.

    Testing the Function from the Oracle E-Business Suite Home Page: It's a good idea to test launching your ADF page from the Oracle E-Business Suite Home Page. Add your function to the navigation menu for your responsibility with a prompt and try launching it. If your ADF page expects parameters from the surrounding page, those might not be available, however.

    Setting up the Oracle Application Framework Rich Container: Once you have built your Oracle ADF 11g page, you need to embed it in your Oracle Application Framework page.

    Create Rich Content Container in your OA Framework JDeveloper environment: In the OA Extension Structure pane for your OAF page, select the region where you want to add the rich content, and add a richContainer item to the region. Set the following properties on the richContainer item:

        id
        Content Type=Others (for Release 12.1.3; this property value may change in a future release)
        Destination Function=[function code]
        Width (in pixels or percent, such as 100%)
        Height (in pixels)
        Parameters=[any parameters your Oracle ADF page is expecting to receive from the Oracle Application Framework page]

    Parameters: In the Parameters property, specify parameters that will be passed to the embedded content as a list of comma-separated, name-value pairs. Dynamic parameters may be specified as paramName={@viewAttr}.

    Dynamic Rich Content Container Properties: If you want your rich content container to display a different Oracle ADF page depending on other information, you would set up a different function for each Oracle ADF page. You would then set the Destination Function and Parameters properties programmatically, instead of setting them in the Property Inspector. In the processRequest() method of your Oracle Application Framework page controller, where OAFRichContentPage is the ID of your richContainer item and the parameters are whatever parameters your ADF page expects, your code might look similar to this code fragment:

        OARichContainerBean richBean = (OARichContainerBean) webBean.findChildRecursive("OAFRichContentPage");
        if (richBean != null) {
            if (isFirstCondition) {
                richBean.setFunctionName("ADF_EXAMPLE_EMBEDDED");
                richBean.setParameters("ParamLoginPersonId=" + loginPersonId
                    + "&ParamPersonId=" + personId + "&ParamUserId=" + userId
                    + "&ParamRespId=" + respId + "&ParamRespApplId=" + respApplId
                    + "&ParamFromOA=Y" + "&ParamSecurityGroupId=" + securityGroupId);
            } else if (isSecondCondition) {
                richBean.setFunctionName("ADF_EXAMPLE_OTHER_FUNCTION");
                richBean.setParameters("ParamLoginPersonId=" + loginPersonId
                    + "&ParamPersonId=" + personId + "&ParamUserId=" + userId
                    + "&ParamRespId=" + respId + "&ParamRespApplId=" + respApplId
                    + "&ParamFromOA=Y" + "&ParamSecurityGroupId=" + securityGroupId);
            }
        }

    Read the article

  • Enabling Kerberos Authentication for Reporting Services

    - by robcarrol
    Recently, I’ve helped several customers with Kerberos authentication problems with Reporting Services and Analysis Services, so I’ve decided to write this blog post and pull together some useful resources in one place (there are 2 whitepapers in particular that I found invaluable configuring Kerberos authentication, and these can be found in the references section at the bottom of this post). In most of these cases, the problem has manifested itself with the Login failed for User ‘NT Authority\Anonymous’ (“double-hop”) error. By default, Reporting Services uses Windows Integrated Authentication, which includes the Kerberos and NTLM protocols for network authentication. Additionally, Windows Integrated Authentication includes the negotiate security header, which prompts the client to select Kerberos or NTLM for authentication. The client can access reports which have the appropriate permissions by using Kerberos for authentication. Servers that use Kerberos authentication can impersonate those clients and use their security context to access network resources. You can configure Reporting Services to use both Kerberos and NTLM authentication; however this may lead to a failure to authenticate. With negotiate, if Kerberos cannot be used, the authentication method will default to NTLM. When negotiate is enabled, the Kerberos protocol is always used except when: Clients/servers that are involved in the authentication process cannot use Kerberos. The client does not provide the information necessary to use Kerberos. An in-depth discussion of Kerberos authentication is beyond the scope of this post, however when users execute reports that are configured to use Windows Integrated Authentication, their logon credentials are passed from the report server to the server hosting the data source. Delegation needs to be set on the report server and Service Principle Names (SPNs) set for the relevant services. When a user processes a report, the request must go through a Web server on its way to a database server for processing. Kerberos authentication enables the Web server to request a service ticket from the domain controller; impersonate the client when passing the request to the database server; and then restrict the request based on the user’s permissions. Each time a server is required to pass the request to another server, the same process must be used. Kerberos authentication is supported in both native and SharePoint integrated mode, but I’ll focus on native mode for the purpose of this post (I’ll explain configuring SharePoint integrated mode and Kerberos authentication in a future post). Configuring Kerberos avoids the authentication failures due to double-hop issues. These double-hop errors occur when a users windows domain credentials can’t be passed to another server to complete the user’s request. In the case of my customers, users were executing Reporting Services reports that were configured to query Analysis Services cubes on a separate machine using Windows Integrated security. The double-hop issue occurs as NTLM credentials are valid for only one network hop, subsequent hops result in anonymous authentication. The client attempts to connect to the report server by making a request from a browser (or some other application), and the connection process begins with authentication. With NTLM authentication, client credentials are presented to Computer 2. However Computer 2 can’t use the same credentials to access Computer 3 (so we get the Anonymous login error). 
To access Computer 3 it is necessary to configure the connection string with stored credentials, which is what a number of customers I have worked with have done to workaround the double-hop authentication error. However, to get the benefits of Windows Integrated security, a better solution is to enable Kerberos authentication. Again, the connection process begins with authentication. With Kerberos authentication, the client and the server must demonstrate to one another that they are genuine, at which point authentication is successful and a secure client/server session is established. In the illustration above, the tiers represent the following: Client tier (computer 1): The client computer from which an application makes a request. Middle tier (computer 2): The Web server or farm where the client’s request is directed. Both the SharePoint and Reporting Services server(s) comprise the middle tier (but we’re only concentrating on native deployments just now). Back end tier (computer 3): The Database/Analysis Services server/Cluster where the requested data is stored. In order to enable Kerberos authentication for Reporting Services it’s necessary to configure the relevant SPNs, configure trust for delegation for server accounts, configure Kerberos with full delegation and configure the authentication types for Reporting Services. Service Principle Names (SPNs) are unique identifiers for services and identify the account’s type of service. If an SPN is not configured for a service, a client account will be unable to authenticate to the servers using Kerberos. You need to be a domain administrator to add an SPN, which can be added using the SetSPN utility. For Reporting Services in native mode, the following SPNs need to be registered --SQL Server Service SETSPN -S mssqlsvc/servername:1433 Domain\SQL For named instances, or if the default instance is running under a different port, then the specific port number should be used. --Reporting Services Service SETSPN -S http/servername Domain\SSRS SETSPN -S http/servername.domain.com Domain\SSRS The SPN should be set for the NETBIOS name of the server and the FQDN. If you access the reports using a host header or DNS alias, then that should also be registered SETSPN -S http/www.reports.com Domain\SSRS --Analysis Services Service SETSPN -S msolapsvc.3/servername Domain\SSAS Next, you need to configure trust for delegation, which refers to enabling a computer to impersonate an authenticated user to services on another computer: Location Description Client 1. The requesting application must support the Kerberos authentication protocol. 2. The user account making the request must be configured on the domain controller. Confirm that the following option is not selected: Account is sensitive and cannot be delegated. Servers 1. The service accounts must be trusted for delegation on the domain controller. 2. The service accounts must have SPNs registered on the domain controller. If the service account is a domain user account, the domain administrator must register the SPNs. 
In Active Directory Users and Computers, verify that the domain user accounts used to access reports have been configured for delegation (the ‘Account is sensitive and cannot be delegated’ option should not be selected). We then need to configure the Reporting Services service account and computer to use Kerberos with full delegation. We also need to do the same for the SQL Server or Analysis Services service accounts and computers (depending on what type of data source you are connecting to in your reports).
Finally, and this is the part that sometimes gets overlooked, we need to configure the authentication type correctly for Reporting Services to use Kerberos authentication. This is configured in the Authentication section of the RSReportServer.config file on the report server:
<Authentication>
  <AuthenticationTypes>
    <RSWindowsNegotiate/>
  </AuthenticationTypes>
  <EnableAuthPersistence>true</EnableAuthPersistence>
</Authentication>
This will enable Kerberos authentication for Internet Explorer. For other browsers, see the link below. The report server instance must be restarted for these changes to take effect. Once these changes have been made, all that’s left to do is test that Kerberos authentication is working properly by running a report from Report Manager that is configured to use Windows Integrated authentication (connecting to either an Analysis Services or a SQL Server back end).
Resources:
Manage Kerberos Authentication Issues in a Reporting Services Environment
http://download.microsoft.com/download/B/E/1/BE1AABB3-6ED8-4C3C-AF91-448AB733B1AF/SSRSKerberos.docx
Configuring Kerberos Authentication for Microsoft SharePoint 2010 Products
http://www.microsoft.com/download/en/details.aspx?displaylang=en&id=23176
How to: Configure Windows Authentication in Reporting Services
http://msdn.microsoft.com/en-us/library/cc281253.aspx
RSReportServer Configuration File
http://msdn.microsoft.com/en-us/library/ms157273.aspx#Authentication
Planning for Browser Support
http://msdn.microsoft.com/en-us/library/ms156511.aspx

    Read the article

  • Session memory – who’s this guy named Max and what’s he doing with my memory?

    - by extended_events
    SQL Server MVP Jonathan Kehayias (blog) emailed me a question last week when he noticed that the total memory used by the buffers for an event session was larger than the value he specified for the MAX_MEMORY option in the CREATE EVENT SESSION DDL. The answer here seems like an excellent subject for me to kick-off my new “401 – Internals” tag that identifies posts where I pull back the curtains a bit and let you peek into what’s going on inside the extended events engine. In a previous post (Option Trading: Getting the most out of the event session options) I explained that we use a set of buffers to store the event data before  we write the event data to asynchronous targets. The MAX_MEMORY along with the MEMORY_PARTITION_MODE defines how big each buffer will be. Theoretically, that means that I can predict the size of each buffer using the following formula: max memory / # of buffers = buffer size If it was that simple I wouldn’t be writing this post. I’ll take “boundary” for 64K Alex For a number of reasons that are beyond the scope of this blog, we create event buffers in 64K chunks. The result of this is that the buffer size indicated by the formula above is rounded up to the next 64K boundary and that is the size used to create the buffers. If you think visually, this means that the graph of your max_memory option compared to the actual buffer size that results will look like a set of stairs rather than a smooth line. You can see this behavior by looking at the output of dm_xe_sessions, specifically the fields related to the buffer sizes, over a range of different memory inputs: Note: This test was run on a 2 core machine using per_cpu partitioning which results in 5 buffers. (Seem my previous post referenced above for the math behind buffer count.) input_memory_kb total_regular_buffers regular_buffer_size total_buffer_size 637 5 130867 654335 638 5 130867 654335 639 5 130867 654335 640 5 196403 982015 641 5 196403 982015 642 5 196403 982015 This is just a segment of the results that shows one of the “jumps” between the buffer boundary at 639 KB and 640 KB. You can verify the size boundary by doing the math on the regular_buffer_size field, which is returned in bytes: 196403 – 130867 = 65536 bytes 65536 / 1024 = 64 KB The relationship between the input for max_memory and when the regular_buffer_size is going to jump from one 64K boundary to the next is going to change based on the number of buffers being created. The number of buffers is dependent on the partition mode you choose. If you choose any partition mode other than NONE, the number of buffers will depend on your hardware configuration. (Again, see the earlier post referenced above.) With the default partition mode of none, you always get three buffers, regardless of machine configuration, so I generated a “range table” for max_memory settings between 1 KB and 4096 KB as an example. 
start_memory_range_kb end_memory_range_kb total_regular_buffers regular_buffer_size total_buffer_size 1 191 NULL NULL NULL 192 383 3 130867 392601 384 575 3 196403 589209 576 767 3 261939 785817 768 959 3 327475 982425 960 1151 3 393011 1179033 1152 1343 3 458547 1375641 1344 1535 3 524083 1572249 1536 1727 3 589619 1768857 1728 1919 3 655155 1965465 1920 2111 3 720691 2162073 2112 2303 3 786227 2358681 2304 2495 3 851763 2555289 2496 2687 3 917299 2751897 2688 2879 3 982835 2948505 2880 3071 3 1048371 3145113 3072 3263 3 1113907 3341721 3264 3455 3 1179443 3538329 3456 3647 3 1244979 3734937 3648 3839 3 1310515 3931545 3840 4031 3 1376051 4128153 4032 4096 3 1441587 4324761 As you can see, there are 21 “steps” within this range and max_memory values below 192 KB fall below the 64K per buffer limit so they generate an error when you attempt to specify them. Max approximates True as memory approaches 64K The upshot of this is that the max_memory option does not imply a contract for the maximum memory that will be used for the session buffers (Those of you who read Take it to the Max (and beyond) know that max_memory is really only referring to the event session buffer memory.) but is more of an estimate of total buffer size to the nearest higher multiple of 64K times the number of buffers you have. The maximum delta between your initial max_memory setting and the true total buffer size occurs right after you break through a 64K boundary, for example if you set max_memory = 576 KB (see the green line in the table), your actual buffer size will be closer to 767 KB in a non-partitioned event session. You get “stepped up” for every 191 KB block of initial max_memory which isn’t likely to cause a problem for most machines. Things get more interesting when you consider a partitioned event session on a computer that has a large number of logical CPUs or NUMA nodes. Since each buffer gets “stepped up” when you break a boundary, the delta can get much larger because it’s multiplied by the number of buffers. For example, a machine with 64 logical CPUs will have 160 buffers using per_cpu partitioning or if you have 8 NUMA nodes configured on that machine you would have 24 buffers when using per_node. If you’ve just broken through a 64K boundary and get “stepped up” to the next buffer size you’ll end up with total buffer size approximately 10240 KB and 1536 KB respectively (64K * # of buffers) larger than max_memory value you might think you’re getting. Using per_cpu partitioning on large machine has the most impact because of the large number of buffers created. If the amount of memory being used by your system within these ranges is important to you then this is something worth paying attention to and considering when you configure your event sessions. The DMV dm_xe_sessions is the tool to use to identify the exact buffer size for your sessions. In addition to the regular buffers (read: event session buffers) you’ll also see the details for large buffers if you have configured MAX_EVENT_SIZE. The “buffer steps” for any given hardware configuration should be static within each partition mode so if you want to have a handy reference available when you configure your event sessions you can use the following code to generate a range table similar to the one above that is applicable for your specific machine and chosen partition mode. 
DECLARE @buf_size_output table (input_memory_kb bigint, total_regular_buffers bigint, regular_buffer_size bigint, total_buffer_size bigint) DECLARE @buf_size int, @part_mode varchar(8) SET @buf_size = 1 -- Set to the begining of your max_memory range (KB) SET @part_mode = 'per_cpu' -- Set to the partition mode for the table you want to generate WHILE @buf_size <= 4096 -- Set to the end of your max_memory range (KB) BEGIN     BEGIN TRY         IF EXISTS (SELECT * from sys.server_event_sessions WHERE name = 'buffer_size_test')             DROP EVENT SESSION buffer_size_test ON SERVER         DECLARE @session nvarchar(max)         SET @session = 'create event session buffer_size_test on server                         add event sql_statement_completed                         add target ring_buffer                         with (max_memory = ' + CAST(@buf_size as nvarchar(4)) + ' KB, memory_partition_mode = ' + @part_mode + ')'         EXEC sp_executesql @session         SET @session = 'alter event session buffer_size_test on server                         state = start'         EXEC sp_executesql @session         INSERT @buf_size_output (input_memory_kb, total_regular_buffers, regular_buffer_size, total_buffer_size)             SELECT @buf_size, total_regular_buffers, regular_buffer_size, total_buffer_size FROM sys.dm_xe_sessions WHERE name = 'buffer_size_test'     END TRY     BEGIN CATCH         INSERT @buf_size_output (input_memory_kb)             SELECT @buf_size     END CATCH     SET @buf_size = @buf_size + 1 END DROP EVENT SESSION buffer_size_test ON SERVER SELECT MIN(input_memory_kb) start_memory_range_kb, MAX(input_memory_kb) end_memory_range_kb, total_regular_buffers, regular_buffer_size, total_buffer_size from @buf_size_output group by total_regular_buffers, regular_buffer_size, total_buffer_size Thanks to Jonathan for an interesting question and a chance to explore some of the details of Extended Event internals. - Mike
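If you just want to check the actual buffer sizes for a session that is already running, rather than generate the whole range table, a minimal query against the same DMV is enough; the session name below (system_health) is only an example, so substitute your own:

-- Inspect the regular and large buffer sizes for a running event session.
SELECT  s.name,
        s.total_regular_buffers,
        s.regular_buffer_size,
        s.total_buffer_size,
        s.total_large_buffers,
        s.large_buffer_size
FROM    sys.dm_xe_sessions AS s
WHERE   s.name = 'system_health';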

    Read the article

  • The Incremental Architect´s Napkin - #1 - It´s about the money, stupid

    - by Ralf Westphal
    Originally posted on: http://geekswithblogs.net/theArchitectsNapkin/archive/2014/05/24/the-incremental-architectacutes-napkin---1---itacutes-about-the.aspx Software development is an economic endeavor. A customer is only willing to pay for value. What makes a software valuable is required to become a trait of the software. We as software developers thus need to understand and then find a way to implement requirements. Whether or in how far a customer really can know beforehand what´s going to be valuable for him/her in the end is a topic of constant debate. Some aspects of the requirements might be less foggy than others. Sometimes the customer does not know what he/she wants. Sometimes he/she´s certain to want something - but then is not happy when that´s delivered. Nevertheless requirements exist. And developers will only be paid if they deliver value. So we better focus on doing that. Although is might sound trivial I think it´s important to state the corollary: We need to be able to trace anything we do as developers back to some requirement. You decide to use Go as the implementation language? Well, what´s the customer´s requirement this decision is linked to? You decide to use WPF as the GUI technology? What´s the customer´s requirement? You decide in favor of a layered architecture? What´s the customer´s requirement? You decide to put code in three classes instead of just one? What´s the customer´s requirement behind that? You decide to use MongoDB over MySql? What´s the customer´s requirement behind that? etc. I´m not saying any of these decisions are wrong. I´m just saying whatever you decide be clear about the requirement that´s driving your decision. You have to be able to answer the question: Why do you think will X deliver more value to the customer than the alternatives? Customers are not interested in romantic ideals of hard working, good willing, quality focused craftsmen. They don´t care how and why you work - as long as what you deliver fulfills their needs. They want to trust you to recognize this as your top priority - and then deliver. That´s all. Fundamental aspects of requirements If you´re like me you´re probably not used to such scrutinization. You want to be trusted as a professional developer - and decide quite a few things following your gut feeling. Or by relying on “established practices”. That´s ok in general and most of the time - but still… I think we should be more conscious about our decisions. Which would make us more responsible, even more professional. But without further guidance it´s hard to reason about many of the myriad decisions we´ve to make over the course of a software project. What I found helpful in this situation is structuring requirements into fundamental aspects. Instead of one large heap of requirements then there are smaller blobs. With them it´s easier to check if a decisions falls in their scope. Sure, every project has it´s very own requirements. But all of them belong to just three different major categories, I think. Any requirement either pertains to functionality, non-functional aspects or sustainability. For short I call those aspects: Functionality, because such requirements describe which transformations a software should offer. For example: A calculator software should be able to add and multiply real numbers. An auction website should enable you to set up an auction anytime or to find auctions to bid for. Quality, because such requirements describe how functionality is supposed to work, e.g. fast or secure. 
For example: A calculator should be able to calculate the sine of a value much faster than you could in your head. An auction website should accept bids from millions of users. Security of Investment, because functionality and quality need not just be delivered in any way. It´s important to the customer to get them quickly - and not only today but over the course of several years. This aspect introduces time into the “requirements equation”. Security of Investments (SoI) sure is a non-functional requirement. But I think it´s important to not subsume it under the Quality (Q) aspect. That´s because SoI has quite special properties. For one, SoI for software means something completely different from what it means for hardware. If you buy hardware (a car, a hair blower) you find that a worthwhile investment, if the hardware does not change its functionality or quality over time. A car still running smoothly with hardly any rust spots after 10 years of daily usage would be a very secure investment. So for hardware (or material products, if you like) “unchangeability” (in the face of usage) is desirable. With software you want the contrary. Software that cannot be changed is a waste. SoI for software means “changeability”. You want to be sure that the software you buy/order today can be changed, adapted, improved over an unforeseeable number of years so as to fit changes in its usage environment. But that´s not the only reason why the SoI aspect is special. On top of changeability[1] (or evolvability) comes immeasurability. Evolvability cannot readily be measured by counting something. Whether the changeability is as high as the customer wants cannot be determined by looking at metrics like Lines of Code or Cyclomatic Complexity or Afferent Coupling. They may give a hint… but they are far, far from precise. That´s because of the nature of changeability. It´s different from performance or scalability. Also it´s because a customer cannot tell upfront “how much” evolvability he/she wants. Whether requirements regarding Functionality (F) and Q have been met, a customer can tell you very quickly and very precisely. A calculation is missing, the calculation takes too long, the calculation time degrades with increased load, the calculation is accessible to the wrong users etc. That´s all very or at least comparatively easy to determine. But changeability… That´s a whole different thing. Nevertheless over time the customer will develop a feeling for whether changeability is good enough or degrading. He/she just has to check the development of the frequency of “WTF”s from developers ;-) F and Q are “timeless” requirement categories. Customers want us to deliver on them now. Just focusing on the now, though, is rarely beneficial in the long run. So SoI adds a counterweight to the requirements picture. Customers want SoI - whether they know it or not, whether they state it explicitly or not.
In closing
A customer´s requirements are not monolithic. They are not all made the same. Rather they fall into different categories. We as developers need to recognize these categories when confronted with some requirement - and take them into account. Only then can we make true professional decisions, i.e. conscious and responsible ones.
[1] I call this fundamental trait of software “changeability” and not “flexibility” to distinguish to whom it´s a concern. “Flexibility” to me means that the software, as it is, can easily be adapted to a change in its environment, e.g. 
by tweaking some config data or adding a library which gets picked up by a plug-in engine. “Flexibility” thus is a matter of some user. “Changeability”, on the other hand, to me means that the software can easily be changed in its structure to adapt it to new requirements. That´s a matter of the software developer.

    Read the article

  • The Partner Perspective from Oracle OpenWorld 2012 - IDC’s Darren Bibby report

    - by Richard Lefebvre
    Below is IDC’s Darren Bibby report on ‘The Partner Perspective from Oracle OpenWorld 2012’. If you missed the 2012 edition, I trust this will give you the willingness to attend next year one! October 26, 2012 I attended my fourth Oracle OpenWorld earlier in October. I always go in with the lens of, "What's in it for partners this year?" Although it's primarily thought of as a customer event - and yes, the bulk of the almost 50,000 attendees are customers - this year's conference was clearly the largest and most important partner event Oracle has ever run. Oracle PartnerNetwork (OPN) Exchange There were more partner attendees than ever, with Oracle citing somewhere around 5000. But the format for partners this year was different. And it was better. Traditionally, Oracle hosts a one-day only Partner Forum on the Sunday before the customer-focused conference begins. This year, the partner content still began on the Sunday, but the worldwide alliances and channels group created an exclusive track throughout the week, just for partners. It featured content specifically targeted towards partners, and was anchored at a nearby hotel. This was a great move for Oracle. The Oracle PartnerNetwork (OPN) team has been in a tricky position for years in that they have enough partners that they need a landmark event in the year, but perhaps not enough to justify a separate, worldwide, large, partner-only event. Coinciding a four day event with Oracle OpenWorld, where anybody who's anybody in the Oracle world attends anyway, is a good solution. The channels leadership team can build from this success for an even better conference next year. It's expected that they will follow a similar strategy. Cloud Announcements for Partners As for the content, it was primarily about the Cloud. For customers, for VARs, for ISVs, for everyone. There were five key Cloud related announcements for partners at the event: Cloud Builder Specialization. This is one of the first broader Specializations that isn't focused on one unique product. It is a designation for partners that offer design and implementation services for private cloud solutions. As such, it will surely be something that nearly every partner will consider, and many will pursue. New Specializations for Cloud Services. Unlike the broad, almost "strategy-level" Specialization above, there are a group of new product-based "merit badges" for many of the new Cloud offerings. Think about a Specialization for the Cloud version of HCM, for instance. Each of these particular specializations will also have Rapid Start implementation methodologies that allow a partner to offer a fixed scope and fixed price bid to customers. Based on the learnings from Oracle Consulting, this means a partner might be able to deliver Cloud HCM in six weeks for a fixed price. In the end, this means more consistent experiences for Oracle customers. Cloud Resale Program. For those partners who achieve one of these Cloud Specializations, it will mean they can actually resell the subscription-based Cloud product. This is important because it has been somewhat of a rarity in the emerging Cloud channel for partners to be able to "take the paper", take the revenue, do the billing, be first line of support etc. This is an important step for Oracle and one the partners will be happy to see. Cloud Referral Program. 
For those partners who are not as engaged with these specific Cloud products that the Specializations revolve around, there is a new referral program that provides an incentive to recommend Oracle Cloud products. This one-two punch of referral and resale programs is similar in many ways to other vendors who allow more committed partners to resell, while more casual partners can collect fees. It's the model that seems to work. The key to allow a company to resell a subscription product - something that is inherently delivered directly between the vendor and customer - is trust. Achieving a specialization is a good bar to have to meet. Platform as a Service for ISVs. Leveraging some of the overall announcements made by CEO Larry Ellison around a cloud version of its famous database, Oracle also outlined a new ability for ISVs to build cloud services on its new PaaS offering. Details were less available for this announcement, though it's an expected and fitting play for ISVs comfortable with Oracle technology who can now more easily build out cloud applications. There wasn't much talk of an app store to go along with this, but surely it's in the works. Specializations And "The Gap" Coming back to Specializations, Oracle PartnerNetwork (OPN) has 4600 partners worldwide that hold 20,000 Specializations. These are impressive numbers just three years into the new OPN framework. The actual number of Specializations has also grown significantly, up to 111 today and soon around 125 or so with the new Cloud designations. Oracle may need to look at grouping some of these and creating higher level, broader designations that partners could achieve by earning several Specializations in that group. At 125 and growing, this is a lot. On the top of the pyramid, Hitachi Ltd. successfully became the eleventh partner to make it to the highly prestigious Diamond level. Partner programs partially exist in order to recognize capable partners. And it's more than abundantly clear that the Diamond level does this. But I think Oracle has a gap. Specializations show capability in a very specific product area, and all sizes of partners can achieve these. The next level at which to show a level of expertise is the Advanced Specialization. However, this is a massive step up from the regular Specialization. The advanced level requires 50 people to have certification in that particular product area. Most other industry programs have similar higher level statuses, but none are even close to that number. Whereas a customer who sees an Oracle partner with an advanced specialization can be very sure of capability, there is a gap in that there are hundreds or even thousands of 20-50 person solution providers who are top notch in their area of expertise. They will never get to Advanced due to numbers alone. These boutique partners don't really have a way of showing off their talents in the current program. Advanced may not need to be so high to really show that a company has deep expertise. Overall it was a very successful Oracle OpenWorld for Oracle partners of all sizes. There was progress made on making it a bigger and more relevant event. And also on catching up and maybe even leading in some cases with cloud opportunities for partners.

    Read the article

  • Automating Form Login

    - by Greg_Gutkin
    Introduction
A common task in configuring a web application for proxying in Pagelet Producer is setting up form autologin. PP provides a wizard-like tool for detecting the login form fields, but this is usually only the first step in configuring this feature. If the generated configuration doesn't seem to work, some additional manual modifications will be needed to complete the setup. This article will try to guide you through this process while steering you away from common pitfalls. For the purposes of this article, let's assume the following characteristics about your environment:
Web Application Base URL: http://host/app (configured as Resource Source URL in PP)
Pagelet Producer Base URL: http://pp/pagelets
Form Field Auto-Detection
Form autologin is configured in the PP Admin UI under resource_name/Autologin/Form Login. First, you'll enter the URL to the login form under "Login Form Identification". This will enable the admin wizard to connect to and display the login page.
Caution: Redirects. Make sure the entered URL matches what you see in the browser's address bar when the application login page is displayed. For example, even though you may be able to reach the login page by simply typing http://host/app, the URL you end up on may change to http://host/app/login via browser redirect(s). The second URL is the one you will want to use.
Caution: External Login Servers. The login page may actually come from a different server than the application you are trying to proxy. For example, you may notice that the login page URL changes to http://hostB/appB. This is common when external SSO products are involved. There are two ways of dealing with this situation. One is to configure Pagelet Producer to participate in SSO. This approach is out of scope of this article and is discussed in a separate whitepaper (TODO add link). The second approach is to use the autologin feature to provide stored credentials to the SSO login form. Since the login form URL is not an extension of the application base URL (PP resource URL), you will need to add a new PP resource for the SSO server and configure the login form on that resource instead of the original application resource. One side benefit of this additional resource is that it can be reused for other applications relying on the same SSO server for login.
After entering the login page URL (make sure the dropdown says "URL"), click "Automatically Detect Form Fields". This will bring up the web app's login page in a new browser window. Fill it out and submit it as you would normally. If everything goes right, Pagelet Producer will intercept the submitted values and fill out all the needed configuration data in the Admin UI. If the login form window doesn't close or configuration data doesn't get filled in, you may not have entered the login page URL correctly. Review the two cautionary notes above and make any necessary changes. If the form fields got filled in automatically, it's time to save the configuration and test it out. If you can access a protected area of the backend application via a proxied PP URL without filling out its login form, then you are pretty much done with login form configuration. The only other step you will need to complete before declaring this aspect of configuration production ready is configuring the form field source. You may skip to that section below.
Manual Login Form Identification
Let's take a closer look at Login Form Identification. This determines how Pagelet Producer recognizes login forms as such. 
URL The most efficient way of detecting login forms is by looking at the page URL. This method can only be used under the following conditions: Login page URL must be different from the post login application URLs. Login page URL must stay constant regardless of the path it takes to reach the page. For example, reaching the login page by going to the application base URL or to a specific protected URL must result in a redirect to the same login page URL (query string excluded). If only the query string parameters change, just leave out the query string from the configured login page URL. If either of these conditions is not fullfilled, you must switch to the RegEx approach below. RegEx If the login page URL is not uniform enough across all scenarios or is indistinguishable from other page locations, PP can be configured to recognize it by looking at the page markup itself. This is accomplished by changing the dropdown to "RegEx". If regular expressions scare you, take comfort from the fact that in most cases you won't need to enter any special regex characters. Let's look at an example: Say you have a login form that looks like <form id='loginForm' action='login?from=pageA' > <input id='user'> <input id='pass'> </form> Since this form has an id attribute, you can be reasonably sure that this login form can be uniquely identified across the web application by this snippet: "id='loginForm'". (Unless, of course your backend web application contains login forms to other apps). Since no wildcards are needed to find this snippet, you can just enter it as is into the RegEx field - no special regular expression characters needed! If the web developer who created the form wasn't kind enough to provide a unique id, you will need to look for other snippets of the page to uniquely identify it. It could be the action URL, an input field id, or some other markup fragment. You should abstain from using UI text as an identifier it may change in translated versions of the page and prevent the login page logic from working for international users. You may need to turn to regular expression wildcard syntax if no simple matches work. For more information on regular expression, refer to the Resources section. Form Submit Location Now we'll look at the form submit location. If the captured URL contains query string parameters that will likely change from one form submission to the next, you will need to change its type to RegEx. This type will tell Pagelet Producer to parse the login page for the action URL and submit to the value found. The regular expression needs to point at the actual action URL with its first grouping expression. Taking the example form definition above, the form submit location regex would be: action='(.*?)' The parentheses are used to identify the actual action URL, while the rest of the expression provides the context for finding it. Expression .*? is a so-called reluctant wildcard that matches any character excluding the single quote that follows. See Resources section below for further information on regular expressions. Manual Form Field Detection If the Admin UI form field detection wizard fails to populate login form configuration page, you will have to enter the fields by hand. Use a built-in browser developer tool or addon (e.g. Firebug) to inspect the form element and its children input elements. For each input element (including hidden elements), create an entry under Form Fields. Change its Source according to the next section. 
Form Field Source
Change the source of any of the fields not exposed to the users of the login form (i.e. hidden fields) to "Generated". This means Pagelet Producer will just use the values returned by the web app rather than supplying values it stored. For fields that contain sensitive data or vary from user to user (e.g. username & password), change the source to User (Credential) Vault.
Logging Support
To help you troubleshoot your autologin configuration, PP provides some useful logging support. To turn on detailed logging for the autologin feature, navigate to Settings in the Admin UI. Under Logging, change the log level for AutoLogin to Finest.
Known Limitations
The autologin feature may not work as expected if login form fields (not just the values, but the DOM elements themselves) are generated dynamically by client-side JavaScript.
Resources
RegEx
RegEx Reference from Java
RegEx Test Tool

    Read the article

  • Creating a multi-column rollover image gallery with HTML 5

    - by nikolaosk
    I know it has been a while since I blogged about HTML 5. I have two posts in this blog about HTML 5. You can find them here and here. I am creating a small content website (only text, images and a contact form) for a friend of mine. He wanted to create a rollover gallery. The whole concept is that we have some small thumbnails on a page, the user hovers over them and they appear enlarged in a designated container/placeholder on the page. I am trying not to use JavaScript for effects on a web page, and this is what I will be doing in this post. Well, some people will say that HTML 5 is not supported in all browsers. That is true, but most modern browsers support most of its recommendations. For people who still use IE6 some hacks must be devised. Well, to be totally honest, I cannot understand why anyone at this day and age is still using IE 6.0. That really is beyond me. The point of having a web browser is to be able to ENJOY the great experience that the Web offers today. Two very nice sites that show you what features and specifications are implemented by various browsers and their versions are http://caniuse.com/ and http://html5test.com/. At this time Chrome seems to support most of the HTML 5 specifications. Another excellent way to find out if the browser supports HTML 5 and CSS 3 features is to use the lightweight JavaScript library Modernizr. In this hands-on example I will be using Expression Web 4.0. This application is not a free application. You can use any HTML editor you like. You can use Visual Studio 2012 Express edition. You can download it here. In order to be absolutely clear, this is not (and could not be) a detailed tutorial on HTML 5. There are other great resources for that. Navigate to the excellent interactive tutorials of W3Schools. Another excellent resource is HTML 5 Doctor. For the people who are not yet convinced that they should invest time and resources in becoming experts in HTML 5, I should point out that HTML 5 websites will be ranked higher than others. Search engines will be able to better locate the content of our site and its relevance/importance since it is using semantic tags. Let's move now to the actual hands-on example. In this case (since I am a mad Liverpool supporter) I will create a rollover image gallery of Liverpool F.C. legends. I create a folder on my desktop. 
I name it Liverpool Gallery. Then I create two subfolders in it, large-images (I place the large images in there) and thumbs (I place the small images in there). Then I create an empty .html file called LiverpoolLegends.html and an empty .css file called style.css. Please have a look at the HTML markup that I typed in my fancy editor package below:
<!doctype html>
<html lang="en">
<head>
<title>Liverpool Legends Gallery</title>
<meta charset="utf-8">
<link rel="stylesheet" type="text/css" href="style.css">
</head>
<body>
<header>
<h1>A page dedicated to Liverpool Legends</h1>
<h2>Do hover over the images with the mouse to see the full picture</h2>
</header>
<ul id="column1">
<li><a href="#"><img src="thumbs/john-barnes.jpg" alt=""><img class="large" src="large-images/john-barnes-large.jpg" alt=""></a></li>
<li><a href="#"><img src="thumbs/ian-rush.jpg" alt=""><img class="large" src="large-images/ian-rush-large.jpg" alt=""></a></li>
<li><a href="#"><img src="thumbs/graeme-souness.jpg" alt=""><img class="large" src="large-images/graeme-souness-large.jpg" alt=""></a></li>
</ul>
<ul id="column2">
<li><a href="#"><img src="thumbs/steven-gerrard.jpg" alt=""><img class="large" src="large-images/steven-gerrard-large.jpg" alt=""></a></li>
<li><a href="#"><img src="thumbs/kenny-dalglish.jpg" alt=""><img class="large" src="large-images/kenny-dalglish-large.jpg" alt=""></a></li>
<li><a href="#"><img src="thumbs/robbie-fowler.jpg" alt=""><img class="large" src="large-images/robbie-fowler-large.jpg" alt=""></a></li>
</ul>
<ul id="column3">
<li><a href="#"><img src="thumbs/alan-hansen.jpg" alt=""><img class="large" src="large-images/alan-hansen-large.jpg" alt=""></a></li>
<li><a href="#"><img src="thumbs/michael-owen.jpg" alt=""><img class="large" src="large-images/michael-owen-large.jpg" alt=""></a></li>
</ul>
</body>
</html>
It is very easy to follow the markup. Please have a look at the new doctype and the new semantic tag <header>. 
I have 3 columns and I place my images in there. There is a class called "large". I will use this class in my CSS code to hide the large image when the mouse is not hovering over an image. Make sure you validate your HTML 5 page in the validator found here. Have a look at the CSS code below that makes it all happen:
img { border: none; }
#column1 { position: absolute; top: 30px; left: 100px; }
li { margin: 15px; list-style-type: none; }
#column1 a img.large { position: absolute; top: 0; left: 700px; visibility: hidden; }
#column1 a:hover { background: white; }
#column1 a:hover img.large { visibility: visible; }
#column2 { position: absolute; top: 30px; left: 195px; }
li { margin: 5px; list-style-type: none; }
#column2 a img.large { position: absolute; top: 0; left: 510px; margin-left: 0; visibility: hidden; }
#column2 a:hover { background: white; }
#column2 a:hover img.large { visibility: visible; }
#column3 { position: absolute; top: 30px; left: 400px; width: 108px; }
li { margin: 5px; list-style-type: none; }
#column3 a img.large { width: 260px; height: 260px; position: absolute; top: 0; left: 315px; margin-left: 0; visibility: hidden; }
#column3 a:hover { background: white; }
#column3 a:hover img.large { visibility: visible; }
In the first line of the CSS code I set the images to have no border. Then I place the first column on the page and remove the bullets from the list elements. Then I use the large CSS class to create a position for the large image and hide it. Finally, when the hover event takes place, I make the image visible. I repeat the process for the next two columns. I have tested the page with IE 10 and the latest versions of Opera, Chrome and Firefox. Feel free to style your HTML 5 gallery any way you want through the magic of CSS. I did not bother adding background colors and borders because that was beyond the scope of this post. Hope it helps!!!!

    Read the article

  • Big Data&rsquo;s Killer App&hellip;

    - by jean-pierre.dijcks
    Recently Keith spent  some time talking about the cloud on this blog and I will spare you my thoughts on the whole thing. What I do want to write down is something about the Big Data movement and what I think is the killer app for Big Data... Where is this coming from, ok, I confess... I spent 3 days in cloud land at the Cloud Connect conference in Santa Clara and it was quite a lot of fun. One of the nice things at Cloud Connect was that there was a track dedicated to Big Data, which prompted me to some extend to write this post. What is Big Data anyways? The most valuable point made in the Big Data track was that Big Data in itself is not very cool. Doing something with Big Data is what makes all of this cool and interesting to a business user! The other good insight I got was that a lot of people think Big Data means a single gigantic monolithic system holding gazillions of bytes or documents or log files. Well turns out that most people in the Big Data track are talking about a lot of collections of smaller data sets. So rather than thinking "big = monolithic" you should be thinking "big = many data sets". This is more than just theoretical, it is actually relevant when thinking about big data and how to process it. It is important because it means that the platform that stores data will most likely consist out of multiple solutions. You may be storing logs on something like HDFS, you may store your customer information in Oracle and you may store distilled clickstream information in some distilled form in MySQL. The big question you will need to solve is not what lives where, but how to get it all together and get some value out of all that data. NoSQL and MapReduce Nope, sorry, this is not the killer app... and no I'm not saying this because my business card says Oracle and I'm therefore biased. I think language is important, but as with storage I think pragmatic is better. In other words, some questions can be answered with SQL very efficiently, others can be answered with PERL or TCL others with MR. History should teach us that anyone trying to solve a problem will use any and all tools around. For example, most data warehouses (Big Data 1.0?) get a lot of data in flat files. Everyone then runs a bunch of shell scripts to massage or verify those files and then shoves those files into the database. We've even built shell script support into external tables to allow for this. I think the Big Data projects will do the same. Some people will use MapReduce, although I would argue that things like Cascading are more interesting, some people will use Java. Some data is stored on HDFS making Cascading the way to go, some data is stored in Oracle and SQL does do a good job there. As with storage and with history, be pragmatic and use what fits and neither NoSQL nor MR will be the one and only. Also, a language, while important, does in itself not deliver business value. So while cool it is not a killer app... Vertical Behavioral Analytics This is the killer app! And you are now thinking: "what does that mean?" Let's decompose that heading. First of all, analytics. I would think you had guessed by now that this is really what I'm after, and of course you are right. But not just analytics, which has a very large scope and means many things to many people. I'm not just after Business Intelligence (analytics 1.0?) or data mining (analytics 2.0?) but I'm after something more interesting that you can only do after collecting large volumes of specific data. 
That all important data is about behavior. What do my customers do? More importantly why do they behave like that? If you can figure that out, you can tailor web sites, stores, products etc. to that behavior and figure out how to be successful. Today's behavior that is somewhat easily tracked is web site clicks, search patterns and all of those things that a web site or web server tracks. that is where the Big Data lives and where these patters are now emerging. Other examples however are emerging, and one of the examples used at the conference was about prediction churn for a telco based on the social network its members are a part of. That social network is not about LinkedIn or Facebook, but about who calls whom. I call you a lot, you switch provider, and I might/will switch too. And that just naturally brings me to the next word, vertical. Vertical in this context means per industry, e.g. communications or retail or government or any other vertical. The reason for being more specific than just behavioral analytics is that each industry has its own data sources, has its own quirky logic and has its own demands and priorities. Of course, the methods and some of the software will be common and some will have both retail and service industry analytics in place (your corner coffee store for example). But the gist of it all is that analytics that can predict customer behavior for a specific focused group of people in a specific industry is what makes Big Data interesting. Building a Vertical Behavioral Analysis System Well, that is going to be interesting. I have not seen much going on in that space and if I had to have some criticism on the cloud connect conference it would be the lack of concrete user cases on big data. The telco example, while a step into the vertical behavioral part is not really on big data. It used a sample of data from the customers' data warehouse. One thing I do think, and this is where I think parts of the NoSQL stuff come from, is that we will be doing this analysis where the data is. Over the past 10 years we at Oracle have called this in-database analytics. I guess we were (too) early? Now the entire market is going there including companies like SAS. In-place btw does not mean "no data movement at all", what it means that you will do this on data's permanent home. For SAS that is kind of the current problem. Most of the inputs live in a data warehouse. So why move it into SAS and back? That all worked with 1 TB data warehouses, but when we are looking at 100TB to 500 TB of distilled data... Comments? As it is still early days with these systems, I'm very interested in seeing reactions and thoughts to some of these thoughts...

    Read the article

  • SharePoint 2010 Replaceable Parameter, some observations…

    - by svdoever
    SharePoint Tools for Visual Studio 2010 provides a rudimentary mechanism for replaceable parameters that you can use in files that are not compiled, like ascx files and your project property settings. The basics on this can be found in the documentation at http://msdn.microsoft.com/en-us/library/ee231545.aspx. There are some quirks however. For example: My Package name is MacawMastSP2010Templates, as defined in my Package properties: I want to use the $SharePoint.Package.Name$ replaceable parameter in my feature properties. But this parameter does not work in the “Deployment Path” property, while other parameters work there, while it works in the “Image Url” property. It just does not get expanded. So I had to resort to explicitly naming the first path of the deployment path: : You also see a special property for the “Receiver Class” in the format $SharePoint.Type.<GUID>.FullName$. The documentation gives the following description:The full name of the type matching the GUID in the token. The format of the GUID is lowercase and corresponds to the Guid.ToString(“D”) format (that is, xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx). Not very clear. After some searching it happened to be the guid as declared in my feature receiver code: In other properties you see a different set of replaceable parameters: We use a similar mechanism for replaceable parameter for years in our Macaw Solutions Factory for SharePoint 2007 development, where each replaceable parameter is a PowerShell function. This provides so much more power. For example in a feature declaration we can say: Code Snippet <?xml version="1.0" encoding="utf-8" ?> <!-- Template expansion      [[ProductDependency]] -> Wss3 or Moss2007      [[FeatureReceiverAssemblySignature]] -> for example: Macaw.Mast.Wss3.Templates.SharePoint.Features, Version=1.0.0.0, Culture=neutral, PublicKeyToken=6e9d15db2e2a0be5      [[FeatureReceiverClass]] -> for example: Macaw.Mast.Wss3.Templates.SharePoint.Features.SampleFeature.FeatureReceiver.SampleFeatureFeatureReceiver --> <Feature Id="[[$Feature.SampleFeature.ID]]"   Title="MAST [[$MastSolutionName]] Sample Feature"   Description="The MAST [[$MastSolutionName]] Sample Feature, where all possible elements in a feature are showcased"   Version="1.0.0.0"   Scope="Site"   Hidden="FALSE"   ImageUrl="[[FeatureImage]]"   ReceiverAssembly="[[FeatureReceiverAssemblySignature]]"   ReceiverClass="[[FeatureReceiverClass]]"   xmlns="http://schemas.microsoft.com/sharepoint/">     <ElementManifests>         <ElementManifest Location="ExampleCustomActions.xml" />         <ElementManifest Location="ExampleSiteColumns.xml" />         <ElementManifest Location="ExampleContentTypes.xml" />         <ElementManifest Location="ExampleDocLib.xml" />         <ElementManifest Location="ExampleMasterPages.xml" />           <!-- Element files -->         [[GenerateXmlNodesForFiles -path 'ExampleDocLib\*.*' -node 'ElementFile' -attributes @{Location = { RelativePathToExpansionSourceFile -path $_ }}]]         [[GenerateXmlNodesForFiles -path 'ExampleMasterPages\*.*' -node 'ElementFile' -attributes @{Location = { RelativePathToExpansionSourceFile -path $_ }}]]         [[GenerateXmlNodesForFiles -path 'Resources\*.resx' -node 'ElementFile' -attributes @{Location = { RelativePathToExpansionSourceFile -path $_ }}]]     </ElementManifests> </Feature> We have a solution level PowerShell script file named TemplateExpansionConfiguration.ps1 where we declare our variables (starting with a $) and include helper functions: Code Snippet # 
============================================================================================== # NAME: product:\src\Wss3\Templates\TemplateExpansionConfiguration.ps1 # # AUTHOR: Serge van den Oever, Macaw # DATE  : May 24, 2007 # # COMMENT: # Nota bene: define variable and function definitions global to be visible during template expansion. # # ============================================================================================== Set-PSDebug -strict -trace 0 #variables must have value before usage $global:ErrorActionPreference = 'Stop' # Stop on errors $global:VerbosePreference = 'Continue' # set to SilentlyContinue to get no verbose output   # Load template expansion utility functions . product:\tools\Wss3\MastDeploy\TemplateExpansionUtil.ps1   # If exists add solution expansion utility functions $solutionTemplateExpansionUtilFile = $MastSolutionDir + "\TemplateExpansionUtil.ps1" if ((Test-Path -Path $solutionTemplateExpansionUtilFile)) {     . $solutionTemplateExpansionUtilFile } # ==============================================================================================   # Expected: $Solution.ID; Unique GUID value identifying the solution (DON'T INCLUDE BRACKETS). # function: guid:UpperCaseWithoutCurlies -guid '{...}' ensures correct syntax $global:Solution = @{     ID = GuidUpperCaseWithoutCurlies -guid '{d366ced4-0b98-4fa8-b256-c5a35bcbc98b}'; }   #  DON'T INCLUDE BRACKETS for feature id's!!! # function: GuidUpperCaseWithoutCurlies -guid '{...}' ensures correct syntax $global:Feature = @{     SampleFeature = @{         ID = GuidUpperCaseWithoutCurlies -guid '{35de59f4-0c8e-405e-b760-15234fe6885c}';     } }   $global:SiteDefinition = @{     TemplateBlankSite = @{         ID = '12346';     } }   # To inherit from this content type add the delimiter (00) and then your own guid # ID: <base>00<newguid> $global:ContentType = @{     ExampleContentType = @{         ID = '0x01008e5e167ba2db4bfeb3810c4a7ff72913';     } }   #  INCLUDE BRACKETS for column id's and make them LOWER CASE!!! # function: GuidLowerCaseWithCurlies -guid '{...}' ensures correct syntax $global:SiteColumn = @{     ExampleChoiceField = @{         ID = GuidLowerCaseWithCurlies -guid '{69d38ce4-2771-43b4-a861-f14247885fe9}';     };     ExampleBooleanField = @{         ID = GuidLowerCaseWithCurlies -guid '{76f794e6-f7bd-490e-a53e-07efdf967169}';     };     ExampleDateTimeField = @{         ID = GuidLowerCaseWithCurlies -guid '{6f176e6e-22d2-453a-8dad-8ab17ac12387}';     };     ExampleNumberField = @{         ID = GuidLowerCaseWithCurlies -guid '{6026947f-f102-436b-abfd-fece49495788}';     };     ExampleTextField = @{         ID = GuidLowerCaseWithCurlies -guid '{23ca1c29-5ef0-4b3d-93cd-0d1d2b6ddbde}';     };     ExampleUserField = @{         ID = GuidLowerCaseWithCurlies -guid '{ee55b9f1-7b7c-4a7e-9892-3e35729bb1a5}';     };     ExampleNoteField = @{         ID = GuidLowerCaseWithCurlies -guid '{f9aa8da3-1f30-48a6-a0af-aa0a643d9ed4}';     }; } This gives so much more possibilities, like for example the elements file expansion where a PowerShell function iterates through a folder and generates the required XML nodes. I think I will bring back this mechanism, so it can work together with the built-in replaceable parameters, there are hooks to define you custom replacements as described by Waldek in this blog post.

    Read the article

  • Listing common SQL Code Smells.

    - by Phil Factor
    Once you’ve done a number of SQL code-reviews, you’ll know the signs in the code that all might not be well. These ‘code smells’ are coding styles that don’t directly cause a bug, but are indicators that all is not well with the code. Kent Beck and Massimo Arnoldi seem to have coined the phrase on the "OnceAndOnlyOnce" page of www.C2.com, where Kent also said that code "wants to be simple". ‘Bad Smells in Code’ was an essay by Kent Beck and Martin Fowler, published as Chapter 3 of the book ‘Refactoring: Improving the Design of Existing Code’ (ISBN 978-0201485677). Although there are generic code smells, SQL has its own particular coding habits that will alert the programmer to the need to refactor what has been written. See ‘Exploring Smelly Code’ and ‘Code Deodorants for Code Smells’ by Nick Harrison for a grounding in code smells in C#.
    I’ve always been tempted by the idea of automating a preliminary code-review for SQL. It would be so useful to trawl through code and pick up the various problems, much like the classic ‘Lint’ did for C, and how the Code Metrics plug-in for .NET Reflector by Jonathan 'Peli' de Halleux is used for finding code smells in .NET code. The problem is that few of the standard procedural code smells are relevant to SQL, and we need an agreed list of code smells. Merrill Aldrich made a grand start last year in his blog ‘Top 10 T-SQL Code Smells’. However, I'd like to make a start by discovering whether there is a general opinion amongst database developers about what the most important SQL smells are.
    One can be a bit defensive about code smells. I will cheerfully write very long stored procedures, even though they are frowned on. I’ll use dynamic SQL occasionally. You can only use code smells as an aid for your own judgment, and it is fine to ‘sign them off’ as being appropriate in particular circumstances. Also, whole classes of ‘code smells’ may be irrelevant for a particular database. The use of proprietary SQL, for example, is only a ‘code smell’ if there is a chance that the database will have to be ported to another RDBMS. The use of dynamic SQL is a risk only with certain security models. As the saying goes, a code smell is a hint of possible bad practice to a pragmatist, but a sure sign of bad practice to a purist.
    Plamen Ratchev’s wonderful article ‘Ten Common SQL Programming Mistakes’ lists some of these ‘code smells’ along with out-and-out mistakes, but there are more. The use of nested transactions, for example, isn’t entirely incorrect, even though the database engine ignores all but the outermost: but it does flag up the possibility that the programmer thinks that nested transactions are supported.
    If anything requires some sort of general agreement, the definition of code smells is one. I’m therefore going to make this blog ‘dynamic’, in that, if anyone tweets a suggestion with a #SQLCodeSmells tag (or sends me a tweet) I’ll update the list here. If you add a comment to the blog with a suggestion of what should be added or removed, I’ll do my best to oblige. In other words, I’ll try to keep this blog up to date. The name against each 'smell' is the name of the person who tweeted me, commented about, or has written about the 'smell'; it does not imply that they were the first ever to think of it!
    - Use of deprecated syntax such as *= (Dave Howard)
    - Denormalisation that requires the shredding of the contents of columns (Merrill Aldrich)
    - Contrived interfaces
    - Use of deprecated datatypes such as TEXT/NTEXT (Dave Howard)
    - Datatype mismatches in predicates that rely on implicit conversion (Plamen Ratchev)
    - Using correlated subqueries instead of a join (Dave Levy / Plamen Ratchev)
    - The use of hints in queries, especially NOLOCK (Dave Howard / Mike Reigler)
    - Few or no comments
    - Use of functions in a WHERE clause (Anil Das)
    - Overuse of scalar UDFs (Dave Howard, Plamen Ratchev)
    - Excessive ‘overloading’ of routines
    - The use of Exec xp_cmdShell (Merrill Aldrich)
    - Excessive use of brackets (Dave Levy)
    - Lack of the use of a semicolon to terminate statements
    - Use of non-SARGable functions on indexed columns in predicates (Plamen Ratchev)
    - Duplicated code, or strikingly similar code
    - Misuse of SELECT * (Plamen Ratchev)
    - Overuse of cursors (Everyone. Special mention to Dave Levy & Adrian Hills)
    - Overuse of CLR routines when not necessary (Sam Stange)
    - Same column name in different tables with different datatypes (Ian Stirk)
    - Use of ‘broken’ functions such as ISNUMERIC without additional checks
    - Excessive use of the WHILE loop (Merrill Aldrich)
    - INSERT ... EXEC (Merrill Aldrich)
    - The use of stored procedures where a view is sufficient (Merrill Aldrich)
    - Not using two-part object names (Merrill Aldrich)
    - Using INSERT INTO without specifying the columns and their order (Merrill Aldrich)
    - Full outer joins even when they are not needed (Plamen Ratchev)
    - Huge stored procedures (hundreds/thousands of lines)
    - Stored procedures that can produce different columns, or order of columns in their results, depending on the inputs
    - Code that is never used
    - Complex and nested conditionals
    - WHILE (not done) loops without an error exit
    - Variable name same as the datatype
    - Vague identifiers
    - Storing complex data or lists in a character map, bitmap or XML field
    - User procedures with sp_ prefix (Aaron Bertrand)
    - Views that reference views that reference views that reference views (Aaron Bertrand)
    - Inappropriate use of sql_variant (Neil Hambly)
    - Errors with identity scope using SCOPE_IDENTITY, @@IDENTITY or IDENT_CURRENT (Neil Hambly, Aaron Bertrand)
    - Schemas that involve multiple dated copies of the same table instead of partitions (Matt Whitfield - Atlantis UK)
    - Scalar UDFs that do data lookups (poor man's join) (Matt Whitfield - Atlantis UK)
    - Code that allows SQL injection (Mladen Prajdic)
    - Tables without clustered indexes (Matt Whitfield - Atlantis UK)
    - Use of SELECT DISTINCT to mask a join problem (Nick Harrison)
    - Multiple stored procedures with nearly identical implementation (Nick Harrison)
    - Excessive column aliasing, which may point to a problem or could be a mapping implementation (Nick Harrison)
    - Joining "too many" tables in a query (Nick Harrison)
    - Stored procedures returning more than one record set (Nick Harrison)
    - A NOT LIKE condition (Nick Harrison)
    - Excessive "OR" conditions (Nick Harrison)
    - sp_OACreate or anything related to it (Bill Fellows)
    - Prefixing names with tbl_, vw_, fn_, and usp_ ('tibbling') (Jeremiah Peschka)
    - Aliases that go a, b, c, d, e... (Dave Levy / Diane McNurlan)
    - Overweight queries (e.g. 4 inner joins, 8 left joins, 4 derived tables, 10 subqueries, 8 clustered GUIDs, 2 UDFs, 6 case statements = 1 query) (Robert L Davis)
    - ORDER BY 3, 2 (Dave Levy)
    - Multi-statement table functions which are then filtered, 'Sel * from Udf() where Udf.Col = Something' (Dave Ballantyne)
    - Running a SQL 2008 system in SQL 2000 compatibility mode (John Stafford)
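    To make a couple of these smells concrete, here is a hypothetical T-SQL fragment (the table and column names are invented purely for illustration) that combines misuse of SELECT * with a non-SARGable function on an indexed column, followed by a refactoring that removes both smells:

    -- Smelly: SELECT * plus a function wrapped around the (indexed) OrderDate column,
    -- which prevents the optimizer from using an index seek.
    SELECT *
    FROM   dbo.Orders
    WHERE  YEAR(OrderDate) = 2010;

    -- Refactored: name the columns explicitly and leave the indexed column bare,
    -- so the predicate is SARGable and only the needed data is returned.
    SELECT OrderID, CustomerID, OrderDate
    FROM   dbo.Orders
    WHERE  OrderDate >= '20100101'
      AND  OrderDate <  '20110101';

    Neither form is wrong in every context, which is exactly the point of a code smell: it is a prompt to look closer, not an automatic verdict.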

    Read the article


  • Customize Team Build 2010 – Part 13: Get control over the Build Output

    In this series the following parts have been published:
    - Part 1: Introduction
    - Part 2: Add arguments and variables
    - Part 3: Use more complex arguments
    - Part 4: Create your own activity
    - Part 5: Increase AssemblyVersion
    - Part 6: Use custom type for an argument
    - Part 7: How is the custom assembly found
    - Part 8: Send information to the build log
    - Part 9: Impersonate activities (run under other credentials)
    - Part 10: Include Version Number in the Build Number
    - Part 11: Speed up opening my build process template
    - Part 12: How to debug my custom activities
    - Part 13: Get control over the Build Output
    - Part 14: Execute a PowerShell script
    - Part 15: Fail a build based on the exit code of a console application

    In part 8, I explained how you can add informational messages, warnings or errors to the build output. If you want to add other lines of text to the build output, you need to do more. This post shows how you can add extra steps, additional information and hyperlinks to the build output.

    UPDATE 13-12-2010: Thanks to Jason Pricket, it is now also possible not to show every activity in the build log. This is really useful when you are doing for-loops in your template. To see how you can do that, check out Jason's blog: http://blogs.msdn.com/b/jpricket/archive/2010/12/09/tfs-2010-making-your-build-log-less-noisy.aspx

    Add a hyperlink to the end of the build output

    Let's start with a simple example of how you can adjust the build output. In this case we are going to add, at the end of the build output, a hyperlink that a user can click on to, for example, start the deployment to the test environment. In part 4 you can find information on how to create a custom activity.

    To add information to the build output, you need the BuildDetail. This value is a variable in your XAML and is thus easily passed to your custom activity. Besides the BuildDetail, the user also has to specify the text and the URL that have to be added to the end of the build output. The following code segment shows how you can achieve this.

    [BuildActivity(HostEnvironmentOption.All)]
    public sealed class AddHyperlinkToBuildOutput : CodeActivity
    {
        [RequiredArgument]
        public InArgument<IBuildDetail> BuildDetail { get; set; }

        [RequiredArgument]
        public InArgument<string> DisplayText { get; set; }

        [RequiredArgument]
        public InArgument<string> Url { get; set; }

        protected override void Execute(CodeActivityContext context)
        {
            // Obtain the runtime value of the input arguments
            IBuildDetail buildDetail = context.GetValue(this.BuildDetail);
            string displayText = context.GetValue(this.DisplayText);
            string url = context.GetValue(this.Url);

            // Add the hyperlink and save the build information
            buildDetail.Information.AddExternalLink(displayText, new Uri(url));
            buildDetail.Information.Save();
        }
    }

    If you add this activity somewhere in your build process template (within the scope Run on Agent), you will get the hyperlink at the end of the build output.

    Add a line of text to the build output

    The next challenge is to add this kind of output not only to the end of the build output, but at the step that is currently executing. To be able to do this, you need the current node in the build output. The following code shows how you can achieve this.
    First you need to get the current activity tracking, which you can get with the following line of code:

    IActivityTracking currentTracking = context.GetExtension<IBuildLoggingExtension>().GetActivityTracking(context);

    Then you can create a new node, set its type to the activity tracking node type (copy it from the current node), and do nice things with the node:

    IBuildInformationNode childNode = currentTracking.Node.Children.CreateNode();
    childNode.Type = currentTracking.Node.Type;
    childNode.Fields.Add("DisplayText", "This text is displayed.");

    You can also add a build step to display progress:

    IBuildStep buildStep = childNode.Children.AddBuildStep("Custom Build Step", "This is my custom build step");
    buildStep.FinishTime = DateTime.Now.AddSeconds(10);
    buildStep.Status = BuildStepStatus.Succeeded;

    Or you can add a hyperlink to the node:

    childNode.Children.AddExternalLink("My link", new Uri("http://www.ewaldhofman.nl"));

    When you combine this together, you get the corresponding entries in the build output at the currently executing step. You can download the full solution at BuildProcess.zip. It will include the sources of every part and will continue to evolve.
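    As a rough sketch of how these fragments fit together, the snippets above can be wrapped into a single custom activity. This uses only the calls shown in the post; the class name is invented, and the namespace for the tracking interfaces is an assumption that may need adjusting to the actual TFS 2010 assemblies in your solution.

    using System;
    using System.Activities;
    using Microsoft.TeamFoundation.Build.Client;
    // Assumed namespace for IBuildLoggingExtension / IActivityTracking; adjust if needed.
    using Microsoft.TeamFoundation.Build.Workflow.Tracking;

    [BuildActivity(HostEnvironmentOption.All)]
    public sealed class AddProgressToBuildOutput : CodeActivity
    {
        protected override void Execute(CodeActivityContext context)
        {
            // Find the build output node of the activity that is currently executing.
            IActivityTracking currentTracking =
                context.GetExtension<IBuildLoggingExtension>().GetActivityTracking(context);

            // Create a child node of the same type, so it appears under the current step.
            IBuildInformationNode childNode = currentTracking.Node.Children.CreateNode();
            childNode.Type = currentTracking.Node.Type;
            childNode.Fields.Add("DisplayText", "This text is displayed.");

            // Add a build step to display progress.
            IBuildStep buildStep = childNode.Children.AddBuildStep("Custom Build Step", "This is my custom build step");
            buildStep.FinishTime = DateTime.Now.AddSeconds(10);
            buildStep.Status = BuildStepStatus.Succeeded;

            // Add a hyperlink to the same node.
            childNode.Children.AddExternalLink("My link", new Uri("http://www.ewaldhofman.nl"));
        }
    }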

    Read the article

< Previous Page | 112 113 114 115 116 117 118 119 120 121 122 123  | Next Page >