Search Results

Search found 4603 results on 185 pages for 'lower bound'.


  • Nagios shell script cannot be executed

    - by MeinAccount
    I'm trying to monitor GitLab with nagios. I've created the following command definition and shell script but when checking the service I'm receiving the following e-mail. How can I solve this? The file is executable. [...] nagios : 3 incorrect password attempts ; TTY=unknown ; PWD=/ ; USER=git ; COMMAND=/bin/bash -c /var/lib/nagios/custom_plugins/check_gitlab.sh Command definition: define command { command_name custom_check_gitlab command_line /var/lib/nagios/custom_plugins/check_gitlab.sh } Shell script: #! /bin/sh # [...] RAILS_ENV="production" # Script variable names should be lower-case not to conflict with internal /bin/sh variables such as PATH, EDITOR or SHELL. app_root="/home/git/gitlab" app_user="git" unicorn_conf="$app_root/config/unicorn.rb" pid_path="$app_root/tmp/pids" socket_path="$app_root/tmp/sockets" web_server_pid_path="$pid_path/unicorn.pid" sidekiq_pid_path="$pid_path/sidekiq.pid" ### Here ends user configuration ### # Switch to the app_user if it is not he/she who is running the script. if [ "$USER" != "$app_user" ]; then sudo -u "$app_user" -H -i $0 "$@"; exit; fi # Switch to the gitlab path, if it fails exit with an error. if ! cd "$app_root" ; then echo "Failed to cd into $app_root, exiting!"; exit 1 fi ### Init Script functions check_pids(){ if ! mkdir -p "$pid_path"; then echo "Could not create the path $pid_path needed to store the pids." exit 1 fi # If there exists a file which should hold the value of the Unicorn pid: read it. if [ -f "$web_server_pid_path" ]; then wpid=$(cat "$web_server_pid_path") else wpid=0 fi if [ -f "$sidekiq_pid_path" ]; then spid=$(cat "$sidekiq_pid_path") else spid=0 fi } # Checks whether the different parts of the service are already running or not. check_status(){ check_pids # If the web server is running kill -0 $wpid returns true, or rather 0. # Checks of *_status should only check for == 0 or != 0, never anything else. if [ $wpid -ne 0 ]; then kill -0 "$wpid" 2>/dev/null web_status="$?" else web_status="-1" fi if [ $spid -ne 0 ]; then kill -0 "$spid" 2>/dev/null sidekiq_status="$?" else sidekiq_status="-1" fi } check_pids check_status if [ "$web_status" != "0" -a "$sidekiq_status" != "0" ]; then echo "GitLab is not running." exit 2 fi if [ "$web_status" != "0" ]; then printf "The GitLab Unicorn webserver is \033[31mnot running\033[0m.\n" exit 1 fi if [ "$sidekiq_status" != "0" ]; then printf "The GitLab Sidekiq job dispatcher is \033[31mnot running\033[0m.\n" exit 1 fi if [ "$web_status" = "0" -a "$sidekiq_status" = "0" ]; then printf "GitLab and all it's components are \033[32mup and running\033[0m.\n" exit 0 fi
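    Judging from the sudo log line above, the script re-executes itself as the git user via "sudo -u git -H -i", and the nagios user has no passwordless sudo rights, so sudo prompts for a password three times and gives up. A minimal sudoers sketch (add it with visudo; hedged - match the command against what actually appears in your sudo log) that allows exactly the command the mail shows:

        Defaults:nagios !requiretty
        nagios ALL=(git) NOPASSWD: /bin/bash -c /var/lib/nagios/custom_plugins/check_gitlab.sh

    Alternatively, drop the -i from the sudo call inside the script so that sudo sees the plain script path, and whitelist that path instead.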

    Read the article

  • UCARP: prevent the original master from taking over the VIP when it comes back after failure?

    - by quanta
    Keepalived can do this by combining the nopreempt option and the BACKUP state on the both nodes: Prevent VRRP Master from becoming Master once it has failed Prevent master to fall back to master after failure How about the UCARP? Name : ucarp Arch : x86_64 Version : 1.5.2 Release : 1.el5.rf Size : 81 k Repo : installed Summary : Common Address Redundancy Protocol (CARP) for Unix URL : http://www.ucarp.org/ License : BSD Description: UCARP allows a couple of hosts to share common virtual IP addresses in order : to provide automatic failover. It is a portable userland implementation of the : secure and patent-free Common Address Redundancy Protocol (CARP, OpenBSD's : alternative to the patents-bloated VRRP). : Strong points of the CARP protocol are: very low overhead, cryptographically : signed messages, interoperability between different operating systems and no : need for any dedicated extra network link between redundant hosts. If I don't use the --preempt option and set the --advskew to the same value, both nodes become master. /etc/sysconfig/carp/vip-010.conf # Virtual IP configuration file for UCARP # The number (from 001 to 255) in the name of the file is the identifier # $Id: vip-001.conf.example 1527 2004-07-09 15:23:54Z dude $ # Set the same password on all mamchines sharing the same virtual IP PASSWORD="pa$$w0rd" # You are required to have an IPADDR= line in the configuration file for # this interface (so no DHCP allowed) BIND_INTERFACE="eth0" # Do *NOT* use a main interface for the virtual IP, use an ethX:Y alias # with the corresponding /etc/sysconfig/network-scripts/ifcfg-ethX:Y file # already configured and ith ONBOOT=no VIP_INTERFACE="eth0:0" # If you have extra options to add, see "ucarp --help" output # (the lower the "-k <val>" the higher priority and "-P" to become master ASAP) OPTIONS="-z -k 255" /etc/sysconfig/network-scripts/ifcfg-eth0:0 DEVICE=eth0:0 ONBOOT=no BOOTPROTO= IPADDR=192.168.6.8 NETMASK=255.255.255.0 USERCTL=yes IPV6INIT=no node 1: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast qlen 1000 link/ether c6:9b:8e:af:a7:69 brd ff:ff:ff:ff:ff:ff inet 192.168.6.192/24 brd 192.168.6.255 scope global eth0 inet 192.168.6.8/24 brd 192.168.6.255 scope global secondary eth0:0 inet6 fe80::c49b:8eff:feaf:a769/64 scope link valid_lft forever preferred_lft forever node 2: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast qlen 1000 link/ether 00:30:48:f7:0f:81 brd ff:ff:ff:ff:ff:ff inet 192.168.6.38/24 brd 192.168.6.255 scope global eth1 inet 192.168.6.8/24 brd 192.168.6.255 scope global secondary eth1:0 inet6 fe80::230:48ff:fef7:f81/64 scope link valid_lft forever preferred_lft forever
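    One approach that may work here (a sketch, not something UCARP documents as a nopreempt mode): give the two nodes different --advskew values and leave --preempt off on both, e.g. in the OPTIONS line of each vip-010.conf:

        # node 1 - preferred master while it is healthy (lower advskew = higher priority)
        OPTIONS="-z -k 10"
        # node 2 - normally backup
        OPTIONS="-z -k 50"

    With unequal skews the two nodes should no longer both claim master, and without -P a node that comes back after a failure should stay in backup state until the currently running master disappears, which is the failback behaviour being asked for.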

    Read the article

  • Old CRT screen's high resolution doesn't work anymore on Windows 7

    - by Mixxiphoid
    One year ago I decided to switch from Windows XP to Windows 7. I had a 17" CRT monitor with a screen resolution of 1600x1200 which worked fine on Windows XP. While installing Windows 7 everything went well until Windows 7 was going to install the video card driver. Windows 7 set the screen to its recommended resolution and my screen became black. I waited a few minutes to be sure the installation was finished. I turned off the computer by hand and restarted the computer at a resolution of 800x640. When Windows 7 was done installing I went to screen resolutions and the resolution of 1600x1200 was at the top of the list with '(recommended)' next to it. I tried putting it on 1600x1200 but again my screen went black. I installed all Windows 7 updates including the video card driver from the NVidia site (NOT from Windows 7). I tried about everything to make it work on 1600x1200 but with no success. The highest resolution I got with the CRT monitor was 1280x1024. I had a TFT screen which had 1280x1024 as max resolution and had better colors, so I used that one till today. My video card is a 9600GT and my power supply is beyond sufficient. I even tried to install the driver I had on XP to see if it worked, but no results. I tried classic mode on Windows 7, changed the DPI, the frequency and the monitor settings, but nothing worked. I really like a vertical resolution of 1200, but it seems today I'm bound to all those standard monitors with a resolution of 1980x1024... Can anybody explain to me why it worked on Windows XP but not on Windows 7? And maybe offer a solution to the problem (I actually gave up on getting it fixed...)? Thanks a lot in advance. SOLUTION: I downloaded the corresponding monitor driver and installed it. Next I rebooted my computer at low resolution (800x640) and connected the CRT monitor. When Windows 7 booted successfully I went to computer management and chose 'update the driver' for my monitor. I manually selected 'Generic PnP monitor' and made that one active. I went to advanced settings at 'Screen resolution' and selected the mode '1600x1200 (32-bit) 80 Hertz' (95 Hertz did not work). Now I had my resolution at 1600x1200. I repeated the earlier step to select the original monitor again instead of the Generic monitor. Quite a way to solve this problem, but it worked! Thanks a lot you all.

    Read the article

  • How should I use LVM with Ganeti?

    - by javano
    I am building a small Ganeti cluster on some low-end hardware (I only have the resources given, sadly). I am confused as to the use of LVM with DRBD. I have two instances and three nodes. What I want is instance1 replicated between nodes 1 & 2, and instance2 replicated between nodes 3 & 2 (so node2 is doing nothing except waiting for either node1 or 3 to fail, as it is the secondary node for both instances). This is because node2 is a lower hardware spec than 1 and 3, so I just want it as a hot spare. How can I achieve this? I don't want instance1 being replicated to node3 for example, nor instance2 replicated to node1. Nodes 1 & 2 have /dev/sda5 which is 150GB (for example). Nodes 2 & 3 have /dev/sda6 which is 75GB (for example). Using just nodes 1 & 2, after looking at the Ganeti docs I would: vgcreate my-vg. Next I would create the cluster via gnt-cluster with VG = "my-vg". It is here that I believe I am missing some knowledge. I believe that what I need to do is create the same Logical Volume on nodes 1 & 2 in Volume Group "my-vg", that solely consists of /dev/sda5, and call it "lv1". Then create a Logical Volume on nodes 2 & 3 that solely consists of /dev/sda6 in "my-vg" and call it "lv2". When creating instance1 I would then use "-vg=lv1 -n node1:node2", and when creating instance2 I would use "-vg=lv2 -n node3:node2". I briefly had a go at this today and I'm dubious whether this will be possible. When trying to create instance2, "lv2" won't exist on node1 (the cluster master) so I don't believe it will allow the instance creation. Could I create a 1kb partition (/dev/sda6) on node1 and put it into an LV called "lv2", or is that too flaky? Is this setup possible? Thank you.
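    A rough sketch of the LVM side, assuming the device names above. Ganeti normally expects one volume group with the same name on every node and carves per-instance logical volumes out of it by itself, so lv1/lv2 would not be pre-created by hand:

        # nodes 1 and 2
        pvcreate /dev/sda5
        vgcreate my-vg /dev/sda5
        # nodes 2 and 3
        pvcreate /dev/sda6
        vgcreate my-vg /dev/sda6

    Placement is then controlled per instance with the primary:secondary node list, along the lines of (exact flags vary between Ganeti versions):

        gnt-instance add -t drbd -n node1:node2 -o debootstrap+default -s 20G instance1
        gnt-instance add -t drbd -n node3:node2 -o debootstrap+default -s 20G instance2

    Since the logical volumes are only created on the instance's own primary and secondary nodes, the fact that node1 has no /dev/sda6 should not, in itself, block creating instance2 on node3:node2.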

    Read the article

  • Domain: Netlogon event sequence

    - by Bob
    I'm getting really confused, reading tutorials from SAMBA howto, which is hell of a mess. Could you write step-by-step, what events happen upon NetLogon? Or in particular, I can't get these things: I really can't get the mechanism of action of LDAP and its role. Should I think of Active Directory LDS as of its superset? What're the other roles of AD and why this term is nearly a synonym of term "domain"? What's the role of LDAP in the remote login sequence? Does it store roaming user profiles? Does it store anything else? How it is called (are there any upper-level or lower-level services that use it in the course of NetLogon)? How do I join a domain. On the client machine I just use the Domain Controller admin credentials, but how do I prepare the Domain Controller for a new machine to join it. What's that deal of Machine trust accounts? How it is used? Suppose, I've just configured a machine to join a domain, created its machine trust, added its data to the domain controller. How would that machine find WINS server to query it for Domain Controller NetBIOS name? Does any computer name, ending with <1C type, correspond to domain controller? In what cases Kerberos and LM/NTLM are used for authentication? Where are password hashes stored in, say, Windows2000 domain controller? Right in the registry? What is SAM - is it a service, responsible for authentication and sending/storing those passwords and accompanying information, such as groups policies etc.? Who calls it? Does it use Active Directory? What's the role of NetBIOS except by name service? Can you exemplify a scenario of its usage as a "datagram distribution service for connectionless communication" or "session service for connection-oriented communication"? (quoted taken from http://en.wikipedia.org/wiki/NetBIOS_Frames_protocol description of NetBIOS roles) Thanks and sorry for many questions.

    Read the article

  • Can't Move Windows to 2nd Monitor without Left Mouse and Ctrl Key

    - by John C
    I have 2 very frustrating problems that maybe someone can help me with: I have 2 monitors (different sizes and resolutions) set up with the "Extended" monitor Win7 setup. My problem is this: I cannot "move" a window from my Primary Monitor (larger and higher resolution, on the right side in front of me) to my Secondary 2nd monitor (smaller and lower resolution) by just selecting the title bar with the left mouse button and dragging it to the left. Windows 7 "snaps" it back to the Primary Monitor when the window is physically in the 2nd monitor area as I'm holding the left mouse button. I can prevent this problem by holding down the Ctrl Key with the Left Mouse button, but this is extremely annoying to me. Also I typically "lose" focus if I try typing input on the 2nd monitor. Typing is erratic with regard to keystroke accuracy from my keyboard translated into input on the 2nd screen. No problem with typing input on the primary monitor. I find this extremely annoying in Windows 7 and turning off the "snap" feature via the Control Panel does NOT work for me. Win7 stubbornly refuses to move my selected window to my 2nd monitor without me "forcing" Win7 to do this with the Ctrl Key. Please tell me this is not a Win7 feature. Also on my system, Windows Key + Shift + Left arrow Key (pressed together), or the same combo with the Right arrow Key, doesn't do anything whatsoever. Windows Key with "+" however does maximize the current window across both monitors, and I can "restore" it with Windows Key and "-" back to the original monitor and size. I have tried various solutions including changing the resolutions of one or both of my monitors, which sometimes "temporarily helps" but reverts back to the problem. Also if I swap the logical (not physical) layout so that I tell Win7 the monitors are set up in a reversed situation (large monitor on the left, and small on the right) - this also sometimes helps for a while - it is very strange and awkward to work with "backwards". But all of these solutions stop working. The only solution that consistently works for "moving" the windows is to hold the Ctrl Key down as I'm moving the window with the left mouse button on the title bar. Even that, however, doesn't prevent the loss of typing focus for me on the 2nd monitor - while at the same time the typing on the 1st monitor is fine. Any help on moving my windows from one monitor to my 2nd monitor without having to press the Ctrl key while holding down my left mouse button will be appreciated. Also any help on gaining typing "focus" on my 2nd screen will be helpful too. Thanks - John

    Read the article

  • What's the best way to recover when your RAID H/W incorrectly thinks a disk is missing

    - by Software Monkey
    I have a Windows 7 system with an MSI motherboard (running the latest AMD BIOS) and two of my four disks (not the system boot disk) configured via the mobo as RAID-1. After a normal system restart today, the RAID BIOS reports that one of the two drives has been disconnected or has failed. It hasn't really failed; via recovery tools I can verify that if I take the BIOS out of RAID mode. But I can find no way to re-add the second hard disk to the array and rebuild via the BIOS - the only option seems to be to delete the array and recreate it, but I've done that once before and it blows away the disk. It's done this once before; however, after double-checking the drive cabling (but not changing anything), it booted up fine on a subsequent reboot. So I think the mobo RAID is a little bit flaky. At this point I would like to remove the RAID drivers, change to AHCI mode and switch over to using a Windows 7 dynamic mirror disk. But the RAID drivers seem somehow deeply bound into the Windows startup - I can't find anything like the good ol' safe mode in Windows 7. If I boot from the Win 7 install disk in AHCI mode I can use recovery tools to log in to the Windows 7 installation, so the boot drive seems fine with AHCI mode. Additionally, I can see all my other disks, run chkdsk on them and they seem to be fine. If I try to boot from the HDD in AHCI mode, it just reboots part way through, presumably because the RAID drivers load and conflict with the BIOS being set to AHCI. So: How do I strip the RAID drivers from my Win 7 installation? If I delete the RAID logical disk, will it really delete partitioning information, or is that just a poorly worded message when it says the data on the disk will be deleted? If I disconnect the 2 disks in the RAID array, then delete the logical disk array, and then reconnect and reboot still in RAID mode, will the disks simply revert to RAID single-disks like my other 2, so that maybe I can leave Windows with RAID drivers and operate the disks as singles, with 2 of them in a Windows dynamic disk mirrored setup? Does Windows 7 have anything like the Windows XP Repair Install, where it will reinstall the O/S binaries from CD but leave apps and settings alone? I am really hoping I don't have to do a complete reinstall of Windows 7 - the last one, when I upgraded from XP, took me two days to get everything set up and installed.
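    For the "strip the RAID drivers" part, the usual Windows 7 approach (a hedged sketch - back up the registry first) is to enable the built-in AHCI driver before flipping the BIOS, so the first AHCI boot does not bail out part way through:

        :: run from an elevated command prompt, then reboot and switch the controller to AHCI in the BIOS
        reg add "HKLM\SYSTEM\CurrentControlSet\services\msahci" /v Start /t REG_DWORD /d 0 /f

    If a vendor RAID/AHCI service (e.g. an AMD or Intel storage driver) is also present as a boot-start service, its Start value may need the same treatment; the exact service name depends on the driver package installed.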

    Read the article

  • Tool or script to detect moved or renamed files on Linux prior to a backup

    - by Pharaun
    Basically I am searching to see if there exists a tool or script that can detect moved or renamed files, so that I can get a list of renamed/moved files and apply the same operation on the other end of the network to conserve bandwidth. Basically disk storage is cheap but bandwidth isn't, and the problem is that the files often get reorganized or moved around into a better directory structure; so when you use rsync to do the backup, rsync won't notice that it's a renamed or moved file and will re-transmit it over the network all over again despite having the same file on the other end. So I am wondering if there exists a script or tool that can record where all the files are and their names, then just prior to a backup rescan and detect moved or renamed files, so that I can take that list and re-apply the move/rename operation on the other side. Here's a list of the "general" features of the files: Large unchanging files They can be renamed or moved around [Edit:] These are all good answers, and what I ended up doing in the end was looking at all of the answers and writing some code to deal with this. Basically what I am thinking/working on now is: Using something like AIDE for the "initial" scan, enabling me to keep checksums on the files because they are supposed to never change, so it would aid in detecting corruption. Creating an inotify daemon that would monitor these files/directories and record any changes relating to renames and moving the files around, to a log file. There are some edge cases where inotify might fail to record that something happened to the file system, thus there is a final step of using find to search the file system for files that have a change time later than the last backup. This has several benefits: Checksums/etc from AIDE to be able to check/make sure that some media did not get corrupted Inotify keeps resource usage low and there is no need to re-scan the filesystem over and over No need to patch rsync; if I have to patch things I can, but I would prefer to avoid patching things to keep the burden lower (i.e. no need to re-patch every time there is an update). I've used Unison before and it's really nice; however, I could've sworn that Unison keeps copies around on the filesystem and that its "archive" files can grow to be rather large.
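    For the "initial scan" piece, a minimal sketch of the checksum approach (assumes GNU coreutils and file names without embedded whitespace; since the files are large and unchanging, the expensive hashing only has to happen once per file):

        # refresh the checksum -> path index
        find /data -type f -exec md5sum {} + | sort > /var/tmp/files.new
        # same checksum, different path = candidate for a move/rename on the remote side
        join /var/tmp/files.old /var/tmp/files.new | awk '$2 != $3 { print $2 " -> " $3 }'
        mv /var/tmp/files.new /var/tmp/files.old

    The list printed by the join can then be replayed as mv commands on the far end before rsync runs, so rsync only has to transfer genuinely new data.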

    Read the article

  • Init script & the green [ OK ]

    - by Lord Loh.
    I am trying to install fast-cgi for nginx on an EC2 instance. I followed the steps explained here, but that is meant for Debian and does not work out of the box for a red-hat based system. I modified the script a bit to look like - #!/bin/bash ### BEGIN INIT INFO # Provides: php-fcgi # Required-Start: $nginx # Required-Stop: $nginx # Default-Start: 2 3 4 5 # Default-Stop: 0 1 6 # Short-Description: starts php over fcgi # Description: starts php over fcgi ### END INIT INFO . /etc/rc.d/init.d/functions (( EUID )) && echo .You need to have root priviliges.. && exit 1 BIND=/tmp/php.socket USER=nginx PHP_FCGI_CHILDREN=15 PHP_FCGI_MAX_REQUESTS=1000 PHP_CGI=/usr/bin/php-cgi PHP_CGI_NAME=`basename $PHP_CGI` PHP_CGI_ARGS="- USER=$USER PATH=/usr/bin PHP_FCGI_CHILDREN=$PHP_FCGI_CHILDREN PHP_FCGI_MAX_REQUESTS=$PHP_FCGI_MAX_REQUESTS $PHP_CGI -b $BIND" RETVAL=0 start() { echo -n "Starting PHP FastCGI: " #ORIGINAL LINE #daemon $PHP_CGI --quiet --start --background --chuid "$USER" --exec /usr/bin/env -- $PHP_CGI_ARGS #MODIFIED LINE daemon --user=$USER $PHP_CGI -b $BIND& RETVAL=$? echo [ $RETVAL -eq 0 ] && touch /var/lock/subsys/php-fcgi #echo "$PHP_CGI_NAME." } stop() { echo -n "Stopping PHP FastCGI: " killall -q -w -u $USER $PHP_CGI RETVAL=$? echo "$PHP_CGI_NAME." rm /var/lock/subsys/php-fcgi } case "$1" in start) start ;; stop) stop ;; restart) stop start ;; *) echo "Usage: php-fastcgi {start|stop|restart}" exit 1 ;; esac exit $RETVAL The problem I have now is - service php-fcgi start keeps the shell blocked. If I run service php-fcgi start & and then ps aux, I see the php-cgi process running bound to the socket. I see the start command stop only when I execute service php-fcgi stop. How do I solve this blocking issue? I have tried adding an & at the end of the line spawning the daemon. But other scripts do not seem to be doing this. This is the most complicated script I am attempting to modify yet :-( How do I get the script to display the green [ OK ]? I checked scripts like httpd and saw that all they were doing was something as shown below. But I never see a green [ OK ] when I execute php-fcgi. I also discovered that putting echo_success with functions sourced displays the green [ OK ] but I do not see any other scripts in the /etc/rc.d/init.d/ executing echo_success or echo_failure. What have I got wrong? Also, How do i specify PHP_FCGI_CHILDREN with daemon? echo [ $RETVAL -eq 0 ] && touch /var/lock/subsys/
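    One common way to handle this in Red Hat style init scripts (a sketch, assuming the stock /etc/rc.d/init.d/functions daemon() helper, which hands its arguments to a shell via runuser when --user is given): put the environment variables and a trailing & inside one quoted string, so php-cgi goes to the background, daemon() returns, and its internal success/failure call is what prints the green [ OK ]:

        start() {
            echo -n "Starting PHP FastCGI: "
            daemon --user=$USER "PHP_FCGI_CHILDREN=$PHP_FCGI_CHILDREN PHP_FCGI_MAX_REQUESTS=$PHP_FCGI_MAX_REQUESTS $PHP_CGI -b $BIND >/dev/null 2>&1 &"
            RETVAL=$?
            echo
            [ $RETVAL -eq 0 ] && touch /var/lock/subsys/php-fcgi
        }

    In the original version daemon never returns because php-cgi stays in the foreground, which is also why no [ OK ] ever appears; backgrounding inside the quoted string addresses both symptoms, and it doubles as the way to pass PHP_FCGI_CHILDREN through daemon.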

    Read the article

  • How to Configure Source NAT (Private IP => Public IP Outbound)

    - by DavidScherer
    I'm running VMWare ESXi Free and have Zentyal SBS 3.2 running as a Gateway. I have 5 Public IPS (CIDR/29, let's call them 69.1.1.1 - 69.1.1.5) and currently Zentyal is bound to 69.1.1.1 as the Gateway, with the other 4 Public IPs set as Virtual Interfaces in Zentyal (wan2-wan5) I have machines sitting on the Private Network (10.34.251.x) that, when going Outbound (to Google for instance) should be seen by the Internet as an IP other than the Gateway (69.1.1.1), this is because our machines need to be able to communicate with 3rd party APIs that expect these requests to come from a specific IP. From what I could find, SNAT (Source NAT) in Zentyal is used to achieve this, but I'm not sure how to configure it and cannot find a specific piece of Documentation for it at Zentyal. I've tried setting this up a couple different ways, with no results and at this point I have no idea if I'm going about this completely wrong, or my lack of experience with networking and the associated terminology is preventing me from placing the correct values in the correct fields. I get the following form to set up "SNAT" rules in Zentyal: Perhaps someone can offer some guidance and definitions for the fields above? SNAT Address Is this the Public IP I want to masquerade? Outgoing Interface Should this by my External NIC (one connected to Public 'Net), or is it the "Private" interface? It sounds as though this should be the External interface as I want the traffic from the internal network sent Out over this Interface (using a different IP than normal, anyway) Source Is the the Source on the internal network (one of the private IPs?), a public IP I want to masquerade as, or something else entirely? Destination Is this a place on the Internet (eg, "Only do this for the Site Google.com"/IP) or am I allowing myself to become confused again? Service I'm assuming this allows me to restrict which services this rule will apply to, but is it for a service on the internal network or a service being accessed on the external network? If I can offer any further details or information to make what I'm trying to do more clear, I will happily do so. Honestly any kind of help here would be very appreciated. I'm not a NetOps or anything even close, I spend most of my day writing code and my entire "team" at this company consists of "me, myself, and I" so while I try to broaden my KB at every possible opportunity, I can only learn so much, so fast and I feel like with networking especially there's just so much, coupled with a learning curve for each solution that likes to (from my limited perspective) use slightly different terminology that what I'm used to (and I don't exactly have the necessary experience to cross reference this stuff with the stuff I already know in context).
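    For reference, what the SNAT form boils down to at the iptables level (a hedged sketch using the example addresses above and a hypothetical internal host 10.34.251.20; Zentyal generates something equivalent from the values you enter):

        iptables -t nat -A POSTROUTING -s 10.34.251.20 -o eth0 -j SNAT --to-source 69.1.1.2

    Read against the form fields: SNAT address is the public IP to appear as (69.1.1.2 here), Outgoing interface is the external NIC the traffic leaves through, Source is the internal host or subnet whose traffic should be rewritten, and Destination/Service are optional filters restricting the rule to particular remote addresses or ports; leaving them at "any" applies the rewrite to all traffic from that source.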

    Read the article

  • Windows Server 2008 R2 Firewall - Interface specific rules

    - by Mehmet Ergut
    I'm trying to define per interface rules, much like it was in Server 2003. We will be replacing our old 2003 server with a new 2008 R2 server. The server runs IIS and SQL Server. It's a dedicated server at the hosting company. We use a OpenVPN connection from the office to access SQL server, RDesktop, FTP and other administrative services. Only http and ssh is listening on the public interface. On the old server running 2003, I was able to define global rules for http and ssh, and allow other services only on the vpn interface. I can't find a way to do the same on 2008 R2. I understand that there is the Network Location Awareness service, firewall rules are applied according to the current network location. But I don't understand the purpose of this on a server. The only close solution I found is to set the scope on the firewall rule and restrict remote ip addresses to the private subnet of the office. But the ports will still be listening on the public interface. So how can I restrict a firewall rule to the connections coming from the vpn interface ? A note on this page states that scoping a rule to an interface does not exist anymore: In earlier versions of Windows, many of these command accepted a parameter called interface. This parameter is not supported in the firewall context in Windows Vista or later versions of Windows. I can't believe that they simply decided to remove a core firewall functionality that every firewall has. There must be a way to restrict a rule to an interface. Any ideas ? I'm still unable to find an adequate solution to my problem. So for now, my workaround is this: Administrative services listen on VPN IP address Firewall rules restrict the scope to the local IP address of VPN Public services listen on all interfaces, no scope restriction on firewall rules This is not optimal, if I change the IP address of the VPN, I need to edit the firewall rules too. It won't be the case if the rules were bound to the interface.
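    A sketch of the described workaround as netsh commands (hedged; assumes RDP on 3389 and a VPN-side address of 10.8.0.1 - "localip" scopes the rule to the address the service listens on, which is as close as 2008 R2 gets to binding a rule to an interface):

        netsh advfirewall firewall add rule name="RDP via VPN only" dir=in action=allow protocol=TCP localport=3389 localip=10.8.0.1
        netsh advfirewall firewall add rule name="HTTP any" dir=in action=allow protocol=TCP localport=80

    The drawback is the one already noted: if the VPN address changes, the localip value in the rule has to change with it.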

    Read the article

  • route http and ssh traffic normally, everything else via vpn tunnel

    - by Normadize
    I've read quite a bit and am close, I feel, and I'm pulling my hair out ... please help! I have an OpenVPN cliend whose server sets local routes and also changes the default gw (I know I can prevent that with --route-nopull). I'd like to have all outgoing http and ssh traffic via the local gw, and everything else via the vpn. Local IP is 192.168.1.6/24, gw 192.168.1.1. OpenVPN local IP is 10.102.1.6/32, gw 192.168.1.5 OpenVPN server is at {OPENVPN_SERVER_IP} Here's the route table after openvpn connection: # ip route show table main 0.0.0.0/1 via 10.102.1.5 dev tun0 default via 192.168.1.1 dev eth0 proto static 10.102.1.1 via 10.102.1.5 dev tun0 10.102.1.5 dev tun0 proto kernel scope link src 10.102.1.6 {OPENVPN_SERVER_IP} via 192.168.1.1 dev eth0 128.0.0.0/1 via 10.102.1.5 dev tun0 169.254.0.0/16 dev eth0 scope link metric 1000 192.168.1.0/24 dev eth0 proto kernel scope link src 192.168.1.6 metric 1 This makes all packets go via to the VPN tunnel except those destined for 192.168.1.0/24. Doing wget -qO- http://echoip.org shows the vpn server's address, as expected, the packets have 10.102.1.6 as source address (the vpn local ip), and are routed via tun0 ... as reported by tcpdump -i tun0 (tcpdump -i eth0 sees none of this traffic). What I tried was: create a 2nd routing table holding the 192.168.1.6/24 routing info (copied from the main table above) add an iptables -t mangle -I PREROUTING rule to mark packets destined for port 80 add an ip rule to match on the mangled packet and point it to the 2nd routing table add an ip rule for to 192.168.1.6 and from 192.168.1.6 to point to the 2nd routing table (though this is superfluous) changed the ipv4 filter validation to none in net.ipv4.conf.tun0.rp_filter=0 and net.ipv4.conf.eth0.rp_filter=0 I also tried an iptables mangle output rule, iptables nat prerouting rule. It still fails and I'm not sure what I'm missing: iptables mangle prerouting: packet still goes via vpn iptables mangle output: packet times out Is it not the case that to achieve what I want, then when doing wget http://echoip.org I should change the packet's source address to 192.168.1.6 before routing it off? But if I do that, the response from the http server would be routed back to 192.168.1.6 and wget would not see it as it is still bound to tun0 (the vpn interface)? Can a kind soul please help? What commands would you execute after the openvpn connects to achieve what I want? Looking forward to hair regrowth ...
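    A minimal sketch of the mark-based variant (assumes routing table 100 is unused and that only locally generated traffic matters, hence OUTPUT; adjust ports and names to taste):

        # alternate table that keeps using the local gateway
        ip route add 192.168.1.0/24 dev eth0 src 192.168.1.6 table 100
        ip route add default via 192.168.1.1 dev eth0 table 100

        # mark locally generated ssh/http traffic and route it via table 100
        iptables -t mangle -A OUTPUT -p tcp -m multiport --dports 22,80 -j MARK --set-mark 1
        ip rule add fwmark 1 table 100
        ip route flush cache

        # rewrite the source address, since it was already set to 10.102.1.6 when the
        # packet was first routed (before the fwmark re-route happened)
        iptables -t nat -A POSTROUTING -o eth0 -m mark --mark 1 -j SNAT --to-source 192.168.1.6

    The SNAT line is the piece that is usually missing: locally originated packets pick their source address during the first route lookup (towards tun0), so even after the fwmark rule re-routes them out eth0 they still carry the VPN address unless POSTROUTING rewrites it - which also means the replies come back to 192.168.1.6 and are delivered normally, with no tun0 involvement.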

    Read the article

  • D-Link DIR-300 slows down / loses network

    - by basic6
    there are 2 buildings (A and B). In bldg A is an open WLAN (which I'm allowed to use btw). In bldg B is a computer that I want to connect to that network. So I flashed an old D-Link DIR-300 AP with DD-WRT, mounted it to the wall (bldg B) near a window, attached a 13 dBi directional antenna (pointing to bldg A) and configured it as AP client in that wireless network. Then there's another AP, connected to the D-Link AP, acting as standard access point, which the computer is connected to. That's basically working so far, but: Every now and then the connection is lost. Not the connection between the computer and the D-Link (I can access the DD-WRT admin page normally) or the connection between the D-Link and the WLAN (in Status - Wireless it says it's still connected to the network), but when I want to access a web page (which only works if I'm connected to the wireless network from bldg A), my Firefox keeps "Looking for" (name resolution) without finding anything. When I reset the D-Link (power off, power on) in this situation, after some moments, everything's working fine again (Internet access). I've no idea why this is happening, but usually it's at most every few weeks (most times when nobody was using the computer, so no traffic). Compared to the connection speed when I connect directly to the WLAN in bldg A (Laptop), the speed in bldg B is rather slow, but I have the impression that this difference is worse in the last few days. A few minutes ago, I got 582 KB/s down and 911 KB/s up in bldg A (directly/laptop) and 84 KB/s down and 9 KB/s up in bldg B. The speed in bldg B used to be way higher (I remember 200 KB/s up) while the actual network speed in bldg A was lower than it is now (close to those 200). I'm aware that the wireless connection between those buildings should slow things down, but I'm wondering why this difference has become that extreme. Thanks for any tips... Update: I currently want to upload a large file (1.5 GB) via FTP (FileZilla). Since that caused the D-Link to disconnect (as described in first post), I took my laptop to bldg A, connected directly to the original WLAN (bypassing my D-Link) and tried the same upload. Guess what - same issue: At some point the connection is dead (at this point I would have reset my D-Link if I was connected to it). Just as the D-Link, my laptop is still connected, but not even name resolution is working ("Looking for..." in Firefox). After reconnecting, it's working again. Maybe my D-Link isn't the problem at all...
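    Not a fix for the root cause, but since a power-cycle reliably brings the link back, a small watchdog on the DD-WRT box (a sketch; assumes BusyBox cron is enabled and that the gateway in bldg A answers ping - substitute its real address for 192.168.6.1) can at least automate the reset:

        #!/bin/sh
        # reboot the AP client if the upstream WLAN stops passing traffic
        if ! ping -c 3 -W 5 192.168.6.1 >/dev/null 2>&1; then
            logger "wlan-watchdog: upstream unreachable, rebooting"
            reboot
        fi

    Run it every few minutes from cron; it will not explain the throughput drop, but it removes the need to walk over and pull the plug whenever name resolution hangs.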

    Read the article

  • Weird nfs performance: 1 thread better than 8, 8 better than 2!

    - by Joe
    I'm trying to determine the cause of poor nfs performance between two Xen Virtual Machines (client & server) running on the same host. Specifically, the speed at which I can sequentially read a 1GB file on the client is much lower than what would be expected based on the measured network connection speed between the two VMs and the measured speed of reading the file directly on the server. The VMs are running Ubuntu 9.04 and the server is using the nfs-kernel-server package. According to various NFS tuning resources, changing the number of nfsd threads (in my case kernel threads) can affect performance. Usually this advice is framed in terms of increasing the number from the default of 8 on heavily-used servers. What I find in my current configuration: RPCNFSDCOUNT=8: (default): 13.5-30 seconds to cat a 1GB file on the client so 35-80MB/sec RPCNFSDCOUNT=16: 18s to cat the file 60MB/s RPCNFSDCOUNT=1: 8-9 seconds to cat the file (!!?!) 125MB/s RPCNFSDCOUNT=2: 87s to cat the file 12MB/s I should mention that the file I'm exporting is on a RevoDrive SSD mounted on the server using Xen's PCI-passthrough; on the server I can cat the file in under seconds ( 250MB/s). I am dropping caches on the client before each test. I don't really want to leave the server configured with just one thread as I'm guessing that won't work so well when there are multiple clients, but I might be misunderstanding how that works. I have repeated the tests a few times (changing the server config in between) and the results are fairly consistent. So my question is: why is the best performance with 1 thread? A few other things I have tried changing, to little or no effect: increasing the values of /proc/sys/net/ipv4/ipfrag_low_thresh and /proc/sys/net/ipv4/ipfrag_high_thresh to 512K, 1M from the default 192K,256K increasing the value of /proc/sys/net/core/rmem_default and /proc/sys/net/core/rmem_max to 1M from the default of 128K mounting with client options rsize=32768, wsize=32768 From the output of sar -d I understand that the actual read sizes going to the underlying device are rather small (<100 bytes) but this doesn't cause a problem when reading the file locally on the client. The RevoDrive actually exposes two "SATA" devices /dev/sda and /dev/sdb, then dmraid picks up a fakeRAID-0 striped across them which I have mounted to /mnt/ssd and then bind-mounted to /export/ssd. I've done local tests on my file using both locations and see the good performance mentioned above. If answers/comments ask for more details I will add them.
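    For quicker experimentation than editing the init config each time, the thread count can also be changed at runtime (a sketch; Ubuntu 9.04 era nfs-kernel-server):

        # set the number of kernel nfsd threads on the fly
        sudo rpc.nfsd 1
        # or read/write it through procfs
        cat /proc/fs/nfsd/threads
        echo 8 | sudo tee /proc/fs/nfsd/threads

    That makes it easy to sweep through thread counts while re-running the same cat test (dropping caches in between, as above) to see whether the 1-thread result is stable or an artifact of something else.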

    Read the article

  • Cooling Server Rack with Water? Sensible? Reuse energy for small installation?

    - by TomTom
    First - this is not a shopping question; it is not so much about concrete prices as about general feasibility. It makes no sense to go looking for a manufacturer if the approach is bad. I am moving my company to new offices in September, and among other things we will expand and consolidate our number crunching cluster. So far it is in a data center. I have a nice room in the basement prepared now. I am thinking about cooling. We will likely run up a power usage of around 10 kW by the end of the year. That is a LOT of stuff, and cooling will be expensive. I am located in southern Poland, close to the German border. This is an area where water is available for a relatively cheap price - "wasting water" is not a concern here. My situation is thus a lot different than, for example, in Spain ;) Physics tells me that to heat 1 liter of water by 1 degree I use 1 Calorie (1 kcal), and a kWh (and we can assume 100% efficiency - water heaters are pretty efficient) is 750 Calories. That means 1 kWh heats 750 liters by 1 degree. 10 kW and a 20 degree rise would mean that per hour I need 375 liters. That is 6.25 liters per minute and not THAT much ;) We are talking 270 cubic meters here. Even in summer, the significant run of underground pipes really cools down the water a LOT more ;) Question: Is such an approach feasible? Has anyone done that? We are talking about a 10 kW installation for now. Is it feasible to reuse that heat? The alternative is a decent cooling system that WILL use around 2.5 kWh for running. Using the water would basically (a) get me a quite cold input compared to the outside air even in summer (i.e. a lower temperature medium to dump the heat into) and (b) replace the need to actually have the outside cooling (which may be problematic - if the air is 22 degrees, that is a LOT to fight off, but OTOH the water will be quite cold). I would also possibly save the investment for the outside part of the cooling circuit. Now, the second question - is there a feasible way to heat a house with that? ;) After all, brutally speaking, there is a LOT of energy in that water ;) If it is a bad idea, I stop here - if it is not, I start looking for suppliers. Maybe my math is wrong?
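    As a sanity check of the arithmetic (a sketch; assumes the textbook specific heat of water, about 4186 J per kg and kelvin, and the 20 degree rise mentioned above):

        awk -v p=10000 -v dt=20 -v c=4186 'BEGIN {
            flow = p / (c * dt)              # kg of water per second
            printf "%.2f kg/s = %.1f l/min = %.0f l/h\n", flow, flow * 60, flow * 3600
        }'
        # prints roughly: 0.12 kg/s = 7.2 l/min = 430 l/h

    That is a little more than the 6.25 liters per minute above (a kWh is closer to 860 kcal than 750), but the order of magnitude holds: roughly 430 liters per hour, or around 10 cubic meters per day at a constant 10 kW.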

    Read the article

  • What DNS server to use for dynamic load-balancing of website?

    - by Marki555
    I will have 2 servers in different datacenters (different countries) and I want to use DNS load-balancing mainly for High Availability of the website hosted on those 2 servers. It is just an ad tracking site, which records hits in a local database and returns a few lines of HTML code. I want to return 2 A records each time because of DNS pinning in browsers (if one server fails, the browser will try the second A record, which it has already cached). Both servers will also be acting as DNS servers for redundancy. Now comes my proposed solution: I will use BIND and have both servers as a master for that zone. On each server there will be a running script, which will periodically test availability (http) of both servers and remove the IP from DNS in case of failure. Now the questions :) 1) Is BIND suitable for this solution? I think BIND performance is good and it is easy to manipulate the zone file via script. And as I will modify the zone only in case of failure/maintenance, the modifications (and thus BIND reloads) won't be frequent. 2) I plan to use a TTL of 5 minutes. The website will have about 1000-3000 req/s but from distinct clients (each IP only 1-3 requests), so I think the DNS load won't be too much. I suppose their ISPs will cache the responses for those 5 mins. Is there any reason to lower the TTL even more? 3) Is my master-master approach good? Or should I make one of the servers master and the other one slave? Right now each server can monitor both itself and the other one. If only the webservice fails, both DNS nodes will notice it. If the whole server fails, then the remaining DNS node will notice it and the failed node will not answer DNS queries anyway. 4) Is it a big issue when one NS server does not respond to queries? If yes, I can add a third DNS, so at any time at least 2 of them would accept queries... 5) Should I rewrite the zone file via script, or just use dynamic DNS update (for example via the nsupdate utility)?
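    A sketch of the zone data for the two-A-record part (documentation IPs used as placeholders; in normal operation both records are served, and the failure-detection script rewrites the file with a single record and a bumped serial, then reloads):

        $TTL 300
        @       IN  SOA  ns1.example.com. hostmaster.example.com. (
                         2014010101  ; serial - must increase on every rewrite
                         3600 900 604800 300 )
                IN  NS   ns1.example.com.
                IN  NS   ns2.example.com.
        www     IN  A    198.51.100.10
        www     IN  A    203.0.113.10

    After the script rewrites the file, "rndc reload example.com" is enough to make BIND pick up the change; with both servers authoritative as masters, each script only ever edits its own copy of the zone.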

    Read the article

  • Bridging and iptables SNAT conflict

    - by sad_admin
    Hello I am working on a setup here and have it working with one minor exception. Devices on one side of my bridge aren't getting SNAT'd to the Internet. The Diagram / Overview: Primary_Network (Site_A) | | Internet ------- Linux_Bridge_GW (GW) | | Secondary/CoLo Site (Site_B) Here is the setup: 1.) Site_A has all the production servers and workstations. 2.) Site_B has a set of servers that we would like to fail-over to and also serve our internet facing services from. 3.) GW has two interfaces that are trunked and carrying the appropriate VLAN traffic (allow layer-2 propagation of traffic between sites) //this all works perfectly fine. 4.) The problem that is being encountered is, hosts from Site_B have their default GW at Site_A (same subnet) GW does not have IPs on the VLANs that are being passed. 5.) All hosts at Site_A can reach the Internet without problem. 6.) GW has an addresses on a subnet that is ONLY for Internet destined traffic. (This was done so that Websense would not have to parse unnecessary traffic. We use this VLAN as the monitor port's source on the switch where Websense is sitting). What I think is happening: 1.) Packet/Frame comes in on physdev at Site_B destined for Internet. 2.) Kernel sees packet, and forwards it out the other side of the bridge to that host's default GW. 3.) Site_A (containing core-network's Default-GW) sees that packet is destined for a host it doesn't know about, so it sends it to it's default GW (the linux bridge, since it's Internet bound). 4.) The kernel says "Hey, I've seen you before" and therefore doesn't do SNAT'ing on the packet and sends it out to the Internet where it's black-holed. Why I think it's happening: 1.) A tcpdump on the internet facing NIC shows the packet leaving the interface with the private address as it's source. What I would like: 1.) Have the packet SNAT'd. 2.) Something like the below would be awesome a.) packet comes in from Site_B b.) kernel sees that the packet is NOT destined for itself or any private address c.) kernel says "OK, well since you're destined for the Internet I'm going to send you out this interface rather than forward you to your normal default GW that's WAAAY over there." d.) packet comes in from internet and is sent out the appropriate bridge physdev depending on which site the host it's destined for is at. Thanks for any assistance or guidance that you are willing to offer. Best Regards, Sad Admin
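    One thing worth trying (a hedged sketch; the interface names and the 203.0.113.1 public address are placeholders): keep purely bridged Site_A/Site_B traffic out of iptables and connection tracking, and match the SNAT rule on the source network rather than relying on where the connection was first seen:

        # bridged frames no longer traverse the iptables chains / conntrack
        sysctl -w net.bridge.bridge-nf-call-iptables=0

        # anything from the private range that is actually routed out the Internet NIC gets rewritten
        iptables -t nat -A POSTROUTING -s 192.168.0.0/16 -o eth0 -j SNAT --to-source 203.0.113.1

    The reasoning: with bridge-nf enabled, the first (bridged) pass already creates a conntrack entry without NAT, and NAT is only evaluated for the first packet of a connection - which matches the "hey, I've seen you before" behaviour described above. Disabling bridge-nf-call-iptables means the NAT decision is only made when the packet is genuinely routed out towards the Internet.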

    Read the article

  • Reading input from a text file omits the first entry and adds a nonsense value to the end?

    - by Greenhouse Gases
    Hi there When I input locations from a txt file I am getting a peculiar error where it seems to miss off the first entry, yet add a garbage entry to the end of the link list (it is designed to take the name, latitude and longitude for each location you will notice). I imagine this to be an issue with where it starts collecting the inputs and where it stops but I cant find the error!! It reads the first line correctly but then skips to the next before adding it because during testing for the bug it had no record of the first location Lisbon though whilst stepping into the method call it was reading it. Very bizarre but hopefully someone knows the issue. Here is firstly my header file: #include <string> struct locationNode { char nodeCityName [35]; double nodeLati; double nodeLongi; locationNode* Next; void CorrectCase() // Correct upper and lower case letters of input { int MAX_SIZE = 35; int firstLetVal = this->nodeCityName[0], letVal; int n = 1; // variable for name index from second letter onwards if((this->nodeCityName[0] >90) && (this->nodeCityName[0] < 123)) // First letter is lower case { firstLetVal = firstLetVal - 32; // Capitalise first letter this->nodeCityName[0] = firstLetVal; } while(n <= MAX_SIZE - 1) { if((this->nodeCityName[n] >= 65) && (this->nodeCityName[n] <= 90)) { letVal = this->nodeCityName[n] + 32; this->nodeCityName[n] = letVal; } n++; } //cityNameInput = this->nodeCityName; } }; class Locations { private: int size; public: Locations(){ }; // constructor for the class locationNode* Head; //int Add(locationNode* Item); }; And here is the file containing main: // U08221.cpp : main project file. #include "stdafx.h" #include "Locations.h" #include <iostream> #include <string> #include <fstream> using namespace std; int n = 0,x, locationCount = 0, MAX_SIZE = 35; string cityNameInput; char targetCity[35]; bool acceptedInput = false, userInputReq = true, match = false, nodeExists = false;// note: addLocation(), set to true to enable user input as opposed to txt file locationNode *start_ptr = NULL; // pointer to first entry in the list locationNode *temp, *temp2; // Part is a pointer to a new locationNode we can assign changing value followed by a call to Add locationNode *seek, *bridge; void setElementsNull(char cityParam[]) { int y=0, count =0; while(cityParam[y] != NULL) { y++; } while(y < MAX_SIZE) { cityParam[y] = NULL; y++; } } void addLocation() { temp = new locationNode; // declare the space for a pointer item and assign a temporary pointer to it if(!userInputReq) // bool that determines whether user input is required in adding the node to the list { cout << endl << "Enter the name of the location: "; cin >> temp->nodeCityName; temp->CorrectCase(); setElementsNull(temp->nodeCityName); cout << endl << "Please enter the latitude value for this location: "; cin >> temp->nodeLati; cout << endl << "Please enter the longitude value for this location: "; cin >> temp->nodeLongi; cout << endl; } temp->Next = NULL; //set to NULL as when one is added it is currently the last in the list and so can not point to the next if(start_ptr == NULL){ // if list is currently empty, start_ptr will point to this node start_ptr = temp; } else { temp2 = start_ptr; // We know this is not NULL - list not empty! while (temp2->Next != NULL) { temp2 = temp2->Next; // Move to next link in chain until reach end of list } temp2->Next = temp; } ++locationCount; // increment counter for number of records in list if(!userInputReq){ cout << "Location sucessfully added to the database! 
There are " << locationCount << " location(s) stored" << endl; } } void populateList(){ ifstream inputFile; inputFile.open ("locations.txt", ios::in); userInputReq = true; temp = new locationNode; // declare the space for a pointer item and assign a temporary pointer to it do { inputFile.get(temp->nodeCityName, 35, ' '); setElementsNull(temp->nodeCityName); inputFile >> temp->nodeLati; inputFile >> temp->nodeLongi; setElementsNull(temp->nodeCityName); if(temp->nodeCityName[0] == 10) //remove linefeed from input { for(int i = 0; temp->nodeCityName[i] != NULL; i++) { temp->nodeCityName[i] = temp->nodeCityName[i + 1]; } } addLocation(); } while(!inputFile.eof()); userInputReq = false; cout << "Successful!" << endl << "List contains: " << locationCount << " entries" << endl; cout << endl; inputFile.close(); } bool nodeExistTest(char targetCity[]) // see if entry is present in the database { match = false; seek = start_ptr; int letters = 0, letters2 = 0, x = 0, y = 0; while(targetCity[y] != NULL) { letters2++; y++; } while(x <= locationCount) // locationCount is number of entries currently in list { y=0, letters = 0; while(seek->nodeCityName[y] != NULL) // count letters in the current name { letters++; y++; } if(letters == letters2) // same amount of letters in the name { y = 0; while(y <= letters) // compare each letter against one another { if(targetCity[y] == seek->nodeCityName[y]) { match = true; y++; } else { match = false; y = letters + 1; // no match, terminate comparison } } } if(match) { x = locationCount + 1; //found match so terminate loop } else{ if(seek->Next != NULL) { bridge = seek; seek = seek->Next; x++; } else { x = locationCount + 1; // end of list so terminate loop } } } return match; } void deleteRecord() // complete this { int junction = 0; locationNode *place; cout << "Enter the name of the city you wish to remove" << endl; cin >> targetCity; setElementsNull(targetCity); if(nodeExistTest(targetCity)) //if this node does exist { if(seek == start_ptr) // if it is the first in the list { junction = 1; } if(seek != start_ptr && seek->Next == NULL) // if it is last in the list { junction = 2; } switch(junction) // will alter list accordingly dependant on where the searched for link is { case 1: start_ptr = start_ptr->Next; delete seek; --locationCount; break; case 2: place = seek; seek = bridge; delete place; --locationCount; break; default: bridge->Next = seek->Next; delete seek; --locationCount; break; } } else { cout << targetCity << "That entry does not currently exist" << endl << endl << endl; } } void searchDatabase() { char choice; cout << "Enter search term..." << endl; cin >> targetCity; if(nodeExistTest(targetCity)) { cout << "Entry: " << endl << endl; } else { cout << "Sorry, that city is not currently present in the list." << endl << "Would you like to add this city now Y/N?" << endl; cin >> choice; /*while(choice != ('Y' || 'N')) { cout << "Please enter a valid choice..." 
<< endl; cin >> choice; }*/ switch(choice) { case 'Y': addLocation(); break; case 'N': break; default : cout << "Invalid choice" << endl; break; } } } void printDatabase() { temp = start_ptr; // set temp to the start of the list do { if (temp == NULL) { cout << "You have reached the end of the database" << endl; } else { // Display details for what temp points to at that stage cout << "Location : " << temp->nodeCityName << endl; cout << "Latitude : " << temp->nodeLati << endl; cout << "Longitude : " << temp->nodeLongi << endl; cout << endl; // Move on to next locationNode if one exists temp = temp->Next; } } while (temp != NULL); } void nameValidation(string name) { n = 0; // start from first letter x = name.size(); while(!acceptedInput) { if((name[n] >= 65) && (name[n] <= 122)) // is in the range of letters { while(n <= x - 1) { while((name[n] >=91) && (name[n] <=97)) // ERROR!! { cout << "Please enter a valid city name" << endl; cin >> name; } n++; } } else { cout << "Please enter a valid city name" << endl; cin >> name; } if(n <= x - 1) { acceptedInput = true; } } cityNameInput = name; } int main(array<System::String ^> ^args) { //main contains test calls to functions at present cout << "Populating list..."; populateList(); printDatabase(); deleteRecord(); printDatabase(); cin >> cityNameInput; } The text file contains this (ignore the names, they are just for testing!!): Lisbon 45 47 Fattah 45 47 Darius 42 49 Peter 45 27 Sarah 85 97 Michelle 45 47 John 25 67 Colin 35 87 Shiron 40 57 George 34 45 Sean 22 33 The output omits Lisbon, but adds on a garbage entry with nonsense values. Any ideas why? Thank you in advance.

    Read the article

  • SQL SERVER – Enumerations in Relational Database – Best Practice

    - by pinaldave
    Marko Parkkola This article has been submitted by Marko Parkkola, Data systems designer at Saarionen Oy, Finland. Marko is excellent developer and always thinking at next level. You can read his earlier comment which created very interesting discussion here: SQL SERVER- IF EXISTS(Select null from table) vs IF EXISTS(Select 1 from table). I must express my special thanks to Marko for sending this best practice for Enumerations in Relational Database. He has really wrote excellent piece here and welcome comments here. Enumerations in Relational Database This is a subject which is very basic thing in relational databases but often not very well understood and sometimes badly implemented. There are of course many ways to do this but I concentrate only two cases, one which is “the right way” and one which is definitely wrong way. The concept Let’s say we have table Person in our database. Person has properties/fields like Firstname, Lastname, Birthday and so on. Then there’s a field that tells person’s marital status and let’s name it the same way; MaritalStatus. Now MaritalStatus is an enumeration. In C# I would definitely make it an enumeration with values likes Single, InRelationship, Married, Divorced. Now here comes the problem, SQL doesn’t have enumerations. The wrong way This is, in my opinion, absolutely the wrong way to do this. It has one upside though; you’ll see the enumeration’s description instantly when you do simple SELECT query and you don’t have to deal with mysterious values. There’s plenty of downsides too and one would be database fragmentation. Consider this (I’ve left all indexes and constraints out of the query on purpose). CREATE TABLE [dbo].[Person] ( [Firstname] NVARCHAR(100), [Lastname] NVARCHAR(100), [Birthday] datetime, [MaritalStatus] NVARCHAR(10) ) You have nvarchar(20) field in the table that tells the marital status. Obvious problem with this is that what if you create a new value which doesn’t fit into 20 characters? You’ll have to come and alter the table. There are other problems also but I’ll leave those for the reader to think about. The correct way Here’s how I’ve done this in many projects. This model still has one problem but it can be alleviated in the application layer or with CHECK constraints if you like. First I will create a namespace table which tells the name of the enumeration. I will add one row to it too. I’ll write all the indexes and constraints here too. CREATE TABLE [CodeNamespace] ( [Id] INT IDENTITY(1, 1), [Name] NVARCHAR(100) NOT NULL, CONSTRAINT [PK_CodeNamespace] PRIMARY KEY ([Id]), CONSTRAINT [IXQ_CodeNamespace_Name] UNIQUE NONCLUSTERED ([Name]) ) GO INSERT INTO [CodeNamespace] SELECT 'MaritalStatus' GO Then I create a table that holds the actual values and which reference to namespace table in order to group the values under different namespaces. I’ll add couple of rows here too. CREATE TABLE [CodeValue] ( [CodeNamespaceId] INT NOT NULL, [Value] INT NOT NULL, [Description] NVARCHAR(100) NOT NULL, [OrderBy] INT, CONSTRAINT [PK_CodeValue] PRIMARY KEY CLUSTERED ([CodeNamespaceId], [Value]), CONSTRAINT [FK_CodeValue_CodeNamespace] FOREIGN KEY ([CodeNamespaceId]) REFERENCES [CodeNamespace] ([Id]) ) GO -- 1 is the 'MaritalStatus' namespace INSERT INTO [CodeValue] SELECT 1, 1, 'Single', 1 INSERT INTO [CodeValue] SELECT 1, 2, 'In relationship', 2 INSERT INTO [CodeValue] SELECT 1, 3, 'Married', 3 INSERT INTO [CodeValue] SELECT 1, 4, 'Divorced', 4 GO Now there’s four columns in CodeValue table. 
CodeNamespaceId tells under which namespace values belongs to. Value tells the enumeration value which is used in Person table (I’ll show how this is done below). Description tells what the value means. You can use this, for example, column in UI’s combo box. OrderBy tells if the values needs to be ordered in some way when displayed in the UI. And here’s the Person table again now with correct columns. I’ll add one row here to show how enumerations are to be used. CREATE TABLE [dbo].[Person] ( [Firstname] NVARCHAR(100), [Lastname] NVARCHAR(100), [Birthday] datetime, [MaritalStatus] INT ) GO INSERT INTO [Person] SELECT 'Marko', 'Parkkola', '1977-03-04', 3 GO Now I said earlier that there is one problem with this. MaritalStatus column doesn’t have any database enforced relationship to the CodeValue table so you can enter any value you like into this field. I’ve solved this problem in the application layer by selecting all the values from the CodeValue table and put them into a combobox / dropdownlist (with Value field as value and Description as text) so the end user can’t enter any illegal values; and of course I’ll check the entered value in data access layer also. I said in the “The wrong way” section that there is one benefit to it. In fact, you can have the same benefit here by using a simple view, which I schema bound so you can even index it if you like. CREATE VIEW [dbo].[Person_v] WITH SCHEMABINDING AS SELECT p.[Firstname], p.[Lastname], p.[BirthDay], c.[Description] MaritalStatus FROM [dbo].[Person] p JOIN [dbo].[CodeValue] c ON p.[MaritalStatus] = c.[Value] JOIN [dbo].[CodeNamespace] n ON n.[Id] = c.[CodeNamespaceId] AND n.[Name] = 'MaritalStatus' GO -- Select from View SELECT * FROM [dbo].[Person_v] GO This is excellent write up byMarko Parkkola. Do you have this kind of design setup at your organization? Let us know your opinion. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Best Practices, Database, DBA, Readers Contribution, Software Development, SQL, SQL Authority, SQL Documentation, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • How to Animate Text and Objects in PowerPoint 2010

    - by DigitalGeekery
    Are you looking for an eye catching way to keep your audience interested in your PowerPoint presentations? Today we’ll take a look at how to add animation effects to objects in PowerPoint 2010. Select the object you wish to animate and then click the More button in the Animation group of the Animation tab.   Animations are grouped into four categories. Entrance effects, Exit effects, Emphasis effects, and Motion Paths. You can get a Live Preview of how the animation will look by hovering your mouse over an animation effect.   When you select a Motion Path, your object will move along the dashed path line as shown on the screen. (This path is not displayed in the final output) Certain aspects of the Motion Path effects are editable. When you apply a Motion Path animation to an object, you can select the path and drag the end to change the length or size of the path. The green marker along the motion path marks the beginning of the  path and the red marks the end. The effects can be rotated by clicking and the bar near the center of the effect.   You can display additional effects by choosing one of the options at the bottom. This will pop up a Change Effect window. If you have Preview Effect checked at the lower left you can preview the effects by single clicking.   Apply Multiple Animations to an Object Select the object and then click the Add Animation button to display the animation effects. Just as we did with the first effect, you can hover over to get a live preview. Click to apply the effect. The animation effects will happen in the order they are applied. Animation Pane You can view a list of the animations applied to a slide by opening the Animation Pane. Select the Animation Pane button from the Advanced Animation group to display the Animation Pane on the right. You’ll see that each animation effect in the animation pane has an assigned number to the left.    Timing Animation Effects You can change when your animation starts to play. By default it is On Click. To change it, select the effect in the Animation Pane and then choose one of the options from the Start dropdown list. With Previous starts at the same time as the previous animation and After Previous starts after the last animation. You can also edit the duration that the animations plays and also set a delay.   You can change the order in which the animation effects are applied by selecting the effect in the animation pane and clicking Move Earlier or Move Later from the Timing group on the Animation tab. Effect Options If the Effect Options button is available when your animation is selected, then that particular animation has some additional effect settings that can be configured. You can access the Effect Option by right-clicking on the the animation in the Animation Pane, or by selecting Effect Options on the ribbon.   The available options will vary by effect and not all animation effects will have Effect Options settings. In the example below, you can change the amount of spinning and whether the object will spin clockwise or counterclockwise.   Under Enhancements, you can add sound effects to your animation. When you’re finished click OK.   Animating Text Animating Text works the same as animating an object. Simply select your text box and choose an animation. Text does have some different Effect Options. By selecting a sequence, you decide whether the text appears as one object, all at once, or by paragraph. 
    As is the case with objects, there will be different available Effect Options depending on the animation you choose. Some animations, such as the Fly In animation, will have directional options.

    Testing Your Animations
    Click on the Preview button at any time to test how your animations look. You can also select the Play button on the Animation Pane.

    Conclusion
    Animation effects are a great way to focus audience attention on important points and hold viewers' interest in your PowerPoint presentations. Another cool way to spice up your PPT 2010 presentations is to add video from the web. What tips do you guys have for making your PowerPoint presentations more interesting?
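    If you ever need to apply the same animations across many slides or decks, the ribbon steps above can also be scripted. Here is a minimal sketch using the PowerPoint 2010 interop object model from C#; the file path, slide and shape indexes and effect choices are placeholders, and it assumes a local PowerPoint installation with a reference to the Microsoft.Office.Interop.PowerPoint assembly:

    using Office = Microsoft.Office.Core;
    using PowerPoint = Microsoft.Office.Interop.PowerPoint;

    class AnimationDemo
    {
        static void Main()
        {
            var app = new PowerPoint.Application();
            app.Visible = Office.MsoTriState.msoTrue;

            // Placeholder path - open the deck you want to modify.
            var pres = app.Presentations.Open(@"C:\Demo\Deck.pptx");
            var slide = pres.Slides[1];
            var shape = slide.Shapes[1];

            // Entrance effect (Fly In) triggered on click - the ribbon's default behaviour.
            slide.TimeLine.MainSequence.AddEffect(
                shape,
                PowerPoint.MsoAnimEffect.msoAnimEffectFly,
                PowerPoint.MsoAnimateByLevel.msoAnimateLevelNone,
                PowerPoint.MsoAnimTriggerType.msoAnimTriggerOnPageClick);

            // A second effect on the same shape (Spin) that runs after the previous one,
            // which is what the Add Animation button does in the UI.
            var spin = slide.TimeLine.MainSequence.AddEffect(
                shape,
                PowerPoint.MsoAnimEffect.msoAnimEffectSpin,
                PowerPoint.MsoAnimateByLevel.msoAnimateLevelNone,
                PowerPoint.MsoAnimTriggerType.msoAnimTriggerAfterPrevious);
            spin.Timing.Duration = 2f;   // roughly the Duration box on the Animation tab

            pres.Save();
        }
    }

    AddEffect appends to the slide's main animation sequence, so calling it twice on the same shape mirrors the Add Animation button rather than replacing the first effect.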

    Read the article

  • Oracle Announces New Oracle Exastack Program for ISV Partners

    - by pfolgado
    Oracle Exastack Program Enables ISV Partners to Leverage a Scalable, Integrated Infrastructure to Deliver Their Applications Tuned and Optimized for High-Performance News Facts Enabling Independent Software Vendors (ISVs) and other members of Oracle Partner Network (OPN) to rapidly build and deliver faster, more reliable applications to end customers, Oracle today introduced Oracle Exastack Ready, available now, and Oracle Exastack Optimized, available in fall 2011 through OPN. The Oracle Exastack Program focuses on helping ISVs run their solutions on Oracle Exadata Database Machine and Oracle Exalogic Elastic Cloud -- integrated systems in which the software and hardware are engineered to work together. These products provide partners with a lower cost and high performance infrastructure for database and application workloads across on-premise and cloud based environments. Leveraging the new Oracle Exastack Program in which applications can qualify as Oracle Exastack Ready or Oracle Exastack Optimized, partners can use available OPN resources to optimize their applications to run faster and more reliably -- providing increased performance to their end users. By deploying their applications on Oracle Exadata Database Machine and Oracle Exalogic Elastic Cloud, ISVs can reduce the cost, time and support complexities typically associated with building and maintaining a disparate application infrastructure -- enabling them to focus more on their core competencies, accelerating innovation and delivering superior value to customers. After qualifying their applications as Oracle Exastack Ready, partners can note to customers that their applications run on and support Oracle Exadata Database Machine and Oracle Exalogic Elastic Cloud component products including Oracle Solaris, Oracle Linux, Oracle Database and Oracle WebLogic Server. Customers can be confident when choosing a partner's Oracle Exastack Optimized application, knowing it has been tuned by the OPN member on Oracle Exadata Database Machine or Oracle Exalogic Elastic Cloud with a goal of delivering optimum speed, scalability and reliability. Partners participating in the Oracle Exastack Program can also leverage their Oracle Exastack Ready and Oracle Exastack Optimized applications to advance to Platinum or Diamond level in OPN. Oracle Exastack Programs Provide ISVs a Reliable, High-Performance Application Infrastructure With the Oracle Exastack Program ISVs have several options to qualify and tune their applications with Oracle Exastack, including: Oracle Exastack Ready: Oracle Exastack Ready provides qualifying partners with specific branding and promotional benefits based on their adoption of Oracle products. If a partner application supports the latest major release of one of these products, the partner may use the corresponding logo with their product marketing materials: Oracle Solaris Ready, Oracle Linux Ready, Oracle Database Ready, and Oracle WebLogic Ready. Oracle Exastack Ready is available to OPN members at the Gold level or above. Additionally, OPN members participating in the program can leverage their Oracle Exastack Ready applications toward advancement to the Platinum or Diamond levels in the OPN Specialized program and toward achieving Oracle Exastack Optimized status. 
Oracle Exastack Optimized: When available, for OPN members at the Gold level or above, Oracle Exastack Optimized will provide direct access to Oracle technical resources and dedicated Oracle Exastack lab environments so OPN members can test and tune their applications to deliver optimal performance and scalability on Oracle Exadata Database Machine or Oracle Exalogic Elastic Cloud. Oracle Exastack Optimized will provide OPN members with specific branding and promotional benefits including the use of the Oracle Exastack Optimized logo. OPN members participating in the program will also be able to leverage their Oracle Exastack Optimized applications toward advancement to Platinum or Diamond level in the OPN Specialized program. Oracle Exastack Labs and ISV Enablement: Dedicated Oracle Exastack lab environments and related technical enablement resources (including Guided Learning Paths and Boot Camps) will be available through OPN for OPN members to further their knowledge of Oracle Exastack offerings, and qualify their applications for Oracle Exastack Optimized or Oracle Exastack Ready. Oracle Exastack labs will be available to qualifying OPN members at the Gold level or above. Partners are eligible to participate in the Oracle Exastack Ready program immediately, which will help them meet the requirements to attain Oracle Exastack Optimized status in the future. Guidelines for Oracle Exastack Optimized, as well as Oracle Exastack Labs will be available in fall 2011. Supporting Quotes "In order to effectively differentiate their software applications in the marketplace, ISVs need to rapidly deliver new capabilities and performance improvements," said Judson Althoff, Oracle senior vice president of Worldwide Alliances and Channels and Embedded Sales. "With Oracle Exastack, ISVs have the ability to optimize and deploy their applications with a complete, integrated and cloud-ready infrastructure that will help them accelerate innovation, unlock new features and functionality, and deliver superior value to customers." "We view performance as absolutely critical and a key differentiator," said Tom Stock, SVP of Product Management, GoldenSource. "As a leading provider of enterprise data management solutions for securities and investment management firms, with Oracle Exadata Database Machine, we see an opportunity to notably improve data processing performance -- providing high quality 'golden copy' data in a reduced timeframe. Achieving Oracle Exastack Optimized status will be a stamp of approval that our solution will provide the performance and scalability that our customers demand." "As a leading provider of Revenue Intelligence solutions for telecommunications, media and entertainment service providers, our customers continually demand more readily accessible, enriched and pre-analyzed information to minimize their financial risks and maximize their margins," said Alon Aginsky, President and CEO of cVidya Networks. "Oracle Exastack enables our solutions to deliver the power, infrastructure, and innovation required to transform our customers' business operations and stay ahead of the game." Supporting Resources Oracle PartnerNetwork (OPN) Oracle Exastack Oracle Exastack Datasheet Judson Althoff blog Connect with the Oracle Partner community at OPN on Facebook, OPN on LinkedIn, OPN on YouTube, or OPN on Twitter

    Read the article

  • Framework 4 Features: Summary of Security enhancements

    - by Anthony Shorten
    In the last log entry I mentioned one of the new security features in Oracle Utilities Application Framework 4.0.1. Security is one of the major "tent poles" (to borrow a phrase from Steve Jobs) in this release of the framework. There are a number of security-related enhancements, requested by customers and arising from internal reviews, that we have introduced. Here is a summary of some of the security enhancements we have added in this release:

    Security Cache Changes - Security authorization information is automatically cached on the server for performance reasons (security is checked for every single call the product makes, for all modes of access). Prior to this release the cache auto-refreshed every 30 minutes (or so). This has been made more nimble by supporting a cache refresh every minute (or so). This means authorization changes are reflected more quickly than before.

    Business Level Security - Business Services are configurable services that are based upon Application Services. Typically, the business service inherited its security profile from its parent service. Whilst this is sufficient for most needs, it is now possible to further specify security on the Business Service definition itself. This will allow granular security and allow the same application service to be exposed as different Business Services with their own security. This is particularly useful when you base a Business Service on a query zone.

    User Propagation - As with other client-server applications, the database connections are pooled and shared as needed. This means that a common database user is used to access the database from the pool to allow sharing. Unfortunately, this means that traceability at the database level is that much harder. In Oracle Utilities Application Framework V4 the end userid is now propagated to the database using the CLIENT_IDENTIFIER as part of the Oracle JDBC connection API (see the sketch at the end of this list). This not only means that the common database userid is still used, but also that the end user is identifiable for the duration of the database call. This can be used for monitoring or to hook into Oracle's database security products. This enhancement is only available to Oracle Database customers.

    Enhanced Security Definitions - Security Administrators use the product browser front end to control access rights of defined users. While this is sufficient for most sites, a new security portal has been introduced to speed up the maintenance of security information.

    Oracle Identity Manager Integration - With the popularity of Oracle's Identity Management Suite, the Framework now provides an integration adapter and Identity Manager Generic Transport Connector (GTC) to allow users and group membership to be provisioned to any Oracle Utilities Application Framework based product from Oracle's Identity Manager. This is also available for Oracle Utilities Application Framework V2.2 customers. Refer to My Oracle Support KB Id 970785.1 - Oracle Identity Manager Integration Overview.

    Audit On Inquiry - Typically the configurable audit facility in the Oracle Utilities Application Framework is used to audit changes to records. In Oracle Utilities Application Framework the Business Services and Service Scripts could already be configured to audit inquiries as well. Now it is possible to attach auditing capabilities to zones on the product (including base package ones).

    Time Zone Support - In some of the Oracle Utilities Application Framework based products, the timezone of the end user is a factor in the processing.
    The user object has been extended to allow the recording of time zone information for use in product functionality.

    JAAS Support - Internally the Oracle Utilities Application Framework uses a number of techniques to validate and transmit security information across the architecture. These various methods have been reconciled into using Java Authentication and Authorization Services for standardized security. This is strictly an internal change with no direct impact on how security operates externally.

    JMX Based Cache Management - In the last bullet point I mentioned extra security applied to cache management from the browser. Alternatively, a JMX-based interface is now provided to allow IT operations to control the cache without the browser interface. This JMX capability can be initiated from a JSR 160 compliant JMX console or JMX browser. I will be writing another, more detailed blog entry on the JMX enhancements as it is quite a change and an exciting direction for the product line.

    Data Patch Permissions - The database installer provided with the product required lower levels of security for some operations. At some sites they wanted the ability for non-DBAs to execute the utilities in a controlled fashion. The framework now allows feature configuration to allow delegation for patch execution.

    User Enable Support - At some sites, the use of temporary staff such as contractors is commonplace. In this scenario, temporary security setups were required and used. A potential issue arose when the contractor left the company. Typically the IT group would remove the contractor from the security repository to prevent login using that contractor's userid, but the userid could NOT be removed from the authorization model because of audit requirements (if any user in the product updates financials or key data, their userid is recorded for audit purposes). It is now possible to effectively disable the user from the security model to prevent any use of the userid whilst retaining audit information.

    These are a subset of the security changes in Oracle Utilities Application Framework. More details about the security capabilities of the product are contained in My Oracle Support KB Id 773473.1 - Oracle Utilities Application Framework Security Overview.
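    To make the User Propagation point concrete: the framework itself pushes the end userid through the Oracle JDBC driver, but the CLIENT_IDENTIFIER it sets is an ordinary database attribute that any client can set and inspect. The following is an illustrative C# sketch (not the framework's own code) that sets and reads the same attribute via the classic ODP.NET provider; the connection string, user names and provider choice are assumptions for the example:

    using System;
    using Oracle.DataAccess.Client;

    class ClientIdentifierDemo
    {
        static void Main()
        {
            // Placeholder connection string for a shared/pooled database account.
            using (var conn = new OracleConnection("User Id=pooluser;Password=secret;Data Source=MYDB"))
            {
                conn.Open();

                // Tag the pooled session with the real end user - the same database
                // attribute the framework sets through the JDBC connection API.
                using (var cmd = conn.CreateCommand())
                {
                    cmd.CommandText = "BEGIN DBMS_SESSION.SET_IDENTIFIER(:id); END;";
                    cmd.Parameters.Add(new OracleParameter("id", "JSMITH"));   // hypothetical end user
                    cmd.ExecuteNonQuery();
                }

                // Read the identifier back; it is also visible in V$SESSION.CLIENT_IDENTIFIER
                // and in the audit trail, which is what makes pooled connections traceable.
                using (var cmd = conn.CreateCommand())
                {
                    cmd.CommandText = "SELECT SYS_CONTEXT('USERENV','CLIENT_IDENTIFIER') FROM DUAL";
                    Console.WriteLine(cmd.ExecuteScalar());
                }
            }
        }
    }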

    Read the article

  • Installing Visual Studio Team Foundation Server Service Pack 1

    - by Martin Hinshelwood
    As has become customary when the product team releases a new patch, SP or version, I like to document the install. Although I had no errors on my main computer, my netbook did have problems, although I am not ready to call it a Service Pack problem just yet!

    Update 2011-03-10 – Running the Team Foundation Server 2010 Service Pack 1 install a second time worked.

    As per Brian's post I am installing the Team Foundation Server Service Pack first, and indeed, as this is a single-server local deployment, I need to install both. If I only install one it will leave the other product broken. This, however, does not affect you if you are running Visual Studio and Team Foundation Server on separate computers, as is normal in a production deployment.

    Main workhorse
    I will be installing the service pack first on my main computer as I want to actually use it here.
    Figure: My main workhorse
    I will also be installing this on my netbook, which is obviously of significantly lower spec, but I will do that one after. Although, as always, I had my fingers crossed, I was not really worried.
    Figure: KB2182621
    Compared to Visual Studio there are not really a lot of components to update.
    Figure: TFS 2010 and SQL 2008 are the main things to update
    There is no “web” installer for the Team Foundation Server 2010 Service Pack, but that is OK as most people will be installing it on a production server and will want to have everything local. I would have liked a web installer, but the added complexity for the product team is not worth it for a 500 MB patch.
    Figure: There is currently no way to roll SP1 and RTM together
    Figure: No problems with the file verification, phew
    Figure: Although the install took a while, it progressed smoothly
    Figure: I always like a success screen
    Well, as far as the install is concerned everything is OK, but what about TFS? Can I still connect and can I still administer it?
    Figure: Service Pack 1 is reflected correctly in the Administration Console
    I am confident that there are no major problems with TFS on my system and that it has been updated to SP1. I can do all of the things that I did before with ease, and with the new features detailed by Brian I think I will be happy.

    Netbook
    The great god Murphy has struck, and my poor wee laptop spat the Team Foundation Server 2010 Service Pack 1 out so fast it hit me on the back of the head. That will teach me for not looking…
    Figure: “Installation did not succeed” I am pretty sure should not be all caps!
    On examining the file I found that everything worked, except the actual Team Foundation Server 2010 servicing step.
    Action: System Requirement Checks... 
Action complete Action: Downloading and/or Verifying Items c:\757fe6efe9f065130d4838081911\VS10-KB2182621.msp: Verifying signature for VS10-KB2182621.msp c:\757fe6efe9f065130d4838081911\VS10-KB2182621.msp Signature verified successfully for VS10-KB2182621.msp c:\757fe6efe9f065130d4838081911\DACFramework_enu.msi: Verifying signature for DACFramework_enu.msi c:\757fe6efe9f065130d4838081911\DACFramework_enu.msi Signature verified successfully for DACFramework_enu.msi c:\757fe6efe9f065130d4838081911\DACProjectSystemSetup_enu.msi: Verifying signature for DACProjectSystemSetup_enu.msi Exists: evaluating Exists evaluated to false c:\757fe6efe9f065130d4838081911\DACProjectSystemSetup_enu.msi Signature verified successfully for DACProjectSystemSetup_enu.msi c:\757fe6efe9f065130d4838081911\TSqlLanguageService_enu.msi: Verifying signature for TSqlLanguageService_enu.msi c:\757fe6efe9f065130d4838081911\TSqlLanguageService_enu.msi Signature verified successfully for TSqlLanguageService_enu.msi c:\757fe6efe9f065130d4838081911\SharedManagementObjects_x86_enu.msi: Verifying signature for SharedManagementObjects_x86_enu.msi c:\757fe6efe9f065130d4838081911\SharedManagementObjects_x86_enu.msi Signature verified successfully for SharedManagementObjects_x86_enu.msi c:\757fe6efe9f065130d4838081911\SharedManagementObjects_amd64_enu.msi: Verifying signature for SharedManagementObjects_amd64_enu.msi c:\757fe6efe9f065130d4838081911\SharedManagementObjects_amd64_enu.msi Signature verified successfully for SharedManagementObjects_amd64_enu.msi c:\757fe6efe9f065130d4838081911\SQLSysClrTypes_x86_enu.msi: Verifying signature for SQLSysClrTypes_x86_enu.msi c:\757fe6efe9f065130d4838081911\SQLSysClrTypes_x86_enu.msi Signature verified successfully for SQLSysClrTypes_x86_enu.msi c:\757fe6efe9f065130d4838081911\SQLSysClrTypes_amd64_enu.msi: Verifying signature for SQLSysClrTypes_amd64_enu.msi c:\757fe6efe9f065130d4838081911\SQLSysClrTypes_amd64_enu.msi Signature verified successfully for SQLSysClrTypes_amd64_enu.msi c:\757fe6efe9f065130d4838081911\vcruntime\Vc_runtime_x86.cab: Verifying signature for vcruntime\Vc_runtime_x86.cab c:\757fe6efe9f065130d4838081911\vcruntime\Vc_runtime_x86.cab Signature verified successfully for vcruntime\Vc_runtime_x86.cab c:\757fe6efe9f065130d4838081911\vcruntime\Vc_runtime_x86.msi: Verifying signature for vcruntime\Vc_runtime_x86.msi c:\757fe6efe9f065130d4838081911\vcruntime\Vc_runtime_x86.msi Signature verified successfully for vcruntime\Vc_runtime_x86.msi c:\757fe6efe9f065130d4838081911\SetupUtility.exe: Verifying signature for SetupUtility.exe c:\757fe6efe9f065130d4838081911\SetupUtility.exe Signature verified successfully for SetupUtility.exe c:\757fe6efe9f065130d4838081911\vcruntime\Vc_runtime_x64.cab: Verifying signature for vcruntime\Vc_runtime_x64.cab c:\757fe6efe9f065130d4838081911\vcruntime\Vc_runtime_x64.cab Signature verified successfully for vcruntime\Vc_runtime_x64.cab c:\757fe6efe9f065130d4838081911\vcruntime\Vc_runtime_x64.msi: Verifying signature for vcruntime\Vc_runtime_x64.msi c:\757fe6efe9f065130d4838081911\vcruntime\Vc_runtime_x64.msi Signature verified successfully for vcruntime\Vc_runtime_x64.msi c:\757fe6efe9f065130d4838081911\NDP40-KB2468871.exe: Verifying signature for NDP40-KB2468871.exe c:\757fe6efe9f065130d4838081911\NDP40-KB2468871.exe Signature verified successfully for NDP40-KB2468871.exe Action complete Action: Performing actions on all Items Entering Function: BaseMspInstallerT >::PerformAction Action: Performing Install on MSP: 
    c:\757fe6efe9f065130d4838081911\VS10-KB2182621.msp targetting Product: Microsoft Team Foundation Server 2010 - ENU
    Returning IDOK. INSTALLMESSAGE_ERROR [Error 1935.An error occurred during the installation of assembly 'Microsoft.TeamFoundation.WebAccess.WorkItemTracking,version="10.0.0.0",publicKeyToken="b03f5f7f11d50a3a",processorArchitecture="MSIL",fileVersion="10.0.40219.1",culture="neutral"'. Please refer to Help and Support for more information. HRESULT: 0x80070005. ]
    Returning IDOK. INSTALLMESSAGE_ERROR [Error 1712.One or more of the files required to restore your computer to its previous state could not be found. Restoration will not be possible.]
    Patch (c:\757fe6efe9f065130d4838081911\VS10-KB2182621.msp) Install failed on product (Microsoft Team Foundation Server 2010 - ENU). Msi Log: MSI returned 0x643
    Entering Function: MspInstallerT >::Rollback
    Action Rollback changes
    PerformMsiOperation returned 0x643
    PerformMsiOperation returned 0x643
    OnFailureBehavior for this item is to Rollback.
    Action complete
    Final Result: Installation failed with error code: (0x80070643), "Fatal error during installation. " (Elapsed time: 0 00:14:09).
    Figure: Error log for Team Foundation Server 2010 install shows a failure
    As there is really no information in this log as to why the installation failed, I checked the event log on that box.
    Figure: There are hundreds of errors and it actually looks like there are more problems than a failed Service Pack
    I am going to just run it again and see if it was because the netbook was slow to catch on to the update. Here's hoping, but even if it fails, I would question the installation of Windows (PDC laptop original install) before I question the Service Pack.
    Figure: Second run through was successful
    I don't know if the laptop was just slow, or what… Did you get this error? If you did I will push this to the product team as a problem, but unless more people have this sort of error, I will just look to write this off as a corrupted install of Windows and reinstall.

    Read the article

  • Searching for tasks with code – Executables and Event Handlers

    Searching packages or just enumerating through all tasks is not quite as straightforward as it may first appear, mainly because of the way you can nest tasks within other containers. You can see this illustrated in the sample package below where I have used several sequence containers and loops. To complicate this further, all container types, including packages and tasks, can have event handlers which can then support the full range of nested containers again. Towards the lower right, the task called SQL In FEL also has an event handler not shown, within which is another Execute SQL Task, so that makes a total of 6 Execute SQL Tasks spread across the package. In my previous post about adding a property expression I kept it simple and just looked at tasks at the package level, but what if you wanted to find any or all tasks in a package?

    For this post I've written a console program that will search a package looking at all tasks no matter how deeply nested, and check to see if the name starts with "SQL". When it finds a matching task it writes out the hierarchy by name for that task, starting with the package and working down to the task itself. The output for our sample package is shown below; note it has found all 6 tasks, including the one on the OnPreExecute event of the SQL In FEL task.

    TaskSearch v1.0.0.0 (1.0.0.0)
    Copyright (C) 2009 Konesans Ltd
    Processing File - C:\Projects\Alpha\Packages\MyPackage.dtsx
    MyPackage\FOR Counter Loop\SQL In Counter Loop
    MyPackage\SEQ For Each Loop Wrapper\FEL Simple Loop\SQL In FEL
    MyPackage\SEQ For Each Loop Wrapper\FEL Simple Loop\SQL In FEL\OnPreExecute\SQL On Pre Execute for FEL SQL Task
    MyPackage\SEQ Top Level\SEQ Nested Lvl 1\SEQ Nested Lvl 2\SQL In Nested Lvl 2
    MyPackage\SEQ Top Level\SEQ Nested Lvl 1\SQL In Nested Lvl 1 #1
    MyPackage\SEQ Top Level\SEQ Nested Lvl 1\SQL In Nested Lvl 1 #2
    6 matching tasks found in package.

    The full project and code are available for download below, but first we can walk through the project to highlight the most important sections of code. The code has been abbreviated for this description, but is complete in the download. First of all we load the package, and then start by looking at the Executables for the package.

    // Load the package file
    Application application = new Application();
    using (Package package = application.LoadPackage(filename, null))
    {
        int matchCount = 0;

        // Look in the package's executables
        ProcessExecutables(package.Executables, ref matchCount);

        // ...

        // Write out final count
        Console.WriteLine("{0} matching tasks found in package.", matchCount);
    }

    The ProcessExecutables method is a key method, as an executable could be described as the highest level of a piece of working functionality or a container. There are several types of executables, such as tasks, or sequence containers and loops. To know what to do next we need to work out what type of executable we are dealing with, as the abbreviated version of the method shows below.

    private static void ProcessExecutables(Executables executables, ref int matchCount)
    {
        foreach (Executable executable in executables)
        {
            TaskHost taskHost = executable as TaskHost;
            if (taskHost != null)
            {
                ProcessTaskHost(taskHost, ref matchCount);
                ProcessEventHandlers(taskHost.EventHandlers, ref matchCount);
                continue;
            }

            // ...

            ForEachLoop forEachLoop = executable as ForEachLoop;
            if (forEachLoop != null)
            {
                ProcessExecutables(forEachLoop.Executables, ref matchCount);
                ProcessEventHandlers(forEachLoop.EventHandlers, ref matchCount);
                continue;
            }
        }
    }

    As you can see, if the executable we find is a task we then call out to our ProcessTaskHost method. As with all of our executables, a task can have event handlers which themselves contain more executables such as tasks and loops, so we also make a call out to our ProcessEventHandlers method. The other types of executables such as loops can also have event handlers as well as executables. As shown with the example for the ForEachLoop, we call the same ProcessExecutables and ProcessEventHandlers methods again to drill down into the hierarchy of objects that the package may contain. This code needs to explicitly check for each type of executable (TaskHost, Sequence, ForLoop and ForEachLoop) because whilst they all have an Executables property, it does not come from a common base class or interface.

    This example was just a simple "find a task by its name", so ProcessTaskHost really just does that. We also build up the hierarchy of objects so we can write it out for information; obviously you can adapt this method to do something more interesting such as adding a property expression (a small sketch of that follows at the end of this post).

    private static void ProcessTaskHost(TaskHost taskHost, ref int matchCount)
    {
        if (taskHost == null)
        {
            return;
        }

        // Check if the task matches our match name
        if (taskHost.Name.StartsWith(TaskNameFilter, StringComparison.OrdinalIgnoreCase))
        {
            // Build up the full object hierarchy of the task
            // so we can write it out for information
            StringBuilder path = new StringBuilder();
            DtsContainer container = taskHost;
            while (container != null)
            {
                path.Insert(0, container.Name);
                container = container.Parent;
                if (container != null)
                {
                    path.Insert(0, "\\");
                }
            }

            // Write the task path
            // e.g. Package\Container\Event\Task
            Console.WriteLine(path);
            Console.WriteLine();

            // Increment match counter for info
            matchCount++;
        }
    }

    Just for completeness, the other processing method we covered above is for event handlers, but really that just calls back to the executables. This same method is called in our main package method, but it was omitted for brevity here.

    private static void ProcessEventHandlers(DtsEventHandlers eventHandlers, ref int matchCount)
    {
        foreach (DtsEventHandler eventHandler in eventHandlers)
        {
            ProcessExecutables(eventHandler.Executables, ref matchCount);
        }
    }

    As hopefully the code demonstrates, executables (Microsoft.SqlServer.Dts.Runtime.Executable) are the workers, but within them you can nest more executables (except for tasks). Executables themselves can have event handlers which can in turn hold more executables. I have tried to illustrate this and highlight the relationships in the following diagram.

    Download Sample code project TaskSearch.zip (11KB)
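    As a concrete example of adapting the match branch, here is a small hedged sketch of what "adding a property expression" could look like. TaskHost implements IDTSPropertiesProvider, so an expression can be attached by property name; the variable name and expression below are made up for illustration, and you would still need to save the package afterwards (for example with Application.SaveToXml) for the change to persist.

    // Sketch only: call this from the match branch in ProcessTaskHost instead of
    // (or as well as) writing the path out. The User::AuditDescription variable is hypothetical.
    private static void ApplyDescriptionExpression(TaskHost taskHost)
    {
        // TaskHost implements IDTSPropertiesProvider, so expressions are set by property name.
        // Here the task's Description property is driven from a package variable.
        taskHost.SetExpression("Description", "@[User::AuditDescription]");
    }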

    Read the article

  • Silverlight 5 Hosting :: Features in Silverlight 5 and Release Date

    - by mbridge
    Silverlight 5 was finally announced at the Silverlight FireStarter event on 2nd December 2010. This new version of Silverlight, earlier labelled the 'Future of Microsoft Silverlight', has now come much closer to going live, as the first Silverlight 5 Beta is expected to ship during the early months of 2011. For the full, final release of Silverlight 5, however, we have to wait many more months, as it is likely to be made available in Q3 2011. As expected, this latest edition will bring many new capabilities, extending developer productivity across premium media experiences and feature-rich business applications. It comes with many new feature updates as well as new technologies that raise the standard of Silverlight applications, which can now be fine-tuned to produce next generation business and media solutions capable of meeting the requirements of advanced web-based app development. Silverlight 5 is set to replace the fourth version; it includes more than forty new features while dropping various deprecated elements that were present earlier. It brings some major performance enhancements and better support for various other tools and technologies. Following are some of the changes expected to be available in the Silverlight 5 Beta edition, which is scheduled to launch in Q1 2011.

    Silverlight 5: Premium Media Experiences
    The media features of Silverlight 5 have seen some major enhancements, with a lot of optimization done to deliver richer solutions. Its capabilities have been extended to make things easier and faster and to perform the desired tasks more efficiently. Silverlight media solutions have already been adopted by many companies for on-demand services, and with the arrival of the next generation premium media capabilities in Silverlight 5 they should reach even further into web-based projects and media solutions.

    - The headline element of the new Silverlight 5 is its support for GPU-based hardware acceleration, which is intended to lower the CPU load significantly and thereby allow faster rendering of media content without consuming as many resources. This should be particularly helpful in letting low-spec machines run full HD media content without lag caused by processor load, making high-quality media on the web more efficient through hardware-decoded video playback.

    - Alongside hardware video decoding to minimize processor load, Silverlight 5 also brings power-consumption optimizations through new handling of power-saver settings. With this in effect, the computer is allowed to switch to sleep mode while no video playback is in progress, and screensavers are prevented from popping up and causing annoyance during playback. Other power-saver options will be available to suit users' requirements and purposes.

    - The Silverlight TrickPlay feature is another great way to tweak any Silverlight-powered media content, as used on many video tutorial sites or when dealing with presentations. It enables the user to slow down or speed up playback as required without compromising the quality of the output (a rough sketch follows below). Normally such manipulation makes the content's audio go off-pitch, but that is not the case with TrickPlay: the audio progresses seamlessly with the video without skipping any part of it.

    - In addition to all of the above, Silverlight 5 will feature wireless control of media content using remote controllers. With such remote devices it becomes easier to handle the various media playback controls, providing more freedom while enjoying premium media services.

    Silverlight 5: Business Application Development
    The application development story has also been extended, bringing forward new and useful technologies and reviving existing methods to work better than before. From UI improvements to advanced technical aspects, Silverlight 5 scores highly across the board for producing next generation business applications, giving developers more creative and resourceful options in the apps they build with it.

    - The WPF-style databinding in Silverlight is made more effective by new databinding capabilities intended to improve developer productivity. It brings a lot of convenience to debugging databinding components and expressions, helping things work flawlessly. Additional databinding features include Ancestor RelativeSource, Implicit DataTemplates, and Model View ViewModel (MVVM) support with the DataContextChanged event, among many other related features.

    - It also comes with refined text and printing services, which deliver clearer text rendering and many positive changes to layout. New support has been added for OpenType fonts, multi-column text, linked text containers and character leading, to name a few of the available features. This also includes some important printing capabilities, such as the Postscript Vector Printing API, which allows printing tasks to be programmed in a user-defined way, and Pivot functionality for data visualization.

    - Graphics support is a key improvement, now enabling three-dimensional graphics using GPU acceleration. It can deliver some really cool visualizations within business apps, with support for full HD content at 1080p.

    - Silverlight 5 includes support for 64-bit operating systems and the relevant browsers, and is also optimized for better performance. It supports background threads for networking, which can reduce network latency considerably. The Out-of-Browser functionality adds support for using various libraries and the Win32 API. It also comes with largely automated testing support in VS 2010, and improves the security of Silverlight 5 applications through improved group policy support.
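    As a rough illustration of the TrickPlay idea above, variable-speed playback is expected to surface as a playback-rate setting on MediaElement. The property name below (PlaybackRate) is an assumption based on the pre-release feature descriptions rather than a confirmed final API, and the "player" element is assumed to be declared in the page's XAML:

    // Hedged sketch of Silverlight 5 TrickPlay: PlaybackRate is assumed from the
    // pre-release feature description and may differ in the final API.
    using System.Windows;
    using System.Windows.Controls;

    public partial class PlayerPage : UserControl
    {
        // "player" is assumed to be a MediaElement defined in this page's XAML.
        private void SlowDown_Click(object sender, RoutedEventArgs e)
        {
            player.PlaybackRate = 0.5;  // half speed; TrickPlay keeps the audio on pitch
        }

        private void SpeedUp_Click(object sender, RoutedEventArgs e)
        {
            player.PlaybackRate = 1.4;  // a comfortable "skim" speed for tutorials
        }
    }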

    Read the article
