Search Results

Search found 4989 results on 200 pages for 'svn merge'.


  • Apache proxy pass in nginx

    - by summerbulb
    I have the following configuration in Apache:

        RewriteEngine On
        #APP
        ProxyPass /abc/ http://remote.com/abc/
        ProxyPassReverse /abc/ http://remote.com/abc/
        #APP2
        ProxyPass /efg/ http://remote.com/efg/
        ProxyPassReverse /efg/ http://remote.com/efg/

    I am trying to have the same configuration in nginx. After reading some links, this is what I have:

        server {
            listen 8081;
            server_name localhost;
            proxy_redirect http://localhost:8081/ http://remote.com/;
            location ^~ /abc/ {
                proxy_set_header X-Forwarded-Host $host;
                proxy_set_header X-Forwarded-Server $host;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_pass http://remote.com/abc/;
            }
            location ^~ /efg/ {
                proxy_set_header X-Forwarded-Host $host;
                proxy_set_header X-Forwarded-Server $host;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_pass http://remote.com/efg/;
            }
        }

    I already have the following configuration:

        server {
            listen 8080;
            server_name localhost;
            location / {
                root html;
                index index.html index.htm;
            }
            location ^~ /myAPP {
                alias path/to/app;
                index main.html;
            }
            location ^~ /myAPP/images {
                alias another/path/to/images;
                autoindex on;
            }
        }

    The idea here is to overcome a same-origin-policy problem. The main pages are served from localhost:8080, but we need to make ajax calls to http://remote.com/abc. Both domains are under my control. Using the above configuration, the ajax calls either don't reach the remote server or get blocked by the same-origin policy. The same setup worked in Apache and isn't working in nginx, so I am assuming it's a configuration problem. There is also an implicit question here: should I have two server declarations, or should I somehow merge them into one?

    EDIT: Added some more information.

    EDIT2: I've moved all the proxy_pass configuration into the main server declaration and changed all the ajax calls to go through port 8080. I am now getting a new error: 502 Connection reset by peer. Wireshark shows packets going out to http://remote.com with a bad IP header checksum.
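    For future readers, one way to sidestep the same-origin problem entirely is a single server block that both serves the pages and proxies the API paths, so the browser only ever talks to one origin. A minimal, untested sketch (it assumes remote.com is happy to receive its own name in the Host header):

        server {
            listen 8080;
            server_name localhost;

            location / {
                root html;
                index index.html index.htm;
            }

            # Proxy the API paths through the same origin so the browser
            # never issues a cross-origin request.
            location /abc/ {
                proxy_set_header Host remote.com;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_pass http://remote.com/abc/;
            }

            location /efg/ {
                proxy_set_header Host remote.com;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_pass http://remote.com/efg/;
            }
        }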


  • Apache refusing to change DocumentRoot

    - by mingos
    I've installed Zend Server CE 5.1.0 on Windows 7 Ultimate 64-bit in its default location, meaning the path to my htdocs is C:\Program Files (x86)\Zend\Apache2\htdocs. Not something that I would like to type each time I check out a project from SVN in Eclipse or something. I'd like to set the DocumentRoot to a different folder, namely D:\www.

    What I've done: I edited conf/httpd.conf, with the significant lines being:

        DocumentRoot "D:\www"
        <Directory "D:\www">
            Options Indexes FollowSymLinks
            AllowOverride All
            Order allow,deny
            Allow from all
        </Directory>
        Include conf/extra/httpd-vhosts.conf

    I edited conf/extra/httpd-vhosts.conf to add a virtual host:

        NameVirtualHost *:80
        <VirtualHost *:80>
            DocumentRoot D:\www
            ServerName localhost
            ServerAlias localhost
            SetEnv APPLICATION_ENV development
            SetEnv APPLICATION_DOMAIN localhost
        </VirtualHost>
        <VirtualHost *:80>
            DocumentRoot D:\www\UmbraCMS
            ServerName umbracms.local
            ServerAlias umbracms.local
            SetEnv APPLICATION_ENV development
            SetEnv APPLICATION_DOMAIN umbracms.local
        </VirtualHost>

    I edited C:\Windows\System32\drivers\etc\hosts to add this line:

        127.0.0.1 umbracms.local

    I also added a PHP project to D:\www\UmbraCMS, and restarted Apache. Actually, I restarted the computer, too, just in case.

    What's supposed to happen: after typing http://umbracms.local/ in the browser's address bar, I want to see my PHP project launch, obviously.

    What's actually happening: no matter whether I type http://umbracms.local/ or http://localhost/, I'm taken to the Zend test page located in C:\Program Files (x86)\Zend\Apache2\htdocs\index.html, as if neither the DocumentRoot change nor name-based virtual hosting had taken effect. Interestingly, when I put another project in C:\Program Files (x86)\Zend\Apache2\htdocs\bugraid\ and then typed http://localhost/bugraid in the browser, the project actually opened, or at least tried to, as it completely ignored the project's .htaccess file.

    Extra considerations: Zend Server's Apache version is 2.2.16, PHP version is 5.3.0. I've installed MySQL CE 5.5.13 separately, and it works, both from the command line and via MySQL Workbench. I have XAMPP installed, but none of its components are started up. It's got its own install of Apache 2.2.17 and MySQL 5.5.1, with PHP version 5.3.5 (I think).

    Question: have you had a similar situation before? What else might need taking care of in order to have Zend Server's Apache use D:\www as the document root for my PHP projects?
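    A first diagnostic step for future readers with this symptom is to ask Apache itself which configuration file and which virtual hosts it actually loaded; Zend Server ships its own Apache build, so the httpd.conf you edited may not be the one in use. A sketch (the install path below is the default from the question; adjust to your installation):

        :: Run from the Apache bin directory of the Zend Server install
        cd "C:\Program Files (x86)\Zend\Apache2\bin"

        :: Which httpd.conf is this binary compiled to read?
        httpd.exe -V | findstr /i "SERVER_CONFIG_FILE HTTPD_ROOT"

        :: Which virtual hosts were parsed, and from which file and line?
        httpd.exe -S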


  • How to keep group-writeable shares on Samba with OSX clients?

    - by Oliver Salzburg
    I have a FreeNAS server on a network with OSX and Windows clients. When the OSX clients interact with SMB/CIFS shares on the server, they cause permission problems for all other clients.

    Update: I can no longer verify any answers because we abandoned the project, but feel free to post any help for future visitors.

    The details of this behavior seem to depend on the version of OSX the client is running. For this question, let's assume a client running 10.8.2. When I mount the CIFS share on an OSX client and create a new directory on it, the directory is created with drwxr-xr-x permissions. This is undesirable because it will not allow anyone but me to write to the directory. There are other users in my group who should have write permissions as well. This happens even though the following settings are present in smb.conf on the server:

        [global]
        create mask = 0666
        directory mask = 0777

        [share]
        force directory mode = 0775
        force create mode = 0660

    I was under the impression that these settings should make sure that directories are created with at least rwxrwxr-x permissions. But, I guess, that doesn't stop the client from changing the permissions after creating the directory. When I create a folder on the same share from a Windows client, the new folder gets the desired access permissions (rwxrwxrwx), so I'm currently assuming that the problem lies with the OSX client.

    I guess this wouldn't be such an issue if you could easily change the permissions of the directories you've created, but you can't. When opening the directory info in Finder, I get the old "You have custom access" notice with no ability to make any changes. I'm assuming that this is caused because we're using Windows ACLs on the share, but that's just a wild guess. Changing the write permissions for the group through the terminal works fine, but this is impractical for the deployment and unreasonable to expect from anyone.

    This is the complete smb.conf:

        [global]
        encrypt passwords = yes
        dns proxy = no
        strict locking = no
        read raw = yes
        write raw = yes
        oplocks = yes
        max xmit = 65535
        deadtime = 15
        display charset = LOCALE
        max log size = 10
        syslog only = yes
        syslog = 1
        load printers = no
        printing = bsd
        printcap name = /dev/null
        disable spoolss = yes
        smb passwd file = /var/etc/private/smbpasswd
        private dir = /var/etc/private
        getwd cache = yes
        guest account = nobody
        map to guest = Bad Password
        obey pam restrictions = Yes
        # NOTE: read smb.conf.
        directory name cache size = 0
        max protocol = SMB2
        netbios name = freenas
        workgroup = COMPANY
        server string = FreeNAS Server
        store dos attributes = yes
        hostname lookups = yes
        security = user
        passdb backend = ldapsam:ldap://ldap.company.local
        ldap admin dn = cn=admin,dc=company,dc=local
        ldap suffix = dc=company,dc=local
        ldap user suffix = ou=Users
        ldap group suffix = ou=Groups
        ldap machine suffix = ou=Computers
        ldap ssl = off
        ldap replication sleep = 1000
        ldap passwd sync = yes
        #ldap debug level = 1
        #ldap debug threshold = 1
        ldapsam:trusted = yes
        idmap uid = 10000-39999
        idmap gid = 10000-39999
        create mask = 0666
        directory mask = 0777
        client ntlmv2 auth = yes
        dos charset = CP437
        unix charset = UTF-8
        log level = 1

        [share]
        path = /mnt/zfs0
        printable = no
        veto files = /.snap/.windows/.zfs/
        writeable = yes
        browseable = yes
        inherit owner = no
        inherit permissions = no
        vfs objects = zfsacl
        guest ok = no
        inherit acls = Yes
        map archive = No
        map readonly = no
        nfs4:mode = special
        nfs4:acedup = merge
        nfs4:chown = yes
        hide dot files
        force directory mode = 0775
        force create mode = 0660
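    For future visitors: one commonly suggested direction is to stop honoring the mode bits the OSX client sends and force the server-side modes instead. A sketch of a [share] fragment along those lines (untested, and it assumes you do not need to preserve client-supplied Windows ACLs on new entries):

        [share]
        path = /mnt/zfs0
        writeable = yes
        # Force group-writable modes no matter what the client requests
        create mask = 0660
        force create mode = 0660
        directory mask = 0775
        force directory mode = 0775
        # New entries take owner/group/mode hints from the parent directory
        inherit permissions = yes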


  • freebsd-update from 8.3-RELEASE to 9.0-RELEASE: How to deal with dozens of diffs?

    - by Stefan Lasiewski
    I am upgrading a FreeBSD 8.3-RELEASE system to FreeBSD 9.0-RELEASE using freebsd-update. This is my first time performing a major version upgrade in FreeBSD. At one point in the process, freebsd-update performs a diff on files which differ from what is expected for 9.0-RELEASE, comparing the current version on the system with the new changes added in 9.0-RELEASE. There are dozens of files in the list. Thus, I am presented with dozens and dozens of diffs which open in a vi window and look like this:

        The following file could not be merged automatically: /etc/ntp.conf
        Press Enter to edit this file in vi and resolve the conflicts manually...

        ### vi window opens
        <<<<<<< current version
        driftfile /etc/ntp/drift
        =======
        #
        # $FreeBSD: release/9.0.0/etc/ntp.conf 195652 2009-07-13 05:51:33Z dwmalone $
        #
        # Default NTP servers for the FreeBSD operating system.
        #
        # Don't forget to enable ntpd in /etc/rc.conf with:
        # ntpd_enable="YES"
        #
        # The driftfile is by default /var/db/ntpd.drift, check
        # /etc/defaults/rc.conf on how to change the location.
        #
        >>>>>>> 9.0-RELEASE
        restrict default notrust nomodify ignore

    And so on. This requires that I manually edit each file and remove strings like "<<<<<<< current version", ">>>>>>> 9.0-RELEASE", and "=======". As I discovered afterwards, if I don't remove these strings, they end up in the merged file. There are dozens of files which differ between 8.3 and 9.0, and I have a dozen local modifications myself. It appears that freebsd-update is using a diff, sdiff, or mergemaster function of some sort, but I can't tell exactly what it is doing.

    Processing these files is tedious. Is there a way that I can just say "Accept new version", "keep old version", or "Your merge is correct"? There has got to be an easier way to deal with these files. I must be missing something. This isn't a huge problem for one machine, but eventually I'll be doing this dozens of times and I want to find an easier way.
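    One sanity check worth adding to this workflow, whatever merge strategy you settle on: after the merge step, scan the system configuration for conflict markers that slipped through. A sketch:

        # Any file still containing merge-conflict markers after
        # freebsd-update's merge step ended up half-merged:
        grep -rlE '^(<<<<<<<|=======|>>>>>>>)' /etc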


  • Rebasing a branch which is public

    - by Dror
    I'm failing to understand how to use git-rebase, so consider the following example. Let's start a repository in ~/tmp/repo:

        $ git init

    Then add a file foo:

        $ echo "hello world" > foo

    which is then added and committed:

        $ git add foo
        $ git commit -m "Added foo"

    Next, I started a remote repository. In ~/tmp/bare.git I ran:

        $ git init --bare

    In order to link repo to bare.git I ran:

        $ git remote add origin ../bare.git/
        $ git push --set-upstream origin master

    Next, let's branch, add a file, and set an upstream for the new branch b1:

        $ git checkout -b b1
        $ echo "bar" > foo2
        $ git add foo2
        $ git commit -m "add foo2 in b1"
        $ git push --set-upstream origin b1

    Now it is time to switch back to master and change something there:

        $ echo "change foo" > foo
        $ git commit -a -m "changed foo in master"
        $ git push

    At this point, in master the file foo contains "change foo", while in b1 it is still "hello world". Finally, I want to sync b1 with the progress made in master:

        $ git checkout b1
        $ git fetch origin
        $ git rebase origin/master

    At this point git status returns:

        # On branch b1
        # Your branch and 'origin/b1' have diverged,
        # and have 2 and 1 different commit each, respectively.
        #   (use "git pull" to merge the remote branch into yours)
        # nothing to commit, working directory clean

    At this point the content of foo in the branch b1 is "change foo" as well. So what does this warning mean? I expected I should do a git push, but git suggests git pull... According to this answer, this is more or less it, and in his comment @FrerichRaabe explicitly says that I don't need to do a pull. What's going on here? What is the danger, and how should one proceed? How should the history be kept consistent? What is the interplay between the case described above and the following citation from the Pro Git book:

        Do not rebase commits that you have pushed to a public repository.

    I guess it is somehow related, and if not I would love to know why. And what's the relation between the above scenario and the procedure I described in this post?
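    For future readers: after the rebase, b1's local history no longer matches origin/b1, so the usual resolution (safe only when nobody else has built work on top of b1) is a forced push rather than the pull that git suggests. A sketch:

        # --force-with-lease refuses to overwrite commits on origin/b1
        # that you haven't fetched, making it safer than plain --force:
        $ git push --force-with-lease origin b1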


  • Why do I sometimes get 'sh: $'\302\211 ... ': command not found' in xterm/sh?

    - by amn
    Sometimes when I simply type a valid command like 'find ...', or anything really, I get back the following, which is completely unexpected and confusing (... is the command name I type):

        sh: $'\302\211...': command not found

    There is some corruption going on, I think. I don't use color in my prompt, and I am using the Bash shell in POSIX mode as sh (chsh to /bin/sh and so on - $SHELL is sh). What is going on, and why does this keep happening? Anything I can debug? I think this is more of an xterm issue than an sh one, or at least a combination of the two.

    Files, for context. My /etc/profile, as distributed with Arch Linux x86-64:

        # /etc/profile

        # Set our umask
        umask 022

        # Set our default path
        PATH="/usr/local/sbin:/usr/local/bin:/usr/bin"
        export PATH

        # Load profiles from /etc/profile.d
        if test -d /etc/profile.d/; then
            for profile in /etc/profile.d/*.sh; do
                test -r "$profile" && . "$profile"
            done
            unset profile
        fi

        # Source global bash config
        if test "$PS1" && test "$BASH" && test -r /etc/bash.bashrc; then
            . /etc/bash.bashrc
        fi

        # Termcap is outdated, old, and crusty, kill it.
        unset TERMCAP

        # Man is much better than us at figuring this out
        unset MANPATH

    My /etc/shrc, which I created as a way to have sh parse some file on startup for non-login shells. This is achieved using the ENV variable, set in /etc/environment with the line ENV=/etc/shrc:

        PS1='\u@\H \w \$ '
        alias ls='ls -F --color'
        alias grep='grep -i --color'
        [ -f ~/.shrc ] && . ~/.shrc

    My ~/.profile; I am launching X when logging in through the first virtual tty:

        [[ -z $DISPLAY && $XDG_VTNR -eq 1 ]] && exec xinit -- -dpi 111

    My ~/.xinitrc; as you can see, I am using the system as a VirtualBox guest:

        xrdb -merge ~/.Xresources
        VBoxClient-all
        awesome &
        exec xterm

    And finally, my ~/.Xresources, no fancy stuff here I guess:

        *faceName: Inconsolata
        *faceSize: 10
        xterm*VT100*translations: #override <Btn1Up>: select-end(PRIMARY, CLIPBOARD, CUT_BUFFER0)
        xterm*colorBDMode: true
        xterm*colorBD: #ff8000
        xterm*cursorColor: S_red

    Since /etc/profile sources /etc/bash.bashrc, here is its content:

        #
        # /etc/bash.bashrc
        #

        # If not running interactively, don't do anything
        [[ $- != *i* ]] && return

        PS1='[\u@\h \W]\$ '
        PS2='> '
        PS3='> '
        PS4='+ '

        case ${TERM} in
            xterm*|rxvt*|Eterm|aterm|kterm|gnome*)
                PROMPT_COMMAND=${PROMPT_COMMAND:+$PROMPT_COMMAND; }'printf "\033]0;%s@%s:%s\007" "${USER}" "${HOSTNAME%%.*}" "${PWD/#$HOME/~}"'
                ;;
            screen)
                PROMPT_COMMAND=${PROMPT_COMMAND:+$PROMPT_COMMAND; }'printf "\033_%s@%s:%s\033\\" "${USER}" "${HOSTNAME%%.*}" "${PWD/#$HOME/~}"'
                ;;
        esac

        [ -r /usr/share/bash-completion/bash_completion ] && . /usr/share/bash-completion/bash_completion

    I have no idea what that case statement does, by the way; it does look a bit suspicious, but then again, who am I to know.
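    For anyone debugging a similar report: \302\211 is octal for the byte pair 0xC2 0x89, i.e. a UTF-8-encoded C1 control character arriving ahead of the typed command. A quick way to confirm what the shell actually received is to dump the last command line byte by byte. A sketch:

        # Print the previous command and show its raw bytes; look for
        # 302 211 in the od output:
        fc -ln -1 | od -c | head

        # Also confirm that the terminal and the shell agree on the locale:
        locale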


  • Lighttpd with FastCGI configuration running ViewVC - rewrite problems

    - by 0xC0000022L
    At the moment I am struggling with the configuration of lighttpd together with ViewVC. The configuration was ported from Apache 2.2.x, which is still running on the machine, serving the WebDAV/SVN stuff, being proxied through. Now, the problem I am having appears to be with the rewrite rules, and I'm not really sure what I am missing here. Here's my configuration (slightly condensed to keep it concise):

        var.hgwebfcgi = "/var/www/vcs/bin/hgweb.fcgi"
        var.viewvcfcgi = "/var/www/vcs/bin/wsgi/viewvc.fcgi"
        var.viewvcstatic = "/var/www/vcs/templates/docroot"
        var.vcs_errorlog = "/var/log/lighttpd/error.log"
        var.vcs_accesslog = "/var/log/lighttpd/access.log"

        $HTTP["host"] =~ "domain.tld" {
            $SERVER["socket"] == ":443" {
                protocol = "https://"
                ssl.engine = "enable"
                ssl.pemfile = "/etc/lighttpd/ssl/..."
                ssl.ca-file = "/etc/lighttpd/ssl/..."
                ssl.use-sslv2 = "disable"
                setenv.add-environment = ( "HTTPS" => "on" )
                url.rewrite-once += ( "^/mercurial$" => "/mercurial/" )
                url.rewrite-once += ( "^/$" => "/viewvc.fcgi" )
                alias.url += ( "/viewvc-static" => var.viewvcstatic )
                alias.url += ( "/robots.txt" => var.robots )
                alias.url += ( "/favicon.ico" => var.favicon )
                alias.url += ( "/mercurial" => var.hgwebfcgi )
                alias.url += ( "/viewvc.fcgi" => var.viewvcfcgi )
                $HTTP["url"] =~ "^/mercurial" {
                    fastcgi.server += ( ".fcgi" =>
                        ( ( "bin-path" => var.hgwebfcgi,
                            "socket" => "/tmp/hgwebdir.sock",
                            "min-procs" => 1,
                            "max-procs" => 5 ) ) )
                }
                else $HTTP["url"] =~ "^/viewvc\.fcgi" {
                    fastcgi.server += ( ".fcgi" =>
                        ( ( "bin-path" => var.viewvcfcgi,
                            "socket" => "/tmp/viewvc.sock",
                            "min-procs" => 1,
                            "max-procs" => 5 ) ) )
                }
                expire.url = ( "/viewvc-static" => "access plus 60 days" )
                server.errorlog = var.vcs_errorlog
                accesslog.filename = var.vcs_accesslog
            }
        }

    Now, when I access domain.tld, I correctly see the index of the repositories. However, when I look at the links for each respective repository (or click them, for that matter), they are of the form https://domain.tld/viewvc.fcgi/reponame instead of the intended https://domain.tld/reponame. What do I have to change/add to achieve this? Do I have to "abuse" the index file mechanism somehow? The goal is to keep the /mercurial alias functional. So far I've tried sifting through the lighttpd book from Packt again, and also through the lighttpd documentation, but found nothing that seemed to match the problem.
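    For future readers, one direction to experiment with: mount ViewVC at the site root via fastcgi.server instead of rewriting / to /viewvc.fcgi, so that SCRIPT_NAME stays empty and ViewVC generates links without the script name in them. A sketch only (untested; the exclusion regex is an assumption meant to keep /mercurial and the static aliases working):

        $HTTP["url"] !~ "^/(mercurial|viewvc-static|favicon\.ico|robots\.txt)" {
            fastcgi.server = ( "/" =>
                ( ( "bin-path" => var.viewvcfcgi,
                    "socket" => "/tmp/viewvc.sock",
                    "check-local" => "disable",
                    # compensates SCRIPT_NAME/PATH_INFO for apps mounted at "/"
                    "fix-root-scriptname" => "enable",
                    "min-procs" => 1,
                    "max-procs" => 5 ) ) )
        }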


  • Nginx fastcgi problems with django (double slashes in url?)

    - by wizard
    I'm deploying my first django app. I'm familiar with nginx and fastcgi from deploying php-fpm, but I can't get python to recognize the urls, and I'm at a loss on how to debug this further. I'd welcome solutions to this problem and tips on debugging fastcgi problems. Currently I get a 404 page regardless of the url, and for some reason the request URL gains a double slash. For http://www.site.com/admin/ I get:

        Page not found (404)
        Request Method: GET
        Request URL: http://www.site.com/admin//

    My urls.py from the debug output, which works in the dev server:

        Using the URLconf defined in ahrlty.urls, Django tried these URL patterns, in this order:
        ^listings/
        ^admin/
        ^accounts/login/$
        ^accounts/logout/$

    My nginx config:

        server {
            listen 80;
            server_name beta.ahrlty.com;
            access_log /home/ahrlty/ahrlty/logs/access.log;
            error_log /home/ahrlty/ahrlty/logs/error.log;
            location /static/ {
                alias /home/ahrlty/ahrlty/ahrlty/static/;
                break;
            }
            location /media/ {
                alias /usr/lib/python2.6/dist-packages/django/contrib/admin/media/;
                break;
            }
            location / {
                include /etc/nginx/fastcgi_params;
                fastcgi_pass 127.0.0.1:8001;
                break;
            }
        }

    And my fastcgi_params:

        fastcgi_param QUERY_STRING $query_string;
        fastcgi_param REQUEST_METHOD $request_method;
        fastcgi_param CONTENT_TYPE $content_type;
        fastcgi_param CONTENT_LENGTH $content_length;
        fastcgi_param SCRIPT_NAME $fastcgi_script_name;
        fastcgi_param REQUEST_URI $request_uri;
        fastcgi_param DOCUMENT_URI $document_uri;
        fastcgi_param DOCUMENT_ROOT $document_root;
        fastcgi_param SERVER_PROTOCOL $server_protocol;
        fastcgi_param GATEWAY_INTERFACE CGI/1.1;
        fastcgi_param SERVER_SOFTWARE nginx/$nginx_version;
        fastcgi_param REMOTE_ADDR $remote_addr;
        fastcgi_param REMOTE_PORT $remote_port;
        fastcgi_param SERVER_ADDR $server_addr;
        fastcgi_param SERVER_PORT $server_port;
        fastcgi_param SERVER_NAME $server_name;
        fastcgi_param PATH_INFO $fastcgi_script_name;

        # PHP only, required if PHP was built with --enable-force-cgi-redirect
        fastcgi_param REDIRECT_STATUS 200;

    And lastly, I'm running fastcgi from the command line with django's manage.py:

        python manage.py runfcgi method=threaded host=127.0.0.1 port=8080 pidfile=mysite.pid minspare=4 maxspare=30 daemonize=false

    I'm having a hard time debugging this one. Does anything jump out at anybody?

    Notes: nginx version is nginx/0.7.62; Django is svn trunk rev 13013.
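    Two things stand out to a later reader. First, the nginx location passes to 127.0.0.1:8001 while runfcgi listens on port 8080, so the two must be brought into agreement before anything else. Second, with both SCRIPT_NAME and PATH_INFO set to $fastcgi_script_name, flup-based django deployments are often reported to see exactly this doubled path, and the commonly cited fix is sending an empty SCRIPT_NAME. A sketch combining both (untested):

        location / {
            include /etc/nginx/fastcgi_params;
            # Let PATH_INFO carry the full request path and blank out
            # SCRIPT_NAME so the two are not concatenated:
            fastcgi_param PATH_INFO $fastcgi_script_name;
            fastcgi_param SCRIPT_NAME "";
            fastcgi_pass 127.0.0.1:8001;   # must match runfcgi's port= value
        }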


  • RDP exits immediately after connecting to Windows Server 2008 R2

    - by carpat
    Background: I recently got a Windows cloud VPS server. I don't have much experience with server admin (I'm a programmer), and what little I do have is with linux servers.

    Ever since getting the server I've been having issues with RDP. I can connect about two or three times, after which point I can't connect until one of the tech guys "fixes" it (see below). When I do connect, I can stay connected for hours with no problem. When the connection problem starts, the first time I try to log in, the remote desktop window pops up, starts connecting, and then exits with "Your Remote Desktop session has ended". After that, for about 10-20 minutes, any connection attempt times out with:

        Remote Desktop can't connect to the computer for one of these reasons:
        1) Remote access on the server is not enabled
        2) The remote computer is turned off
        3) The remote computer is not available on the network

    then it goes back to connecting once and immediately disconnecting.

    All of the updates are installed. The firewall has been correctly configured to let RDP traffic through. The remote setting is "Allow connections from computers running any version of Remote Desktop". I tried creating a second user, and when I can't connect, I can't connect as that user either. I've tried both soft and hard reboots, neither of which helps. I've tried connecting from two different computers (both running Windows 7) on two different networks (work and home), and the behavior is the same. Everything else on the server continues to run fine (IIS-served http pages, Tomcat-served java pages, svn, ping).

    The "fix" that the tech guys supply is simply logging into the console on their end, after which point I can connect 2 or 3 times again. The event viewer on the server has "authentication failure" (or something similar) events generated when I attempt to log in and can't. I can't get to the actual event at the moment, as I'm currently in the can't-connect stage and waiting for the techs to log in, but when I searched for the event earlier this morning I couldn't find anything useful. Can anyone help?
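    For future readers chasing the same symptom, pulling the recent failed-logon events from the command line is a quick way to capture the status and substatus codes before the connection window closes again. A sketch (4625 is the failed-logon event ID on Server 2008 R2):

        wevtutil qe Security /q:"*[System[(EventID=4625)]]" /c:5 /rd:true /f:text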


  • IT merger - self-sufficient site with domain controller VS thin clients outpost with access to terminal server

    - by imagodei
    SITUATION: A larger company acquires a smaller one, and the IT infrastructure has to be merged. There are no immediate plans to change the current size or role of the smaller company: the offices and production remain. It has a Win 2003 SBS domain server, a Win 2000 file server, a linux server for SVN and an internal wiki, 2 or 3 production machines, and an LTO backup solution. The servers are approx. 5 years old. Networking is Cisco equipment (switches, wireless, ASA). The mail solution is a hosted Exchange. There are approx. 35 desktops and laptops in the company.

    IT infrastructure unification: there are two merging proposals.

    1) Replace the old servers, install a Win Server 2008 domain controller, and set up either a subdomain or a domain trust to the larger company. The file server and other servers remain local, and synchronization is set up to a centralized location in the larger company. Similarly with the backup: it remains local, and if needed it is replicated to a centralized location. Licensing is managed by the smaller company.

    2) All servers are moved to a centralized location in the larger company. As many desktop machines as possible are replaced by thin clients. The actual machines are virtualized and hosted by a terminal server at the same central location, using Citrix solutions. Only a router and a site-to-site VPN connection remain at the smaller company. A backup internet line is needed to ensure near-100% availability. Licensing is mainly managed by the larger company; only specialized software for PCs that will not be virtualized is managed by the smaller company.

    I'd like to ask you to discuss both solutions a bit. In your opinion, which is better from the operational point of view? Which is more reliable and cheaper in the long run? Easier to manage from the system administrator's point of view? Easier on the budget and easier to maintain from the IT department's point of view? Does anybody have experience with the second option, and how does it perform in a production environment? Pros and cons of both? Your input will be of great significance to me. Thank you very much!


  • Add Your Own Domain to Your WordPress.com Blog

    - by Matthew Guay
    Now that you’ve got a nice blog on WordPress.com, why not get your own domain to brand your site? Here’s how you can easily register a new domain or move your existing domain to your WordPress site.

    By default, your free WordPress address is yourblogsname.wordpress.com. But whether this is a personal or a company blog, it can be nice to have your own domain to really brand your site and make it your own. Or, if you already have another website and want to use WordPress as a blog for it, you could even add blog.yoursite.com or any other subdomain. Adding a domain to your WordPress.com blog is a paid upgrade; registering and mapping a new domain to your account costs $14.97 a year, while mapping a domain you already own to your WordPress blog costs $9.97 a year.

    Getting Started

    Login to your blog’s dashboard, click the arrow beside Upgrades in the sidebar, and select Domains. Enter the domain or subdomain you want to add to your site in the text box, and click Add domain to blog.

    If you entered a new domain you want to register, WordPress will make sure the domain is available and then present you a registration form to register the domain. Enter your information, and then click Register Domain.

    Or, if you enter a domain that’s already registered, you will see a prompt. If this domain is a domain you own, you can map it to WordPress.com. Login to your domain registrar account and switch your nameservers to:

        NS1.WORDPRESS.COM
        NS2.WORDPRESS.COM
        NS3.WORDPRESS.COM

    The DNS settings page for your domain may look different, depending on your registrar. Alternately, if you’re wanting to map a subdomain, such as blog.yoursite.com, to your WordPress blog, create the following CNAME record at your domain registrar (you may have to contact your registrar’s support to do this), substituting your subdomain, domain, and blog name:

        subdomain.yourdomain.com. IN CNAME yourblog.wordpress.com.

    Once your settings are correct, click Try Again in your WordPress dashboard. The DNS settings may take a while to update, but once WordPress can tell your DNS settings point to it, you will see a confirmation screen. Click Map Domain to add this domain to your WordPress blog.

    Now you’re ready to pay for your domain mapping or registration. Depending on your purchase, the information and price shown may be different; here we’re mapping a domain we already have registered, so it costs $9.97. Select your method of payment, enter your payment information or sign in with your Paypal account, and continue as usual.

    Once your purchase is finished, you’ll be returned to the Domains page on WordPress. Try going to your new domain, and make sure it opens your blog. If it works, click the bullet beside the new domain, and click Update Primary Domain. Now, when people visit your WordPress site, they’ll see your new domain in the address bar. You can still access your blog from your old yourname.wordpress.com address, but it will redirect to your new domain.

    Conclusion

    Having a personalized domain is a great way to make your blog more professional, while still taking advantage of the ease of use that WordPress.com offers. And, if you have your own domain, you can easily move your site traffic to a different hosting provider in the future if you need to. The process is slightly complicated, but for $15/year we found this one of the best upgrades you could do to your WordPress.com blog.
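    Before clicking Try Again, it can save time to confirm from your own machine that the DNS change has propagated. A quick check with dig (the domain names here are placeholders):

        # Should list the WordPress.com name servers once the change is live:
        dig +short NS yourdomain.com

        # For a subdomain mapping, check the CNAME instead:
        dig +short CNAME blog.yourdomain.com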
    If you want to see an example of a site created with Wordpress, check out Matthew’s tech site techinch.com. And, if you’re just getting started with WordPress, check out our series on how to Start your WordPress.com blog, Personalize it, and Easily Post Content to it from anywhere.


  • SSAS Compare: an intern’s journey

    - by Red Gate Software BI Tools Team
    About a month ago, David mentioned an intern working in the BI Tools Team. That intern happens to be me! In five weeks’ time, I’ll start my second year of Computer Science at the University of Cambridge and be a full-time student again, but for the past eight weeks, I’ve been living a completely different life.

    As Jon mentioned before, the teams here at Red Gate are small and everyone (including the interns!) is responsible for the product as a whole. I’ve attended planning sessions, UX tests, daily meetings, and everything else a full-time member of the team would; I had as much say in where we would go next with the product as anyone; and from the feedback we got in the UX tests, I was able to see that what I was doing was an important part of the product. All these things almost made me forget that this is just an internship and not my full-time job.

    First steps at Red Gate

    Being based in Cambridge, Red Gate has many Cambridge university graduates working for them. They also hire some Cambridge undergraduates for internships each summer. With its popularity with university graduates and its great working environment, Red Gate has managed to build up a great reputation. When I thought of doing an internship here in Cambridge, Red Gate just seemed to be the obvious choice for my first real work experience.

    On my first day at Red Gate, David, the lead developer for SSAS Compare, helped me settle in and explained what I’d be doing. My task was to improve the user experience of displaying differences between MDX scripts by syntax highlighting, script formatting, and improving the difference identification in the first place. David suggested how I should approach the problem, but left all the details and design decisions to me. That was when I realised how much independence and responsibility I’d have.

    What I’ve done

    If you launch the latest version of SSAS Compare and drill down to an MDX script difference, you can see the changes that have been made. In earlier versions, you could only see the scripts in plain text on both sides, either in black or grey, depending on whether they were the same or not. You couldn’t see exactly where the scripts were different, which was especially annoying when the two scripts were large, as they often are. Furthermore, if parts of the two scripts were formatted differently, they seemed to be different but were actually the same, which caused even more confusion and made it difficult to see where the differences were.

    All these issues have been fixed now. The two scripts are automatically formatted by the tool so that if two things are syntactically equivalent, they look the same, including case differences in keywords! The actual difference is highlighted in grey, which makes it easy to spot. The difference identification has been improved as well, so two scripts aren’t identified as different if there’s just a difference in meaningless whitespace characters, or when you have “select” on one side and “SELECT” on the other. We also have syntax highlighting, which makes it easier to read the scripts.

    How I did it

    In order to do the formatting properly, we decided to parse the MDX scripts. After some investigation into parser builders, I decided to go with the GOLD Parser builder and the bsn-goldparser .NET engine. GOLD Parser builder provides a fairly nice GUI to write, build, and test grammars in. We also liked the idea of separating the grammar building from parsing the text.

    The bsn-goldparser is one of many .NET engines for GOLD, and although it doesn’t support the newest features of GOLD Parser, it has “the ability to map semantic action classes to terminals or reduction rules, so that a completely functional semantic AST can be created directly without intermediate token AST representation, and without the need for glue code.” That makes it much easier for us to change the implementation in our program when we change the grammar. As bsn-goldparser is open source, and I wanted some more features in it, I contributed two new features which have now been merged into the project.

    Unfortunately, there wasn’t an MDX grammar written for GOLD already, so I had to write it myself. I was referencing MSDN to get the formal grammar specification, but the specification was scattered all over the place, so it wasn’t easy to find and implement. We’re aware that we don’t yet fully support all valid MDX, so sometimes you’ll just see the MDX script difference displayed the old way. In that case, there is some grammar construct we don’t yet recognise. If you come across something SSAS Compare doesn’t recognise, we’d love to hear about it so we can add it to our grammar.

    When some MDX script gets parsed, a tree is produced. That tree can then be processed into a list of inlines which deal with the correct formatting and can be output to the screen.

    Doing all this has led me to many new technologies and projects I haven’t worked with before. This was my first experience with C# and Visual Studio, although I have done things in Java before. I have learnt how to unit test with NUnit, how to do dependency injection with Ninject, how to source-control code with SVN and Mercurial, how to build with TeamCity, how to use GOLD, and many other things.

    What’s coming next

    Sadly, my internship comes to an end this week, so there will be less development on the MDX difference view for a while. But the team is going to work on marking the differences better, making it consistent with the difference indication in the top part of the comparison window, and will keep adding support for more MDX grammar so you can see the differences easily in every comparison you make.

    So long! And maybe I’ll see you next summer!


  • Your thoughts on Best Practices for Scientific Computing?

    - by John Smith
    A recent paper by Wilson et al (2014) pointed out 24 best practices for scientific programming. It's worth a look. I would like to hear opinions about these points from experienced programmers in scientific data analysis. Do you think this advice is helpful and practical, or is it good only in an ideal world?

    Wilson G, Aruliah DA, Brown CT, Chue Hong NP, Davis M, Guy RT, Haddock SHD, Huff KD, Mitchell IM, Plumbley MD, Waugh B, White EP, Wilson P (2014) Best Practices for Scientific Computing. PLoS Biol 12:e1001745. http://www.plosbiology.org/article/info%3Adoi%2F10.1371%2Fjournal.pbio.1001745

    Box 1. Summary of Best Practices

    1. Write programs for people, not computers. (a) A program should not require its readers to hold more than a handful of facts in memory at once. (b) Make names consistent, distinctive, and meaningful. (c) Make code style and formatting consistent.
    2. Let the computer do the work. (a) Make the computer repeat tasks. (b) Save recent commands in a file for re-use. (c) Use a build tool to automate workflows.
    3. Make incremental changes. (a) Work in small steps with frequent feedback and course correction. (b) Use a version control system. (c) Put everything that has been created manually in version control.
    4. Don't repeat yourself (or others). (a) Every piece of data must have a single authoritative representation in the system. (b) Modularize code rather than copying and pasting. (c) Re-use code instead of rewriting it.
    5. Plan for mistakes. (a) Add assertions to programs to check their operation. (b) Use an off-the-shelf unit testing library. (c) Turn bugs into test cases. (d) Use a symbolic debugger.
    6. Optimize software only after it works correctly. (a) Use a profiler to identify bottlenecks. (b) Write code in the highest-level language possible.
    7. Document design and purpose, not mechanics. (a) Document interfaces and reasons, not implementations. (b) Refactor code in preference to explaining how it works. (c) Embed the documentation for a piece of software in that software.
    8. Collaborate. (a) Use pre-merge code reviews. (b) Use pair programming when bringing someone new up to speed and when tackling particularly tricky problems. (c) Use an issue tracking tool.

    I'm relatively new to serious programming for scientific data analysis. When I tried to write code for pilot analyses of some of my data last year, I encountered a tremendous number of bugs, both in my code and in my data. Bugs and errors had been around me all the time, but this time it was somewhat overwhelming. I managed to crunch the numbers at last, but I thought I couldn't put up with this mess any longer. Some action had to be taken.

    Without a sophisticated guide like the article above, I started to adopt a "defensive style" of programming. A book titled "The Art of Readable Code" helped me a lot. I deployed meticulous input validations and assertions for every function, renamed a lot of variables and functions for better readability, and extracted many subroutines as reusable functions. Recently, I introduced Git and SourceTree for version control. At the moment, because my co-workers are much more reluctant about these issues, the collaboration practices (8a, b, c) have not been introduced. Actually, as the authors admit, because all of these practices take some amount of time and effort to introduce, it may be generally hard to persuade reluctant collaborators to comply with them.

    I think I'm asking for your opinions because I still suffer from many bugs despite all my effort on many of these practices. Bug fixing may be, or should be, faster than before, but I couldn't really measure the improvement. Moreover, much of my time has been invested in defence, meaning that I haven't actually done much data analysis (offence) these days. Where is the point I should stop at in terms of productivity?

    I've already deployed: 1a, b, c; 2a; 3a, b, c; 4b, c; 5a, d; 6a, b; 7a, b.
    I'm about to have a go at: 5b, c.
    Not yet: 2b, c; 4a; 7c; 8a, b, c.

    (I could not really see the advantage of using GNU make (2c) for my purpose. Could anyone tell me how it would help my work with MATLAB?)
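    On the closing question about GNU make and MATLAB: make's contribution is that it records which outputs depend on which scripts and data files, and re-runs only the stages whose inputs have changed. A minimal, hypothetical sketch (the script and file names are invented, and recipe lines must be indented with tabs):

        # Re-running `make` after editing analyze.m redoes only the
        # analysis stage, not the preprocessing.
        all: results/stats.mat

        results/clean.mat: preprocess.m data/raw.csv
        	matlab -nodisplay -nosplash -r "preprocess('data/raw.csv','results/clean.mat'); exit"

        results/stats.mat: analyze.m results/clean.mat
        	matlab -nodisplay -nosplash -r "analyze('results/clean.mat','results/stats.mat'); exit"

        .PHONY: all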


  • The Winds of Change are a Blowin’

    - by Ajarn Mark Caldwell
    For six years I have been an avid and outspoken fan and paying customer of SourceGear products, from Vault to Dragnet to Fortress and on to Vault Professional, but that is all changing now. Not the fan part, but the paying customer part. I’m still a huge fan. I think that SourceGear does a great job with their products, and support has been fantastic when needed (which is not very often). I think that Eric Sink has done a fine job building a quality company and products, and I appreciate his contributions to the tech community through his blogging and books. I still think their products are high quality and do a fantastic job of what they do. But there’s the rub: what they do is no longer enough for me.

    As I have rebuilt our development team over the last couple of years, and we have begun to investigate Scrum and Kanban, I realize that I need more visibility into the progress of the team. I need better project management tools, and this is where Vault Professional lags behind several other tools. Granted, in the latest release (Vault 6.0) they added a nice time tracking feature, but I want more. (Note, I did contact SourceGear about my quest for more, but apparently the rest of their customer base has not been clamoring for this, so they have not built it. Granted, I wasn’t clamoring for it either until just recently, but unfortunately for SourceGear, I want it now and don’t want to wait for them to build it into their system.)

    Ironically, it was SourceGear themselves who started to turn me on to the possibilities of other tools. They built a limited integration with Axosoft OnTime, which I read about several times on their support site (I used to regularly read and occasionally comment on their Support Forum). I decided to check out OnTime and was very impressed with the tool for work item tracking and project management (not to mention their great Scrum Master in 10 Minutes video). I fell in love with the capabilities of OnTime. Unfortunately, the integration with Vault for source control management was, as I mentioned, limited. I could have forfeited the integration between work items and source code, but there is too much benefit to linking check-ins to work items for me to give that up. So then I did what was previously unthinkable for me: I considered switching not just the work tracking tool, but also the source code management tool. This was really stepping outside my comfort zone, because source code is gold, and not to be trifled with. When you find a good weapon to protect your gold, stick with it.

    I looked at Git and Tortoise SVN, but the integration methods for those were pretty rough compared to what I was used to. The recommended tool from Axosoft’s point of view appeared to be RocketSVN, but I really wasn’t sure I wanted to go the “flavor of Subversion” route. Then I started thinking about that other tool I liked back when I first chose to go with Vault, but couldn’t afford: Team Foundation Server. And what do you know, Microsoft has not only radically improved it over that version from back in 2006, but they also came to their senses about how it should be licensed, and it is much more affordable now. So I started looking into the latest capabilities in the 2012 version, and I fell in love all over again.

    I really went deep on checking out the tools. I watched numerous webcasts from Microsoft partners, went to a beta preview on Microsoft’s campus, and watched a lot of Channel 9 videos on the new ALM features (oooh… shiny). Frankly, I was very impressed with the capabilities of the newest version, and figured this was probably our direction. As an interesting twist of fate, one of my employees crossed paths with an ALM consultant from Northwest Cadence, a local Microsoft partner and one of the companies that produced several of the webcasts I had been watching. So I gave Bryon a call and started grilling him to see if he really knew anything or was just another guy who couldn’t find a job so he called himself a consultant. It turns out Bryon actually knows a lot, especially in an area that was becoming a frustration point for us: branching strategies and automated builds (that’s probably a whole separate blog entry). As we talked, Bryon suggested we look into doing a DTDPS (Developer Tools Deployment Planning Services) session with his company. This is a service that can be paid for by Microsoft Enterprise Agreement planning services credits or SA training benefits, and, again coincidentally, we had several that were just about to expire, so I put them to good use.

    The DTDPS sessions were great, and Bryon, Rick, and the rest of the folks at Northwest Cadence have been a pleasure to work with. We have just purchased a new server for our TFS rollout and are planning the steps and options right now. This is still a big project ahead of us, to not only install and configure TFS, but also to load all of our source code (many different systems, not just one program) and transition to the new way of life with TFS, but I am convinced that it is the right move for my team at this point in time. We need the new capabilities that are in alignment with Scrum and Kanban methodologies in order to more efficiently manage all the different projects that we have going on at one time.

    I would still wholeheartedly endorse SourceGear’s products and Axosoft’s OnTime for those whose needs are met by those tools, but for me and my team, I think that TFS is the right fit, and I am looking forward to the change.


  • XNA Notes 005

    - by George Clingerman
    Another week and another crazy amount of activity going on in the XNA community. I’m fairly certain I missed over half of it. In fact, if I am missing things, make sure to email me and I’ll try to catch them next week! ([email protected]). Also, if you’ve got any advice, or things you like/don’t like about the way these XNA Notes are going, let me know. I always appreciate feedback (currently spammers are leaving me the nicest comments, so you guys have work to do!). Without further ado, here’s this week’s notes!

    Time Critical XNA News

    - The XNA Team Blog reminds us that February 7th is the last day to submit XNA 3.1 games to peer review: http://blogs.msdn.com/b/xna/archive/2011/01/31/7-days-left-to-submit-xna-gs-3-1-games-on-app-hub.aspx

    XNA MVPs

    - Chris Williams kicks off the marketing campaign for our book: http://geekswithblogs.net/cwilliams/archive/2011/01/28/143680.aspx
    - Catalin Zima posts the comparison cheat sheet for why Angry Birds is different from Chickens Can’t Fly: http://www.amusedsloth.com/2011/02/comparison-cheat-sheet-for-chickens-cant-fly-and-angry-birds/
    - Jim Perry congratulates the developers selected by Game Developer Magazine for Best Xbox LIVE Indie Games of 2010: http://machxgames.com/blog/?p=24
    - @NemoKrad posts his XNAKUUG talks for all to enjoy: http://twitter.com/NemoKrad/statuses/33142362502864896 http://xna-uk.net/blogs/randomchaos/archive/2011/02/03/xblig-uk-2011-january-amp-february-talk.aspx
    - George (that’s me!) preps for his XNA talk coming up on the 8th: http://twitter.com/clingermangw/statuses/32669550554124288 http://www.portlandsilverlight.net/Meetings/Details/15

    XNA Developers

    - FireFly posts the last tutorial in his XNA Tower Defense tutorial series: http://forums.create.msdn.com/forums/p/26442/451460.aspx#451460 http://xnatd.blogspot.com/2011/01/tutorial-14-polishing-game.html
    - @fredericmy posts the main differences when porting a game from Windows Phone 7 to Xbox 360: http://fairyengine.blogspot.com/2011/01/main-differences-when-porting-game-from.html
    - @ElementCy creates a pretty rad video of MineCraft-type terrain created using XNA: http://www.youtube.com/watch?v=Waw1f7wnl9I
    - Andrew Russel gets the first XNA badge on gamedev.stackexchange: http://twitter.com/_AndrewRussell/statuses/32322877004972032 http://gamedev.stackexchange.com/badges?tab=tags And his funding for ExEn has passed $7000, only $3000 to go: http://twitter.com/_AndrewRussell/statuses/33042412804771840
    - Subodh Pushpak blogs about his Windows Phone 7 XNA talk: http://geekswithblogs.net/subodhnpushpak/archive/2011/02/01/windows-phone-7-silverlight--xna-development-talk.aspx
    - Slyprid releases the latest version of Transmute and needs more people to test: http://twitter.com/slyprid/statuses/32452488418299904 http://forgottenstarstudios.com/
    - SpynDoctorGames celebrates the 1 year anniversary of Your Doodles Are Bugged! Congrats! http://twitter.com/SpynDoctorGames/statuses/32511689068908544
    - Noogy (creator of Dust: An Elysian Tail) prepares his conversion to XNA 4.0: http://twitter.com/NoogyTweet/statuses/32522008449253376
    - @philippedasilva posts about the Indiefreaks Game Framework v0.2.0.0 input management system: http://twitter.com/philippedasilva/statuses/32763393957957632 http://indiefreaks.com/2011/02/02/behind-smart-input-system-feature/
    - Mommy’s Best Games debates what to do about high scores with their new update: http://mommysbest.blogspot.com/2011/02/high-score-shake-up.html
    - @BinaryTweedDeej wants to know if there’s anything the community needs to make XNA games for the PC. Give him some feedback! http://twitter.com/BinaryTweedDeej/status/32895453863354368
    - @mikebmcl promises to write us all a book (I can’t wait to read it!): http://twitter.com/mikebmcl/statuses/33206499102687233
    - @werezompire is going to live, LIVE, thanks to all the generosity and support from the community! http://twitter.com/werezompire/statuses/32840147644977153

    Xbox LIVE Indie Games (XBLIG)

    - Making money in Xbox 360 indie game development. Is it possible? http://www.bitmob.com/articles/making-money-in-xbox-360-indie-game-development-is-it-possible
    - @AlejandroDaJ posts some thoughts about the bitmob article: http://twitter.com/AlejandroDaJ/statuses/31068552165330944 http://www.apathyworks.com/blog/view.php?id=00215
    - Kobun gets my respect as an XBLIG champion. I’m not sure who Kobun is, but if you’ve ever read through the comment sections any time Kotaku writes about XBLIGs, you’ll see a lot of confusion and disinformation in there. Kobun has been waging a secret war battling that lack of knowledge, and he does it well. He’s also running a pretty kick-ass site for Xbox LIVE Indie Game reviews: http://xboxindies.teamkobun.com/
    - @radiangames releases his last Xbox LIVE Indie Game... for now: http://bit.ly/gMK6lE
    - Playing Avaglide with the Kinect controller: http://www.youtube.com/watch?v=UqAYbHww53o http://www.joystiq.com/2011/01/30/kinect-hacks-take-to-the-skies-with-avaglide/
    - Luke Schneider of Radiangames is interviewed in Edge magazine: http://www.next-gen.biz/features/radiangames-venture
    - Digital Quarters posts thoughts on why XBLIG’s online requirement kills certain genres: http://digitalquarters.blogspot.com/2011/02/thoughts-why-xbligs-online-requirement.html
    - Mommy’s Best Games shares the news that several XBLIGs were featured in the March 2011 issue of Famitsu 360: http://forums.create.msdn.com/forums/p/33455/451487.aspx#451487
    - NaviFairy continues with his Indie-Game-A-Day: http://gaygamer.net/2011/02/indie_game_a_day_epic_dungeon.html http://gaygamer.net/2011/02/indie_game_a_day_break_limit_r.html and more every day... that’s kind of the point! Keep your eye on this series!
    - VVGTV continues with its awesome reviews/promotions for XBLIGs: http://vvgtv.com/ http://vvgtv.com/2011/02/03/iredia-atrams-secret-xblig-review-2/ http://vvgtv.com/2011/02/02/poopocalypse-coming-soon-to-xblig/ ...and even more, you get the point.
    - Magicka is an indie game doing really well on Steam, AND it’s made using XNA: http://www.magickagame.com/ http://twitter.com/Magickagame/statuses/32712762580799488
    - GameMarx reviews Antipole: http://www.gamemarx.com/reviews/73/antipole-is-vvvvvvery-good.aspx
    - Armless Octopus reviews Alpha Squad: http://www.armlessoctopus.com/2011/01/28/xbox-indie-review-alpha-squad/
    - An interesting article about Kodu that Jim Perry found: http://twitter.com/MachXGames/statuses/32848044105924608 http://www.develop-online.net/news/36915/10-year-old-Jordan-makes-games-The-UK-needs-more-like-her

    XNA Game Development

    - Sgt. Conker posts about the Natur beta, a new book, and whether you can make money with XBLIG: http://www.sgtconker.com/ http://www.sgtconker.com/2011/01/a-new-book-on-the-block-and-a-new-natur-beta/ http://www.sgtconker.com/2011/01/making-money-in-xbox-360-indie-game-development-is-it-possible/
    - Tips for setting up SVN: http://bit.ly/fKxgFh
    - @bsimser found tons of royalty-free music and sound effects for your XNA games: http://twitter.com/bsimser/statuses/31426632933711872
    - Post on the new features in the next Sunburn Editor: http://www.synapsegaming.com/blogs/fivesidedbarrel/archive/2011/01/28/new-editor-features-prefabs-components-and-more.aspx
    - @jasons_novaleaf posts source code for light pre-pass optimizations for #xna: http://twitter.com/jasons_novaleaf/statuses/33348855403642880 http://jcoluna.wordpress.com/2011/02/01/xna-4-0-light-pre-pass-optimization-round-one/
    - I’ve been learning about writing an A.I. for turn-based games, and this article was a great resource: http://www.gamasutra.com/view/feature/1535/designing_ai_algorithms_for_.php?print=1


  • Slide Creation Checklist

    - by Daniel Moth
    PowerPoint is a great tool for conference (large audience) presentations, which is the context for the advice below. The #1 thing to keep in mind when you create slides (at least for conference sessions) is that they are there to help you remember what you were going to say (the flow and key messages) and to give the audience a visual reminder of the key points. Slides are not there for the audience to read what you are going to say anyway. If they were, what would be the point of you being there? Slides are not holders for complete sentences (unless you are quoting); use Microsoft Word for that purpose, either as a physical handout or as a URL link that you share with the audience. When you dry run your presentation, if you find yourself reading the bullets on your slide, you have missed the point. You have a message to deliver that can be done regardless of your slides; remember that. The focus of your audience should be on you, not the screen.

    Based on that premise, I have created a checklist that I go over before I start a new deck and also once I think my slides are ready.

    - Turn AutoFit OFF. I cannot stress this enough.
    - For each slide, explicitly pick a slide layout. In my presentations, I only use one Title Slide, a Section Header per demo slide, and for the rest of my slides one of the three: Title and Content, Title Only, Blank. Most people that are newbies to PowerPoint get whatever default layout the New Slide command creates for them and then start deleting and adding placeholders to that. You can do better than that (and you'll be glad you did if you also follow the 'Reset' item below).
    - Every slide must have an image.
    - Remove all punctuation (e.g. periods, commas) other than exclamation points and question marks (! ?).
    - Don't use color or other formatting (e.g. italics, bold) for text on the slide.
    - Check your animations. Avoid animations that hide elements that were on the slide (instead use a new slide and a transition). Ensure that animations that bring new elements in bring them into white space instead of over other existing elements. A good test is to print the slide and see that it still makes sense even without the animation.
    - Print the deck in black and white choosing the "6 slides per page" option. Can I still read each slide without losing any information? If the answer is "no", go back and fix the slides so the answer becomes "yes".
    - Don't have more than 3 bullet levels/indents. In other words: you type some text on the slide, hit 'Enter', hit 'Tab', type some more text, and repeat that sequence at most one final time. Ideally your outer bullets have only one level of sub-bullets (i.e. one level of indentation beneath them).
    - Don't have more than 3-5 outer bullets per slide. Space them evenly horizontally, e.g. with blank lines in between.
    - Don't wrap. For each bullet on all slides check: does the text for that bullet wrap to a second line? If it does, change the wording so it doesn't. Or create a terser bullet and make the original long text a sub-bullet of that one (thus decreasing the font size, but still being consistent) and have no wrapping.
    - Use the same consistent fonts (i.e. font face, font size, etc.) throughout the deck for each level of bullet. In other words, don't deviate from the PowerPoint template you chose (or that was chosen for you).
    - Go on each slide and hit 'Reset'. 'Reset' is a button on the 'Home' tab of the ribbon, or you can find the 'Reset Slide' menu when you right-click on a slide in the left 'Slides' list. If your slides can survive doing that without you "fixing" things after the Reset action, you are golden!
    - For each slide ask yourself: if I had to replace this slide with a single sentence that conveys the key message, what would that sentence be? This exercise leads you to merge slides (where the key message is split) or split a slide into many, if there were too many key messages on the slide in the first place. It can also lead you to redesign a slide so the text on it really is just explanation or evidence for the key message you are trying to convey.
    - Get the length right. Is the length of this deck suitable for the time you have been given to present? If not, cut content! It is far better to deliver less in a relaxed, polished, engaging, memorable way than to deliver more content in great haste. As a rule of thumb, multiply 2 minutes by the number of slides you have, add the time you need for each demo, and check if that adds up to more than the time you have been allotted. If it does, start cutting content; we've all been there and it has to be done.

    As always, rules and guidelines are there to be bent and even broken some times. Start with the above and, on a slide-by-slide basis, decide which rules you want to bend. That is smarter than throwing all the rules out from the start, right?

    Comments about this post welcome at the original blog.
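    To make that timing rule of thumb concrete with a quick worked example: a 22-slide deck with two 5-minute demos needs roughly 22 × 2 + 2 × 5 = 54 minutes, so it will not fit a 45-minute slot without cutting content.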

    Read the article

  • Keep it Professional – Multiple Environments

    - by AjarnMark
    I have certainly been reading blogs a whole lot more than writing them the last several weeks, and it’s about time I got back to writing.  I have been collecting several topics and references for blog posts…some of which will probably just never get written as the timeliness of the topics fades over time.  Nonetheless, I’m back, and I think it is time to revive my Doing Business Right series, this time coming from the slant of managing a development team rather than the previous angle of being self-employed.  First up: separating Dev, Test, and Prod.

    A few months ago, Colin Stasiuk (@BenchmarkIT) wrote a great post about separating your Dev, Test/UAT, and Prod environments.  This post covers all the important points such as removing Developer access from both PROD and UAT, and the importance of proper deployment (a.k.a. promotion) procedures.  I won’t repeat it all here, go read the original!  But what I do want to address is what I believe to be the #1 excuse people use for not having separate environments:  Money.  I discussed this briefly in my comment on Colin’s post at the time, but let me repeat it here and expand on it a bit.

    Don’t let the size of your company or the size of its budget dictate whether you do things professionally or not.  I am convinced that most developers and development teams would agree that it is a best practice to have separate environments for development, testing, and production (a.k.a. Live).  So why don’t they?  Because they think that it means separate servers, which means more money.  While having separate physical servers for the different environments would be ideal, it is not an absolute requirement in order to make this work.  Here are a few ideas:

    Use multiple instances of SQL Server and multiple Web Sites with Host Headers or Ports.  For no additional fees* you can install multiple instances of SQL Server on the same machine.  This gives you a nice separation, allowing you to even use the same database names as will appear in PROD, yet isolating the data and security access.  And in IIS, you can create multiple Web Sites on the same server just by using Host Headers or different port numbers to separate them.  This approach does still pose the risk of non-Prod environments impacting performance on Prod, but when your application is busy enough for that to be a concern, you can probably afford one of the other options.

    Use desktop PCs instead of servers.  Instead of investing in full server-grade hardware, you can mimic the separate environments on old desktop PCs and at least get functional equivalency, if not performance matching.  The last I checked, Microsoft did not require separate licensing for SQL Server if that installation was used exclusively for dev or test purposes*.  There may be some version or performance differences between this approach and what you have in Prod, but you have isolated test from impacting Prod resources this way.

    Virtualization.  This is of course one of the hot topics of the day, and I would be remiss if I did not suggest this.  It is quite easy these days to set up virtual machines so that, again, your environments are fairly isolated from one another, and you retain all the security and procedural benefits of having separate environments.

    So the point is, keep your high professional standards intact.  You don’t need to compromise on using proper procedure just because you work in a small company with a small budget.  Keep doing things the right way!

    By the way, where I work, our DEV environment is not on a server.  All development is done on the developer’s individual workstation where it can be isolated from other developers’ work for the duration of writing the code, but also where the developers have to reconcile (merge) differences in code under concurrent development.  This usually means that each change is executed multiple times (once per developer, to update their environments with the latest changes from others), giving us an extra, informal test deployment before even going to the Test/UAT server.  It also means that if the network goes down, the developers can continue to hum along because they are not dependent on networked resources.  In fact, they will likely be even more productive because they aren’t being interrupted by email…but that’s another post I need to write.

    * I am not a lawyer, nor a licensing specialist, but it appeared to be so the last time I checked.  When in doubt, consult an expert on the topic.
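    To make the multi-instance idea above concrete, here is a minimal sketch – the machine, instance, and database names are hypothetical – of how an application could select the matching SQL Server instance per environment, so that DEV, UAT, and PROD stay isolated even when they share hardware:

        // Minimal sketch: routing an application to separate SQL Server
        // instances per environment. Instance and database names are
        // hypothetical; only the separation pattern matters.
        using System;
        using System.Collections.Generic;

        class EnvironmentConnections
        {
            static readonly Dictionary<string, string> ConnectionStrings =
                new Dictionary<string, string>
            {
                // Same database name everywhere; only the instance differs.
                { "DEV",  @"Data Source=BUILDBOX\DEV;Initial Catalog=Sales;Integrated Security=SSPI" },
                { "UAT",  @"Data Source=BUILDBOX\UAT;Initial Catalog=Sales;Integrated Security=SSPI" },
                { "PROD", @"Data Source=PRODSERVER;Initial Catalog=Sales;Integrated Security=SSPI" }
            };

            static void Main()
            {
                Console.WriteLine(ConnectionStrings["DEV"]);
            }
        }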

    Read the article

  • Inside Red Gate - Project teams

    - by Simon Cooper
    Within each division in Red Gate, development effort is structured around one or more project teams; currently, each division contains 2-3 separate teams. These are self-contained units responsible for a particular development project.

    Project team structure

    The typical size of a development team varies, but is normally around 4-7 people - one project manager, two developers, one or two testers, a technical author (who is responsible for the text within the application, website content, and help documentation) and a user experience designer (who designs and prototypes the UIs). However, team sizes can vary from 3 up to 12, depending on the division and project. As a rule, the whole team sits together in the same area of the office. (Again, this is my experience of what happens. I haven't worked in the DBA division, and SQL Tools might have changed completely since I moved to .NET. As I mentioned in my previous post, each division is free to structure itself as it sees fit.) Depending on the project, and the other needs in the division, the tech author and UX designer may be shared between several projects. Generally, developers and testers work on one project at a time. If the project is a simple point release, then it might not need a UX designer at all. However, if it's a brand new product, then a UX designer and tech author will be involved right from the start. Developers, testers, and the project manager will normally stay together in the same team as they work on different projects, unless there's a good reason to split or merge teams for a particular project. Technical authors and UX designers will normally go wherever they are needed in the division, depending on what each project needs at the time. In my case, I was working with more or less the same people for over 2 years, all the way through SQL Compare 7, 8, and Schema Compare for Oracle. This helped to build a great sense of camaraderie within the team, and helped to form and maintain a team identity. This, in turn, meant we worked very well together, and so the final result was that much better (as well as making the work more fun).

    How is a project started and run?

    The product manager within each division collates user feedback and ideas, does lots of research, throws in a few ideas from people within the company, and then comes up with a list of what the division should work on in the next few years. This is split up into projects, and after each project is greenlit (I'll be discussing this later on) it is then assigned to a project team, as and when they become available (I'm sure there are lots of discussions and meetings at this point that I'm not aware of!). From that point, it's entirely up to the project team. Just as divisions are autonomous, project teams are also given a high degree of autonomy. All the teams in Red Gate use some sort of vaguely agile methodology; most use some variations on SCRUM, some have experimented with Kanban. Some store the project progress on a whiteboard, some use our bug tracker, others use different methods. It all depends on what the team members think will work best for them to get the best result at the end. From that point, the project proceeds as you would expect; code gets written, tests pass and fail, discussions about how to resolve various problems are had and decided upon, and out pops a new product, new point release, new internal tool, or whatever the project's goal was.
The project manager ensures that everyone works together without too much bloodshed and that thrown missiles are constrained to Nerf bullets, the developers write the code, the testers ensure it actually works, and the tech author and UX designer ensure that people will be able to use the final product to solve their problem (after all, developers make lousy UI designers and technical authors). Projects in Red Gate last a relatively short amount of time; most projects are less than 6 months. The longest was 18 months. This has evolved as the company has grown, and I suspect is a side effect of the type of software Red Gate produces. As an ISV, we sell packaged software; we only get revenue when customers purchase the ready-made tools. As a result, we only get a sellable piece of software right at the end of a project. Therefore, the longer the project lasts, the more time and money has to be invested by the company before we get any revenue from it, and the riskier the project becomes. This drives the average project time down. Small project teams are the core of how Red Gate produces software, and are what the whole development effort of the company is built around. In my next post, I'll be looking at the office itself, and how all 200 of us manage to fit on two floors of a small office building.

    Read the article

  • A little primer on using TFS with a small team

    - by johndoucette
    The scenario: a small team of 3 developers, mostly in maintenance mode, with traditional ASP.NET, classic ASP, .NET integration services and utilities with the company’s third-party packages, and a bunch of Java-based ColdFusion web applications, all under Visual SourceSafe (VSS). They are about to embark on a huge SharePoint 2010 new construction project and wanted to use Subversion instead of VSS. TFS was a foreign word and smelled of “high cost” and of an “over-complicated process”. Since they had no preconceptions about the old TFS versions (‘05 & ‘08), it was fun explaining how simple it was to install a TFS server and get the ball rolling, with or without all the heavy stuff one sometimes associates with such a huge and powerful application lifecycle management product. So, how does a small team begin using TFS?

    1. Start by using source control and migrate current VSS source trees into TFS. You can take the latest version or migrate the entire version history. It’s up to you whether you want a clean start or need quick access to all the version notes and history of the bits.
    2. Since most shops are mainly in maintenance mode with existing applications, begin using bug workitems for everything. When you receive an issue/bug from your current tracking system, manually enter the workitem in TFS right through Visual Studio. You can automate the integration with the current tracking system later or replace it entirely. Believe me, this thing is powerful and can handle even the largest of help desks.
    3. With new construction, begin work with requirement and task workitems and follow the traditional sprint-based development lifecycle. Obviously, some minor training will be needed, but don’t fear, this is very intuitive and MSDN has a ton of lesson-based labs and videos.
    4. For the Java developers, use the new Team Explorer Everywhere 2010 plugin (formerly known as Teamprise). There is a seamless interface in Eclipse, but also a good command-line utility for other environments such as Dreamweaver.
    5. Wait to fully integrate the whole workitem/project management/testing process until your team is familiar with the integrated workitems for bugs and code. After a while, you will see the team wanting more transparency into the work they are all doing and, naturally, everyone will want workitems to help them organize the chaos!
    6. Management will be limited in the value of the reports until you have a fully blown implementation of project planning, construction, build, deployment and testing. However, there are some basic “bug rate” reports and current backlog listings that can provide good information.

    Some notable explanations of TFS:

    Work Item Tracking and Project Management - A workitem represents the unit of work within the system which enables tracking of all activities produced by a user, whether it is a developer, business user, project manager or tester. The properties of a workitem, such as linked changesets (checked-in code), who updated the data and when, and the states and reasons for change, are all transitioned to a data warehouse within TFS for reporting purposes. A workitem can be defined as a "bug", "requirement", "test case", or a "change request". Workitems drive the work effort of the individual assigned to them and also play a key role in defining what needs to be done. Workitems are the things the team needs to do to accomplish a goal.
    Test Case Management - Starting with a workitem known as a "test case", a tester (or developer) can now author and manage test cases within a formal test plan subsystem. Although TFS supports the test case workitem type, there is a new product known as VS Test Professional 2010 which allows a tester to facilitate manual tests, including fast-forwarding steps in the process to arrive at the assertion point quickly. This repeatable process provides quick regression tests and can be conducted by the business user to ensure completeness during UAT. In addition, developers can no longer respond to a bug with the line "cannot reproduce". With every test run, attachments including the recorded session, captured environment configurations and settings, screen shots, IntelliTrace (debugging history) and, in some cases if Lab Management is being used, a snapshot of the tested environment are available.

    Version Control - A modern system allowing shared check-in/check-out, excellent merge conflict resolution, shelvesets (personal check-ins), branching/merging visualization, public workspaces, gated check-ins, security hierarchy capabilities, and changeset/workitem tracking. Knowing what was done to the code by any developer has become much easier to picture, making issues easier to resolve.

    Team Build - Automate the compilation process, whether you need it to run whenever a developer checks in code, periodically (such as nightly builds for testers in the morning), or as manual builds to be deployed into production. Each build can run through pre-determined tests, perform code analysis to see if the developer conforms to the team standards, and reject the build if either fails.

    Project Portal & Reporting - Provide management with a dashboard with insight into the project(s): "where are we" in each step of the way, including past iterations and the current burndown rate. Enabling this feature is easy as it seamlessly interfaces with existing SharePoint implementations.
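    As a small illustration of point 2 above, entering a bug from code takes only a few lines with the TFS 2010 client API. This is a sketch; the collection URL and team project name are hypothetical:

        // Minimal sketch: creating a "Bug" workitem via the TFS 2010 client API.
        // The collection URL and team project name below are hypothetical.
        using System;
        using Microsoft.TeamFoundation.Client;
        using Microsoft.TeamFoundation.WorkItemTracking.Client;

        class CreateBug
        {
            static void Main()
            {
                var collection = new TfsTeamProjectCollection(
                    new Uri("http://tfsserver:8080/tfs/DefaultCollection"));
                var store = collection.GetService<WorkItemStore>();

                WorkItemType bugType = store.Projects["MyProject"].WorkItemTypes["Bug"];
                var bug = new WorkItem(bugType) { Title = "Login page throws 500 on empty password" };
                bug.Description = "Steps to reproduce: ...";
                bug.Save();

                Console.WriteLine("Created bug #" + bug.Id);
            }
        }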

    Read the article

  • Guidance: How to lay out your files for an Ideal Solution

    - by Martin Hinshelwood
    Creating a solution and having it maintainable over time is an art and not a science. I like being pedantic and having a place for everything, no matter how small. For setting up the Areas to run multiple projects under one solution see my post on When should I use Areas in TFS instead of Team Projects, and for an explanation of branching see Guidance: A Branching strategy for Scrum Teams.

    Update 17th May 2010 – We are currently trialling running a single Sprint branch to improve our history.

    Whenever I set up a new Team Project I implement the basic version control structure. I put “readme.txt” files in the folder structure explaining the different levels, and a solution file called “[Client].[Product].sln” located at “$/[Client]/[Product]/DEV/Main” within version control. Developers should add any projects they need to create to that solution, in the format “[Client].[Product].[ProductArea].[Assembly]”, and they will automatically be picked up and built when you set up Automated Builds using Team Foundation Build. All test projects need to be done using MSTest to get proper IDE and Team Foundation Build integration out-of-the-box, and be named for the assembly they are testing with a naming convention of “[Client].[Product].[ProductArea].[Assembly].Tests”.

    Here is a description of the folder layout; this content should be replicated in readme files under version control in the relevant locations so that even developers new to the project can see how to do it.

    Figure: The Team Project level - at this level there should be a folder for each of the products that you are building if you are using Areas correctly in TFS 2010. You should try very hard to avoid spaces, as these things always end up in a URL eventually, e.g. "Code Auditor" should be "CodeAuditor".

    Figure: Product level - at this level there should be only 3 folders (DEV, RELEASE and SAFE), all of which should be in capitals. These folders represent the three stages of your application production line. Each of them may contain multiple branches, but this format leaves all of your branches at the same level.

    Figure: The DEV folder is where all of the Development branches reside. The DEV folder will contain the "Main" branch and all feature branches, if they are being used. The DEV designation specifies that all code in every branch under this folder has not been released or made ready for release. Feature branches MUST merge (Forward Integrate) from Main and stabilise prior to merging (Reverse Integration) back down into Main and being decommissioned.

    Figure: In the feature branching scenario only merges are allowed onto Main; no development can be done there. Once we have a mature product it is important that new features being developed in parallel are kept separate. This would most likely be used if we had more than one Scrum team working on a single product.

    Figure: When we are ready to do a release of our software we will create a release branch that is then stabilised prior to deployment. This protects the serviceability of our released code, allowing developers to fix bugs and re-release an existing version.

    Figure: All bugs found on a release are fixed on the release. All bugs found in a release are fixed on the release and a new deployment is created. After the deployment is created, the bug fixes are then merged (Reverse Integration) into the Main branch. We do this so that we separate out our development from our production-ready code.

    Figure: SAFE or RTM is a read-only record of what you actually released.
    Labels are not immutable, so they are useless in this circumstance. When we have completed stabilisation of the release branch and we are ready to deploy to production, we create a read-only copy of the code for reference. In some cases this could be a regulatory concern, but in most cases it protects the company building the product from legal entanglements based on what you did or did not release.

    Figure: This allows us to reference any particular version of our application that was ever shipped.

    In addition, I am an advocate of having a single solution with all the project folders directly under the “Trunk”/”Main” folder and using the full name for the project folders.

    Figure: The ideal solution

    If you must have multiple solutions, because you need to use more than one version of Visual Studio, name the solutions “[Client].[Product][VSVersion].sln” and have them reside in the same folder as the other solution. This makes it easier for automated builds and improves the discoverability of your code and its dependencies.

    Send me your feedback!

    Technorati Tags: VS ALM,VSTS Developing,VS 2010,VS 2008,TFS 2010,TFS 2008,TFBS
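    Pulling the conventions above together, the tree below sketches the layout described, using a hypothetical client name ("Contoso") alongside the "CodeAuditor" product name from the earlier example:

        $/Contoso                                (Team Project level)
            CodeAuditor                          (Product level)
                DEV
                    Main
                        Contoso.CodeAuditor.sln
                        Contoso.CodeAuditor.Core
                        Contoso.CodeAuditor.Core.Tests
                        Contoso.CodeAuditor.Web
                    FeatureX                     (optional feature branch)
                RELEASE
                    Release1
                SAFE
                    Release1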

    Read the article

  • XBRL - Moving from Production to Consumption

    - by jmorourke
    Here's an update on what’s new with XBRL and how it can actually benefit your organization, versus adding extra time and costs to financial reporting.  On February 29th (leap day) of 2012 I attended the XBRL and Financial Analysis Technology Conference at Baruch College in NYC.  The event, which attracted over 300 XBRL gurus and fans, was presented by XBRL US, The New York Society of Security Analysts’ Improved Corporate Reporting Committee, and Baruch College’s Robert Zicklin Center for Corporate Integrity.  The event featured keynotes from the U.S. Securities and Exchange Commission (SEC) and the CFA Institute, as well as panels covering alternative research tools and data, corporate reporting to stakeholders, and a demonstration of XBRL analysis tools.  The program culminated in a presentation of the finalists and the winner of the $20,000 XBRL Challenge.

    Some of the key points made in the sessions included:

    - The focus of XBRL tools is moving from production to consumption
    - As of February 2012, over 9000 companies are reporting in XBRL, with over 10 million facts filed to date
    - XBRL taxonomy extensions have dropped from 27% to 11%, making comparisons easier
    - The SEC reports that XBRL makes it easier to analyze disclosures and focus on accounting issues
    - XBRL is helping standards-setters like the FASB speed their analysis of impacts of proposed accounting rule changes
    - Companies like Thomson Reuters report that XBRL is helping speed the delivery of data to clients

    The most interesting part of the program, though, was the session highlighting the 5 finalists in the XBRL Challenge competition and the winning solution.  The XBRL Challenge was launched in 2011 as a means of spurring the development of more end-user tools to help with the consumption of XBRL-based financial information.  Over an 8-month process handled by 5 judges, there were 84 registrants, 15 completed submissions, 5 finalists and one winner of the challenge.  All of the solutions are open-sourced tools and most of them focus on consuming XBRL-based data.  The 5 finalists included:

    - Advanced XBRL Processing from Oxide Solutions – XBRL viewer for taxonomies, filings and company data with peer comparison capabilities.
    - Arelle – API for XBRL processes; supports SEC validations, RSS feeds to access filings, etc.
    - Calcbench – XBRL data analysis tool that can be embedded in other web applications.  This tool can combine XBRL filings with real-time market data.
    - XBRL to XL – allows the importing of XBRL data into Microsoft Excel for analysis and comparisons.  Users start on the web and populate Excel with XBRL data.
    - XBurble – allows users to search and view XBRL filings, export to Excel, merge for comparison, and includes a workflow interface.

    The winner of the $20,000 XBRL Challenge prize was Calcbench.  More information about the XBRL Challenge and the finalists can be found at www.XBRLUS.org/challenge

    XBRL for Sustainability Reporting – other recent news on the XBRL front was the announcement by the Global Reporting Initiative (GRI) of an XBRL taxonomy for Sustainability Reporting.  This taxonomy was co-developed by the GRI and Deloitte and is designed to make the consumption of data found in Sustainability Reports much easier.  Although there is no government mandate to file Sustainability Reports in XBRL format, organizations that use the GRI guidelines for Sustainability Reporting are encouraged to tag and submit their data voluntarily to the GRI – who will populate a database with Sustainability Reporting data and make this available to the public.  For more information about this initiative, you can go to the GRI web site:  www.globalreporting.org.

    So how does all of this benefit corporate filers and investors?  Since its introduction, the consensus in the market is that XBRL has mainly benefited the regulators and investment analysts who need to consume and analyze large volumes of financial data.  But the emergence of more end-user tools for consuming and analyzing XBRL-based data, and the ability to perform quick comparisons of one company versus its peers and competitors in an industry group, will soon accelerate the benefits to corporate finance staff, as well as individual investors.  This could apply to financial results tagged in XBRL, as well as non-financial information such as Sustainability Reporting – which over the long term will likely be integrated with financial reporting.  And as multiple regulators and agencies in a country adopt the XBRL standard for corporate filings, more benefits will accrue as companies will be able to leverage one set of XBRL-based financial data for multiple regulatory filings.

    For more information about the latest developments in XBRL, check out the XBRL US or XBRL International web sites:  www.xbrl.org, www.xbrlus.org.

    For more information about what Oracle is doing to support XBRL, here are some links:

    http://www.oracle.com/us/solutions/ent-performance-bi/disclosure-management-065892.html
    http://www.oracle.com/technetwork/database/features/xmldb/index-087631.html

    Feel free to contact me if you have any questions or need more information:  [email protected]

    Read the article

  • UppercuT v1.0 and 1.1 – Linux (Mono), Multi-targeting, SemVer, Nitriq and Obfuscation, oh my!

    - by Robz / Fervent Coder
    Recently UppercuT (UC) quietly released version 1 (in August). I’m pretty happy with where we are, although I think it’s a few months later than I originally planned. I’m glad I held it back; it gave me some more time to think about some things a little more, and also the opportunity to receive a patch for running builds with UC on Linux. We also released v1.1 very recently (December).

    UppercuT v1

    Builds On Linux

    Perhaps the most significant change to UC going v1 is that it now supports builds on Linux using Mono! This is thanks mostly to Svein Ackenhausen for the patches and for working with me on getting it all working while not breaking the Windows builds! This means you can use Mono on Windows or Linux. Notice the shell files to execute with Linux that come as part of UC now.

    Multi-Targeting

    Multi-targeting is perhaps one of the hardest things to do in an automated build. At v1 this support is early, and possibly prone to some issues, but available. We believe in making everything stupid simple, so it’s as simple as adding a comma to the microsoft.framework property, i.e. “net-3.5, net-4.0”, to suddenly produce both framework builds. When you build, this is what you get (if you meet each framework’s requirements). At this time you have to let UC override the build location (as it does by default) or this will not work.

    Semantic Versioning

    By now many of you have been using UppercuT for a while and have watched how we have done versioning. Many of you who use git already know we put the revision hash in the informational/product version as the last octet. At v1, UppercuT has adopted the semantic versioning scheme. What does that mean? This is a short read, but a good one: http://SemVer.org

    SemVer (Semantic Versioning) is really using versioning for what it was meant for. You have three octets: Major.Minor.Patch, as in 1.1.0. UC will use three different versioning concepts: one for the assembly version, one for the file version, and one for the product version.

    All versions - The first three octets of the version are owned by SemVer. Major.Minor.Patch, i.e.: 1.1.0
    Assembly Version - The assembly version follows SemVer most closely. The last digit is always 0. Major.Minor.Patch.0, i.e.: 1.1.0.0
    File Version - The file version occupies the build number as the last digit. Major.Minor.Patch.Build, i.e.: 1.1.0.2650
    Product/Informational Version - The last octet of your product/informational version is the source control revision/hash. Major.Minor.Patch.RevisionOrHash, i.e. (TFS/SVN): 1.1.0.235, i.e. (Git/HG): 1.1.0.a45ace4346adef0

    SemVer is not on by default; the passive versioning scheme is still in effect. Notice that version.use_semanticversioning has been added to the UppercuT.config file (and version.patch in support of the third octet).

    Gems Support

    Gems support was added at v1. This will probably be deprecated at some point once there is an announced sunset for Nu v1. Application gems may keep it around, since there is no alternative for that yet (CoApp would be a possible replacement).

    Nitriq Support

    Nitriq is a code analysis tool like NDepend. It’s built by Mr. Jon von Gillern. It uses the LINQ query language, so you can use a familiar idiom when analyzing your code base. It’s a pretty awesome tool that has a free version for those looking to do code analysis! To use Nitriq with UC, you are going to need the console edition. To take advantage of Nitriq, you just need to update the location of Nitriq in the config. Then add the Nitriq project files at the root of your source. Please refer to the Nitriq documentation on how these are created.

    UppercuT v1.1

    Obfuscation

    One thing I started looking into was an easy way to obfuscate my code. I came across EazFuscator, which is both free and awesome. Plus the GUI for it is super simple to use. How do you make obfuscation even easier? Make it a convention and a configurable property in the UC config file! And the code gets obfuscated!

    Closing

    Definitely get out and look at the new release. It contains lots of chocolaty (sp?) goodness. And remember, the upgrade path is almost as simple as drag and drop!
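    For reference, the configuration knobs mentioned above might look roughly like this in UppercuT.config. This is only a sketch: the property names (microsoft.framework, version.use_semanticversioning, version.patch) come from the post, but the NAnt-style attribute layout is assumed, so check the file UC generates for the real syntax.

        <!-- Sketch only: property names are from the post; the exact
             NAnt-style layout of UppercuT.config is assumed here. -->
        <property name="microsoft.framework" value="net-3.5, net-4.0" overwrite="false" />
        <property name="version.use_semanticversioning" value="true" overwrite="false" />
        <property name="version.patch" value="0" overwrite="false" />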

    Read the article

  • LINQ and ArcObjects

    - by Marko Apfel
    Motivation

    LINQ (language integrated query) has been a component of the Microsoft .NET Framework since version 3.5. It allows SQL-like queries against various data sources such as SQL, XML, etc. Like SQL, LINQ provides a declarative notation for problem solving – i.e. you don’t need to describe in detail how a task is to be solved; you describe what is to be solved at all. This frees the developer from error-prone iterator constructs. Ideally, of course, you would access features this way. Then this construct is conceivable:

        var largeFeatures =
            from feature in features
            where (feature.GetValue("SHAPE_Area").ToDouble() > 3000)
            select feature;

    or its equivalent as a lambda expression:

        var largeFeatures =
            features.Where(feature => (feature.GetValue("SHAPE_Area").ToDouble() > 3000));

    This requires an appropriate provider, which manages the corresponding iterator logic. This is easier than you might think at first sight – you only have to deliver the desired entities as IEnumerable<IFeature>. LINQ automatically establishes a state machine in the background whose execution is delayed (deferred execution) – only when you really request entities (foreach, Count(), ToList(), ...) does the processing take place, even though the query was created at a completely different place. Especially with multiple iterations through the entities, in your first debugging sessions you will rub your eyes when the execution pointer magically jumps back into the iterator logic.

    Realization

    A very concise way of constructing IEnumerable<IFeature> is to run through an IFeatureCursor. You return each feature via yield. For easier usage I have put the logic in an extension method GetFeatures() for IFeatureClass:

        public static IEnumerable<IFeature> GetFeatures(this IFeatureClass featureClass,
            IQueryFilter queryFilter, RecyclingPolicy policy)
        {
            IFeatureCursor featureCursor =
                featureClass.Search(queryFilter, RecyclingPolicy.Recycle == policy);
            IFeature feature;
            while (null != (feature = featureCursor.NextFeature()))
            {
                yield return feature;
            }
            // this is skipped in unit tests with cursor-mock
            if (Marshal.IsComObject(featureCursor))
            {
                Marshal.ReleaseComObject(featureCursor);
            }
        }

    So you can now easily generate the IEnumerable<IFeature> (passing null as the filter to fetch all features):

        IEnumerable<IFeature> features =
            _featureClass.GetFeatures(null, RecyclingPolicy.DoNotRecycle);

    You have to be careful with the recycling cursor. After a deferred execution in the same context it is not a good idea to re-iterate over the features. In this case only the content of the last (recycled) feature is provided, and all the features are the same in the second pass. Therefore, this expression would be critical:

        largeFeatures.ToList().ForEach(feature => Debug.WriteLine(feature.OID));

    because ToList() iterates once through the list, and so the cursor has already been moved through the features. Therefore the extension method ForEach() always delivers the same feature. In such situations, you must not use a recycling cursor. Repeated execution of ForEach() is not a problem, because each time the state machine is re-instantiated and thus the cursor runs again – that’s the magic already mentioned above.

    Perspective

    Now you can also go one step further and realize your own implementation of the interface IEnumerable<IFeature>. This requires that only the method and property to access the enumerator have to be programmed. In the enumerator itself, in the Reset() method, you organize the re-execution of the search. This can be achieved with an appropriate delegate in the constructor:

        new FeatureEnumerator<IFeatureClass>(_featureClass,
            featureClass => featureClass.Search(_filter, isRecyclingCursor));

    which is called in Reset():

        public void Reset()
        {
            _featureCursor = _resetCursor(_t);
        }

    In this manner, enumerators for completely different scenarios can be implemented and used on the client side in exactly the same way as described above. Thus cursors, selection sets, etc. merge into a single abstraction, and the reusability of code increases immensely. On top of that, in automated unit tests an IEnumerable can be mocked very easily – a major step towards better software quality.

    Conclusion

    Nevertheless, caution should be exercised with these constructs in performance-relevant queries. Managing a state machine in the background creates a lot of overhead. The processing costs additional time – about 20 to 100 percent. In addition, working without a recycling cursor can quickly become a performance gap. However, declarative LINQ code is much more elegant, flawless, and easy to maintain than manually iterating, comparing, and building a list of results. In my experience the code size is reduced by an average of 75 to 90 percent! So I like to wait a few milliseconds longer. As so often, it has to be balanced between maintainability and performance – and for me, maintainability is gaining in priority. In times of multi-core processors, the processing time of most business processes is not dominated by code execution anyway, but by waiting for user input.

    Demo source code

    The source code for this prototype, with several unit tests, can be downloaded here: https://github.com/esride-apf/Linq2ArcObjects.
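    Here is a minimal sketch of such an enumerator/enumerable pair. The constructor delegate and the Reset() body are from the post; the remaining member names and the FeatureEnumerable wrapper are an assumed skeleton:

        // Sketch: the constructor delegate and Reset() are from the post;
        // everything else is an assumed skeleton.
        using System;
        using System.Collections;
        using System.Collections.Generic;
        using ESRI.ArcGIS.Geodatabase; // IFeature, IFeatureClass, IFeatureCursor

        public class FeatureEnumerator<T> : IEnumerator<IFeature>
        {
            private readonly T _t;
            private readonly Func<T, IFeatureCursor> _resetCursor;
            private IFeatureCursor _featureCursor;

            public FeatureEnumerator(T t, Func<T, IFeatureCursor> resetCursor)
            {
                _t = t;
                _resetCursor = resetCursor;
                Reset();
            }

            // Re-executes the search, as described above.
            public void Reset()
            {
                _featureCursor = _resetCursor(_t);
            }

            public bool MoveNext()
            {
                Current = _featureCursor.NextFeature();
                return Current != null;
            }

            public IFeature Current { get; private set; }
            object IEnumerator.Current { get { return Current; } }
            public void Dispose() { }
        }

        public class FeatureEnumerable : IEnumerable<IFeature>
        {
            private readonly IFeatureClass _featureClass;
            private readonly Func<IFeatureClass, IFeatureCursor> _search;

            public FeatureEnumerable(IFeatureClass featureClass,
                                     Func<IFeatureClass, IFeatureCursor> search)
            {
                _featureClass = featureClass;
                _search = search;
            }

            public IEnumerator<IFeature> GetEnumerator()
            {
                return new FeatureEnumerator<IFeatureClass>(_featureClass, _search);
            }

            IEnumerator IEnumerable.GetEnumerator()
            {
                return GetEnumerator();
            }
        }

    With that in place, new FeatureEnumerable(_featureClass, fc => fc.Search(_filter, isRecyclingCursor)) behaves on the client side exactly like GetFeatures() above.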

    Read the article

  • Oracle 11g No.2 - v$database.CURRENT_SCN

    - by Todd Bao
    Consider the behaviour of the following statement; on 11.2.0.3.0 it does not do what you might expect:

        select current_scn from v$database
        union all
        select current_scn from v$database;

    On 11.2.0.3.0 the two rows can show two different SCNs, while on 11.2.0.1.0 both rows show the same SCN. The execution plan on 11.2.0.3.0 explains why: the optimizer factors the two X$KCCDI2 branches (V$DATABASE is based on X$KCCDI and X$KCCDI2) into a single VIEW (note the join-factorization view VW_JF_SET$6E0AEE5B) and then joins X$KCCDI – the fixed table that supplies CURRENT_SCN via its dicur_scn column – against it, so X$KCCDI is probed once per output row:

        ----------------------------------------------------
        | Id  | Operation            | Name               |
        ----------------------------------------------------
        |   0 | SELECT STATEMENT     |                    |
        |   1 |  MERGE JOIN CARTESIAN|                    |
        |*  2 |   FIXED TABLE FULL   | X$KCCDI            |
        |   3 |   BUFFER SORT        |                    |
        |   4 |    VIEW              | VW_JF_SET$6E0AEE5B |
        |   5 |     UNION-ALL        |                    |
        |   6 |      FIXED TABLE FULL| X$KCCDI2           |
        |   7 |      FIXED TABLE FULL| X$KCCDI2           |
        ----------------------------------------------------

    a. With more UNION ALL branches, the X$KCCDI2 accesses are all factored into the VIEW and X$KCCDI is joined against it, so the rows no longer all carry the same SCN:

        SYS@fmw//Scripts> run
          1  select current_scn from v$database
          2  union all select current_scn from v$database
          3  union all select current_scn from v$database
          4* union all select current_scn from v$database

        CURRENT_SCN
        -----------
            5074384
            5074385
            5074385
            5074385

        4 rows selected.

    b. The same effect shows up when other fixed views are involved:

        SYS@fmw//Scripts> run
          1  select current_scn,status from v$database,v$instance
          2  union all
          3* select current_scn,status from v$database,v$instance

        CURRENT_SCN + STATUS
        ----------- + ------------------------
            5075463 + OPEN
            5075464 + OPEN

        2 rows selected.

    c. A plain self-join shows it as well, so the behaviour is not limited to UNION ALL:

        SYS@fmw//Scripts> run
          1* select a.current_scn,b.current_scn from v$database a,v$database b

        CURRENT_SCN + CURRENT_SCN
        ----------- + -----------
            5078328 +     5078329

        1 row selected.

    d. Finally, querying the underlying fixed table X$KCCDI directly (the X$ tables sit beneath the V$ views) shows the same behaviour, so this is a property of X$KCCDI itself rather than of the V$DATABASE view:

        SYS@fmw//Scripts> run
          1  select dicur_scn from x$kccdi
          2* union all select dicur_scn from x$kccdi

        DICUR_SCN
        --------------------------------
        5082183
        5082184

        2 rows selected.

        SYS@fmw//Scripts> run
          1* select a.dicur_scn,b.dicur_scn from x$kccdi a,x$kccdi b

        DICUR_SCN                        + DICUR_SCN
        -------------------------------- + --------------------------------
        5082913                          + 5082914

        1 row selected.

    In short: every separate probe of X$KCCDI fetches the SCN anew, so within a single statement V$DATABASE.CURRENT_SCN can behave more like a "next SCN" than one consistent value. Note: this demo was run on 11.2.0.3.

    Read the article

  • Get Local IP-Address using Boost.Asio

    - by MOnsDaR
    Hey, I'm currently searching for a portable way of getting the local IP addresses. Because I'm using Boost anyway, I thought it would be a good idea to use Boost.Asio for this task. There are several examples on the net which should do the trick. Examples:

    - Official Boost.Asio Documentation
    - Some Asian Page

    I tried both code samples with just slight modifications. The code from the Boost documentation was changed to resolve not "www.boost.org" but "localhost" or my hostname instead. For getting the hostname I used boost::asio::ip::host_name() or typed it directly as a string. Additionally I wrote my own code, which was a merge of the above examples and my (little) knowledge I gathered from the Boost documentation and other examples.

    All the sources worked, but they just returned the following IP: 127.0.1.1 (that's not a typo, it's .1.1 at the end). I compiled and ran the code on Ubuntu 9.10 with GCC 4.4.1. A colleague tried the same code on his machine and got 127.0.0.2 (also not a typo...). He compiled and ran on Suse 11.0 with GCC 4.4.1 (I'm not 100% sure). I don't know if it is possible to change the localhost (127.0.0.1), but I know that neither I nor my colleague did it. ifconfig says loopback uses 127.0.0.1. ifconfig also finds the public IP I am searching for (141.200.182.30 in my case, subnet is 255.255.0.0).

    So is this a Linux issue and the code is not as portable as I thought? Do I have to change something else, or is Boost.Asio not working as a solution for my problem at all? I know there are many questions about similar topics on Stackoverflow and other pages, but I cannot find information which is useful in my case. If you have useful links, it would be nice if you could point me to them. Thanks in advance, MOnsDaR

    PS: Here is the modified code I used from the Boost documentation (wrapped in main() and with the missing iostream include added):

        #include <boost/asio.hpp>
        #include <iostream> // needed for std::cout

        using boost::asio::ip::tcp;

        int main()
        {
            boost::asio::io_service io_service;
            tcp::resolver resolver(io_service);
            tcp::resolver::query query(boost::asio::ip::host_name(), "");
            tcp::resolver::iterator iter = resolver.resolve(query);
            tcp::resolver::iterator end; // End marker.
            while (iter != end)
            {
                tcp::endpoint ep = *iter++;
                std::cout << ep << std::endl;
            }
            return 0;
        }
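    One workaround that is often suggested (not part of the original question): instead of resolving the hostname – which on many Linux distributions maps to a 127.x entry in /etc/hosts, explaining the 127.0.1.1 result – open a UDP socket towards any routable address and read back the local endpoint the OS picked. A sketch, using a reserved documentation address as the target:

        #include <boost/asio.hpp>
        #include <iostream>

        int main()
        {
            boost::asio::io_service io_service;
            boost::asio::ip::udp::socket socket(io_service);
            // connect() on a UDP socket transmits nothing; it only selects a
            // route, so the target merely has to be a routable address
            // (192.0.2.1 is reserved for documentation).
            socket.connect(boost::asio::ip::udp::endpoint(
                boost::asio::ip::address::from_string("192.0.2.1"), 53));
            std::cout << socket.local_endpoint().address().to_string() << std::endl;
            return 0;
        }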

    Read the article

< Previous Page | 183 184 185 186 187 188 189 190 191 192 193 194  | Next Page >