Search Results

Search found 978 results on 40 pages for 'nobody'.

Page 38 of 40

  • Ubuntu error: no candidate ver

    - by codedude
    Ok...so for some reason by Ubuntu 10.04 partition won't startup anymore. I'm not exactly sure what I did. I have the feeling that I accidentally removed some important packages. However, when I startup in Recovery Mode and select "dpkg: repair broken packages" I get this output: rm: cannot remove /var/lib/apt/lists/partial/*': No such file or directory rm: cannot remove/var/cache/apt/archives/partial/*': No such file or directory Reading Cache Reading Package lists: Done Reading State Information: Done Reading State Information: Done Reading State Information: Done No candidate ver: libisc44 No candidate ver: readahead No candidate ver: libdirectfb-1.0-0 No candidate ver: upstart-logd No candidate ver: libparted1.8-10 No candidate ver: libflickrnet2.1.5-cil No candidate ver: libicu38 No candidate ver: fast-user-switch-applet No candidate ver: linux-restricted-modules-2.6.28-11-generic No candidate ver: libnm-util0 No candidate ver: libwebkit-1.0-1 No candidate ver: libopal3.6.1 No candidate ver: libbind9-40 No candidate ver: linux-restricted-modules-common No candidate ver: upstart-compat-sysv No candidate ver: linux-restricted-modules-2.6.28-13-generic No candidate ver: chromium-browser No candidate ver: libisc45 No candidate ver: belocs-locales-bin No candidate ver: libxcb-xlib0 No candidate ver: libpoppler4 No candidate ver: libpulsecore9 No candidate ver: libpulsecore5 No candidate ver: linux-image-2.6.28-15-generic No candidate ver: libntfs-3g49 No candidate ver: libisccfg40 No candidate ver: libavahi-core5 No candidate ver: linux-restricted-modules-2.6.27-7-generic No candidate ver: libparted1.8-9 No candidate ver: libmagickwand1 No candidate ver: libwvstreams4.4-base No candidate ver: linux-image-2.6.28-14-generic No candidate ver: libraw1394-8 No candidate ver: libdatrie0 No candidate ver: libboost-program-options1.35.0 No candidate ver: libgnomekbdui3 No candidate ver: libsgutils1 No candidate ver: libass1 No candidate ver: linux-restricted-modules-2.6.28-15-generic No candidate ver: libindicate1 No candidate ver: libgnomekbd3 No candidate ver: system-services No candidate ver: linux-image-2.6.28-13-generic No candidate ver: libmagick10 No candidate ver: libcolamd-3.2.0 No candidate ver: desktop-effects-kde No candidate ver: picasa No candidate ver: libgnome-desktop-2-7 No candidate ver: libntfs-3g28 No candidate ver: libxklavier12 No candidate ver: libgpod3 No candidate ver: libmagickcore1 No candidate ver: libffado0 No candidate ver: libzephyr3 No candidate ver: libuniconf4.4 No candidate ver: libmbca0 No candidate ver: liblwres40 No candidate ver: hotkey-setup No candidate ver: libkrb53 No candidate ver: ttf-bitstream-vera No candidate ver: libee12-2 No candidate ver: libopal-2.2 No candidate ver: songbird No candidate ver: plib1.8.4c2 No candidate ver: linux-image-2.6.27-7-generic No candidate ver: libpoppler3 No candidate ver: google-chrome-unstable No candidate ver: linux-image-2.6.28-11-generic No candidate ver: linux-restricted-modules-2.6.28-14-generic No candidate ver: simgear1.0.0 No candidate ver: libpoppler-glib3 No candidate ver: mono-common No candidate ver: libx264-65 No candidate ver: libwvstreams4.4-extras No candidate ver: libdns43 No candidate ver: libnm-glib0 No candidate ver: libpt2.6.1 No candidate ver: libisccc40 No candidate ver: libdns45 No candidate ver: libisc44 No candidate ver: readahead No candidate ver: libdirectfb-1.0-0 No candidate ver: upstart-logd No candidate ver: libparted1.8-10 No candidate ver: libflickrnet2.1.5-cil No candidate ver: 
libicu38 No candidate ver: fast-user-switch-applet No candidate ver: linux-restricted-modules-2.6.28-11-generic No candidate ver: libnm-util0 No candidate ver: libwebkit-1.0-1 No candidate ver: libopal3.6.1 No candidate ver: libbind9-40 No candidate ver: linux-restricted-modules-common No candidate ver: upstart-compat-sysv No candidate ver: linux-restricted-modules-2.6.28-13-generic No candidate ver: chromium-browser No candidate ver: libisc45 No candidate ver: belocs-locales-bin No candidate ver: libxcb-xlib0 No candidate ver: libpoppler4 No candidate ver: libpulsecore9 No candidate ver: lipulsecore5 No candidate ver: linux-image-2.6.28-15-generic No candidate ver: libntfs-3g49 No candidate ver: libisccfg40 No candidate ver: libavahi-core5 No candidate ver: linux-restricted-modules-2.6.27-7-generic No candidate ver: libparted1.8-9 No candidate ver: libmagickwand1 No candidate ver: linux-image-2.6.28-14-generic No candidate ver: libraw1394-8 No candidate ver: libdatrie0 No candidate ver: libboost-program-options1.35.0 No candidate ver: libgnomekbdui3 No candidate ver: libsgutils1 No candidate ver: libass1 No candidate ver: linux-restricted-modules-2.6.28-15-generic No candidate ver: libindicate1 No candidate ver: libgnomekbd3 No candidate ver: system-services No candidate ver: linux-image-2.6.28-13-generic No candidate ver: libmagick10 No candidate ver: libcolamd-3.2.0 No candidate ver: desktop-effects-kde No candidate ver: picasa No candidate ver: libgnome-desktop-2-7 No candidate ver: libntfs-3g28 No candidate ver: libxklavier12 No candidate ver: libgpod3 No candidate ver: libmagickcore1 No candidate ver: libffado0 No candidate ver: libzephyr3 No candidate ver: libuniconf4.4 No candidate ver: libmbca0 No candidate ver: liblwres40 No candidate ver: hotkey-setup No candidate ver: libkrb53 No candidate ver: ttf-bitstream-vera No candidate ver: libeel2-2 No candidate ver: libopal-2.2 No candidate ver: songbird No candidate ver: plib1.8.4c2 No candidate ver: linux-image-2.6.27-7-generic No candidate ver: libpoppler3 No candidate ver: google-chrome-unstable No candidate ver: linux-image-2.6.28-11-generic No candidate ver: linux-restricted-modules-2.6.28-14-generic No candidate ver: simgear1.0.0 No candidate ver: libpoppler-glib3 No candidate ver: mono-common No candidate ver: libx264-65 No candidate ver: libwvstreams4.4-extras No candidate ver: libdns43 No candidate ver: libnm-glib0 No candidate ver: libpt2.6.1 No candidate ver: libisccc40 No candidate ver: libdns45 Your system is up-to-date There are no upgrades available for your system. The upgrade will now be canceled. Do you want to start the upgrdae? Continue [yN] Details [d] (Sorry that is so long...) I'm assuming I'm missing each of those packages. However, when I try to install them, for example: sudo apt-get install libisc44 I get some response that this package is missing and/or it may be part of another package. I'm not exactly sure what I should do. I don't want to reinstall ubuntu, since I have it dual booted with Windows Vista and I don't want to risk anything on the Windows partition. What are your thoughts? EDIT: I've posted this on Ubuntu Forums and nobody ever responded so...yah
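    A typical first step for this situation (a minimal sketch only, assuming the APT state directories were simply deleted rather than the filesystem being damaged) is to recreate the missing partial directories, refresh the package lists, and let apt/dpkg finish anything half-configured, from a recovery-mode root shell:

        mkdir -p /var/lib/apt/lists/partial /var/cache/apt/archives/partial
        apt-get update
        apt-get -f install      # try to repair broken/missing dependencies
        dpkg --configure -a     # finish any packages left unconfigured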

    Read the article

  • Nginx reverse proxy IP issue

    - by Tiffany Walker
    For some reason Apache is still seeing my SERVERS ip. Is this an nginx problem? /etc/nginx.conf user nobody; # no need for more workers in the proxy mode worker_processes 4; error_log /var/log/nginx/error.log info; worker_rlimit_nofile 20480; events { worker_connections 5120; # increase for busier servers use epoll; # you should use epoll here for Linux kernels 2.6.x } http { server_name_in_redirect off; server_names_hash_max_size 10240; server_names_hash_bucket_size 1024; include mime.types; default_type application/octet-stream; server_tokens off; disable_symlinks if_not_owner; sendfile on; tcp_nopush on; tcp_nodelay on; keepalive_timeout 5; gzip on; gzip_vary on; gzip_disable "MSIE [1-6]\."; gzip_proxied any; gzip_http_version 1.1; gzip_min_length 1000; gzip_comp_level 6; gzip_buffers 16 8k; # You can remove image/png image/x-icon image/gif image/jpeg if you have slow CPU gzip_types text/plain text/xml text/css application/x-javascript application/xml image/png image/x-icon image/gif image/jpeg application/xml+rss text/javascript application/atom+xml; ignore_invalid_headers on; client_header_timeout 3m; client_body_timeout 3m; send_timeout 3m; reset_timedout_connection on; connection_pool_size 256; client_header_buffer_size 256k; large_client_header_buffers 4 256k; client_max_body_size 200M; client_body_buffer_size 128k; request_pool_size 32k; output_buffers 4 32k; postpone_output 1460; proxy_temp_path /tmp/nginx_proxy/; client_body_in_file_only on; log_format bytes_log "$msec $bytes_sent ."; include "/etc/nginx/vhosts/*"; } proxy.inc proxy_connect_timeout 59s; proxy_send_timeout 600; proxy_read_timeout 600; proxy_buffer_size 64k; proxy_buffers 16 32k; proxy_busy_buffers_size 64k; proxy_temp_file_write_size 64k; proxy_pass_header Set-Cookie; proxy_redirect off; proxy_hide_header Vary; proxy_set_header Accept-Encoding ''; proxy_ignore_headers Cache-Control Expires; proxy_set_header Referer $http_referer; proxy_set_header Host $host; proxy_set_header Cookie $http_cookie; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-Host $host; proxy_set_header X-Forwarded-Server $host; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; vhost file: server { error_log /var/log/nginx/vhost-error_log warn; listen 63.6.1.12:80; server_name photo-rolldomain.com www.domain.com; access_log /usr/local/apache/domlogs/domain.com-bytes_log bytes_log; access_log /usr/local/apache/domlogs/domain.com combined; root /home/mtech/public_html; location / { location ~.*\.(3gp|gif|jpg|jpeg|png|ico|wmv|avi|asf|asx|mpg|mpeg|mp4|pls|mp3|mid|wav|swf|flv|html|htm|txt|js|css|exe|zip|tar|rar|gz|tgz|bz2|uha|7z|doc|docx|xls|xlsx|pdf|iso)$ { expires 7d; try_files $uri @backend; } error_page 405 = @backend; add_header X-Cache "HIT from Backend"; proxy_pass http://63.6.1.12:8081; include proxy.inc; } location @backend { internal; proxy_pass http://63.6.1.12:8081; include proxy.inc; } location ~ .*\.(php|jsp|cgi|pl|py)?$ { proxy_pass http://63.6.1.12:8081; include proxy.inc; } location ~ /\.ht { deny all; } }
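    The proxy.inc above already forwards the client address in X-Real-IP and X-Forwarded-For; Apache will keep logging the server's own IP until the backend is told to trust those headers. A minimal sketch of the Apache side (the trusted proxy address 63.6.1.12 is taken from the vhost above; adjust for your Apache version) using mod_remoteip on Apache 2.4:

        LoadModule remoteip_module modules/mod_remoteip.so
        RemoteIPHeader X-Forwarded-For
        RemoteIPInternalProxy 63.6.1.12

    On Apache 2.2 the commonly used equivalent is the third-party mod_rpaf module with its RPAFproxy_ips / RPAFheader directives.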

    Read the article

  • CSC folder data access AND roaming profiles issues (Vista with Server 2003, then 2008)

    - by Alex Jones
    I'm a junior sysadmin for an IT contractor that helps small, local government agencies, like little towns and the like. One of our clients, a public library with ~ 50 staff users, was recently migrated from Server 2003 Standard to Server 2008 R2 Standard in a very short timeframe; our senior employee, the only network engineer, had suddenly put in his two weeks notice, so management pushed him to do this project before quitting. A bit hasty on management's part? Perhaps. Could we do anything about that? Nope. Do I have to fix this all by myself? Pretty much. The network is set up like this: a) 50ish staff workstations, all running Vista Business SP2. All staff use MS Outlook, which uses RPC-over-HTTPS ("Outlook Anywhere") for cached Exchange access to an offsite location. b) One new (virtualized) Server 2008 R2 Standard instance, running atop a Server 2008 R2 host via Hyper-V. The VM is the domain's DC, and also the site's one and only file server. Let's call that VM "NEWBOX". c) One old physical Server 2003 Standard server, running the same roles. Let's call it "OLDBOX". It's still on the network and accessible, but it's been demoted, and its shares have been disabled. No data has been deleted. c) Gigabit Ethernet everywhere. The organization's only has one domain, and it did not change during the migration. d) Most users were set up for a combo of redirected folders + offline files, but some older employees who had been with the organization a long time are still on roaming profiles. To sum up: the servers in question handle user accounts and files, nothing else (eg, no TS, no mail, no IIS, etc.) I have two major problems I'm hoping you can help me with: 1) Even though all domain users have had their redirected folders moved to the new server, and loggin in to their workstations and testing confirms that the Documents/Music/Whatever folders point to the new paths, it appears some users (not laptops or anything either!) had been working offline from OLDBOX for a long time, and nobody realized it. Here's the ugly implication: a bunch of their data now lives only in their CSC folders, because they can't access the share on OLDBOX and sync with it finally. How do I get this data out of those CSC folders, and onto NEWBOX? 2) What's the best way to migrate roaming profile users to non-roaming ones, without losing vital data like documents, any lingering PSTs, etc? Things I've thought about trying: For problem 1: a) Reenable the documents share on OLDBOX, force an Offline Files sync for ALL domain users, then copy OLDBOX's share's data to the equivalent share on NEWBOX. Reinitialize the Offline Files cache for every user. With this: How do I safely force a domain-wide Offline Files sync? Could I lose data by reenabling the share on OLDBOX and forcing the sync? Afterwards, how can I reinitialize the Offline Files cache for every user, without doing it manually, workstation by workstation? b) Determine which users have unsynced changes to OLDBOX (again, how?), search each user's CSC folder domain-wide via workstation admin shares, and grab the unsynched data. Reinitialize the Offline Files cache for every user. With this: How can I detect which users have unsynched changes with a script? How can I search each user's CSC folder, when the ownership and permissions set for CSC folders are so restrictive? Again, afterwards, how can I reinitialize the Offline Files cache for every user, without doing it manually, workstation by workstation? 
c) Manually visit each workstation, copy the contents of the CSC folder, and manually copy that data onto NEWBOX. Reinitialize the Offline Files cache for every user. With this: Again, how do I 'break into' the CSC folder and get to its data? As an experiment, I took one workstation's HD offsite, imaged it for safety, and then tried the following with one of our shop PCs, after attaching the drive: grant myself full control of the folder (failed), grant myself ownership of the folder (failed), run chkdsk on the whole drive to make sure nothing's messed up (all OK), try to take full control of the entire drive (failed), try to take ownership of the entire drive (failed) MS KB articles and Googling around suggests there's a utility called CSCCMD that's meant for this exact scenario...but it looks like it's available for XP, not Vista, no? Again, afterwards, how can I reinitialize the Offline Files cache for every user, without doing it manually, workstation by workstation? For problem 2: a) Figure out which users are on roaming profiles, and where their profiles 'live' on the server. Create new folders for them in the redirected folders repository, migrate existing data, and disable the roaming. With this: Finding out who's roaming isn't hard. But what's the best way to disable the roaming itself? In AD Users and Computers, or on each user's workstation? Doing it centrally on the server seems more efficient; that said, all of the KB research I've done turns up articles on how to go from local to roaming, not the other way around, so I don't have good documentation on this. In closing: we have good backups of NEWBOX and OLDBOX, but not of the workstations themselves, so anything drastic on the client side would need imaging and testing for safety. Thanks for reading along this far! Hopefully you can help me dig us out of this mess.
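    On the "break into the CSC folder" part: plain takeown/icacls as an administrator often fails exactly as described. One commonly cited workaround (a sketch only; the rescue share name is made up, and this is worth trying on the imaged copy first) is to run the same commands as the SYSTEM account, e.g. via Sysinternals psexec, and then copy the cache out:

        psexec -s cmd.exe
        takeown /f C:\Windows\CSC /r /d y
        icacls C:\Windows\CSC /grant Administrators:F /t
        robocopy C:\Windows\CSC \\NEWBOX\csc-rescue /e /copy:DAT /r:1 /w:1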

    Read the article

  • OpenVPN Configuration - Windows 7 client & debian server

    - by Guillaume
    I recently formatted my Windows 7 computer and lost my client's config files for OpenVPN. I recovered the certificates and default config that were left on the server but I haven't managed to make the whole thing work again. I assume the server's config and routing table are OK because it was working before (although quite some time ago). Would any of you experts be able to help? server.conf # Serveur TCP/666 mode server proto udp port 666 dev tun # Cles et certificats ca ca.crt cert server.crt key server.key dh dh1024.pem tls-auth ta.key 0 cipher AES-256-CBC # Reseau server 10.8.0.0 255.255.255.0 #push "redirect-gateway def1 bypass-dhcp" push "dhcp-option DNS 208.67.222.222" push "dhcp-option DNS 208.67.220.220" push "redirect-gateway def1" keepalive 10 120 # Securite user nobody group nogroup chroot /etc/openvpn/jail persist-key persist-tun comp-lzo # Log verb 3 mute 20 status openvpn-status.log log-append /var/log/openvpn.log client.conf # Client client dev tun proto udp remote *my server's ip address*:666 cipher AES-256-CBC # Cles ca ca.crt cert client1.crt key client1.key tls-auth ta.key 1 # Securite nobind persist-key persist-tun comp-lzo verb 3 Routing table on debian server when OpenVPN server is running: Destination Gateway Genmask Indic Metric Ref Use Iface 10.8.0.2 * 255.255.255.255 UH 0 0 0 tun0 10.8.0.0 10.8.0.2 255.255.255.0 UG 0 0 0 tun0 my server's ip * 255.255.255.0 U 0 0 0 eth0 default 72815.trg.dedic 0.0.0.0 UG 0 0 0 eth0 Routing table on Windows 7 client (OpenVPN not working) =========================================================================== Interface List 19...00 f0 8a 1b 6e 5c ......TAP-Win32 Adapter V9 12...90 2e 34 33 84 7b ......Atheros AR8151 PCI-E Gigabit Ethernet Controller ( NDIS 6.20) 1...........................Software Loopback Interface 1 12...00 00 00 00 00 00 00 e0 Microsoft ISATAP Adapter 13...00 00 00 00 00 00 00 e0 Teredo Tunneling Pseudo-Interface 16...00 00 00 00 00 00 00 e0 Microsoft ISATAP Adapter #2 =========================================================================== IPv4 Route Table =========================================================================== Active Routes: Network Destination Netmask Gateway Interface Metric 0.0.0.0 0.0.0.0 192.168.1.1 192.168.1.11 20 127.0.0.0 255.0.0.0 On-link 127.0.0.1 306 127.0.0.1 255.255.255.255 On-link 127.0.0.1 306 127.255.255.255 255.255.255.255 On-link 127.0.0.1 306 192.168.1.0 255.255.255.0 On-link 192.168.1.11 276 192.168.1.11 255.255.255.255 On-link 192.168.1.11 276 192.168.1.255 255.255.255.255 On-link 192.168.1.11 276 224.0.0.0 240.0.0.0 On-link 127.0.0.1 306 224.0.0.0 240.0.0.0 On-link 192.168.1.11 276 255.255.255.255 255.255.255.255 On-link 127.0.0.1 306 255.255.255.255 255.255.255.255 On-link 192.168.1.11 276 =========================================================================== Persistent Routes: None IPv6 Route Table =========================================================================== Active Routes: [...] =========================================================================== Persistent Routes: None And when the link is established between my client and the server: The server's routing table stays the same. 
The client's becomes: =========================================================================== Interface List 19...00 f0 8a 1b 6e 5c ......TAP-Win32 Adapter V9 12...90 2e 34 33 84 7b ......Atheros AR8151 PCI-E Gigabit Ethernet Controller ( NDIS 6.20) 1...........................Software Loopback Interface 1 12...00 00 00 00 00 00 00 e0 Microsoft ISATAP Adapter 13...00 00 00 00 00 00 00 e0 Teredo Tunneling Pseudo-Interface 16...00 00 00 00 00 00 00 e0 Microsoft ISATAP Adapter #2 =========================================================================== IPv4 Route Table =========================================================================== Active Routes: Network Destination Netmask Gateway Interface Metric 0.0.0.0 0.0.0.0 192.168.1.1 192.168.1.11 20 0.0.0.0 128.0.0.0 10.8.0.5 10.8.0.6 30 10.8.0.1 255.255.255.255 10.8.0.5 10.8.0.6 30 10.8.0.4 255.255.255.252 On-link 10.8.0.6 286 10.8.0.6 255.255.255.255 On-link 10.8.0.6 286 10.8.0.7 255.255.255.255 On-link 10.8.0.6 286 my server's ip 255.255.255.255 192.168.1.1 192.168.1.11 20 127.0.0.0 255.0.0.0 On-link 127.0.0.1 306 127.0.0.1 255.255.255.255 On-link 127.0.0.1 306 127.255.255.255 255.255.255.255 On-link 127.0.0.1 306 128.0.0.0 128.0.0.0 10.8.0.5 10.8.0.6 30 192.168.1.0 255.255.255.0 On-link 192.168.1.11 276 192.168.1.11 255.255.255.255 On-link 192.168.1.11 276 192.168.1.255 255.255.255.255 On-link 192.168.1.11 276 224.0.0.0 240.0.0.0 On-link 127.0.0.1 306 224.0.0.0 240.0.0.0 On-link 192.168.1.11 276 224.0.0.0 240.0.0.0 On-link 10.8.0.6 286 255.255.255.255 255.255.255.255 On-link 127.0.0.1 306 255.255.255.255 255.255.255.255 On-link 192.168.1.11 276 255.255.255.255 255.255.255.255 On-link 10.8.0.6 286 =========================================================================== Persistent Routes: None What's working: Server and client do connect to each other, SSL certificates are OK. The client gets an IP (10.8.0.6) from the server OpenVPN client is started as an administrator. But: I cannot ping the other one on either side. 'Gateway' value is empty on client's side (in the adapter's "status" window). Client has got no internet access when the link is up. Ideal configuration: I only want the client to be able to use the server's Internet access and access its resources (MySQL server in particular). I do not need or want the server to access the client's local network. The client needs to be able to access it's local network, although all Internet traffic should be redirected to the VPN link. I spent a considerable amount of time on this but it's still not working, any help would be much appreciated. Thanks :)
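    One thing to check given the symptoms above (client connects and gets 10.8.0.6 but has no Internet access): server.conf pushes redirect-gateway, yet nothing shown enables forwarding or NAT on the Debian side, so the redirected traffic has nowhere to go. A minimal sketch, assuming eth0 is the public interface as in the server routing table:

        # allow the kernel to forward between tun0 and eth0
        echo 1 > /proc/sys/net/ipv4/ip_forward     # or set net.ipv4.ip_forward=1 in /etc/sysctl.conf
        # masquerade the VPN subnet out of the public interface
        iptables -t nat -A POSTROUTING -s 10.8.0.0/24 -o eth0 -j MASQUERADE

    Failed pings towards the client can also simply be the Windows firewall on the TAP adapter, which is worth ruling out separately.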

    Read the article

  • Data Profiling without SSIS

    Strangely enough for a predominantly SSIS blog, this post is all about how to perform data profiling without using SSIS. Whilst the Data Profiling Task is a worthy addition, there are a couple of limitations I’ve encountered of late. The first is that it requires SQL Server 2008, and not everyone is there yet. The second is that it can only target SQL Server 2005 and above. What about older systems, which are the ones that we probably need to investigate the most, or other vendor databases such as Oracle? With these limitations in mind I did some searching to find a quick and easy alternative to help me perform some data profiling for a project I was working on recently. I only had SQL Server 2005 available, and anyway most of my target source systems were Oracle, and of course I had short timescales. I looked at several options. Some never got beyond the download stage, failing to install or simply not running, and others provided less than I could have produced myself by spending 2 minutes writing some basic SQL queries. In the end I settled on an open source product called DataCleaner. To quote from their website: DataCleaner is an Open Source application for profiling, validating and comparing data. These activities help you administer and monitor your data quality in order to ensure that your data is useful and applicable to your business situation. DataCleaner is the free alternative to software for master data management (MDM) methodologies, data warehousing (DW) projects, statistical research, preparation for extract-transform-load (ETL) activities and more. DataCleaner is developed in Java and licensed under LGPL. As quoted above it claims to support profiling, validating and comparing data, but I didn’t really get past the profiling functions, so won’t comment on the other two. The profiling, whilst not perfect, certainly saved some time compared to the limited alternatives. The ability to profile heterogeneous data sources is a big advantage over the SSIS option, and I found it overall quite easy to use and performance was good. I could see it struggling at times, but actually for what it does I was impressed. It had some data type niggles with Oracle, and some metrics seemed a little strange, although thankfully they were easy to augment with some SQL queries to ensure a consistent picture. The report export options didn’t do it for me, but copy and paste with a bit of Excel magic was sufficient. One initial point for me personally is that I have had limited exposure to things of the Java persuasion and whilst I normally get by fine, sometimes the simplest things can throw me. For example, installing a JDBC driver: why do I have to copy files to make it all work? Has nobody ever heard of an MSI? In case there are other people out there like me who have become totally indoctrinated with the Microsoft software paradigm, I’ve written a quick start guide that details every step required. Steps 1-5 are the key ones, the rest is really an excuse for some screenshots to show you the tool. Quick Start Guide Step 1 - Download Data Cleaner. Choose the Microsoft Windows zipped exe option; I chose the latest stable build, currently DataCleaner 1.5.3 (final). Extract the files to a suitable location. Step 2 - Download Java. If you try and run datacleaner.exe without Java it will warn you, and then open your default browser and take you to the Java download site. Follow the installation instructions from there, normally just click Download Java a couple of times and you’re done.
Step 3 - Download the Microsoft SQL Server JDBC Driver. You may have SQL Server installed, but you won’t have a JDBC driver. Version 3.0 is the latest as of April 2010. There is no real installer (we are in the Java world here); just run the exe you downloaded to extract the files. The default Unzip to folder is not much help, so try a fully qualified path such as C:\Program Files\Microsoft SQL Server JDBC Driver 3.0\ to ensure you can find the files afterwards. Step 4 - If you wish to use Windows Authentication to connect to your SQL Server then first we need to copy a file so that Data Cleaner can find it. Browse to the JDBC extract location from Step 3 and drill down to the file sqljdbc_auth.dll. You will have to choose the correct directory for your processor architecture, e.g. C:\Program Files\Microsoft SQL Server JDBC Driver 3.0\sqljdbc_3.0\enu\auth\x86\sqljdbc_auth.dll. Now copy this file to the Data Cleaner extract folder you chose in Step 1. An alternative method is to edit datacleaner.cmd in the Data Cleaner extract folder as detailed in this Data Cleaner wiki topic, but I find copying the file simpler. Step 5 – Now let's run Data Cleaner: just run datacleaner.exe from the extract folder you chose in Step 1. Step 6 – Complete or skip the registration screen, and ignore the task window for now. In the main window click Settings. Step 7 – In the Settings dialog, select the Database drivers tab, then click Register database driver and select the Local JAR file option. Step 8 – Browse to the JDBC driver extract location from Step 3 and drill down to select sqljdbc4.jar, e.g. C:\Program Files\Microsoft SQL Server JDBC Driver 3.0\sqljdbc_3.0\enu\sqljdbc4.jar. Step 9 – Select the Database driver class as com.microsoft.sqlserver.jdbc.SQLServerDriver, and then click the Test and Save database driver button. Step 10 - You should be back at the Settings dialog with the list of drivers now including SQL Server. Just click Save Settings to persist all your hard work. Step 11 – Now we can start to profile some data. In the main Data Cleaner window click New Task, and then Profile from the task window. Step 12 – In the Profile window click Open Database. Step 13 – Now choose the SQL Server connection string option. Selecting a connection string gives us a template like jdbc:sqlserver://<hostname>:1433;databaseName=<database>, but obviously it requires some details to be entered, for example jdbc:sqlserver://localhost:1433;databaseName=SQLBits. This will connect to the database called SQLBits on my local machine. The port may also have to be changed, for example when you have multiple instances of SQL Server running. If using SQL Server Authentication enter a username and password as required and then click Connect to database. You can use Windows Authentication; just add integratedSecurity=true to the end of your connection string, e.g. jdbc:sqlserver://localhost:1433;databaseName=SQLBits;integratedSecurity=true. If you didn’t complete Step 4 above you will need to do so now and restart Data Cleaner before it will work. Manually setting the connection string is fine, but creating a named connection makes more sense if you will be spending any length of time profiling a specific database. As highlighted in the left-hand screen-shot, at the bottom of the dialog it includes partial instructions on how to create named connections. In the folder shown, C:\Users\<Username>\.datacleaner\1.5.3, open the datacleaner-config.xml file in your editor of choice and add your own details.
You’ll see a sample connection in the file already, just add yours following the same pattern. e.g. <!-- Darren's Named Connections --> <bean class="dk.eobjects.datacleaner.gui.model.NamedConnection"> <property name="name" value="SQLBits Local Connection" /> <property name="driverClass" value="com.microsoft.sqlserver.jdbc.SQLServerDriver" /> <property name="connectionString" value="jdbc:sqlserver://localhost:1433;databaseName=SQLBits;integratedSecurity=true" /> <property name="tableTypes"> <list> <value>TABLE</value> <value>VIEW</value> </list> </property> </bean> Step 14 – Once back at the Profile window, you should now see your schemas, tables and/or views listed down the left hand side. Browse this tree and double-click a table to select it for profiling. You can then click Add profile, and choose some profiling options, before finally clicking Run profiling. You can see below a sample output for three of the most common profiles, click the image for full size.   I hope this has given you a taster for DataCleaner, and should help you get up and running pretty quickly.
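    As a footnote, the "basic SQL queries" mentioned at the start are still worth keeping to hand for cross-checking DataCleaner's metrics. A typical hand-rolled profile of a single column might look like this (table and column names are placeholders):

        -- rough profile of one column: volume, nulls, distinct values, range and length
        SELECT  COUNT(*)                                            AS TotalRows,
                SUM(CASE WHEN SomeColumn IS NULL THEN 1 ELSE 0 END) AS NullRows,
                COUNT(DISTINCT SomeColumn)                          AS DistinctValues,
                MIN(SomeColumn)                                     AS MinValue,
                MAX(SomeColumn)                                     AS MaxValue,
                MAX(LEN(SomeColumn))                                AS MaxLength
        FROM    dbo.SomeTable;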

    Read the article

  • Streaming a webcam from Silverlight 4 (Beta)

    - by Ken Smith
    The new webcam stuff in Silverlight 4 is darned cool. By exposing it as a brush, it allows scenarios that are way beyond anything that Flash has. At the same time, accessing the webcam locally seems like it's only half the story. Nobody buys a webcam so they can take pictures of themselves and make funny faces out of them. They buy a webcam because they want other people to see the resulting video stream, i.e., they want to stream that video out to the Internet, à la Skype or any of the dozens of other video chat sites/applications. And so far, I haven't figured out how to do that with Silverlight. It turns out that it's pretty simple to get a hold of the raw (Format32bppArgb formatted) bytestream, as demonstrated here. But unless we want to transmit that raw bytestream to a server (which would chew up way too much bandwidth), we need to encode that in some fashion. And that's more complicated. MS has implemented several codecs in Silverlight, but so far as I can tell, they're all focused on decoding a video stream, not encoding it in the first place. And that's apart from the fact that I can't figure out how to get direct access to, say, the H.264 codec in the first place. There are a ton of open-source codecs (for instance, in the ffmpeg project here), but they're all written in C, and they don't look easy to port to C#. Unless translating 10000+ lines of code that look like this is your idea of fun :-) const int b_xy= h->mb2b_xy[left_xy[i]] + 3; const int b8_xy= h->mb2b8_xy[left_xy[i]] + 1; *(uint32_t*)h->mv_cache[list][cache_idx ]= *(uint32_t*)s->current_picture.motion_val[list][b_xy + h->b_stride*left_block[0+i*2]]; *(uint32_t*)h->mv_cache[list][cache_idx+8]= *(uint32_t*)s->current_picture.motion_val[list][b_xy + h->b_stride*left_block[1+i*2]]; h->ref_cache[list][cache_idx ]= s->current_picture.ref_index[list][b8_xy + h->b8_stride*(left_block[0+i*2]>>1)]; h->ref_cache[list][cache_idx+8]= s->current_picture.ref_index[list][b8_xy + h->b8_stride*(left_block[1+i*2]>>1)]; The mooncodecs folder within the Mono project (here) has several audio codecs in C# (ADPCM and Ogg Vorbis), and one video codec (Dirac), but they all seem to implement just the decode portion of their respective formats, as do the Java implementations from which they were ported. I found a C# codec for Ogg Theora (csTheora, http://www.wreckedgames.com/forum/index.php?topic=1053.0), but again, it's decode only, as is the jheora codec on which it's based. Of course, it would presumably be easier to port a codec from Java than from C or C++, but the only Java video codecs that I found were decode-only (such as jheora, or jirac). So I'm kinda back at square one. It looks like our options for hooking up a webcam (or microphone) through Silverlight to the Internet are: (1) Wait for Microsoft to provide some guidance on this; (2) Spend the brain cycles porting one of the C or C++ codecs over to Silverlight-compatible C#; (3) Send the raw, uncompressed bytestream up to a server (or perhaps compressed slightly with something like zlib), and then encode it server-side; or (4) Wait for someone smarter than me to figure this out and provide a solution. Does anybody else have any better guidance? Have I missed something that's just blindingly obvious to everyone else? (For instance, does Silverlight 4 somewhere have some classes I've missed that take care of this?)
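    For reference, the "raw bytestream" part mentioned above really is the easy half: in Silverlight 4 you derive from VideoSink and the runtime hands you uncompressed frames; it's the encode/upload step that has no obvious answer. A minimal sketch (error handling omitted; the class and the wiring shown are illustrative, not a working pipeline):

        // Silverlight 4: capture raw ARGB frames from the default webcam.
        using System.Windows.Media;

        public class RawFrameSink : VideoSink
        {
            private VideoFormat _format;

            protected override void OnCaptureStarted() { }
            protected override void OnCaptureStopped() { }

            protected override void OnFormatChange(VideoFormat videoFormat)
            {
                _format = videoFormat;  // Format32bppArgb, plus width/height/stride
            }

            protected override void OnSample(long sampleTime, long frameDuration, byte[] sampleData)
            {
                // sampleData is one uncompressed frame; an encoder (or a push to a
                // server-side encoder) would have to plug in here.
            }
        }

        // Wiring it up, e.g. in a button click handler:
        // var source = new CaptureSource { VideoCaptureDevice = CaptureDeviceConfiguration.GetDefaultVideoCaptureDevice() };
        // if (CaptureDeviceConfiguration.RequestDeviceAccess())
        // {
        //     var sink = new RawFrameSink { CaptureSource = source };
        //     source.Start();
        // }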

    Read the article

  • Do’s and Don’ts Building SharePoint Applications

    - by Bil Simser
    SharePoint is a great platform for building quick LOB applications. Simple things from employee time trackers to server and software inventory to full blown Help Desks can be crafted up using SharePoint from just customizing Lists. No programming necessary. However there are a few tricks I’ve painfully learned over the years that you can use for your own solutions. DO What’s In A Name? When you create a new list, column, or view you’ll commonly name it something like “Expense Reports”. However this has the ugly effect of creating a url to the list as “Expense%20Reports”. Or worse, an internal field name of “Expense_x0x0020_Reports” which is not only cryptic but hard to remember when you’re trying to find the column by internal name. While “Expense Reports 2011” is user friendly, “ExpenseReports2011” is not (unless you’re a programmer). So that’s not the solution. Well, not entirely. Instead when you create your column or list or view use the scrunched up name (I can’t think of the technical term for it right now) of “ExpenseReports2011”, “WomenAtTheOfficeThatAreMen” or “KoalaMeatIsGoodWhenBroiled”. After you’ve created it, go back and change the name to the more friendly “Silly Expense Reports That Nobody Reads”. The original internal name will be the url and code friendly one without spaces while the one used on data entry forms and view headers will be the human version. Smart Columns When building a view include columns that make sense. By default when you add a column the “Add to default view” is checked. Resist the urge to be lazy and leave it checked. Uncheck that puppy and decide consciously what columns should be included in the view. Pick columns that make sense to what the user is trying to do. This means you have to talk to the user. Yes, I know. That can be trying at times and even painful. Go ahead, talk to them. You might learn something. Find out what’s important to them and why. If they’re doing something repetitively as part of their job, try to make their life easier by including what’s most important to them. Do they really need to see the Created *and* Modified date of a document or do they just need the title and author? You’ll only find out after talking to them (or getting them drunk in a bar and leaving them in the back alley handcuffed to a garbage bin, don’t ask). Gotta Keep it Separated Hey, views are there for a reason. Use them. While “All Items” is a fine way to present a list of well, all items, it’s hardly sufficient to present a list of servers built before the Y2K bug hit. You’ll be scrolling the list for hours finally arriving at Page 387 of 12,591 and cursing that SharePoint guy for convincing you that putting your hardware into a list would be of any use to anyone. Next to collecting the data, presenting it is just as important. Views are often overlooked and many times ignored or misused. They’re the way you can slice and dice the data up so that you’re not trying to consume 3,000 years of human evolution on a single web page. Remember views can be filtered so feel free to create a view for each status or one for each operating system or one for each species of Information Worker you might be putting in that list or document library. Not only will it reduce the number of items someone sees at one time, it’ll also make the information that much more relevant. Also remember that each view is a separate page. Use it in navigation by creating a menu on the Quick Launch to each view. 
The discoverability of the Views menu isn’t overly obvious and if you violate the rule of columns (see Horizontally Scrolling below) the view menu doesn’t even show up until you shuffle the scroll bar to the left. Navigation links, big giant buttons, a screaming flashing “CLICK ME NOW” will help your users find their way. Sort It! Views are great so we’re building nice, rich views for the user. Awesomesauce. However sort is not very discoverable by the user. For example when you’re looking at a view how do you know if it’s ascending or descending and what is it sorted on. Maybe it’s sorted using two fields so what’s that all about? Help your users by letting them know the information they’re looking at is sorted. Maybe you name the view something appropriate like “Bogus Expense Claims Sorted By Deadbeats”. If you use the naming strategy just make sure you keep the name consistent with the description. In the previous example their better be a Deadbeat column so I can see the sort in action. Having a “Loser” column, while equally correct, is a little obtuse to the average Information Worker. Remember, they usually don’t use acronyms and even if they knew how to, it’s not immediately obvious to them that’s what you’re trying to convey. Another option is to simply drop a Content Editor Web Part above the list and explain exactly the view they’re looking at. Each view is it’s own page so one CEWP won’t be used across the board. Be descriptive in what the user is seeing but try to keep it brief. Dumping the first chapter of I, Claudius might be informative to the data but can gobble up screen real estate and miss the point of having the list. DO NOT Useless Attachments The attachments column is, in a word, useless. For the most part. Sure it indicates there’s an attachment on the list item but in the grand scheme of things that’s not overly informative. Maybe it is and by all means, if it makes sense to you include it. Colour it. Make it shine and stand like the Return of Clippy on every SharePoint list. Without it being functional it can be boring. EndUserSharePoint.com has an article to make the son of Clippy that much more useful so feel free to head over and check out this blog post by Paul Grenier on the task (Warning code ahead! Danger Will Robinson!) In any case, I would suggest you remove it from your views. Again if it’s important then include it but consider the jQuery solution above to make it functional. It’s added by default to views and one of things that people forget to clean up. Horizontal Scrolling Screen real estate is premium so building a list that contains 8,000 columns and stretches horizontally across 15 screens probably isn’t the most user friendly experience. Most users can’t figure out how to scroll vertically let alone horizontally so don’t make it even that more confusing for them. Take the Steve Krug approach in your view designs and try not to make the user think. Again views are your friend. Consider splitting up the data into views where one view contains 10 columns and other view contains the other 10. Okay, maybe your information doesn’t work that way but humans can only process 7 pieces of data at a time, 10 at most (then their heads explode and you don’t want to clean that mess up, especially on a Friday night before the big dance). It drives me batshit crazy when I see a view with 80 columns of data. I often ask the user “So what do you do with all this information”. 
The response is usually “With this data [the first 10 columns] I decide if I’m going to fire everyone, and with this data [the next 10 columns] I decide if I’m going to set the building on fire and collect the insurance”. It’s at that point I show them how to create two new views “People Who Are About To Get The Axe” and “Beach Time For The Executives”. Again, talk to your users and try to reason with them on cutting down the number of columns they see at once. Vertical Scrolling Another big faux pas I find is the use of multi-line comment fields in views. It’s not so bad when you have a statement like this in your view: “I really like, oh my god, thought I was going to scream when I saw this turtle then I decided what I was going to have for dinner and frankly I hate having to work late so when I was talking to the customer I thought, oh my god, what if the customer has turtles and then it appeared to me that I really was hungry so I'm going to have lunch now.” It’s fine if that’s the only column along with two or three others, but once you slap those 20 columns of data into the list, the comment field wraps and forms a new multi-page novel that takes up your entire screen. Do everyone a favour and just avoid adding the column to views. Train the user to just click through to the item if they need to see the contents. Duplicate Information Duplication is never good. Views and great as you can group data together. For example create a view of project status reports grouped by author. Then you can see what project manager is being a dip and not submitting their report. However if you group by author do you really need the Created By field as well in the view? Or if the view is grouped by Project then Author do you need both. Horizontal real estate is always at a premium so try not to clutter up the view with duplicate data like this. Oh  yeah, if you’re scratching your head saying “But Bil, if I don’t include the Project name in the view and I have a lot of items then how do I know which one I’m looking at”. That’s a hint that your grouping is too vague or you have too much data in the view based on that criteria. Filter it down a notch, create some views, and try to keep the group down to a single screen where you can see the group header at the top of the page. Again it’s just managing the information you have. Redundant, See Redundant This partially relates to duplicate information and smart columns but basically remember to not include the obvious in a view. Remember, don’t make me think. If you’ve gone to the trouble (and it was a lot of trouble wasn’t it?) to create separate views of your data by creating a “September Zombie Brain Sales”, “October Zombie Brain Sales”, etc. then please for the love of all that is holy do not include the Month and Product columns in your view. Similarly if you create a “My” view of anything (“My Favourite Brands of Spandex”, “My Co-Workers I Find The Urge To Disinfect”) then again, do not include the owner or author field (or whatever field you use to identify “My”). That’s just silly. Hope that helps! Happy customizing!
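    As a footnote to the "What's In A Name?" tip at the top, the create-then-rename trick scripts nicely too; a rough server object model sketch (the site URL is a placeholder, and the list names are the ones used in the example above):

        using System;
        using Microsoft.SharePoint;

        class ListNamingDemo
        {
            static void Main()
            {
                // Create the list with the scrunched-up name (this fixes the URL), then retitle it.
                using (SPSite site = new SPSite("http://intranet"))
                using (SPWeb web = site.OpenWeb())
                {
                    Guid id = web.Lists.Add("ExpenseReports2011", "", SPListTemplateType.GenericList);
                    SPList list = web.Lists[id];
                    list.Title = "Silly Expense Reports That Nobody Reads";  // friendly display name
                    list.Update();  // URL stays .../Lists/ExpenseReports2011
                }
            }
        }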

    Read the article

  • Ada and 'The Book'

    - by Phil Factor
    The long friendship between Charles Babbage and Ada Lovelace created one of the most exciting and mysterious of collaborations ever to have resulted in a technological breakthrough. The fireworks that created by the collision of two prodigious mathematical and creative talents resulted in an invention, the Analytical Engine, which went on to change society fundamentally. However, beyond that, we just don't know what the bulk of their collaborative work was about:;  it was done in strictest secrecy. Even the known outcome of their friendship, the first programmable computer, was shrouded in mystery. At the time, nobody, except close friends and family, had any idea of Ada Byron's contribution to the invention of the ‘Engine’, and how to program it. Her great insight was published in August 1843, under the initials AAL, standing for Ada Augusta Lovelace, her title then being the Countess of Lovelace. It was contained in a lengthy ‘note’ to her translation of a publication that remains the best description of Babbage's amazing Analytical Engine. The secret identity of the person behind those enigmatic initials was finally revealed by Prince de Polignac who, seventy years later, wrote to Ada's daughter to seek confirmation that her mother had, indeed, been the author of the brilliant sentences that described so accurately how Babbage's mechanical computer could be programmed with punch-cards. L.F. Menabrea's paper on the Analytical Engine first appeared in the 'Bibliotheque Universelle de Geneve' in October 1842, and Ada translated it anonymously for Taylor's 'Scientific Memoirs'. Charles Babbage was surprised that she had not written an original paper as she already knew a surprising amount about the way the machine worked. He persuaded her to at least write some explanatory notes. These notes ended up extending to four times the length of the original article and represented the first published account of how a machine could be programmed to perform any calculation. Her example of programming the Bernoulli sequence would have worked on the Analytical engine had the device’s construction been completed, and gave Ada an unassailable claim to have invented the art of programming. What was the reason for Ada's secrecy? She was the only legitimate child of Lord Byron, who was probably the best known celebrity of the age, so she was already famous. She was a senior aristocrat, with titles, a fortune in money and vast estates in the Midlands. She had political influence, and was the cousin of Lord Melbourne, who was the Prime Minister at that time. She was friendly with the young Queen Victoria. Her mathematical activities were a pastime, and not one that would be considered by others to be in keeping with her roles and responsibilities. You wouldn't dare to dream up a fictional heroine like Ada. She was dazzlingly beautiful and talented. She could speak several languages fluently, and play some musical instruments with professional skill. Contemporary accounts refer to her being 'accomplished in science, art and literature'. On top of that, she was a brilliant mathematician, a talent inherited from her mother, Annabella Milbanke. In her mother's circle of literary and scientific friends was Charles Babbage, and Ada's friendship with him dates from her teenage zest for Mathematics. She was one of the first people he'd ever met who understood what he had attempted to achieve with the 'Difference Engine', and with whom he could converse as intellectual equals. 
He arranged for her to have an education from the most talented academics in the country. Ada melted the heart of the cantankerous genius to the point that he became a faithful and loyal father-figure to her. She was one of the very few who could grasp the principles of the later, and very different, ‘Analytical Engine’ which was designed from the start to tackle a variety of tasks. Sadly, Ada Byron's life ended less than a decade after completing the work that assured her long-term fame, in November 1852. She was dying of cancer, her gambling habits had caused her to run up huge debts, she'd had more than one affairs, and she was being blackmailed. Her brilliant but unempathic mother was nursing her in her final illness, destroying her personal letters and records, and repaying her debts. Her husband was distraught but helpless. Charles Babbage, however, maintained his steadfast paternalistic friendship to the end. She appointed her loyal friend to be her executor. For years, she and Babbage had been working together on a secret project, known only as 'The Book'. We have a clue to what it was in a letter written by her nine years earlier, on 11th August 1843. It was a joint project by herself and Lord Lovelace, her husband, and was intended to involve Babbage's 'undivided energies'. It involved 'consulting your Engine' (it required Babbage’s computer). The letter gives no hint about the project except for the high-minded nature of its purpose, and its highly mathematical nature.  From then on, the surviving correspondence between the two gives only veiled references to 'The Book'. There isn't much, since Babbage later destroyed any letters that could have damaged her reputation within the Establishment. 'I cannot spare the book today, which I am very sorry for. At the moment I want it for constant reference, but I think you can have it tomorrow' (Oct 1844)  And 'I will send you the book directly, and you can say, when you receive it, how long you will want to keep it'. (Nov 1844)  The two of them were obviously intent on the work: She writes, four years later, 'I have an engagement for Wednesday which will prevent me from attending to your wishes about the book' (Dec 1848). This was something that they both needed to work on, but could not do in parallel: 'I will send the book on Tuesday, and it can be left with you till Friday' (11 Feb 1849). After six years work, it had been so well-handled that it was beginning to fall apart: 'Don't forget the new cover you promised for the book. The poor book is very shabby and wants one' (20 Sept 1849). So what was going on? The word 'book' was not a code-word: it was a real book, probably a 'printer's blank', plain paper, but properly bound so printers and publishers could show off how the published work might look. The hints from the correspondence are of advanced mathematics. It is obvious that the book was travelling between them, back and forth, each one working on it for less than a week before passing it back. Ada and her husband were certainly involved in gambling large sums of money on the horses, and so most biographers have concluded that the three of them were trying to calculate the mathematical odds on the horses. This theory has three large problems. Firstly, Ada's original letter proposing the project refers to its high-minded nature. Babbage was temperamentally opposed to gambling and would scarcely have given so much time to the project, even though he was devoted to Ada. 
Secondly, Babbage would very soon have realized the hopelessness of trying to beat the bookies. This sort of betting never attracts people of his type of intellectual background. The third problem is that any work on calculating the odds on horses would not need a well-thumbed book to pass back and forth between them; they would not have had to work in series. The original project was instigated by Ada, along with her husband, William King-Noel, 1st Earl of Lovelace. Charles Babbage was invited to join the project after the couple had come up with the idea. What could William have contributed? One might assume that William was a Bertie Wooster character, addicted only to the joys of the turf, but this was far from the truth. He was a scientist, a Cambridge graduate who was later elected to be a Fellow of the Royal Society. After Eton, he went to Trinity College, Cambridge. On graduation, he entered the diplomatic service and acted as secretary under Lord Nugent, who was Lord Commissioner of the Ionian Islands. William was very friendly with Babbage too, able to discuss scientific matters on equal terms. He was a capable engineer who invented a process for bending large timbers by the application of steam heat. He delivered a paper to the Institution of Civil Engineers in 1849, and received praise from the great engineer, Isambard Kingdom Brunel. As well as being Lord Lieutenant of the County of Surrey for most of Victoria's reign, he had time for a string of scientific and engineering achievements. Whatever the project was, it is unlikely that William was a junior partner. After Ada's death, the project disappeared. Then, two years later, Babbage, through one of his occasional outbursts of temper, demonstrated that he was able to decrypt one of the most powerful of secret codes, Vigenère's autokey cipher. All contemporary diplomatic and military messages used a variant of this cipher. Babbage had made three important discoveries, namely, the mathematical law of this cipher, the principle of the key periodicity, and the technique of the symmetry of position. The technique is now known as the Kasiski examination, also called the Kasiski test, but Babbage got there first. At one time, he listed amongst his future projects the writing of a book, 'The Philosophy of Decyphering', but it never came to anything. This discovery was going to change the course of history, since it was used to decipher the Russians' military dispatches in the Crimean War. Babbage himself played a role during the Crimean War as a cryptographical adviser to his friend, Rear-Admiral Sir Francis Beaufort of the Admiralty. This is as much as we can be certain about in trying to make sense of the bulk of the time that Charles Babbage and Ada Lovelace worked together. Nine years of intensive work, involving the 'Engine' and a great deal of mathematics and research, seems to have been lost: or has it? I've argued in the past (http://www.simple-talk.com/community/blogs/philfactor/archive/2008/06/13/59614.aspx) that the cracking of the Vigenère autokey cipher was a fundamental motive behind the British Government's support and funding of the 'Difference Engine'. The Duke of Wellington, who understood the military significance of being able to read enemy dispatches, was the most steadfast advocate of the project.
If the three friends were actually doing the work of cracking codes by mathematical techniques that used key periodicity and the symmetry of position (the use of a book being passed quickly to and fro is very suggestive), intending then to use the 'Engine' to do the routine cracking of each dispatch, then this is a rather different story. The project was Ada and William's idea. (William had served in the diplomatic service and would be familiar with the use of codes.) This makes Ada Lovelace the initiator of a project which, by giving both Britain, and probably the USA, a diplomatic and military advantage in the second half of the nineteenth century, changed world history. Ada would never have wanted any credit for cracking the cipher, and developing the method that rendered all contemporary military and diplomatic ciphering techniques nugatory; quite the reverse. And it is clear from the gaps in the record of the letters between the collaborators that the evidence was destroyed, probably at her request, by her irascible but intensely honorable executor, Charles Babbage. Charles Babbage toyed with the idea of going public, but the Crimean War put an end to that. The British Government had a valuable secret, and intended to keep it that way. Ada and Charles had quite often discussed possible moneymaking projects that would fund the development of the Analytical Engine, the first programmable computer, but their secret work was never in the running as a potential cash cow. I suspect that the British Government was, even then, working on the concealment of a discovery whose value to the nation depended on it remaining so. The success of code-breaking in the Crimean War, and the American Civil War, led to the British and Americans subsequently giving much more weight and funding to the science of decryption. Paradoxically, this brings Ada's contribution even closer to the creation of Colossus, the first digital computer, built at Bletchley Park specifically to crack the Nazis' secret codes.

    Read the article

  • Cryptographic Validation Explained

    - by MarkPearl
    We have been using LogicNP's CryptoLicensing for some of our software and I was battling to understand how exactly the whole process worked. I was sent the following document which really helped explain it – so if you ever use the same tool it is well worth a read. (A small, purely illustrative sign/verify sketch at the end of this post shows the same key-pair idea in code.) Licensing Basics LogicNP CryptoLicensing For .Net is the most advanced and state-of-the-art licensing and copy protection system you can use for your software. LogicNP CryptoLicensing System uses the latest cryptographic technology to generate and validate licenses. The cryptographic algorithm used is the RSA algorithm, which consists of a pair of keys called the generation key and the validation key. Data encrypted using the generation key can only be decrypted using the corresponding validation key. How does cryptographic validation work? When a new license project is created, a unique validation-generation key pair is created for the project. When LogicNP CryptoLicensing For .Net generates licenses, it encrypts the license settings using the generation key. The validation key can be safely distributed with your software and is used during validation. During license validation, LogicNP CryptoLicensing For .Net attempts to decrypt the encrypted license code using the validation key. If the decryption is successful, this means that the data was encrypted using the generation key, since only the corresponding validation key can decrypt data encrypted with the generation key. This further means that not only is the license valid but that it was generated by you and only you, since nobody else has access to the generation key. Generation Key This key is used by CryptoLicensing Generator to generate encrypted license codes. This key is stored in the license project file, so the license project file must be kept secure and confidential and must be accorded the same care as any other critical asset such as source code. Validation Key This key is used for validating generated license codes. It is the same key displayed in the 'Get Validation Key And Code' dialog (Ctrl+K) and is used by your software when validating license codes (using LogicNP.CryptoLicensing.dll). Unlike the generation key, it is not necessary to keep this key secure and confidential. Note that the generation key pair is stored in the project file created by LogicNP CryptoLicensing For .Net, so it is very important to back up this file and to keep it secure. Once the file is lost, it is not possible to retrieve the key pair. FAQ Q. Do I use the same validation key to validate all license codes? A. Yes, the validation key (and generation key) for the project remains the same; you use the same key to validate all license codes generated using the project. You can retrieve the validation key using the "Project" menu --> "Get Validation Key & Code" menu item. Q. Can license codes generated using the generation key from one project be validated using the validation key of another project? A. No! Q. Is every generated license code unique? A. Yes, every license code generated by CryptoLicensing is guaranteed to be unique, even if you generate thousands of codes at a time. Q. What makes CryptoLicensing so secure? A. CryptoLicensing uses the latest cryptographic technology to generate and validate licenses. The cryptographic algorithm used is the RSA asymmetric key algorithm, which can use up to 3072-bit keys. Given current computing power, it takes years to break a 3072-bit key. Q. Is it possible for a hacker to develop a keygen for my software? A. Impossible.
The cryptographic algorithm used by CryptoLicensing consists of a pair of keys called the generation key and the validation key. Data encrypted with one key can only be decrypted by the other key and vice versa. Licenses are generated using the generation key and validated using the validation key. Without the generation key, it is impossible to generate valid licenses. Q. What is the difference between the validation key and the generation key? A. Generation Key This key is used by CryptoLicensing Generator to generate encrypted license codes. This key is stored in the license project file, so the license project file must be kept secure and confidential and must be accorded the same care as any other critical asset such as source code. Validation Key This key is used for validating generated license codes. It is the same key displayed in the 'Get Validation Key And Code' dialog (Ctrl+K) and is used by your software when validating license codes (using LogicNP.CryptoLicensing.dll). Unlike the generation key, it is not necessary to keep this key secure and confidential. Q. Do I have to include the license project file (.licproj) with my software? A. No!!! This goes against the very essence of the security of the asymmetric cryptographic scheme, because the project file contains both the validation and generation key. With your software, you only need to include the validation key, which will be used to validate licenses generated by CryptoLicensing using the generation key. The license project file should be treated as any other valuable and confidential asset such as your source code. Q. Does the license service need the license project file? A. Yes. The license project file is needed whenever new licenses are generated (via the UI, via the API or via the license service). As just one example, the license service generates new machine-locked licenses when activated licenses are presented to it for activation, therefore the license service needs the license project file. Q. Is it possible to embed my own data in the generated licenses? A. Yes. You can embed any amount of additional data in the licenses. This data will have the same amount of security as the license code itself and will be tamper-proof. The embedded user data can be retrieved from your software. Q. What additional steps can I take to ensure that my software does not get cracked? A. There are many methods and techniques which can make it extremely difficult for a hacker to crack your software. See Writing Effective License Checking Code And Designing Effective Licenses for more information. Q. Why is the license service not working? A. The most common cause is not setting the CryptoLicense.LicenseServiceURL property before trying to validate a license. Make sure that this property is set to the correct URL where your license service is hosted. The most common cause after this is that the license project file on the web server where your license service is hosted is not the latest. This happens if you make changes to the license project (for example, set the 'Enable With Serials' setting for a profile), but don't upload the updated project file to your web server. Q. Why are my serials not working? A. Serial codes require the use of a license service. See Using Serial Codes for more details. Also see the earlier question 'Why is the license service not working?' Q. Is the same validation key used to validate license codes generated from different profiles? A. Yes.
Profiles are just pre-specified license settings for quickly generating licenses having those settings. The actual license code is still generated using the license project's cryptographic generation key and thus can be validated using the project's validation key. Q. Why are changes made to a profile not getting saved? A. Simply changing license settings via the UI and saving the license project does not save those license settings to the active profile. You must first save the license settings to a profile using the Save/Save As command from the Profiles menu (see above). Q. Why is validation of activated licenses failing from CryptoLicensing Generator, but works from my software? A. Make sure that you have specified the URL of the license service using the Project Properties Dialog. Also see the earlier question 'Why is the license service not working?' Q. How can I extend the trial period of my customer? A. To extend the evaluation period of the customer, simply send him a new license code specifying the desired evaluation limits. Evaluation information such as the currently used days, executions, etc. is stored in garbled form in a registry location which is derived from the license code. Therefore, when a new license code is used, the old evaluation information will not be used and a new evaluation period will be started.
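To make the generation key / validation key idea concrete, here is a minimal sketch of the same asymmetric pattern using the plain .NET RSA classes. This is not LogicNP's actual API or license format (the class names, settings string and key size are invented for illustration), but it shows why someone holding only the public "validation" half of the key pair can check licenses yet cannot mint new ones. CryptoLicensing describes the scheme as encrypting with the generation key and decrypting with the validation key; signing and verifying, as below, is the standard way to express that idea with RSACryptoServiceProvider.

    using System;
    using System.Security.Cryptography;
    using System.Text;

    // Illustrative sketch only: "generation key" = the full RSA key pair (keep secret),
    // "validation key" = the public half only (safe to ship with your software).
    static class LicenseSketch
    {
        // Vendor side: produce a license code by signing the settings with the private key.
        public static byte[] GenerateLicense(string licenseSettings, RSACryptoServiceProvider generationKey)
        {
            byte[] data = Encoding.UTF8.GetBytes(licenseSettings);
            return generationKey.SignData(data, new SHA1CryptoServiceProvider());
        }

        // Customer side: verify the code with the public (validation) key shipped in the app.
        public static bool ValidateLicense(string licenseSettings, byte[] licenseCode, RSACryptoServiceProvider validationKey)
        {
            byte[] data = Encoding.UTF8.GetBytes(licenseSettings);
            return validationKey.VerifyData(data, new SHA1CryptoServiceProvider(), licenseCode);
        }

        static void Main()
        {
            using (var generationKey = new RSACryptoServiceProvider(2048))
            {
                string settings = "Product=MyApp;Expires=2011-12-31;User=Alice";
                byte[] licenseCode = GenerateLicense(settings, generationKey);

                // Export only the public half; this is all the shipped software ever sees.
                var validationKey = new RSACryptoServiceProvider();
                validationKey.ImportParameters(generationKey.ExportParameters(false));

                Console.WriteLine(ValidateLicense(settings, licenseCode, validationKey));               // True
                Console.WriteLine(ValidateLicense(settings + " tampered", licenseCode, validationKey)); // False
            }
        }
    }

The license project file in CryptoLicensing plays the role of the full key pair above, which is why losing or leaking it is so serious: whoever holds it can generate licenses that your software will accept.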

    Read the article

  • Dotfuscator Deep Dive with WP7

    - by Bil Simser
    I thought I would share some experience with code obfuscation (specifically the Dotfuscator product) and Windows Phone 7 apps. These days Twitter is abuzz with black hat and white hat operations coming out about how the marketplace is insecure and Microsoft failed, blah, blah, blah. So it's that much more important to protect your intellectual property. You should protect it no matter what when releasing apps into the wild, but more so when someone is paying for them. You want to protect the time and effort that went into your code and have some comfort that the casual hacker isn't going to usurp your next best thing. Enter code obfuscation. Code obfuscation is one tool that can help protect your IP. Basically it goes into your compiled assemblies, rewrites things at an IL level (like renaming methods and classes and hiding logic flow) and writes it back so that the assembly or executable is still fully functional but prying eyes using a tool like ILDASM or Reflector can't see what's going on. You can read more about code obfuscation here on Wikipedia. A word to the wise. Code obfuscation isn't 100% secure. More so on the WP7 platform, where the OS expects certain things to be as they were meant to be. So don't expect 100% obfuscation of every class and every method and every property. It's just not going to happen. What this does do is give you some level of protection, but don't put all your eggs in one basket and call it done. Like I said, this is just one step in the process. There are a few tools out there that provide code obfuscation and support the Windows Phone 7 platform (see links to other tools at the end of this post). One such tool is Dotfuscator from PreEmptive Solutions. The thing about Dotfuscator is that they've struck a deal with Microsoft to provide a *free* copy of their commercial product for Windows Phone 7. The only drawback is that it only runs until March 31, 2010. However it's a good place to start and the focus of this article. Getting Started When you fire up Dotfuscator you're presented with a dialog to start a new project or load a previous one. We'll start with a new project. You're then looking at a somewhat blank screen that shows an Input tab (among others) and you're probably wondering what to do. Click on the folder icon (first one) and browse to where your xap file is. At this point you can save the project and click on the arrow to start the process. Bam! You're done. Right? Think again. The program did indeed run and create a new version of your xap (doing its thing and rewriting back your *obfuscated* assemblies), but let's take a look at the assembly in Reflector to see the end result. Remember a xap file is really just a glorified zip file (or cab file if you prefer). When you ran Dotfuscator for the first time with the default settings you'll see it created a new version of your xap in a folder under “My Documents” called “Dotfuscated” (you can configure the output directory in settings). Here's the new xap file. Since a xap is just a zip, rename it to .cab or .zip or something and open it with your favorite unarchive program (I use WinRar but it doesn't matter as long as it can unzip files). If you already have the xap file associated with your unarchive tool the rename isn't needed.
Once renamed extract the contents of the xap to your hard drive: Now you’ll have a folder with the contents of the xap file extracted: Double click or load up your assembly (WindowsPhoneDataBoundApplication1.dll in the example) in Reflector and let’s see the results: Hmm. That doesn’t look right. I can see all the methods and the code is all there for my LoadData method I wanted to protect. Product failure. Let’s return it for a refund. Hold your horses. We need to check out the settings in the program first. Remember when we loaded up our xap file. It started us on the Input tab but there was a settings tab before that. Wonder what it does? Here’s the default settings: Renaming Taking a closer look, all of the settings in Feature are disabled. WTF? Yeah, it leaves me scratching my head why an obfuscator by default doesn’t obfuscate. However it’s a simple fix to change these settings. Let’s enable Renaming as it sounds like a good start. Renaming obscures code by renaming methods and fields to names that are not understandable. Great. Run the tool again and go through the process of unzipping the updated xap and let’s take a look in Reflector again at our project. This looks a lot better. Lots of methods named a, b, c, d, etc. That’ll help slow hackers down a bit. What about our logic that we spent days weeks on? Let’s take a look at the LoadData method: What gives? We have renaming enabled but all of our code is still there. If you look through all your methods you’ll find it’s still sitting there out in the open. Control Flow Back to the settings page again. Let’s enable Control Flow now. Control Flow obfuscation synthesizes branching, conditional, and iterative constructs (such as if, for, and while) that produce valid executable logic, but yield non-deterministic semantic results when decompilation is attempted. In other words, the code runs as before, but decompilers cannot reproduce the original code. Do the dance again and let’s see the results in Reflector. Ahh, that’s better. Methods renamed *and* nobody can look at our LoadData method now. Life is good. More than Minimum This is the bare minimum to obfuscate your xap to at least a somewhat comfortable level. However I did find that while this worked in my Hello World demo, it didn’t work on one of my real world apps. I had to do some extra tweaking with that. Below are the screens that I used on one app that worked. I’m not sure what it was about the app that the approach above didn’t work with (maybe the extra assembly?) but it works and I’m happy with it. YMMV. Remember to test your obfuscated app on your device first before submitting to ensure you haven’t obfuscated the obfuscator. settings tab: rename tab: string encryption tab: premark tab: A few final notes Play with the settings and keep bumping up the bar to try to get as much obfuscation as you can. The more the better but remember you can overdo it. Always (always, always, always) deploy your obfuscated xap to your device and test it before submitting to the marketplace. I didn’t and got rejected because I had gone overboard with the obfuscation so the app wouldn’t launch at all. Not everything is going to be obfuscated. Specifically I don’t see a way to obfuscate auto properties and a few other language features. Again, if you crank the settings up you might hide these but I haven’t spent a lot of time optimizing the process. Some people might say to obfuscate your xaml using string encryption but again, test, test, test. 
Xaml is picky, so too much obfuscation (or any) might disable your app or produce odd rendering effects. Remember, obfuscation is not 100% secure! Don't rely on it as a sole way of protecting your assets. Other Tools Dotfuscator is just one product and isn't the end-all be-all of obfuscation, so check out the others below. For example, Crypto can make it so Reflector doesn't even recognize the app as a .NET one and won't open it. Others can encrypt resources and Xaml markup files. Here are some other obfuscators that support the Windows Phone 7 platform. Feel free to give them a try and let people know your experience with them! Dotfuscator Windows Phone Edition Crypto Obfuscator for .NET DeepSea Obfuscation

    Read the article

  • Looking Back at MIX10

    - by WeigeltRo
    It's the sad truth of my life that even though I have been fascinated by airplanes and flight in general since my childhood days, my body doesn't like flying. Even the ridiculously short flights inside Germany take their toll on me each time. Now combine this with sitting in the cramped space of economy class for many hours on a transatlantic flight from Germany to Las Vegas and back, and factor in a heavy dose of jet lag (especially on my way eastwards), and you get an idea why, after coming back home, I had this question on my mind: was it really worth it to attend MIX10? This of course is a question that will also be asked by my boss at Comma Soft (for other reasons, obviously), who decided to send me and my colleague Jens Schaller to the MIX10 conference. (A note to my German readers: Comma Soft is still looking for Silverlight developers and/or UI designers for its Bonn office – detailed applications please to [email protected]) To keep things short: my answer is yes. Before I go into detail, let me ask the heretical question of whether tech conferences in general still make sense. There was a time when actually being at a tech conference gave you a head start in regard to learning about new technologies. Nowadays this is no longer true, as every bit of information and every detail is immediately twittered, blogged and whatevered to death. In the case of MIX10 you can even download the video-taped sessions shortly after. So: does visiting a conference still make sense? It depends on what you expect from a conference. It should be clear to everybody that you'll neither get exclusive information nor receive training in a small group. What a conference does offer that sitting in front of your computer does not can be summarized as follows: Focus Being away from work and home will help you to focus on the presented information. Of course there are always the poor guys who are haunted by their work (with mails and short text messages reporting the latest showstopper problem), but in general being out of your office makes a huge difference. Inspiration With the focus comes the emotional involvement. I find it much easier to absorb information if I feel that certain vibe when sitting in a session. This still means that I have to put work into reviewing the information later, but it's a better starting point. And all the impressions collected at a (good) conference combined lead to a higher motivation – be it by the buzz (“this is gonna be sooo cool!”) or by the fear of falling behind (“man, we'll have to work on this, or else…”). People At a conference it's pretty easy to get into contact with other people during breakfast, lunch and other breaks. This is a good opportunity to get a feel for what other development teams are doing (on a very general level of course, nobody will tell you about their secret formula) and what they are thinking about specific technologies. So MIX10 did offer focus, inspiration and people, but that would have meant nothing without valuable content. When I (being a frontend developer with a strong interest in UI/UX) planned my visit to MIX10, I made the decision to focus on the "soft" topics of design, interaction and user experience. I figured that I would be bombarded with all the technical details about Silverlight 4 anyway in the weeks and months to come.
Actually, I would have liked to catch a few technical sessions, but the agenda wasn’t exactly in favor of people interested in any kind of Silverlight and UI/UX/Design topics. That’s one of my few complaints about the conference – I would have liked one more day and/or more sessions per day. Overall, the quality of the workshops and sessions was pretty high. In fact, looking back at my collection of conferences I’ve visited in the past I’d say that MIX10 ranks somewhere near the top spot. Here’s an overview of the workshops/sessions I attended (I’ll leave out the keynotes): Day 0 (Workshops on Sunday) Design Fundamentals for Developers Robby Ingebretsen is the man! Great workshop in three parts with the perfect mix of examples, well-structured definition of terminology and the right dose of humor. Robby was part of the WPF team before founding his own company so he not only has a strong interest in design (and the skillz!) but also the technical background.   Design Tools and Techniques Originally announced to be held by Arturo Toledo, the Rosso brothers from ArcheType filled in for the first two parts, and Corrina Black had a pretty general part about the Windows Phone UI. The first two thirds were a mixed bag; the two guys definitely knew what they were talking about, and the demos were great, but the talk lacked the preparation and polish of a truly great presentation. Corrina was not allowed to go into too much detail before the keynote on Monday, but the session was still very interesting as it showed how much thought went into the Windows Phone UI (and there’s always a lot to learn when people talk about their thought process). Day 1 (Monday) Designing Rich Experiences for Data-Centric Applications I wonder whether there was ever a test-run for this session, but what Ken Azuma and Yoshihiro Saito delivered in the first 15 minutes of a 30-minutes-session made me walk out. A commercial for a product (just great: a video showing a SharePoint plug-in in an all-Japanese UI) combined with the most generic blah blah one could imagine. EPIC FAIL.   Great User Experiences: Seamlessly Blending Technology & Design I switched to this session from the one above but I guess I missed the interesting part – what I did catch was what looked like a “look at the cool stuff we did” without being helpful. Or maybe I was just in a bad mood after the other session.   The Art, Technology and Science of Reading This talk by Kevin Larson was very interesting, but was more a presentation of what Microsoft is doing in research (pretty impressive) and in the end lacked a bit the helpful advice one could have hoped for.   10 Ways to Attack a Design Problem and Come Out Winning Robby Ingebretsen again, and again a great mix of theory and practice. The clean and simple, yet effective, UI of the reader app resulted in a simultaneous “wow” of Jens and me. If you’d watch only one session video, this should be it. Microsoft has to bring Robby back next year! Day 2 (Tuesday) Touch in Public: Multi-touch Interaction Design for Kiosks & Architectural Experiences Very interesting session by Jason Brush, a great inspiration with many details to look out for in the examples. Exactly what I was hoping for – and then some!   Designing Bing: Heart and Science How hard can it be to design the UI for a search engine? An input field and a list of results, that should be it, right? Well, not so fast! The talk by Paul Ray showed the many iterations to finally get it right (up to the choice of a specific blue for the links). 
And yes, I want an eye-tracking device to play around with! The Elephant in the Room When Nishant Kothary presented a long list of what his session was not about, I said to myself (not having the description text present), “Am I in the wrong talk? Should I leave?”. Boy, was I wrong. A great talk about human factors in the process of designing stuff. An Hour with Bill Buxton Having seen Bill Buxton's presentation in the keynote, I just had to see this man again – even though I didn't know what to expect. Being more or less unplanned and intended to be more of a conversation, the session didn't provide a wealth of immediately useful information. Nevertheless Bill Buxton was impressive with his huge knowledge of seemingly everything. But this could/should have been a session sometime in the evening and not in parallel to at least two other interesting talks. Day 3 (Wednesday) Design the Ordinary, Like the Fixie This session by DL Byron and Kevin Tamura started really well and brought across the message to keep things simple. But towards the end the talk lost some of its steam. And, as a member of the audience pointed out, they kind of ignored their own advice when they used fancy presentation software other than PowerPoint that sometimes got in the way of showing things. Developing Natural User Interfaces Speaking of alternative presentation software, Joshua Blake definitely had the most remarkable alternative to PowerPoint, a self-written program called NaturalShow that was controlled using multi-touch on a touch screen. Not a PowerPoint-killer, but impressive nevertheless. The (excellent) talk itself was kind of eye-opening in regard to what “multi-touch support” on various platforms (WPF, Silverlight, Windows Phone) actually means. Treat your Content Right The talk by Tiffani Jones Brown wasn't even on my planned schedule, but somehow I ended up in that session – and it was great. And even for people who don't necessarily have to write content for websites, some points made by Tiffani are valid in many places, notably wherever you put text with more than a single word into your UI. Creating Effective Info Viz in Microsoft Silverlight The last session of MIX10 I attended was kind of disappointing. At first things were very promising, with Matthias Shapiro giving a brief but well-structured introduction to info graphics and interactive visualizations. Then the live coding began, and while the result was interesting, too much time was spent wrestling to get the code working. Ending earlier than planned, the talk was a bit light on actual content, but at least it included a nice list of resources. Conclusion It could be felt all across MIX10: UIs will take a huge leap forward; in fact, there are enough examples that already have. People who have both the technical know-how and at least a basic understanding of design (“literacy” as Bill Buxton called it) are in high demand. The concept of the MIX conference and initiatives like design.toolbox show that Microsoft understands very well that frontend developers have to acquire new knowledge besides knowing how to hack code and put buttons on a form. There are extremely exciting times ahead of us, with lots of opportunity for those who are eager to develop their skills, that is for sure.

    Read the article

  • Oracle performance problem

    - by jreid42
    We are using an Oracle 11g machine that is very powerful and has redundant storage etc. It's a beast from what I have been told. We just got this DB for a tool that had maybe 20 people using it when I first came on as a co-op; now it's upwards of 150 people. I am the only one working on it :( We currently have a system in place that distributes Perl scripts across our entire data center, essentially giving us a sort of "grid" computing power. The Perl scripts run a sort of simulation and report back the results to the database. They do selects / inserts. The load is not very high for each script but it could be happening across 20-50 systems at the same time. We then have multiple data centers and users all hitting the same database with this same approach. Our main problem with this is that our database is getting overloaded with connections and having to drop some. We sometimes have upwards of 500 connections. These are old Perl scripts and they do not handle this well. Essentially they fail and the results are lost. I would rather avoid having to rewrite a lot of these as they are poorly written, and are a headache to even look at. The database itself is not overloaded, just the connection overhead is too high. We open a connection, make a quick query and then drop the connection. Very short connections, but many of them. The database team has basically said we need to lower the number of connections or they are going to ignore us. Because this is distributed across our farm we can't implement persistent connections. I do this with our web server, but it's on a fixed system. The other ones are Perl scripts that get opened and closed by the distribution tool and thus aren't always running. What would be my best approach to resolving this issue? The scripts themselves can wait for a connection to be open. They do not need to act immediately. Some sort of queuing system? It's been suggested that I set up a few instances of a tool called "SQL Relay". Maybe one in each data center. How reliable is this tool? How good is this approach? Would it work for what we need? We could have one for each data center and relay requests through it to our main database, keeping a pipeline of open persistent connections? Does this make sense? Are there any other suggestions you can make? Any ideas? Any help would be greatly appreciated. Sadly I am just a co-op student working for a very big company and somehow all of this has landed on my shoulders (there is literally nobody to ask for help; it's a hardware company, everybody is a hardware engineer, and the database team is useless and in India) and I am quite lost as to what the best approach would be. I am extremely overworked and this problem is interfering with ongoing progress and basically needs to be resolved as quickly as possible; preferably without rewriting the whole system, purchasing hardware (not gonna happen), or shooting myself in the foot. HELP LOL!
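Purely as an illustration of the queuing idea mentioned above: the real scripts are Perl and spread across many machines, so in practice the gate would have to live in a central relay or broker (which is the niche SQL Relay aims to fill), but the sketch below shows the shape of the approach in C#. Callers block until one of a small number of slots is free instead of opening a connection immediately and getting dropped. The class name and the limit of 20 are invented for the example.

    using System;
    using System.Threading;
    using System.Threading.Tasks;

    // Sketch only: a gate that caps the number of simultaneous database connections.
    // Excess callers queue up and wait instead of piling extra connections onto the server.
    class ConnectionGate
    {
        private readonly SemaphoreSlim _slots;

        public ConnectionGate(int maxConcurrentConnections)
        {
            _slots = new SemaphoreSlim(maxConcurrentConnections, maxConcurrentConnections);
        }

        public void RunQuery(Action useConnection)
        {
            _slots.Wait();           // queue here if all slots are busy
            try
            {
                useConnection();     // open the connection, run the short query, close it
            }
            finally
            {
                _slots.Release();    // hand the slot to the next waiter
            }
        }
    }

    class Demo
    {
        static void Main()
        {
            var gate = new ConnectionGate(20);

            // Simulate 200 short-lived "scripts" all wanting the database at once;
            // no more than 20 are ever inside RunQuery at the same time.
            Parallel.For(0, 200, i => gate.RunQuery(() => Thread.Sleep(50)));

            Console.WriteLine("Done without ever exceeding 20 concurrent connections.");
        }
    }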

    Read the article

  • Craftsmanship is ALL that Matters

    - by Wayne Molina
    Today, I'm going to talk about a touchy subject: the notion of working in a company that doesn't use the prescribed "best practices" in its software development endeavours. Over the years I have, using a variety of pseudonyms, asked this question on popular programming forums. Although I always add in some minor variation of the story to avoid suspicion that it's the same person posting, the crux of the tale remains the same: A Programmer's Tale A junior software developer has just started a new job at an average company, creating average line-of-business applications for internal use (the most typical scenario programmers find themselves in). This hypothetical newbie has spent a lot of time reading up on the "theory" of software development, devouring books, blogs and screencasts from well-known and respected software developers in the community in order to broaden his knowledge and "do what the pros do". He begins his new job, eager to apply what he's learned on a real-world project, only to discover that his new teammates don't use any of those concepts and techniques. They hack their way through development, or in a best-case scenario use some homebrew, thrown-together semblance of a framework for their applications that follows not one of the best practices suggested by the "elite" in the software community - things like TDD (TDD as a "best practice" is the only subjective part of this post, but it's included here due to a very large following of respected developers who consider it one), the SOLID principles, well-known and venerable tools, even version control in a worst-case and truly nightmarish scenario. Our protagonist is frustrated that he isn't doing things the "proper" way - a way he's spent personal time digesting and learning about and, more importantly, a way that some of the top developers in the industry advocate - and turns to a forum to ask the advice of his peers. Invariably the answer I, in the guise of the concerned newbie, will receive is that A) I don't know anything and should just shut my mouth and sling code the bad way like everybody else on the team, and B) these "best practices" are a fad or a joke, and the only thing that matters is shipping software to your customers. I am here today to say that anyone who says this, or anything like it, is not only full of crap but indicative of exactly the type of "developer" that has helped to give our industry a bad name. Here is why: One Who Knows Nothing, Understands Nothing On one hand, you have the cognoscenti of the .NET development world. Guys like James Avery, Jeremy Miller, Ayende Rahien and Rob Conery: all well-respected and noted programmers that are pretty much our version of celebrities. These guys write blogs, books, and post videos outlining the "correct" way of writing software to make sure it not only works but is maintainable and extensible and a joy to work with. They tout the virtues of the SOLID principles, or of using TDD/BDD, or of using a mature ORM like NHibernate, Subsonic or even Entity Framework. On the other hand, you have Joe Everyman, Lead Software Developer at Initrode Corporation - in our hypothetical story Joe is the junior developer's new boss. Joe's been with Initrode for 10 years, starting as the company's very first programmer and over the years building up a little fiefdom of his own until at present he's in charge of all Initrode's software development. Joe writes code the same way he always has, without bothering to learn much, if anything.
He looked at NHibernate once and found it was "too hard", so he uses a primitive implementation of the TableDataGateway pattern as a wrapper around SqlClient.SqlConnection and SqlClient.SqlCommand instead of an actual ORM (or, in a better case scenario, has created his own ORM); the thought of using LINQ or Entity Framework or really anything other than his own hastily homebrew solution has never occurred to him.  He doesn't understand TDD and considers “testing” to be using the .NET debugger to step through code, or simply loading up an app and entering some values to see if it works.  He doesn't really understand SOLID, and he doesn't care to.  He's worked as a programmer for years, and that's all that counts.  Right?  WRONG. Who would you rather trust?  Someone with years of experience and who writes books, creates well-known software and is akin to a celebrity, or someone with no credibility outside their own minute environment who throws around their clout and company seniority as the "proof" of their ability?  Joe Everyman may have years of experience at Initrode as a programmer, and says to do things "his way" but someone like Jeremy Miller or Ayende Rahien have years of experience at companies just like Initrode, THEY know ten times more than Joe Everyman knows or could ever hope to know, and THEY say to do things "this way". Here's another way of thinking about it: If you wanted to get into politics and needed advice on the best way to do it, would you rather listen to the mayor of Hicktown, USA or Barack Obama?  One is a small-time nobody while the other is very well-known and, as such, would probably have much more accurate and beneficial advice. NOTE: The selection of Barack Obama as an example in no way, shape, or form suggests a political affiliation or political bent to this post or blog, and no political innuendo should be mistakenly read from it; the intent was merely to compare a small-time persona with a well-known persona in a non-software field.  Feel free to replace the name "Barack Obama" with any well-known Congressman, Senator or US President of your choice. DIY Considered Harmful I will say right now that the homebrew development environment is the WORST one for an aspiring programmer, because it relies on nothing outside it's own little box - no useful skill outside of the small pond.  If you are forced to use some half-baked, homebrew ORM created by your Director of Software, you are not learning anything valuable you can take with you in the future; now, if you plan to stay at Initrode for 10 years like Joe Everyman, this is fine and dandy.  However if, like most of us, you want to advance your career outside a very narrow space you will do more harm than good by sticking it out in an environment where you, to be frank, know better than everybody else because you are aware of alternative and, in almost most cases, better tools for the job.  A junior developer who understands why the SOLID principles are good to follow, or why TDD is beneficial, or who knows that it's better to use NHibernate/Subsonic/EF/LINQ/well-known ORM versus some in-house one knows better than a senior developer with 20 years experience who doesn't understand any of that, plain and simple.  Anyone who disagrees is either a liar, or someone who, just like Joe Everyman, Lead Developer, relies on seniority and tenure rather than adapting their knowledge as things evolve. 
In many cases, the Joe Everymans of the world act this way out of fear - they cannot possibly fathom that a “junior” could know more than them; after all, they’ve spent 10 or more years in the same company, doing the same job, cranking out the same shoddy software.  And here comes a newbie who hasn’t spent 10+ years doing the same things, with a fresh and often radical take on the craft, and Joe Everyman is afraid he might have to put some real effort into his career again instead of just pointing to his 10 years of service at Initrode as “proof” that he’s good, or that he might have to learn something new to improve; in most cases the problem is Joe Everyman, and by extension Initrode itself, has a mentality of just being “good enough”, and mediocrity is the rule of the day. A Thorn Bush is No Place for a Phoenix My advice is that if you work on a team where they don't use the best practices that some of the most famous developers in our field say is the "right" way to do things (and have legions of people who agree), and YOU are aware of these practices and can see why they work, then LEAVE the company.  Find a company where they DO care about quality, and craftsmanship, otherwise you will never be happy.  There is no point in "dumbing" yourself down to the level of your co-workers and slinging code without care to craftsmanship.  In 95% of these situations there will be no point in bringing it to the attention of Joe Everyman because he won't listen; he might even get upset that someone is trying to "upstage" him and fire the newbie, and replace someone with loads of untapped potential with a drone that will just nod affirmatively and grind out the tasks assigned without question. Find a company that has people smart enough to listen to the "best and brightest", and be happy.  Do not, I repeat, DO NOT waste away in a job working for ignorant people.  At the end of the day software development IS a craft, and a level of craftsmanship is REQUIRED for any serious professional.  When you have knowledgeable people with the credibility to back it up saying one thing, and small-time people who are, to put it bluntly, nobodies in the field saying and doing something totally different because they can't comprehend it, leave the nobodies to their own devices to fade into obscurity.  Work for a company that uses REAL software engineering techniques and really cares about craftsmanship.  The biggest issue affecting our career, and the reason software development has never been the respected, white-collar career it was meant to be, is because hacks and charlatans can pass themselves off as professional programmers without following a lick of good advice from programmers much better at the craft than they are.  These modern day snake-oil salesmen entrench themselves in companies by hoodwinking non-technical businesspeople and customers with their shoddy wares, end up in senior/lead/executive positions, and push their lack of knowledge on everybody unfortunate enough to work with/for/under them, crushing any dissent or voices of reason and change under their tyrannical heel and leaving behind a trail of dismayed and, often, unemployed junior developers who were made examples of to keep up the facade and avoid the shadow of doubt being cast upon them. To sum this up another way: If you surround yourself with learned people, you will learn.  Surround yourself with ignorant people who can't, as the saying goes, see the forest through the trees, and you'll learn nothing of any real value.  
There is more to software development than just writing code, and the end goal should not be just "shipping software", it should be shipping software that is extensible, maintainable, and above all else software whose creation has broadened your knowledge in some capacity, even if a minor one.  An eager newbie who knows theory and thirsts for knowledge can easily be moulded and taught the advanced topics, but the same can't be said of someone who only cares about the finish line.  This industry needs more people espousing the benefits of software craftsmanship and proper software engineering techniques, and less Joe Everymans who are unwilling to adapt or foster new ways of thinking. Conclusion - I Cast “Protection from Fire” I am fairly certain this post will spark some controversy and might even invite the flames.  Please keep in mind these are opinions and nothing more.  A little healthy rant and subsequent flamewar can be good for the soul once in a while.  To paraphrase The Godfather: It helps to get rid of the bad blood.

    Read the article

  • How to get IMediaControl.Run() to start a file playing with no delay

    - by MusiGenesis
    I am attempting to use DirectShow to play two AVI files consecutively (one after the other) so that there is no interruption in the audio or video when the player transitions from one file to the next. I have two custom controls on my form. Each one is pre-loaded with an AVI file, and before playback begins I set up all the DirectShow interfaces, set the video windows and resize them, call IMediaControl.Run(), then IMediaControl.Pause(), then IMediaSeeking.SetPositions to reset to frame 0, on both controls. On the form, you can see that both files are paused at their initial frames. I then call IMediaControl.Run() on the first control, and wait for it to complete before calling Run() on the second control. Initially, I hooked into the first video's EC_COMPLETE notification message, and used this to start the second. Thinking that this event might be slow to arrive (turns out it is, but for a weird reason), I tried two other approaches: Check the first video's current position inside a timer that goes off every second or so (using IMediaPosition.get_CurrentPosition). When the current position is within a second of the video's stop time (known in advance from IMediaPosition.get_StopTime), I go into a tight while loop and wait for the current position to equal the stop time, and then call Run() on the second video. Same as the first, except I replace the while loop with a call to timeSetEvent from winmm.dll, with a delay set so that it fires right when the first file is supposed to end. I use the callback to Run() the second file. Either of these two methods substantially cuts down the delay between the end of the first file and the beginning of the second, indicating that the EC_COMPLETE message doesn't arrive immediately after the file is complete (I also tried hooking the EC_SEGMENT_COMPLETE message, which is supposed to be used for looping within a file, but apparently nobody supports this - it never occurs on my machine, at least). Doing all of the above has cut the transition delay from as much as a second, down to a barely perceptible glitch; about a third of the time the files transition with no interruption at all, which suggests there's no fundamental reason I can't get this to work all the time. The slight delay is still unacceptable, unfortunately. I assume (and I could easily be wrong) that the remaining delay is due to a slight variable delay between the call to IMediaControl.Run() and when the video actually starts playing. Does anybody know anything I can do to eliminate this little lag? It would also help to be told this is fundamentally impossible for whatever reason, which wouldn't surprise me. I've never encountered a video player in Windows that doesn't have this problem, so it may not be doable. More info: the AVI files I'm playing are completely uncompressed (video and audio are uncompressed), so I don't think the lag is due to DirectShow's having to uncompress the video ahead of play start, although it may still buffer ahead as matter of course (and this may be the source of the problem). I would have though that starting play, pausing and then rewinding to the beginning would fix this. Also, the way I'm handling the transition is to actually have the second control underneath the first; when the first completes playing, I start the second and then call BringToFront on it, creating the appearance of a single video transitioning between the two originals. 
I don't think the glitch is due to this, because it works perfectly some of the time, and even if this were problematic, it wouldn't explain the matching audio glitch. Even more: I just tried starting the second video 30-50 milliseconds "early" and that seemed to eliminate even more of the gap, so I'm guessing that the lag in Run() is about that long. It appears to be variable, though, so this is still not where I need it to be.
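For what it is worth, here is a minimal sketch of the cue-then-run sequence described in this question, written against the open-source DirectShowLib wrappers. Graph building, the video window and the seek back to frame 0 are omitted, and the helper names are invented; treat it as a sketch of the approach rather than a tested answer. The interesting part is the blocking GetState call after Pause, which waits until the graph reports that cueing has finished before the clip is considered ready to Run.

    using System;
    using DirectShowLib;   // open-source DirectShow interop; graph building not shown

    // Sketch only: assumes 'control' comes from an already-built, already-rendered
    // filter graph for the next clip. The aim is to cue that clip fully while the
    // previous one is still playing, so the later Run() has almost nothing left to do.
    static class PrerollSketch
    {
        // Cue the graph: Pause() starts the transition and GetState() blocks (up to
        // the timeout) until the graph reports that cueing has completed.
        public static void Preroll(IMediaControl control)
        {
            int hr = control.Pause();
            DsError.ThrowExceptionForHR(hr);

            FilterState state;
            hr = control.GetState(5000, out state);   // wait up to 5 s for State_Paused
            DsError.ThrowExceptionForHR(hr);
            // hr of 0 means the transition finished; a positive "intermediate" success
            // code would mean the graph was still cueing when the timeout expired.
        }

        // When the first clip is about to end, this should be all that is left to do.
        public static void StartCued(IMediaControl control)
        {
            int hr = control.Run();
            DsError.ThrowExceptionForHR(hr);
        }
    }

If a variable gap remains even after cueing, the usual next step is the lower-level IMediaFilter route: give both graphs the same reference clock and start the second one with an explicit start time, so it is scheduled against the clock rather than started "as soon as possible". That is beyond this sketch and untested here.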

    Read the article

  • The Return Of __FILE__ And __LINE__ In .NET 4.5

    - by Alois Kraus
    Good things are hard to kill. Two of the most useful predefined compiler macros in C/C++ are __FILE__ and __LINE__, which expand to the compilation unit's file name and the line number where the value is encountered by the compiler. After 4.5 versions of .NET we are on par with C/C++ again. It is of course not a simple compiler-expandable macro, it is an attribute, but it does serve exactly the same purpose. Now we do get:

CallerLineNumberAttribute == __LINE__
CallerFilePathAttribute == __FILE__
CallerMemberNameAttribute == __FUNCTION__ (MSVC extension)

The most important one is CallerMemberNameAttribute, which is very useful to implement the INotifyPropertyChanged interface without the need to hard-code the name of the property anymore. Now you can simply decorate your change method with the new CallerMemberName attribute and you get the property name as a string directly inserted by the C# compiler at compile time.

public string UserName
{
    get { return _userName; }
    set
    {
        _userName = value;
        RaisePropertyChanged();   // no more RaisePropertyChanged("UserName")!
    }
}

protected void RaisePropertyChanged([CallerMemberName] string member = "")
{
    var copy = PropertyChanged;
    if (copy != null)
    {
        copy(this, new PropertyChangedEventArgs(member));
    }
}

Nice and handy. This was obviously the prime reason to implement this feature in the C# 5.0 compiler. You can repurpose this feature for tracing, to get your hands on the method name of your caller, along with other stuff, very fast now. All of this information is added at compile time, which is much faster than other approaches like walking the stack. MSDN shows the usage of these attributes with an example:

public static void TraceMessage(string message,
                                [CallerMemberName] string memberName = "",
                                [CallerFilePath] string sourceFilePath = "",
                                [CallerLineNumber] int sourceLineNumber = 0)
{
    Console.WriteLine("Hi {0} {1} {2}({3})", message, memberName, sourceFilePath, sourceLineNumber);
}

When I do think of tracing I do usually want to have an API which allows me to trace method enter and leave, and to trace messages with a severity like Info, Warning, Error. When I do print a trace message it is very useful to print out the method and type name as well. So your API must either be able to pass the method and type name as strings, or extract them automatically by walking back one StackFrame and fetching the info from there. The first glaring deficiency is that there is no CallerTypeAttribute yet, because the C# compiler team was not satisfied with its performance.
A usable Trace API might therefore look like this:

enum TraceTypes
{
    None = 0,
    EnterLeave = 1 << 0,
    Info = 1 << 1,
    Warn = 1 << 2,
    Error = 1 << 3
}

class Tracer : IDisposable
{
    string Type;
    string Method;

    public Tracer(string type, string method)
    {
        Type = type;
        Method = method;
        if (IsEnabled(TraceTypes.EnterLeave, Type, Method))
        {
        }
    }

    private bool IsEnabled(TraceTypes traceTypes, string Type, string Method)
    {
        // Do checking here if tracing is enabled
        return false;
    }

    public void Info(string fmt, params object[] args) { }
    public void Warn(string fmt, params object[] args) { }
    public void Error(string fmt, params object[] args) { }

    public static void Info(string type, string method, string fmt, params object[] args) { }
    public static void Warn(string type, string method, string fmt, params object[] args) { }
    public static void Error(string type, string method, string fmt, params object[] args) { }

    public void Dispose()
    {
        // trace method leave
    }
}

This minimal trace API is very fast but hard to maintain, since you need to pass in the type and method name as hard-coded strings which can change from time to time. But now we have at least CallerMemberName to get rid of the explicit method parameter, right? Not really. Since any acceptably usable trace API should have a method signature like Tracexxx(… string fmt, params object[] args), we are not able to add additional optional parameters after the args array. If we were to put them before the format string we would need to make them optional as well, which would mean the compiler would need to figure out what our trace message and arguments are (not likely), or we would need to specify everything explicitly just like before. There are ways around this by providing a myriad of overloads which in the end are routed to the very same method, but that is ugly. I am not sure if nobody inside MS agrees that the above API is reasonable to have, or if (more likely) the whole talk about how you can use this feature for diagnostic purposes describes not a core feature at all but a simple byproduct of making the life of INotifyPropertyChanged implementers easier. A way around this would be to allow, after the params argument array, another set of optional arguments which are always filled by the compiler, but I do not know if this is an easy one. The thing I am missing much more is the CallerType attribute that was not provided. But not in the way you would think of. In the API above I did add some filtering based on method and type to stay as fast as possible for types where tracing is not enabled at all. It should be no more expensive than an additional method call and a bool variable check if tracing for this type is enabled at all. The data is tightly bound to the calling type and method and should therefore become part of the static type instance. Since extending the CLR type system for tracing is not something I do expect to happen, I have come up with an alternative approach which allows me basically to attach run time data to any existing type object in a super fast way. The key to success is the usage of generics.

class Tracer<T> : IDisposable
{
    string Method;

    public Tracer(string method)
    {
        Method = method;
        if (TraceData<T>.Instance.Enabled.HasFlag(TraceTypes.EnterLeave))
        {
        }
    }

    public void Dispose()
    {
        if (TraceData<T>.Instance.Enabled.HasFlag(TraceTypes.EnterLeave))
        {
        }
    }

    public static void Info(string fmt, params object[] args) { }

    /// <summary>
    /// Every type gets its own instance with a fresh set of variables to describe the
    /// current filter status.
    /// </summary>
    /// <typeparam name="UsingType"></typeparam>
    internal class TraceData<UsingType>
    {
        internal static TraceData<UsingType> Instance = new TraceData<UsingType>();

        public bool IsInitialized = false;            // flag if we need to reinit the trace data in case of reconfigured trace settings at runtime

        public TraceTypes Enabled = TraceTypes.None;  // enabled trace levels for this type
    }
}

We do not need to pass the type as a string or a Type object to the trace API. Instead we define a generic API that accepts the using type as a generic parameter. Then we can create a TraceData static instance which, due to the nature of generics, is a fresh instance for every new type parameter. My tests on my home machine have shown that this approach is as fast as a simple bool flag check. If you have an application with many types using tracing, you do not want to bring the app down by simply enabling tracing for one special, rarely used type. The trace filter performance for the types which are not enabled must therefore be the fastest code path. This approach has the nice side effect that if you store the TraceData instances in one global list you can reconfigure tracing at runtime safely by simply setting the IsInitialized flag to false. A similar effect can be achieved with a global static Dictionary<Type,TraceData> object, but big hash tables have random memory access semantics which is bad for cache locality, and you always need to pay for the lookup, which involves hash code generation, an equality check and an indexed array access. The generic version is wicked fast and allows you to add more features to your tracing API with minimal perf overhead. But it is cumbersome to write the generic type argument always explicitly, and worse, if you do refactor code and move parts of it to other classes it might be that you cannot configure tracing correctly. I would therefore like to decorate my type with an attribute

[CallerType]
class Tracer<T> : IDisposable

to tell the compiler to fill in the generic type argument automatically.

class Program
{
    static void Main(string[] args)
    {
        using (var t = new Tracer()) // equivalent to new Tracer<Program>()
        {
        }
    }
}

That would be really useful and super fast, since you do not need to pass any type object around but you do have full type info at hand. This change would be breaking if another non-generic type exists in the same namespace where now the generic counterpart would be preferred. But this is an acceptable risk in my opinion, since you can today already get conflicts if two generic types of the same name are defined in different namespaces. This would be only a variation of this issue. When you do think about this further you can add more features, like tracing the exception in your Dispose method if the method is left with an exception, with that little trick I did write about some time ago. You can think of tracing as a super fast and configurable switch to write data to an output destination or to execute alternative actions. With such an infrastructure you can, for example:
- Reconfigure tracing at run time.
- Take a memory dump when a specific method is left with a specific exception.
- Throw an exception when a specific trace statement is hit (useful for testing error conditions).
- Execute a passed delegate which e.g. dumps additional state when enabled.
- Write data to an in-memory ring buffer and dump it when specific events do occur (e.g. a method is left with an exception, triggered from outside).
- Write data to an output device.
- …
This stuff is really useful to have when your code is in production on a mission-critical server and you need to find the root cause of sporadic crashes of your application. It could be a buggy graphics card driver which throws access violations into your application (OK, with .NET 4 not anymore, unless you enable a compatibility flag), where you would like to have a minidump, or you may have reached, after two weeks of operation, a state where you need a full memory dump at a specific point in time in the middle of a transaction. On my older machine I get with this super fast approach 50 million traces/s when tracing is disabled. When I know that tracing is enabled for this type I can walk the stack by using StackFrameHelper.GetStackFramesInternal to check further whether a specific action or output device is configured for this method, which is about 2-3 times faster than the regular StackTrace class. Even with one String.Format I am down to 3 million traces/s, so performance is not so important anymore since at that point I actually want to do something. The CallerMemberName feature of the C# 5 compiler is nice, but I would have preferred to get direct access to the MethodHandle and not to the stringified version of it. And I really would like to see a CallerType attribute implemented to fill in the generic type argument of the call site, to augment the static CLR type data with run-time data.
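To make the global-list idea from above concrete, here is a minimal sketch of how the per-type TraceData instances could register themselves so that a configuration change can invalidate all of them in one go. The TraceRegistry class and the ITraceReset interface are invented names for illustration only; TraceTypes is the enum defined at the start of this article.

    using System.Collections.Generic;

    interface ITraceReset { void Invalidate(); }

    static class TraceRegistry
    {
        static readonly List<ITraceReset> Instances = new List<ITraceReset>();
        static readonly object Sync = new object();

        public static void Register(ITraceReset data)
        {
            lock (Sync) Instances.Add(data);
        }

        // Call after the trace configuration has been changed at run time.
        public static void InvalidateAll()
        {
            lock (Sync) Instances.ForEach(d => d.Invalidate());
        }
    }

    internal class TraceData<UsingType> : ITraceReset
    {
        internal static readonly TraceData<UsingType> Instance = new TraceData<UsingType>();

        public bool IsInitialized;                    // false forces a re-read of the configuration
        public TraceTypes Enabled = TraceTypes.None;  // enabled trace levels for this type

        private TraceData() { TraceRegistry.Register(this); }

        public void Invalidate() { IsInitialized = false; }
    }

With that in place the disabled-type fast path is still just a static field read and a flag check, and a configuration reload costs one pass over the registered instances.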

    Read the article

  • NHibernate Mapping and Querying Where Tables are Related But No Foreign Key Constraint

    - by IanT8
    I'm fairly new to NHibernate, and I need to ask a couple of questions relating to a very frequent scenario. The following simplified example illustrates the problem. I have two tables named Equipment and Users. Users is a set of system administrators. Equipment is a set of machinery. Tables: Users table has UserId int and LoginName nvarchar(64). Equipment table has EquipId int, EquipType nvarchar(64), UpdatedBy int. Behavior: System administrators can make changes to Equipment, and when they do, the UpdatedBy field of Equipment is "normally" set to their User Id. Users can be deleted at any time. New Equipment items have an UpdatedBy value of null. There's no foreign key constraint on Equipment.UpdatedBy, which means: Equipment.UpdatedBy can be null. Equipment.UpdatedBy value can be = existing User.UserId value Equipment.UpdatedBy value can be = non-existent User.UserId value To find Equipment and who last updated the Equipment, I might query like this: select E.EquipId, E.EquipName, U.UserId, U.LoginName from Equipment E left outer join Users U on E.UpdatedBy = U.UserId Simple enough. So how to do that in NHibernate? My mappings might be as follows: <?xml version="1.0" encoding="utf-8"?> <hibernate-mapping xmlns="urn:nhibernate-mapping-2.2" namespace="Data" assembly="Data"> <class name="User" table="Users"> <id name="Id" column="UserId" unsaved-value="0"> <generator class="native" /> </id> <property name="LoginName" unique="true" not-null="true" /> </class> <class name="Equipment" table="Equipment"> <id name="Id" column="EquipId" type="int" unsaved-value="0"> <generator class="native" /> </id> <property name="EquipType" /> <many-to-one name="UpdatedBy" class="User" column="UpdatedBy" /> </class> </hibernate-mapping> So how do I get all items of equipment and who updated them? using (ISession session = sessionManager.OpenSession()) { List<Data.Equipment> equipList = session .CreateCriteria<Data.Equipment>() // Do I need to SetFetchMode or specify that I // want to join onto User here? If so how? .List<Data.Equipment>(); foreach (Data.Equipment item in equipList) { Debug.WriteLine("\nEquip Id: " + item.Id); Debug.WriteLine("Equip Type: " + item.EquipType); if (item.UpdatedBy.Country != null) Debug.WriteLine("Updated By: " + item.UpdatedBy.LoginName); else Debug.WriteLine("Updated by: Nobody"); } } When Equipment.UpdatedBy = 3 and there is no Users.UserId = 3, the above fails. I also have a feeling that the generated SQL is a select all from Equipment followed by many select columns from Users where UserId = n, whereas I'd expected NHibernate to left join as per my plain ordinary SQL and do one hit. If I can tell NHibernate to do the query in one hit, how do I do that? Time is of the essence on my project, so any help you could provide is gratefully received. If you're speculating about how NHibernate might work in this scenario, please say you're not absolutely sure. Many thanks.
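    A frequently suggested combination for exactly this situation — offered here as a sketch rather than a verified answer for this schema — is to mark the association as tolerant of dangling UpdatedBy values and to ask the criteria query to join-fetch it, so everything comes back in one SELECT with a left outer join. In the Equipment mapping the many-to-one gains a not-found attribute: <many-to-one name="UpdatedBy" class="User" column="UpdatedBy" not-found="ignore" />. The query side could then look roughly like this, reusing the classes from the question:

    using (ISession session = sessionManager.OpenSession())
    {
        IList<Data.Equipment> equipList = session
            .CreateCriteria<Data.Equipment>()
            .SetFetchMode("UpdatedBy", FetchMode.Join)   // fetch the User rows in the same SELECT
            .List<Data.Equipment>();

        foreach (Data.Equipment item in equipList)
        {
            // With not-found="ignore", UpdatedBy is null both for brand new equipment
            // and for UpdatedBy values that no longer match an existing user.
            Debug.WriteLine("Updated by: " + (item.UpdatedBy != null ? item.UpdatedBy.LoginName : "Nobody"));
        }
    }

    One caveat: not-found="ignore" is reported to interfere with lazy proxying for that association, so it is worth checking the generated SQL with show_sql before relying on this.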

    Read the article

  • Rx IObservable buffering to smooth out bursts of events

    - by Dan
    I have an Observable sequence that produces events in rapid bursts (i.e. five events one right after another, then a long delay, then another quick burst of events, etc.). I want to smooth out these bursts by inserting a short delay between events. Imagine the following diagram as an example: Raw: --oooo--------------ooooo-----oo----------------ooo| Buffered: --o--o--o--o--------o--o--o--o--o--o--o---------o--o--o| My current approach is to generate a metronome-like timer via Observable.Interval() that signals when it's ok to pull another event from the raw stream. The problem is that I can't figure out how to then combine that timer with my raw unbuffered observable sequence. IObservable.Zip() is close to doing what I want, but it only works so long as the raw stream is producing events faster than the timer. As soon as there is a significant lull in the raw stream, the timer builds up a series of unwanted events that then immediately pair up with the next burst of events from the raw stream. Ideally, I want an IObservable extension method with the following function signature that produces the behavior I've outlined above. Now, come to my rescue StackOverflow :) public static IObservable<T> Buffered(this IObservable<T> src, TimeSpan minDelay) PS. I'm brand new to Rx, so my apologies if this is a trivially simple question... 1. Simple yet flawed approach Here's my initial naive and simplistic solution that has quite a few problems: public static IObservable<T> Buffered<T>(this IObservable<T> source, TimeSpan minDelay) { Queue<T> q = new Queue<T>(); source.Subscribe(x => q.Enqueue(x)); return Observable.Interval(minDelay).Where(_ => q.Count > 0).Select(_ => q.Dequeue()); } The first obvious problem with this is that the IDisposable returned by the inner subscription to the raw source is lost and therefore the subscription can't be terminated. Calling Dispose on the IDisposable returned by this method kills the timer, but not the underlying raw event feed that is now needlessly filling the queue with nobody left to pull events from the queue. The second problem is that there's no way for exceptions or end-of-stream notifications to be propagated through from the raw event stream to the buffered stream - they are simply ignored when subscribing to the raw source. And last but not least, now I've got code that wakes up periodically regardless of whether there is actually any work to do, which I'd prefer to avoid in this wonderful new reactive world. 2. Way overly complex approach To solve the problems encountered in my initial simplistic approach, I wrote a much more complicated function that behaves much like IObservable.Delay() (I used .NET Reflector to read that code and used it as the basis of my function). Unfortunately, a lot of the boilerplate logic such as AnonymousObservable is not publicly accessible outside the System.Reactive code, so I had to copy and paste a lot of code. This solution appears to work, but given its complexity, I'm less confident that it's bug free. I just can't believe that there isn't a way to accomplish this using some combination of the standard Reactive Extensions. I hate feeling like I'm needlessly reinventing the wheel, and the pattern I'm trying to build seems like a fairly standard one.
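    For comparison, one way to get that behaviour purely out of stock Rx operators — a sketch, not benchmarked against the hand-rolled Delay-style version — is to turn every element into a tiny inner observable that yields the value immediately and then stays silent for minDelay, and to Concat those inner sequences. It matches the signature asked for above:

    using System;
    using System.Reactive.Linq;   // namespace of the Observable operators in current Rx; older builds differ

    public static class BufferedObservable
    {
        // Emits every source element, but guarantees at least minDelay between consecutive elements.
        public static IObservable<T> Buffered<T>(this IObservable<T> source, TimeSpan minDelay)
        {
            return source
                .Select(x => Observable.Empty<T>()
                                       .Delay(minDelay)   // silent cool-down after each value
                                       .StartWith(x))     // the value itself goes out immediately
                .Concat();                                 // next value only after the previous cool-down
        }
    }

    Because this is only a composition of existing operators, disposing the subscription tears down the whole inner chain and OnError/OnCompleted flow through as usual — which addresses the two problems listed for the queue-plus-timer version — and nothing wakes up while the source is idle.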

    Read the article

  • How Mature is Your Database Change Management Process?

    - by Ben Rees
    How do you get your database schema changes live, on to your production system? As your team of developers and DBAs are working on the changes to the database to support your business-critical applications, how do these updates wend their way through from dev environments, possibly to QA, hopefully through pre-production and eventually to production in a controlled, reliable and repeatable way? In this article, I describe a model we use to try and understand the different stages that customers go through as their database change management processes mature, from the very basic and manual, through to advanced continuous delivery practices. I also provide a simple chart that will help you determine “How mature is our database change management process?” This process of managing changes to the database – which all of us who have worked in application/database development have had to deal with in one form or another – is sometimes known as Database Change Management (even if we’ve never used the term ourselves). And it’s a difficult process, often painfully so. Some developers take the approach of “I’ve no idea how my changes get live – I just write the stored procedures and add columns to the tables. It’s someone else’s problem to get this stuff live. I think we’ve got a DBA somewhere who deals with it – I don’t know, I’ve never met him/her”. I know I used to work that way. I worked that way because I assumed that making the updates to production was a trivial task – how hard can it be? Pause the application for half an hour in the middle of the night, copy over the changes to the app and the database, and switch it back on again? Voila! But somehow it never seemed that easy. And it certainly was never that easy for database changes. Why? Because you can’t just overwrite the old database with the new version. Databases have a state – more specifically 4Tb of critical data built up over the last 12 years of running your business, and if your quick hotfix happened to accidentally delete that 4Tb of data, then you’re “Looking for a new role” pretty quickly after the failed release. There are a lot of other reasons why a managed database change management process is important for organisations, besides job security, not least: Frequency of releases. Many business managers are feeling the pressure to get functionality out to their users sooner, quicker and more reliably. The new book (which I highly recommend) Lean Enterprise by Jez Humble, Barry O’Reilly and Joanne Molesky provides a great discussion on how many enterprises are having to move towards a leaner, more frequent release cycle to maintain their competitive advantage. It’s no longer acceptable to release once per year, leaving your customers waiting all year for changes they desperately need (and expect) Auditing and compliance. SOX, HIPAA and other compliance frameworks have demanded that companies implement proper processes for managing changes to their databases, whether managing schema changes, making sure that the data itself is being looked after correctly or other mechanisms that provide an audit trail of changes. 
We’ve found, at Red Gate, that we have a very wide range of customers using every possible form of database change management imaginable. Everything from “Nothing – I just fix the schema on production from my laptop when things go wrong, and write it down in my notebook” to “A full Continuous Delivery process – any change made by a dev gets checked in and recorded, fully tested (including performance tests) before a (tested) release is made available to our Release Management system, ready for live deployment!”. And everything in between of course. Because of the vast number of customers using so many different approaches we found ourselves struggling to keep on top of what everyone was doing – struggling to identify patterns in customers’ behavior. This is useful for us, because we want to try and fit the products we have to different needs – different products are relevant to different customers and we waste everyone’s time (most notably, our customers’) if we’re suggesting products that aren’t appropriate for them. If someone visited a sports store, looking to embark on a new fitness program, and the store assistant suggested the latest $10,000 multi-gym, complete with multiple weights mechanisms, dumb-bells, pull-up bars and so on, then he’s likely to lose that customer. All he needed was a pair of running shoes! To solve this issue – in an attempt to simplify how we understand our customers and our offerings – we built a model. This is an attempt to classify our customers into some sort of model or “Customer Maturity Framework”, as we rather grandly term it, which somehow simplifies our understanding of what our customers are doing. The great statistician, George Box (amongst other things, the “Box” in the Box-Jenkins time series model) gave us the famous quote: “Essentially all models are wrong, but some are useful” We’ve taken this quote to heart – we know it’s a gross over-simplification of the real world of how users work with complex legacy and new database developments. Almost nobody precisely fits into one of our categories. But we hope it’s useful and interesting. There are actually a number of similar models that exist for more general application delivery. We’ve found these from ThoughtWorks/Forrester, from InfoQ and others, and initially we tried just taking these models and replacing the word “application” with “database”. However, we hit a problem. From talking to our customers we know that users are nowhere near as far down the road of mature database change management as they are for application development. As a simple example, no application developer who wants to keep his/her job would develop an application for an organisation without source controlling that code. Sure, he/she might not be using an advanced Gitflow branching methodology but they’ll certainly be making sure their code gets managed in a repo somewhere with all the benefits of history, auditing and so on. But this certainly isn’t the case (yet) for the database – a very large segment of the people we speak to have no source control set up for their databases whatsoever, even at the most basic level (for example, keeping change scripts in a source control system somewhere). By the way, if this is you, Red Gate has a great whitepaper here, on the barriers people face getting a source control process implemented at their organisations. 
This difference in maturity is the same as you move into areas such as continuous integration (common amongst app developers, relatively rare for database developers) and automated release management (growing amongst app developers, very rare for the database). So, when we created the model we started from scratch and biased the levels of maturity towards what we actually see amongst our customers. But, what are these stages? And what level are you? The table below describes our definitions for four levels of maturity – Baseline, Beginner, Intermediate and Advanced. As I say, this is a model – you won’t fit any of these categories perfectly, but hopefully one will ring true more than others. We’ve also created a PDF with a flow chart to help you find which of these groups most closely matches your team: Download the Database Delivery Maturity Framework PDF here
Level D1 – Baseline: work directly on live databases; sometimes work directly in production; generate manual scripts for releases, sometimes using a product like SQL Compare or similar to do this; any tests that we might have are run manually.
Level D2 – Beginner: have some ad-hoc DB version control, such as manually adding upgrade scripts to a version control system; an attempt is made to keep production in sync with development environments; there is some documentation and planning of manual deployments; some basic automated DB testing is in place.
Level D3 – Intermediate: the database is fully version-controlled with a product like Red Gate SQL Source Control or SSDT; database environments are managed; the production environment schema is reproducible from the source control system; there are some automated tests; migration scripts have been looked at for difficult database refactoring cases.
Level D4 – Advanced: continuous integration is used for database changes; build, testing and deployment of DB changes are carried out through a proper database release process; tests are fully automated; the production system is monitored for fast feedback to developers.
Does this model reflect your team at all? Where are you on this journey? We’d be very interested in knowing how you get on. We’re doing a lot of work at the moment, at Red Gate, trying to help people progress through these stages. For example, if you’re currently not source controlling your database, then this is a natural next step. If you are already source controlling your database, what about the next stage – continuous integration and automated release management? To help understand these issues, there’s a summary of the Red Gate Database Delivery learning program on our site, alongside a Patterns and Practices library here on Simple-Talk and a Training Academy section on our documentation site to help you get up and running with the tools you need to progress. All feedback is welcome and it would be great to hear where you find yourself on this journey! This article is part of our database delivery patterns & practices series on Simple Talk. Find more articles for version control, automated testing, continuous integration & deployment.

    Read the article

  • Why doesn't my QsciLexerCustom subclass work in PyQt4 using QsciScintilla?

    - by Jon Watte
    My end goal is to get Erlang syntax highlighting in QsciScintilla using PyQt4 and Python 2.6. I'm running on Windows 7, but will also need Ubuntu support. PyQt4 is missing the necessary wrapper code for the Erlang lexer/highlighter that "base" scintilla has, so I figured I'd write a lightweight one on top of QsciLexerCustom. It's a little bit problematic, because the Qsci wrapper seems to really want to talk about line+index rather than offset-from-start when getting/setting subranges of text. Meanwhile, the lexer gets arguments as offset-from-start. For now, I get a copy of the entire text, and split that up as appropriate. I have the following lexer, and I apply it with setLexer(). It gets all the appropriate calls when I open a new file and sets this as the lexer, and prints a bunch of appropriate lines based on what it's doing... but there is no styling in the document. I tried making all the defined styles red, and the document is still stubbornly black-on-white, so apparently the styles don't really "take effect" What am I doing wrong? If nobody here knows, what's the appropriate discussion forum where people might actually know these things? (It's an interesting intersection between Python, Qt and Scintilla, so I imagine the set of people who would know is small) Let's assume prefs.declare() just sets up a dict that returns the value for the given key (I've verified this -- it's not the problem). Let's assume scintilla is reasonably properly constructed into its host window QWidget. Specifically, if I apply a bundled lexer (such as QsciLexerPython), it takes effect and does show styled text. prefs.declare('font.name.margin', "MS Dlg") prefs.declare('font.size.margin', 8) prefs.declare('font.name.code', "Courier New") prefs.declare('font.size.code', 10) prefs.declare('color.editline', "#d0e0ff") class LexerErlang(Qsci.QsciLexerCustom): def __init__(self, obj = None): Qsci.QsciLexerCustom.__init__(self, obj) self.sci = None self.plainFont = QtGui.QFont() self.plainFont.setPointSize(int(prefs.get('font.size.code'))) self.plainFont.setFamily(prefs.get('font.name.code')) self.marginFont = QtGui.QFont() self.marginFont.setPointSize(int(prefs.get('font.size.code'))) self.marginFont.setFamily(prefs.get('font.name.margin')) self.boldFont = QtGui.QFont() self.boldFont.setPointSize(int(prefs.get('font.size.code'))) self.boldFont.setFamily(prefs.get('font.name.code')) self.boldFont.setBold(True) self.styles = [ Qsci.QsciStyle(0, QtCore.QString("base"), QtGui.QColor("#000000"), QtGui.QColor("#ffffff"), self.plainFont, True), Qsci.QsciStyle(1, QtCore.QString("comment"), QtGui.QColor("#008000"), QtGui.QColor("#eeffee"), self.marginFont, True), Qsci.QsciStyle(2, QtCore.QString("keyword"), QtGui.QColor("#000080"), QtGui.QColor("#ffffff"), self.boldFont, True), Qsci.QsciStyle(3, QtCore.QString("string"), QtGui.QColor("#800000"), QtGui.QColor("#ffffff"), self.marginFont, True), Qsci.QsciStyle(4, QtCore.QString("atom"), QtGui.QColor("#008080"), QtGui.QColor("#ffffff"), self.plainFont, True), Qsci.QsciStyle(5, QtCore.QString("macro"), QtGui.QColor("#808000"), QtGui.QColor("#ffffff"), self.boldFont, True), Qsci.QsciStyle(6, QtCore.QString("error"), QtGui.QColor("#000000"), QtGui.QColor("#ffd0d0"), self.plainFont, True), ] print("LexerErlang created") def description(self, ix): for i in self.styles: if i.style() == ix: return QtCore.QString(i.description()) return QtCore.QString("") def setEditor(self, sci): self.sci = sci Qsci.QsciLexerCustom.setEditor(self, sci) print("LexerErlang.setEditor()") def 
styleText(self, start, end): print("LexerErlang.styleText(%d,%d)" % (start, end)) lines = self.getText(start, end) offset = start self.startStyling(offset, 0) print("startStyling()") for i in lines: if i == "": self.setStyling(1, self.styles[0]) print("setStyling(1)") offset += 1 continue if i[0] == '%': self.setStyling(len(i)+1, self.styles[1]) print("setStyling(%)") offset += len(i)+1 continue self.setStyling(len(i)+1, self.styles[0]) print("setStyling(n)") offset += len(i)+1 def getText(self, start, end): data = self.sci.text() print("LexerErlang.getText(): " + str(len(data)) + " chars") return data[start:end].split('\n') Applied to the QsciScintilla widget as follows: _lexers = { 'erl': (Q.SCLEX_ERLANG, LexerErlang), 'hrl': (Q.SCLEX_ERLANG, LexerErlang), 'html': (Q.SCLEX_HTML, Qsci.QsciLexerHTML), 'css': (Q.SCLEX_CSS, Qsci.QsciLexerCSS), 'py': (Q.SCLEX_PYTHON, Qsci.QsciLexerPython), 'php': (Q.SCLEX_PHP, Qsci.QsciLexerHTML), 'inc': (Q.SCLEX_PHP, Qsci.QsciLexerHTML), 'js': (Q.SCLEX_CPP, Qsci.QsciLexerJavaScript), 'cpp': (Q.SCLEX_CPP, Qsci.QsciLexerCPP), 'h': (Q.SCLEX_CPP, Qsci.QsciLexerCPP), 'cxx': (Q.SCLEX_CPP, Qsci.QsciLexerCPP), 'hpp': (Q.SCLEX_CPP, Qsci.QsciLexerCPP), 'c': (Q.SCLEX_CPP, Qsci.QsciLexerCPP), 'hxx': (Q.SCLEX_CPP, Qsci.QsciLexerCPP), 'tpl': (Q.SCLEX_CPP, Qsci.QsciLexerCPP), 'xml': (Q.SCLEX_XML, Qsci.QsciLexerXML), } ... inside my document window class ... def addContentsDocument(self, contents, title): handler = self.makeScintilla() handler.title = title sci = handler.sci sci.append(contents) self.tabWidget.addTab(sci, title) self.tabWidget.setCurrentWidget(sci) self.applyLexer(sci, title) EventBus.bus.broadcast('command.done', {'text': 'Opened ' + title}) return handler def applyLexer(self, sci, title): (language, lexer) = language_and_lexer_from_title(title) if lexer: l = lexer() print("making lexer: " + str(l)) sci.setLexer(l) else: print("setting lexer by id: " + str(language)) sci.SendScintilla(Qsci.QsciScintillaBase.SCI_SETLEXER, language) linst = sci.lexer() print("lexer: " + str(linst)) def makeScintilla(self): sci = Qsci.QsciScintilla() sci.setUtf8(True) sci.setTabIndents(True) sci.setIndentationsUseTabs(False) sci.setIndentationWidth(4) sci.setMarginsFont(self.smallFont) sci.setMarginWidth(0, self.smallFontMetrics.width('00000')) sci.setFont(self.monoFont) sci.setAutoIndent(True) sci.setBraceMatching(Qsci.QsciScintilla.StrictBraceMatch) handler = SciHandler(sci) self.handlers[sci] = handler sci.setMarginLineNumbers(0, True) sci.setCaretLineVisible(True) sci.setCaretLineBackgroundColor(QtGui.QColor(prefs.get('color.editline'))) return handler Let's assume the rest of the application works, too (because it does :-)
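    I can't be certain without running the code, but two details differ from the working QsciLexerCustom examples I've seen: the second argument of startStyling() is a style-bits mask (passing 0 would mask out every style bit, so the following setStyling() calls change nothing — the examples pass 0x1f), and colours are normally attached to plain integer style numbers via setColor()/setPaper()/setFont() rather than standalone QsciStyle objects. A stripped-down sketch of that conventional pattern follows; the two style numbers and the line-based comment detection are placeholders, not a real Erlang lexer:

    from PyQt4 import QtGui
    from PyQt4.Qsci import QsciLexerCustom

    class MinimalErlangLexer(QsciLexerCustom):
        Default, Comment = 0, 1                      # plain integer style numbers

        def __init__(self, parent=None):
            QsciLexerCustom.__init__(self, parent)
            self.setColor(QtGui.QColor("#000000"), self.Default)
            self.setColor(QtGui.QColor("#008000"), self.Comment)
            self.setPaper(QtGui.QColor("#eeffee"), self.Comment)

        def description(self, style):
            # must be non-empty for every style number actually used
            return {self.Default: "Default", self.Comment: "Comment"}.get(style, "")

        def styleText(self, start, end):
            editor = self.editor()
            if editor is None:
                return
            # note: start/end are byte positions; with setUtf8(True) and non-ASCII text
            # this naive slice will drift and needs proper byte handling
            text = str(editor.text())[start:end]
            self.startStyling(start, 0x1f)           # 0x1f = allow all five style bits to be set
            for line in text.split('\n'):
                style = self.Comment if line.startswith('%') else self.Default
                self.setStyling(len(line) + 1, style)   # +1 for the newline removed by split()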

    Read the article

  • Generating cache file for Twitter rss feed

    - by Kerri
    I'm working on a site with a simple php-generated twitter box with user timeline tweets pulled from the user_timeline rss feed, and cached to a local file to cut down on loads, and as backup for when twitter goes down. I based the caching on this: http://snipplr.com/view/8156/twitter-cache/. It all seemed to be working well yesterday, but today I discovered the cache file was blank. Deleting it then loading again generated a fresh file. The code I'm using is below. I've edited it to try to get it to work with what I was already using to display the feed and probably messed something crucial up. The changes I made are the following (and I strongly believe that any of these could be the cause): - Revised the time difference code (the linked example seemed to use a custom function that wasn't included in the code) Removed the "serialize" function from the "fwrites". This is purely because I couldn't figure out how to unserialize once I loaded it in the display code. I truthfully don't understand the role that serialize plays or how it works, so I'm sure I should have kept it in. If that's the case I just need to understand where/how to deserialize in the second part of the code so that it can be parsed. Removed the $rss variable in favor of just loading up the cache file in my original tweet display code. So, here are the relevant parts of the code I used: <?php $feedURL = "http://twitter.com/statuses/user_timeline/#######.rss"; // START CACHING $cache_file = dirname(__FILE__).'/cache/twitter_cache.rss'; // Start with the cache if(file_exists($cache_file)){ $mtime = (strtotime("now") - filemtime($cache_file)); if($mtime > 600) { $cache_rss = file_get_contents('http://twitter.com/statuses/user_timeline/75168146.rss'); $cache_static = fopen($cache_file, 'wb'); fwrite($cache_static, $cache_rss); fclose($cache_static); } echo "<!-- twitter cache generated ".date('Y-m-d h:i:s', filemtime($cache_file))." -->"; } else { $cache_rss = file_get_contents('http://twitter.com/statuses/user_timeline/#######.rss'); $cache_static = fopen($cache_file, 'wb'); fwrite($cache_static, $cache_rss); fclose($cache_static); } //END CACHING //START DISPLAY $doc = new DOMDocument(); $doc->load($cache_file); $arrFeeds = array(); foreach ($doc->getElementsByTagName('item') as $node) { $itemRSS = array ( 'title' => $node->getElementsByTagName('title')->item(0)->nodeValue, 'date' => $node->getElementsByTagName('pubDate')->item(0)->nodeValue ); array_push($arrFeeds, $itemRSS); } // the rest of the formatting and display code.... } ?> ETA 6/17 Nobody can help…? I'm thinking it has something to do with writing a blank cache file over a good one when twitter is down, because otherwise I imagine that this should be happening every ten minutes when the cache file is overwritten again, but it doesn't happen that frequently. I made the following change to the part where it checks how old the file is to overwrite it: $cache_rss = file_get_contents('http://twitter.com/statuses/user_timeline/75168146.rss'); if($mtime > 600 && $cache_rss != ''){ $cache_static = fopen($cache_file, 'wb'); fwrite($cache_static, $cache_rss); fclose($cache_static); } …so now, it will only write the file if it's over ten minutes old and there's actual content retrieved from the rss page. Do you think this will work?
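    That extra check should help with the empty-file case, though it still leaves a brief window in which another request can read a half-written cache. A belt-and-braces variant — just a sketch reusing the paths from the question — is to confirm the fetch succeeded, write to a temporary file, and rename() it into place, since the rename is atomic on the same filesystem:

    <?php
    $feedURL    = "http://twitter.com/statuses/user_timeline/#######.rss";
    $cache_file = dirname(__FILE__) . '/cache/twitter_cache.rss';

    $age = file_exists($cache_file) ? (time() - filemtime($cache_file)) : PHP_INT_MAX;

    if ($age > 600) {
        $cache_rss = @file_get_contents($feedURL);

        // only replace the cache when Twitter actually returned something
        if ($cache_rss !== false && trim($cache_rss) !== '') {
            $tmp = tempnam(dirname($cache_file), 'twc');
            if ($tmp !== false && file_put_contents($tmp, $cache_rss) !== false) {
                rename($tmp, $cache_file);   // atomic swap; readers never see a partial file
            }
        }
    }
    ?>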

    Read the article

  • How can I provide maximum integration between a calendar-like webapp and desktop calendar applications?

    - by Joshua Carmody
    I've been assigned to upgrade/rewrite a webapp that my company uses to schedule conference calls. One of the goals of the upgrade is to improve integration between the application and our user's Outlook calendars (and ideally other calendar programs as well). At present, when a user is viewing the details of a scheduled conference call on the webapp, they can click an "Add to Outlook calendar" link, which points them to a dynamically generated .ical file. On most of our users' systems, Outlook opens the file by default, bringing up the "create calendar appointment" window with the concall information pre-populated. This link creates a 1-time appointment only, and has to be clicked on for each occurrence of the call. So if a call happened every Monday in June, you would have to click 4 links to add all the appointments to your calendar. This is the full extent of our current level of integration. Ideally, we will be able to upgrade the system so that users can "subscribe" to a con call, which would mean not just the current call, but all calls in a reoccurring series would appear in the user's calendar with a single click. If one call in a series was cancelled, or rescheduled, that call's appointment would change in the users' calendar, without the user having to do anything, and without upsetting the rest of the series' appointments. Also, any changes to the call's info (say, the phone number was changed) would automatically be updated in the Outlook calendars of anyone who subscribed, without them having to come back to the webapp to double-check that their information is up to date. Ideally this would also work with other popular calendar programs, as well as Google Calendar. I don't know if we'll be able to achieve that level of integration, but I'd like to get as close to that as we can. Additional details and challenges: We aren't running Exchange on a public server, and I'm not likely to be able to get that changed Assume that our users are basically "the general internet public". Our users are not members of our office's network, nor can they be. We can't set up network logins or Exchange accounts for them. Some of our users are not using Outlook, but some other calendar program. Of the ones that are using Outlook, not all are using the same version. We have users in more than 50 countries that are using this webapp. Synchronization would be one-directional. Nobody can make changes in their own calendars and expect the server to reflect them/replicate them to other users Current conference calling application is written in ColdFusion. Rewrite will probably be in ASP.NET, but I haven't confirmed that yet. Solutions that work with either or both technologies are appreciated. I know that .ical files can theoretically contain more than one event, but in my own experiments I haven't had success in getting Outlook (2003) to add more than one event at a time using the .ical file method. Maybe someone knows how to set up a multi-event .ical file that Outlook will accept? Could a link to such an .ical file be "subscribed" to? Is there such thing as a calendar RSS feed? Could I simulate running an exchange server? Any other ideas? Thanks everyone!
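    On the multi-event part of the question: the iCalendar format itself happily carries several VEVENT blocks (or one VEVENT with an RRULE for a recurring series) in a single VCALENDAR, and a stable UID plus an incremented SEQUENCE is what lets a subscribing client pick up later changes or cancellations. Whether any particular Outlook version will import more than the first VEVENT from a clicked .ics file is a separate problem, but served from a stable URL as an internet calendar subscription the feed could look roughly like this (all identifiers, times and numbers invented for illustration):

    BEGIN:VCALENDAR
    VERSION:2.0
    PRODID:-//Example Corp//ConCall Webapp//EN
    METHOD:PUBLISH
    BEGIN:VEVENT
    UID:concall-1234@example.com
    DTSTAMP:20100607T120000Z
    DTSTART:20100607T150000Z
    DTEND:20100607T160000Z
    RRULE:FREQ=WEEKLY;BYDAY=MO;UNTIL=20100628T160000Z
    SUMMARY:Project status call (dial-in 555-0100)
    SEQUENCE:0
    END:VEVENT
    BEGIN:VEVENT
    UID:concall-5678@example.com
    DTSTAMP:20100607T120000Z
    DTSTART:20100615T150000Z
    DTEND:20100615T160000Z
    SUMMARY:One-off follow-up call
    SEQUENCE:0
    END:VEVENT
    END:VCALENDAR

    When a call is rescheduled, republishing the same UID with SEQUENCE:1 and the new DTSTART is what prompts clients that poll the feed to update the existing appointment rather than create a duplicate.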

    Read the article

  • extensible database design: automatic ALTER TABLE or serialize() field BLOB ?

    - by mario
    I want an adaptable database scheme. But still use a simple table data gateway in my application, where I just pass an $data[] array for storing. The basic columns are settled in the initial table scheme. There will however arise a couple of meta fields later (ca 10-20). I want some flexibility there and not adapt the database manually each time, or -worse- change the application logic just because of new fields. So now there are two options which seem workable yet not overkill. But I'm not sure about the scalability or database drawbacks. (1) Automatic ALTER TABLE. Whenever the $data array is to be saved, the keys are compared against the current database columns. New columns get defined before the $data is INSERTed into the table. Actually seems simple enough in test code: function save($data, $table="forum") { // columns if ($new_fields = array_diff(array_keys($data), known_fields($table))) { extend_schema($table, $new_fields, $data); } // save $columns = implode("`, `", array_keys($data)); $qm = str_repeat(",?", count(array_keys($data)) - 1); echo ("INSERT INTO `$table` (`$columns`) VALUES (?$qm);"); function known_fields($table) { return unserialize(@file_get_contents("db:$table")) ?: array("id"); function extend_schema($table, $new_fields, $data) { foreach ($new_fields as $field) { echo("ALTER TABLE `$table` ADD COLUMN `$field` VARCHAR;"); Since it is mostly meta information fields, adding them just as VARCHAR seems sufficient. Nobody will query by them anyway. So the database is really just used as storage here. However, while I might want to add a lot of new $data fields on the go, they will not always be populated. (2) serialize() fields into BLOB. Any new/extraneous meta fields could be opaque to the database. Simply sorting out the virtual fields from the real database columns is simple. And the meta fields can just be serialize()d into a blob/text field then: function ext_save($data, $table="forum") { $db_fields = array("id", "content", "flags", "ext"); // disjoin foreach (array_diff(array_keys($data),$db_fields) as $key) { $data["ext"][$key] = $data[$key]; unset($data[$key]); } $data["ext"] = serialize($data["ext"]); Unserializing and unpacking this 'ext' column on read queries is a minor overhead. The advantage is that there won't be any sparsely filled columns in the database, so I guess it's compacter and faster than the AUTO ALTER TABLE approach. Of course, this method prevents ever using one of the new fields in a WHERE or GROUP BY clause. But I think none of the possible meta fields (user_agent, author_ip, author_img, votes, hits, last_modified, ..) would/should ever be used there anyway. So I currently prefer the 'ext' blob approach, even if it's a one-way ticket. How are such columns called usually? (looking for examples/doc) Would you use XML serialization for (very theoretical) in-database queries? The adapting table scheme seems a "cleaner" interface, even if most columns might remain empty then. How does that impact speed? How many such sparse VARCHAR fields can MySQL/innodb stomach? But most importantly: Is there any standard implementation for this? A pseudo ORM with automatic ALTER TABLE tricks? Storing a simple column list seems workable, but something like pdo::getColumnMeta would be more robust.
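    For completeness, the read path of the 'ext' blob option is the mirror image of the ext_save() shown above — a sketch assuming the same $db_fields list and an associative row fetched from the table:

    <?php
    function ext_load($row) {
        // fold the serialized meta fields back into the flat $data array
        if (isset($row["ext"]) && $row["ext"] !== "") {
            $meta = unserialize($row["ext"]);
            if (is_array($meta)) {
                $row = array_merge($row, $meta);
            }
        }
        unset($row["ext"]);
        return $row;
    }

    // usage sketch:
    // $data = ext_load($pdo->query("SELECT * FROM forum WHERE id = 1")->fetch(PDO::FETCH_ASSOC));
    ?>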

    Read the article

  • Very Strange behavior in custom dataGrid

    - by Markus
    Hi everybody, I asked this question already in a former post, but nobody could answer this question correctly. So I try to post the problem again, to make sure it's not a bug. I have a dataGrid with a custom itemRenderer. Everytime I tab at least two times on the dataGrid, the cell below the one I taped gets selected. This doesn't happen if I uncomment the code in the method saveBackDataGridContent()! The second problem is that if the Line is shorter than the entered text, a horizontalScrollBar will get active, although I set setStyle("horizontalScrollPolicy", "off");... Who can solve that one? CustomRenderer.mxml: <mx:Application xmlns:mx="http://www.adobe.com/2006/mxml" initialize="dataService.send()"> <mx:Script> <![CDATA[ import components.ChoiceRenderer; import mx.rpc.events.ResultEvent; import mx.events.DataGridEvent; private function resultHandler(event:ResultEvent):void { var doc:XML = event.result as XML; testGrid.dataProvider = doc.Records.BackSide; } private function saveBackDataGridContent(event:DataGridEvent):void{ testGrid.dataProvider[event.rowIndex].TextElement = event.currentTarget.itemEditorInstance.text; } ]]> </mx:Script> <mx:HTTPService id="dataService" result="resultHandler(event)" url = "data/example.xml" resultFormat="e4x"/> <mx:DataGrid id="testGrid" editable="true" itemEditEnd="saveBackDataGridContent(event)"> <mx:columns> <mx:DataGridColumn itemRenderer="components.ChoiceRenderer" width="230"/> </mx:columns> </mx:DataGrid> </mx:Application> ChoiceRenderer.as package components { import mx.containers.HBox; import mx.controls.CheckBox; import mx.controls.Label; public class ChoiceRenderer extends HBox { private var correctAnswer:CheckBox; private var choiceLabel:Label; public function ChoiceRenderer() { super(); setStyle("horizontalScrollPolicy", "off"); correctAnswer = new CheckBox; addChild(correctAnswer); choiceLabel = new Label; addChild(choiceLabel); } override public function set data(xmldata:Object):void{ if(xmldata.name() == "BackSide"){ super.data = xmldata.TextElement[0]; choiceLabel.text = xmldata.TextElement[0].toString(); } } } } example.xml <TopContainer> <Records> <BackSide> <TextElement>first</TextElement> </BackSide> <BackSide> <TextElement>second</TextElement> </BackSide> <BackSide> <TextElement>third</TextElement> </BackSide> <BackSide> <TextElement>fourth</TextElement> </BackSide> <BackSide> <TextElement>fifth</TextElement> </BackSide> <BackSide> <TextElement>sixth</TextElement> </BackSide> </Records> Thanks Markus

    Read the article

  • Const-correctness semantics in C++

    - by thirtythreeforty
    For fun and profit™, I'm writing a trie class in C++ (using the C++11 standard.) My trie<T> has an iterator, trie<T>::iterator. (They're all actually functionally const_iterators, because you cannot modify a trie's value_type.) The iterator's class declaration looks partially like this: template<typename T> class trie<T>::iterator : public std::iterator<std::bidirectional_iterator_tag, T> { friend class trie<T>; struct state { state(const trie<T>* const node, const typename std::vector<std::pair<typename T::value_type, std::unique_ptr<trie<T>>>>::const_iterator& node_map_it ) : node{node}, node_map_it{node_map_it} {} // This pointer is to const data: const trie<T>* node; typename std::vector<std::pair<typename T::value_type, std::unique_ptr<trie<T>>>>::const_iterator node_map_it; }; public: typedef const T value_type; iterator() =default; iterator(const trie<T>* node) { parents.emplace(node, node->children.cbegin()); // ... } // ... private: std::stack<state> parents; // ... }; Notice that the node pointer is declared const. This is because (in my mind) the iterator should not be modifying the node that it points to; it is just an iterator. Now, elsewhere in my main trie<T> class, I have an erase function that has a common STL signature--it takes an iterator to data to erase (and returns an iterator to the next object). template<typename T> typename trie<T>::iterator trie<T>::erase(const_iterator it) { // ... // Cannot modify a const object! it.parents.top().node->is_leaf = false; // ... } The compiler complains because the node pointer is read-only! The erase function definitely should modify the trie that the iterator points to, even though the iterator shouldn't. So, I have two questions: Should iterator's constructors be public? trie<T> has the necessary begin() and end() members, and of course trie<T>::iterator and trie<T> are mutual friends, but I don't know what the convention is. Making them private would solve a lot of the angst I'm having about removing the const "promise" from the iterator's constructor. What are the correct const semantics/conventions regarding the iterator and its node pointer here? Nobody has ever explained this to me, and I can't find any tutorials or articles on the Web. This is probably the more important question, but it does require a good deal of planning and proper implementation. I suppose it could be circumvented by just implementing 1, but it's the principle of the thing!
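    For what it's worth, the common convention — shown here as a minimal self-contained analogue (a toy container, not the trie itself) rather than a drop-in fix — is to keep the positioned constructor private with the container as a friend, and to store a pointer to non-const inside the iterator: const-ness is expressed by which iterator type the container hands out and by exposing only const access to the element, not by the pointer the iterator happens to hold. That way erase() can mutate the node it owns while users of the iterator still cannot.

    #include <vector>

    template<typename T>
    class bag {                                       // stand-in container; the same pattern applies to a trie
        struct node { T value; bool is_leaf = true; };
        std::vector<node> nodes;

    public:
        class iterator {
            friend class bag<T>;
            node* n = nullptr;                        // non-const: the container may mutate through it
            explicit iterator(node* p) : n{p} {}      // private: only bag<T> creates positioned iterators
        public:
            iterator() = default;
            const T& operator*() const { return n->value; }   // users only ever get read access
            iterator& operator++() { ++n; return *this; }
            bool operator!=(const iterator& o) const { return n != o.n; }
        };

        iterator begin() { return iterator{nodes.data()}; }
        iterator end()   { return iterator{nodes.data() + nodes.size()}; }

        iterator erase(iterator it) {
            it.n->is_leaf = false;                    // legal: the stored pointer is not pointer-to-const
            // ... real erase bookkeeping would go here ...
            return it;
        }
    };

    Making the constructors private answers question 1 (begin()/end()/erase() can still build iterators through the friendship), and keeping the stored pointer non-const while restricting what the public interface exposes is the usual answer to question 2.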

    Read the article

< Previous Page | 34 35 36 37 38 39 40  | Next Page >