Search Results

Search found 16455 results on 659 pages for 'hosts allow'.


  • How to enable real time CSS editing in chrome?

    - by narayanpatra
    I have seen a lot of videos in which developers change CSS on the fly in Chrome. I tried the same thing, but Chrome did not allow me to change the code. I can't write on the style sheet. Is there any specific setting to do this? Kindly help. EDIT: To edit the CSS, I right-click on an element and select Inspect Element. It opens the console. I select the id of the element, go to style.css under Resources, and try to change the CSS. It does not allow me to write there.

    Read the article

  • $.each - wait for JSON request before proceeding

    - by GaaayLooord
    I have an issue with the code below: the jQuery .each races ahead without waiting for the JSON request to finish. As a result, the 'thisVariationID' and 'thisOrderID' variables are reset by later iterations of the loop before they can be used in the slower getJSON callback. Is there a way to make each iteration of the .each wait for the getJSON request and its callback to complete before moving on to the next iteration?

        $.each($('.checkStatus'), function(){
            thisVariationID = $(this).attr('data-id');
            thisOrderID = $(this).attr('id');
            $.getJSON(jsonURL+'?orderID='+thisOrderID+'&variationID='+thisVariationID+'&callback=?', function(data){
                if (data.response = 'success'){
                    //show the tick. allow the booking to go through
                    $('#loadingSML'+thisVariationID).hide();
                    $('#tick'+thisVariationID).show();
                }else{
                    //show the cross. Do not allow the booking to be made
                    $('#loadingSML'+thisVariationID).hide();
                    $('#cross'+thisVariationID).hide();
                    $('#unableToReserveError').slideDown();
                    //disable the form
                    $('#OrderForm_OrderForm input').attr('disabled','disabled');
                }
            })
        })
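    One way to serialize the requests (a minimal sketch, assuming the same jsonURL variable and markup as above) is to drop $.each and only start the next request from inside the getJSON callback, keeping the IDs in local variables so later iterations cannot overwrite them:

        function checkNext(elements, index) {
            if (index >= elements.length) return;              // all elements processed
            var $el = $(elements[index]);
            var variationID = $el.attr('data-id');             // local vars, not shared globals
            var orderID = $el.attr('id');
            $.getJSON(jsonURL + '?orderID=' + orderID + '&variationID=' + variationID + '&callback=?', function (data) {
                if (data.response === 'success') {             // note: comparison, not assignment
                    $('#loadingSML' + variationID).hide();
                    $('#tick' + variationID).show();
                } else {
                    $('#loadingSML' + variationID).hide();
                    $('#cross' + variationID).show();          // original code called .hide() here, which looks unintended
                    $('#unableToReserveError').slideDown();
                    $('#OrderForm_OrderForm input').attr('disabled', 'disabled');
                }
                checkNext(elements, index + 1);                // only now move on to the next element
            });
        }
        checkNext($('.checkStatus').get(), 0);

    Even if the requests stay parallel, declaring the two ID variables with var inside the .each callback would stop the iterations from clobbering each other, since each callback would then close over its own copies.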

    Read the article

  • Cannot turn on "Network Discovery and File Sharing" when Windows Firewall is enabled

    - by Cheeso
    I have a problem similar to this one: "Windows Firewall prevents File and Printer sharing from working" and "Why does File and Printer Sharing keep turning off in Windows 7?" I cannot turn on Network Discovery. This is Windows 7 Home Premium, x64. It's a Dell XPS 1340 and Windows came installed from the OEM. This used to work. Now it doesn't. I don't know what has changed. In Windows Explorer, the UI looks like this: When I click the yellow panel that says "Click to change...", the panel disappears, then immediately reappears, with exactly the same text. If I go through the control panel "Network and Sharing Center" thing, the UI looks like this: If I tick the box to "turn on network discovery", the "Save Changes" button becomes enabled. If I then click that button, the dialog box just closes, with no message or confirmation. Re-opening the same dialog box shows that Network Discovery has not been turned on. If I turn off Windows Firewall, I can then turn on Network Discovery via either method. The machine is connected to a wireless home network, via a router. The network is marked as "Home Network" in the Network and Sharing Center, which I think corresponds to the "Private" profile in the Windows Firewall Advanced Settings app. (Confirm?) The PC is not part of a domain, and has never been part of a domain. The machine is not bridging any networks. There is a regular 100baseT connector but I have the network adapter for that disabled in Windows. Something else that seems odd: within Windows Firewall Advanced Settings, there are no predefined rules available. If I click the "New Rule..." action in the Actions pane, the "Predefined" option is greyed out, like this: In order to attempt to allow the network discovery protocols through on the private network, I hand-coded a bunch of rules, intending to allow the necessary UPnP and WSD protocols supporting network discovery. I copied them from a working Windows 7 Ultimate PC running on the same network. This did not work. Even with the hand-coded rules, I still cannot turn on Network Discovery. I looked on the interwebs, and the only solution that appears to work is a re-install of Windows. Seriously? If I try netsh advfirewall firewall set rule group="Network Discovery" new enable=Yes ...it says "No rules match the specified criteria". EDIT: By the way, these services are running: DNS Client, Function Discovery Resource Publication, SSDP Discovery, UPnP Device Host. In any case, since it works with no firewall, I would assume all necessary services are present and running. The issue is a firewall thing, but I don't know how to diagnose further, or fix it. Q1: Is there a way to definitively ensure the correct holes are punched through the Windows Firewall to allow Network Discovery to function? Q2: Should I expect the "predefined" firewall rules to be greyed out? Q3: Why did this change?
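    For reference, hand-coded rules of the kind described above would look roughly like this (a sketch only - the rule names are arbitrary, the ports are the standard Network Discovery ones, and the svchost service names are assumptions worth double-checking):

        rem SSDP / UPnP discovery on the private profile
        netsh advfirewall firewall add rule name="NetDisc SSDP-In" dir=in action=allow profile=private protocol=UDP localport=1900 program="%SystemRoot%\system32\svchost.exe" service=ssdpsrv
        rem WS-Discovery (Function Discovery Resource Publication)
        netsh advfirewall firewall add rule name="NetDisc WSD-In" dir=in action=allow profile=private protocol=UDP localport=3702 program="%SystemRoot%\system32\svchost.exe" service=fdrespub
        rem NetBIOS name and datagram services
        netsh advfirewall firewall add rule name="NetDisc NB-Name-In" dir=in action=allow profile=private protocol=UDP localport=137
        netsh advfirewall firewall add rule name="NetDisc NB-Datagram-In" dir=in action=allow profile=private protocol=UDP localport=138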

    Read the article

  • Redmine with Apache 2 + Passenger nightmare --- site is up and available, but Redmine doesn't execute

    - by CptSupermrkt
    I was determined to figure this out myself, but I've been at it for a total of more than 10 hours, and I just can't figure this out. First, let me detail my environment (which I cannot change): Server version: Apache/2.2.15 (Unix) Ruby version: ruby 1.9.3p448 Rails version: Rails 4.0.1 Passenger version: Phusion Passenger version 4.0.5 Redmine version: 2.3.3 I have followed the Redmine instructions all the way through the test webserver to check that installation was successful with this command: ruby script/rails server webrick -e production The roadblock which I cannot overcome is getting Apache and Passenger to interpret and properly serve Redmine. I have searched pretty much every possible link within the first 10 pages or so of Google results. Everywhere I go I come across conflicting/contradicting/outdated information. We have a "weird" setup with Apache (which I inherited and cannot change). Redmine needs to be served through SSL, but Apache already has another website it's serving through SSL called Twiki. By "weird", what I mean is that our file structure is entirely different from all the tutorials out there on this version of Apache which have directories like "available-sites" and such. Here are the abbreviated versions of some of our config files: /etc/httpd/conf/httpd.conf (the global configuration file --- note that NO VirtualHost is defined here): ServerRoot "/etc/httpd" ... LoadModule passenger_module /usr/local/pkg/ruby/1.9.3-p448/lib/ruby/gems/1.9.1/gems/passenger-4.0.5/libout/apache2/mod_passenger.so PassengerRoot /usr/local/pkg/ruby/1.9.3-p448/lib/ruby/gems/1.9.1/gems/passenger-4.0.5 PassengerDefaultRuby /usr/local/pkg/ruby/1.9.3-p448/bin/ruby Include conf.d/*.conf ... User apache Group apache ... DocumentRoot "/var/www/html" So just to clarify, the above httpd.conf file does NOT have a VirtualHost section. /etc/httpd/conf.d/ssl.conf (defines the VirtualHost for ssl): Listen 443 <VirtualHost _default_:443> SSLEngine on ... SSLCertificateFile /etc/pki/tls/certs/localhost.crt </VirtualHost> /etc/httpd/conf.d/twiki.conf (this works just fine --- note this does NOT define a VirtualHost): ScriptAlias /twiki/bin/ "/var/www/twiki/bin/" Alias /twiki/ "/var/www/twiki/" <Directory "/var/www/twiki/bin"> AllowOverride None Order Deny,Allow Deny from all AuthType Basic AuthName "our team" AuthBasicProvider ldap ...a lot of ldap and authorization stuff Options ExecCGI FollowSymLinks SetHandler cgi-script </Directory> /etc/httpd/conf.d/redmine.conf: Alias /redmine/ "/var/www/redmine/public/" <Directory "/var/www/redmine/public"> Options Indexes ExecCGI FollowSymLinks Order allow,deny Allow from all AllowOverride all </Directory> The amazing thing is that this doesn't completely NOT work: I can successfully open up https://someserver/redmine/ with SSL and the https://someserver/twiki/ site remains unaffected. This tells me that it IS possible to have two separate sites up with one SSL configuration, so I don't think that's the problem. The problem is is that it opens up to the file index. I can navigate around my Redmine file structure, but no code ever gets executed. For example, there is a file included with Redmine called dispatch.fcgi in the public folder. https://someserver/redmine/dispatch.fcgi opens, but just as plain text code in the browser. As I understand it, in the case of using Passenger, CGI and FastCGI stuff is irrelevant/unused.
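    For comparison, Passenger's sub-URI style of deployment (a sketch based on the Passenger documentation, not a verified fix for this setup) points Apache at the application root instead of only aliasing the public folder, so Passenger knows a Rails app lives behind /redmine; a bare Alias just serves the files, which matches the "file index / dispatch.fcgi shown as plain text" symptom described above:

        Alias /redmine /var/www/redmine/public
        <Location /redmine>
            PassengerBaseURI /redmine
            PassengerAppRoot /var/www/redmine
        </Location>
        <Directory /var/www/redmine/public>
            Options -MultiViews
            Order allow,deny
            Allow from all
            AllowOverride all
        </Directory>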

    Read the article

  • PPTP connection fails with errors 800/806

    - by Mark S. Rasmussen
    I've got a client (Server 2008 R2) that won't connect to our production environment PPTP VPN server (Server 2003, running RRAS). The server is behind a firewall that has TCP 1723 open as well as GRE. Other clients at our office are able to connect just fine. Our office is behind a Juniper SSG5-Serial firewall, but all outgoing traffic is allowed, and multiple other clients are able to connect to VPN servers without issues. I've also set up a completely different VPN server on another network outside of our office. The functioning clients connect just fine - the Server 2008 R2 machine doesn't. Thus it's definitely a problem with this machine in particular. I've rebooted it. I've disabled the firewall. No dice on either. I've run PPTPSRV and PPTPCLNT on the server/client and they're able to communicate perfectly - indicating there's no problem using either TCP 1723 or GRE. The Server 2008 R2 machine is also running as a VPN server itself (incoming connection) and that's working perfectly. We have the issue whether or not there are active incoming connections. I'm not sure what my next debugging step would be; any suggestions? EDIT: The event log on the server has the following warning from RasMan: A connection between the VPN server and the VPN client xxx.xxx.xxx.xxx has been established, but the VPN connection cannot be completed. The most common cause for this is that a firewall or router between the VPN server and the VPN client is not configured to allow Generic Routing Encapsulation (GRE) packets (protocol 47). Verify that the firewalls and routers between your VPN server and the Internet allow GRE packets. Make sure the firewalls and routers on the user's network are also configured to allow GRE packets. If the problem persists, have the user contact the Internet service provider (ISP) to determine whether the ISP might be blocking GRE packets. Obviously this points to GRE being a potential problem. But seeing as I have other clients connecting without problems, as well as PPTPSRV and PPTPCLNT being able to communicate, I suspect this might be a red herring. EDIT: Here are the anonymized events logged by the client in chronological order:
    CoId={742CB15C-A7E0-47B7-8240-0EFA1139CBD9}: The user XXX\YYY has started dialing a VPN connection using a per-user connection profile named ZZZ. The connection settings are: Dial-in User = XXX\YYY VpnStrategy = PPTP DataEncryption = Require PrerequisiteEntry = AutoLogon = No UseRasCredentials = Yes Authentication Type = CHAP/MS-CHAPv2 Ipv4DefaultGateway = No Ipv4AddressAssignment = By Server Ipv4DNSServerAssignment = By Server Ipv6DefaultGateway = Yes Ipv6AddressAssignment = By Server Ipv6DNSServerAssignment = By Server IpDnsFlags = Register primary domain suffix IpNBTEnabled = Yes UseFlags = Private Connection ConnectOnWinlogon = No.
    CoId={742CB15C-A7E0-47B7-8240-0EFA1139CBD9}: The user XXX\YYY is trying to establish a link to the Remote Access Server for the connection named ZZZ using the following device: Server address/Phone Number = XXX.YYY.ZZZ.KKK Device = WAN Miniport (PPTP) Port = VPN3-4 MediaType = VPN.
    CoId={742CB15C-A7E0-47B7-8240-0EFA1139CBD9}: The user XXX\YYY has successfully established a link to the Remote Access Server using the following device: Server address/Phone Number = XXX.YYY.ZZZ.KKK Device = WAN Miniport (PPTP) Port = VPN3-4 MediaType = VPN.
    CoId={742CB15C-A7E0-47B7-8240-0EFA1139CBD9}: The link to the Remote Access Server has been established by user XXX\YYY.
    CoId={742CB15C-A7E0-47B7-8240-0EFA1139CBD9}: The user XXX\YYY dialed a connection named ZZZ which has failed. The error code returned on failure is 806.
    Running Wireshark on the client shows it trying and retrying to send a "71 Configuration Request", while the server shows the incoming client requests but apparently never replies. Given that this is GRE traffic, I think this rules out GRE being blocked. The question is, why doesn't the server reply? This is the Configuration Request the server receives from the non-functioning client (meaning no response is sent to the client request): And this is the Configuration Request the server receives from the working client: To me they seem identical, except for differing keys and magic numbers, and the fact that one client receives a response while the other doesn't.

    Read the article

  • Munin not creating HTML files in Ubuntu Server 14.04

    - by lepe
    I have used munin in several servers and this is the first time is taking me so much time to set it up. When I telnet munin directly, I can list the services, there is no error at the logs and munin its being updated every 5 minutes. However no html files are created. I'm using the default location (/var/cache/munin/www) and I can confirm the permissions of that directory are set to munin.munin (IP and domain has been changed) munin.conf: dbdir /var/lib/munin htmldir /var/cache/munin/www logdir /var/log/munin rundir /var/run/munin [example.com;] address 100.100.50.200 munin-node.conf: log_level 4 log_file /var/log/munin/munin-node.log pid_file /var/run/munin/munin-node.pid background 1 setsid 1 user root group root host_name example.com allow ^127\.0\.0\.1$ allow ^100\.100\.50\.200$ allow ^::1$ /etc/hosts : 100.100.50.200 example.com 127.0.0.1 localhost $ telnet example.com 4949 Trying 100.100.50.200... Connected to example.com. Escape character is '^]'. # munin node at example.com list apache_accesses apache_processes apache_volume cpu cpuspeed df df_inode entropy fail2ban forks fw_packets if_err_eth0 if_err_eth1 if_eth0 if_eth1 interrupts ipmi_fans ipmi_power ipmi_temp irqstats load memory munin_stats mysql_bin_relay_log mysql_commands mysql_connections mysql_files_tables mysql_innodb_bpool mysql_innodb_bpool_act mysql_innodb_insert_buf mysql_innodb_io mysql_innodb_io_pend mysql_innodb_log mysql_innodb_rows mysql_innodb_semaphores mysql_innodb_tnx mysql_myisam_indexes mysql_network_traffic mysql_qcache mysql_qcache_mem mysql_replication mysql_select_types mysql_slow mysql_sorts mysql_table_locks mysql_tmp_tables ntp_2001:e40:100:208::123 ntp_91.189.94.4 ntp_kernel_err ntp_kernel_pll_freq ntp_kernel_pll_off ntp_offset ntp_states open_files open_inodes postfix_mailqueue postfix_mailvolume proc_pri processes swap threads uptime users vmstat fetch df _dev_sda3.value 2.1762874086869 _sys_fs_cgroup.value 0 _run.value 0.0503536980635825 _run_lock.value 0 _run_shm.value 0 _run_user.value 0 _dev_sda5.value 0.0176986285727571 _dev_sda8.value 1.08464646179852 _dev_sda7.value 0.0346633563514803 _dev_sda9.value 6.81031810822797 _dev_sda6.value 9.0932802215469 . /var/log/munin/munin-node.log Process Backgrounded 2014/08/16-14:13:36 Munin::Node::Server (type Net::Server::Fork) starting! pid(19610) Binding to TCP port 4949 on host 100.100.50.200 with IPv4 2014/08/16-14:23:11 CONNECT TCP Peer: "[100.100.50.200]:55949" Local: "[100.100.50.200]:4949" 2014/08/16-14:36:16 CONNECT TCP Peer: "[100.100.50.200]:56209" Local: "[100.100.50.200]:4949" /var/log/munin/munin-update.log ... 2014/08/16 14:30:01 [INFO]: Starting munin-update 2014/08/16 14:30:01 [INFO]: Munin-update finished (0.00 sec) 2014/08/16 14:35:02 [INFO]: Starting munin-update 2014/08/16 14:35:02 [INFO]: Munin-update finished (0.00 sec) 2014/08/16 14:40:01 [INFO]: Starting munin-update 2014/08/16 14:40:01 [INFO]: Munin-update finished (0.00 sec) $ ls -la /var/cache/munin/www/ drwxr-xr-x 3 munin munin 19 Aug 16 13:55 . drwxr-xr-x 3 root root 16 Aug 16 13:54 .. drwxr-xr-x 2 munin munin 4096 Aug 16 13:55 static Any ideas on why it is not working? 
EDIT This is how /var/log/munin/ log looks like after some days: -rw-r----- 1 www-data 0 Aug 16 13:54 munin-cgi-graph.log -rw-r----- 1 www-data 0 Aug 16 13:54 munin-cgi-html.log -rw-rw-r-- 1 munin 0 Aug 16 13:55 munin-html.log -rw-r----- 1 munin 0 Aug 19 06:18 munin-limits.log -rw-r----- 1 munin 15K Aug 18 14:10 munin-limits.log.1 -rw-r----- 1 munin 1.8K Aug 18 06:15 munin-limits.log.2.gz -rw-rw-r-- 1 munin 1.3K Aug 17 06:15 munin-limits.log.3.gz -rw-r--r-- 1 root 6.5K Aug 16 13:55 munin-node-configure.log -rw-r--r-- 1 root 0 Aug 17 06:18 munin-node.log -rw-r--r-- 1 root 420 Aug 16 14:52 munin-node.log.1.gz -rw-r----- 1 munin 0 Aug 19 06:18 munin-update.log -rw-r----- 1 munin 11K Aug 18 14:10 munin-update.log.1 -rw-r----- 1 munin 1.6K Aug 18 06:15 munin-update.log.2.gz -rw-rw-r-- 1 munin 1.5K Aug 17 06:15 munin-update.log.3.gz
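    One setting worth checking (a sketch, not a confirmed fix): the Munin 2.x packages on Ubuntu 14.04 can be configured to produce graphs and HTML via CGI instead of from the cron job, in which case nothing is ever written to /var/cache/munin/www even though munin-update runs cleanly. The relevant directives in /etc/munin/munin.conf are:

        htmldir /var/cache/munin/www
        # have the cron job write static HTML and graphs instead of relying on CGI
        graph_strategy cron
        html_strategy cron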

    Read the article

  • PPTP VPN from Ubuntu cannot connect

    - by Andrea Polci
    I'm trying to configure under Linux (Kubuntu 9.10) a VPN I already use from Windows. I installed the network-manager-pptp package and added the VPN under Network Manager. These are the parameters under the "advanced" button:
    Authentication Methods: PAP, CHAP, MSCHAP, MSCHAP2, EAP (I also tried "MSCHAP, MSCHAP2")
    Use MPPE Encryption: yes
    Crypto: Any
    Use stateful encryption: no
    Allow BSD compression: yes
    Allow Deflate compression: yes
    Allow TCP header compression: yes
    Send PPP echo packets: no
    When I try to connect it doesn't work and this is what I get in the system log:
    2010-04-08 13:53:47 pcelena NetworkManager <info> Starting VPN service 'org.freedesktop.NetworkManager.pptp'...
    2010-04-08 13:53:47 pcelena NetworkManager <info> VPN service 'org.freedesktop.NetworkManager.pptp' started (org.freedesktop.NetworkManager.pptp), PID 4931
    2010-04-08 13:53:47 pcelena NetworkManager <info> VPN service 'org.freedesktop.NetworkManager.pptp' just appeared, activating connections
    2010-04-08 13:53:47 pcelena pppd[4932] Plugin /usr/lib/pppd/2.4.5//nm-pptp-pppd-plugin.so loaded.
    2010-04-08 13:53:47 pcelena NetworkManager <info> VPN plugin state changed: 3
    2010-04-08 13:53:47 pcelena pppd[4932] pppd 2.4.5 started by root, uid 0
    2010-04-08 13:53:47 pcelena NetworkManager <info> VPN connection 'MYVPN' (Connect) reply received.
    2010-04-08 13:53:47 pcelena NetworkManager SCPlugin-Ifupdown: devices added (path: /sys/devices/virtual/net/ppp0, iface: ppp0)
    2010-04-08 13:53:47 pcelena NetworkManager SCPlugin-Ifupdown: device added (path: /sys/devices/virtual/net/ppp0, iface: ppp0): no ifupdown configuration found.
    2010-04-08 13:53:47 pcelena pppd[4932] Using interface ppp0
    2010-04-08 13:53:47 pcelena pppd[4932] Connect: ppp0 <--> /dev/pts/2
    2010-04-08 13:53:47 pcelena pptp[4934] nm-pptp-service-4931 log[main:pptp.c:314]: The synchronous pptp option is NOT activated
    2010-04-08 13:53:47 pcelena pptp[4927] nm-pptp-service-4918 log[ctrlp_rep:pptp_ctrl.c:251]: Sent control packet type is 7 'Outgoing-Call-Request'
    2010-04-08 13:53:47 pcelena pptp[4927] nm-pptp-service-4918 log[ctrlp_disp:pptp_ctrl.c:858]: Received Outgoing Call Reply.
    2010-04-08 13:53:47 pcelena pptp[4927] nm-pptp-service-4918 log[ctrlp_disp:pptp_ctrl.c:897]: Outgoing call established (call ID 1, peer's call ID 14800).
    2010-04-08 13:53:48 pcelena pppd[4932] CHAP authentication succeeded
    2010-04-08 13:53:48 pcelena pppd[4932] CHAP authentication succeeded
    2010-04-08 13:53:48 pcelena pppd[4932] LCP terminated by peer
    2010-04-08 13:53:48 pcelena pptp[4927] nm-pptp-service-4918 log[ctrlp_disp:pptp_ctrl.c:929]: Call disconnect notification received (call id 14800)
    2010-04-08 13:53:48 pcelena pptp[4927] nm-pptp-service-4918 log[ctrlp_disp:pptp_ctrl.c:788]: Received Stop Control Connection Request.
    2010-04-08 13:53:48 pcelena pptp[4927] nm-pptp-service-4918 log[ctrlp_rep:pptp_ctrl.c:251]: Sent control packet type is 4 'Stop-Control-Connection-Reply'
    2010-04-08 13:53:48 pcelena pptp[4927] nm-pptp-service-4918 log[callmgr_main:pptp_callmgr.c:258]: Closing connection (shutdown)
    2010-04-08 13:53:48 pcelena pptp[4927] nm-pptp-service-4918 log[ctrlp_rep:pptp_ctrl.c:251]: Sent control packet type is 12 'Call-Clear-Request'
    2010-04-08 13:53:48 pcelena pptp[4927] nm-pptp-service-4918 log[callmgr_main:pptp_callmgr.c:258]: Closing connection (shutdown)
    2010-04-08 13:53:48 pcelena pptp[4927] nm-pptp-service-4918 log[ctrlp_rep:pptp_ctrl.c:251]: Sent control packet type is 12 'Call-Clear-Request'
    2010-04-08 13:53:48 pcelena pptp[4927] nm-pptp-service-4918 log[call_callback:pptp_callmgr.c:79]: Closing connection (call state)
    2010-04-08 13:53:48 pcelena pppd[4932] Modem hangup
    2010-04-08 13:53:48 pcelena pppd[4932] Connection terminated.
    2010-04-08 13:53:48 pcelena NetworkManager <info> VPN plugin failed: 1
    2010-04-08 13:53:48 pcelena NetworkManager SCPlugin-Ifupdown: devices removed (path: /sys/devices/virtual/net/ppp0, iface: ppp0)
    2010-04-08 13:53:48 pcelena pppd[4932] Exit.
    2010-04-08 13:53:48 pcelena NetworkManager <info> VPN plugin failed: 1
    2010-04-08 13:53:48 pcelena NetworkManager <info> VPN plugin state changed: 6
    2010-04-08 13:53:48 pcelena NetworkManager <info> VPN plugin state change reason: 0
    2010-04-08 13:53:48 pcelena NetworkManager <WARN> connection_state_changed(): Could not process the request because no VPN connection was active.
    2010-04-08 13:53:48 pcelena NetworkManager <info> Policy set 'Auto eth0' (eth0) as default for routing and DNS.
    2010-04-08 13:54:01 pcelena NetworkManager <debug> [1270727641.001390] ensure_killed(): waiting for vpn service pid 4931 to exit
    2010-04-08 13:54:01 pcelena NetworkManager <debug> [1270727641.001479] ensure_killed(): vpn service pid 4931 cleaned up
    The error that sticks out here is "pppd[4932] LCP terminated by peer". Does anyone have a suggestion about what the problem could be and how to make it work?

    Read the article

  • KVM + Cloudmin + IpTables

    - by Alex
    I have a KVM virtualization on a machine. I use Ubuntu Server + Cloudmin (in order to manage virtual machine instances). On a host system I have four network interfaces: ebadmin@saturn:/var/log$ ifconfig br0 Link encap:Ethernet HWaddr 10:78:d2:ec:16:38 inet addr:192.168.0.253 Bcast:192.168.0.255 Mask:255.255.255.0 inet6 addr: fe80::1278:d2ff:feec:1638/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:589337 errors:0 dropped:0 overruns:0 frame:0 TX packets:334357 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:753652448 (753.6 MB) TX bytes:43385198 (43.3 MB) br1 Link encap:Ethernet HWaddr 6e:a4:06:39:26:60 inet addr:192.168.10.1 Bcast:192.168.10.255 Mask:255.255.255.0 inet6 addr: fe80::6ca4:6ff:fe39:2660/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:16995 errors:0 dropped:0 overruns:0 frame:0 TX packets:13309 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:2059264 (2.0 MB) TX bytes:1763980 (1.7 MB) eth0 Link encap:Ethernet HWaddr 10:78:d2:ec:16:38 UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:610558 errors:0 dropped:0 overruns:0 frame:0 TX packets:332382 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:769477564 (769.4 MB) TX bytes:44360402 (44.3 MB) Interrupt:20 Memory:fe400000-fe420000 lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:239632 errors:0 dropped:0 overruns:0 frame:0 TX packets:239632 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:50738052 (50.7 MB) TX bytes:50738052 (50.7 MB) tap0 Link encap:Ethernet HWaddr 6e:a4:06:39:26:60 inet6 addr: fe80::6ca4:6ff:fe39:2660/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:17821 errors:0 dropped:0 overruns:0 frame:0 TX packets:13703 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:500 RX bytes:2370468 (2.3 MB) TX bytes:1782356 (1.7 MB) br0 is connected to a real network, br1 is used to create a private network shared between guest systems. Now I need to configure iptables for network access. First of all I allow ssh sessions on port 8022 on the host system, then I allow all connections in state RELATED, ESTABLISHED. This is working ok. I install another system as guest, it's IP address is 192.168.10.2, and now I have two problems: I want to allow the access from this host to the outside world, cannot accomplish this. I can ssh from the host. I want to be able to ssh to the guest from the outside world using 8023 port. Cannot accomplish this. 
Full iptables configuration is following: ebadmin@saturn:/var/log$ sudo iptables --list [sudo] password for ebadmin: Chain INPUT (policy DROP) target prot opt source destination ACCEPT all -- anywhere anywhere ACCEPT tcp -- anywhere anywhere tcp dpt:8022 ACCEPT all -- anywhere anywhere state RELATED,ESTABLISHED LOG all -- anywhere anywhere LOG level warning Chain FORWARD (policy ACCEPT) target prot opt source destination LOG all -- anywhere anywhere LOG level warning Chain OUTPUT (policy ACCEPT) target prot opt source destination LOG all -- anywhere anywhere LOG level warning ebadmin@saturn:/var/log$ sudo iptables -t nat --list Chain PREROUTING (policy ACCEPT) target prot opt source destination DNAT tcp -- anywhere anywhere tcp spt:8023 to:192.168.10.2:22 Chain INPUT (policy ACCEPT) target prot opt source destination Chain OUTPUT (policy ACCEPT) target prot opt source destination Chain POSTROUTING (policy ACCEPT) target prot opt source destination The worst of all is that I don't know how to interpret iptables logs. I don't see the final decision of the firewall. Need help urgently.
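    For reference, a typical rule set for this layout (a sketch only, assuming the guest network is 192.168.10.0/24 behind br1 and br0 faces the LAN) masquerades outbound guest traffic and redirects port 8023 by destination port rather than source port:

        # allow the host to route between the bridges
        sysctl -w net.ipv4.ip_forward=1
        # NAT guest traffic leaving through br0
        iptables -t nat -A POSTROUTING -s 192.168.10.0/24 -o br0 -j MASQUERADE
        # forward incoming SSH on port 8023 to the guest (note dpt, not spt)
        iptables -t nat -A PREROUTING -i br0 -p tcp --dport 8023 -j DNAT --to-destination 192.168.10.2:22
        # explicit FORWARD rules (the FORWARD policy is already ACCEPT here, shown for completeness)
        iptables -A FORWARD -i br1 -o br0 -s 192.168.10.0/24 -j ACCEPT
        iptables -A FORWARD -i br0 -o br1 -m state --state NEW,RELATED,ESTABLISHED -j ACCEPT

    Since the DNAT rule above matches on --dport, the existing PREROUTING rule that matches tcp spt:8023 would never see the incoming SSH traffic.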

    Read the article

  • Issue in nginx proxying to apache

    - by Luis Masuelli
    My current nginx configuration is as follows: specific configuration for (currently two) domains: server { listen 443 ssl; server_name studiotv.service.tebusco.lan phpmyadmin.service.tebusco.lan; ssl_certificate /home/administrador/nginx-confs/ssl/service.tebusco.lan.crt; ssl_certificate_key /home/administrador/nginx-confs/ssl/service.tebusco.lan.key; location / { proxy_pass http://127.0.0.1:8180; proxy_set_header Host $http_host:8180; } } default configuration for unmatched ssl connections: server { listen 443 default ssl; ssl_certificate /home/administrador/nginx-confs/ssl/service.tebusco.lan.crt; ssl_certificate_key /home/administrador/nginx-confs/ssl/service.tebusco.lan.key; location / { return 403; } } http configuration: server { listen 80; rewrite ^ https://$host$request_uri? permanent; } The intention is clear: Redirect http traffic to https. Proxy each https:// call from phpmyadmin.service.tebusco.lan and studiotv.service.tebusco.lan to apache2. This includes passing a host header, which is detected. Each unmatched ssl connection must return a 403 in nginx. Does not even reach apache2. In the apache2 side of the life, I have a default site, and a non-default site which will match studiotv.service.tebusco.lan: 000-default.conf file (available and enabled): <VirtualHost 127.0.0.1:8180> # The ServerName directive sets the request scheme, hostname and port that # the server uses to identify itself. This is used when creating # redirection URLs. In the context of virtual hosts, the ServerName # specifies what hostname must appear in the request's Host: header to # match this virtual host. For the default virtual host (this file) this # value is not decisive as it is used as a last resort host regardless. # However, you must set it for any further virtual host explicitly. ServerName localhost ServerAdmin webmaster@localhost DocumentRoot /var/www/html <Directory /var/www/html> Order deny,allow Require all granted </Directory> </VirtualHost> # vim: syntax=apache ts=4 sw=4 sts=4 sr noet studiotv.conf file (available and enabled): <VirtualHost *:8180> ServerName studiotv.service.tebusco.lan ServerAdmin [email protected] DocumentRoot /var/www/studiotv <Directory /var/www/studiotv/> Options -Indexes +FollowSymLinks AllowOverride None Order deny,allow Allow from all Require all granted </Directory> # Available loglevels: trace8, ..., trace1, debug, info, notice, warn, # error, crit, alert, emerg. # It is also possible to configure the loglevel for particular # modules, e.g. #LogLevel info ssl:warn # No usamos ${APACHE_LOG_DIR} sino en su lugar /var/log/<host> ErrorLog /var/log/apache2/studiotv/error.log CustomLog /var/log/apache2/studiotv/access.log combined </VirtualHost> # vim: syntax=apache ts=4 sw=4 sts=4 sr noet However, when I hit the browser with http://studiotv.service.tebusco.lan, the default php page is shown instead. Question: What am I missing? (apache 2.4.7, nginx 1.6.0, ubuntu server 14.04).
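    One detail worth noting (an observation based on how Apache selects virtual hosts, not a verified fix): vhosts are matched by best IP address first, so a vhost bound to 127.0.0.1:8180 beats one bound to *:8180 for connections proxied to 127.0.0.1, and name-based ServerName matching inside the wildcard vhost never happens. A sketch of the adjustment:

        # nginx: pass the original host name without appending the backend port
        proxy_set_header Host $host;

        # apache 000-default.conf: bind to the same wildcard address as studiotv.conf
        <VirtualHost *:8180>
            ServerName localhost
            DocumentRoot /var/www/html
        </VirtualHost>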

    Read the article

  • WCF WS-Security and WSE Nonce Authentication

    - by Rick Strahl
    WCF makes it fairly easy to access WS-* Web Services, except when you run into a service format that it doesn't support. Even then WCF provides a huge amount of flexibility to make the service clients work, however finding the proper interfaces to make that happen is not easy to discover and for the most part undocumented unless you're lucky enough to run into a blog, forum or StackOverflow post on the matter. This is definitely true for the Password Nonce as part of the WS-Security/WSE protocol, which is not natively supported in WCF. Specifically I had a need to create a WCF message on the client that includes a WS-Security header that looks like this from their spec document:<soapenv:Header> <wsse:Security soapenv:mustUnderstand="1" xmlns:wsse="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd"> <wsse:UsernameToken wsu:Id="UsernameToken-8" xmlns:wsu="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd"> <wsse:Username>TeStUsErNaMe1</wsse:Username> <wsse:Password Type="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-username-token-profile-1.0#PasswordText" >TeStPaSsWoRd1</wsse:Password> <wsse:Nonce EncodingType="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-soap-message-security-1.0#Base64Binary" >f8nUe3YupTU5ISdCy3X9Gg==</wsse:Nonce> <wsu:Created>2011-05-04T19:01:40.981Z</wsu:Created> </wsse:UsernameToken> </wsse:Security> </soapenv:Header> Specifically, the Nonce and Created keys are what WCF doesn't create or have a built in formatting for. Why is there a nonce? My first thought here was WTF? The username and password are there in clear text, what does the Nonce accomplish? The Nonce and created keys are are part of WSE Security specification and are meant to allow the server to detect and prevent replay attacks. The hashed nonce should be unique per request which the server can store and check for before running another request thus ensuring that a request is not replayed with exactly the same values. Basic ServiceUtl Import - not much Luck The first thing I did when I imported this service with a service reference was to simply import it as a Service Reference. The Add Service Reference import automatically detects that WS-Security is required and appropariately adds the WS-Security to the basicHttpBinding in the config file:<?xml version="1.0" encoding="utf-8" ?> <configuration> <system.serviceModel> <bindings> <basicHttpBinding> <binding name="RealTimeOnlineSoapBinding"> <security mode="Transport" /> </binding> <binding name="RealTimeOnlineSoapBinding1" /> </basicHttpBinding> </bindings> <client> <endpoint address="https://notarealurl.com:443/services/RealTimeOnline" binding="basicHttpBinding" bindingConfiguration="RealTimeOnlineSoapBinding" contract="RealTimeOnline.RealTimeOnline" name="RealTimeOnline" /> </client> </system.serviceModel> </configuration> If if I run this as is using code like this:var client = new RealTimeOnlineClient(); client.ClientCredentials.UserName.UserName = "TheUsername"; client.ClientCredentials.UserName.Password = "ThePassword"; … I get nothing in terms of WS-Security headers. The request is sent, but the the binding expects transport level security to be applied, rather than message level security. 
To fix this so that a WS-Security message header is sent the security mode can be changed to: <security mode="TransportWithMessageCredential" /> Now if I re-run I at least get a WS-Security header which looks like this:<s:Envelope xmlns:s="http://schemas.xmlsoap.org/soap/envelope/" xmlns:u="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd"> <s:Header> <o:Security s:mustUnderstand="1" xmlns:o="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd"> <u:Timestamp u:Id="_0"> <u:Created>2012-11-24T02:55:18.011Z</u:Created> <u:Expires>2012-11-24T03:00:18.011Z</u:Expires> </u:Timestamp> <o:UsernameToken u:Id="uuid-18c215d4-1106-40a5-8dd1-c81fdddf19d3-1"> <o:Username>TheUserName</o:Username> <o:Password Type="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-username-token-profile-1.0#PasswordText" >ThePassword</o:Password> </o:UsernameToken> </o:Security> </s:Header> Closer! Now the WS-Security header is there along with a timestamp field (which might not be accepted by some WS-Security expecting services), but there's no Nonce or created timestamp as required by my original service. Using a CustomBinding instead My next try was to go with a CustomBinding instead of basicHttpBinding as it allows a bit more control over the protocol and transport configurations for the binding. Specifically I can explicitly specify the message protocol(s) used. Using configuration file settings here's what the config file looks like:<?xml version="1.0"?> <configuration> <system.serviceModel> <bindings> <customBinding> <binding name="CustomSoapBinding"> <security includeTimestamp="false" authenticationMode="UserNameOverTransport" defaultAlgorithmSuite="Basic256" requireDerivedKeys="false" messageSecurityVersion="WSSecurity10WSTrustFebruary2005WSSecureConversationFebruary2005WSSecurityPolicy11BasicSecurityProfile10"> </security> <textMessageEncoding messageVersion="Soap11"></textMessageEncoding> <httpsTransport maxReceivedMessageSize="2000000000"/> </binding> </customBinding> </bindings> <client> <endpoint address="https://notrealurl.com:443/services/RealTimeOnline" binding="customBinding" bindingConfiguration="CustomSoapBinding" contract="RealTimeOnline.RealTimeOnline" name="RealTimeOnline" /> </client> </system.serviceModel> <startup> <supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.0"/> </startup> </configuration> This ends up creating a cleaner header that's missing the timestamp field which can cause some services problems. The WS-Security header output generated with the above looks like this:<s:Header> <o:Security s:mustUnderstand="1" xmlns:o="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd"> <o:UsernameToken u:Id="uuid-291622ca-4c11-460f-9886-ac1c78813b24-1"> <o:Username>TheUsername</o:Username> <o:Password Type="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-username-token-profile-1.0#PasswordText" >ThePassword</o:Password> </o:UsernameToken> </o:Security> </s:Header> This is closer as it includes only the username and password. The key here is the protocol for WS-Security:messageSecurityVersion="WSSecurity10WSTrustFebruary2005WSSecureConversationFebruary2005WSSecurityPolicy11BasicSecurityProfile10" which explicitly specifies the protocol version. There are several variants of this specification but none of them seem to support the nonce unfortunately. 
This protocol does allow for optional omission of the Nonce and created timestamp provided (which effectively makes those keys optional). With some services I tried that requested a Nonce just using this protocol actually worked where the default basicHttpBinding failed to connect, so this is a possible solution for access to some services. Unfortunately for my target service that was not an option. The nonce has to be there. Creating Custom ClientCredentials As it turns out WCF doesn't have support for the Digest Nonce as part of WS-Security, and so as far as I can tell there's no way to do it just with configuration settings. I did a bunch of research on this trying to find workarounds for this, and I did find a couple of entries on StackOverflow as well as on the MSDN forums. However, none of these are particularily clear and I ended up using bits and pieces of several of them to arrive at a working solution in the end. http://stackoverflow.com/questions/896901/wcf-adding-nonce-to-usernametoken http://social.msdn.microsoft.com/Forums/en-US/wcf/thread/4df3354f-0627-42d9-b5fb-6e880b60f8ee The latter forum message is the more useful of the two (the last message on the thread in particular) and it has most of the information required to make this work. But it took some experimentation for me to get this right so I'll recount the process here maybe a bit more comprehensively. In order for this to work a number of classes have to be overridden: ClientCredentials ClientCredentialsSecurityTokenManager WSSecurityTokenizer The idea is that we need to create a custom ClientCredential class to hold the custom properties so they can be set from the UI or via configuration settings. The TokenManager and Tokenizer are mainly required to allow the custom credentials class to flow through the WCF pipeline and eventually provide custom serialization. 
Here are the three classes required and their full implementations:public class CustomCredentials : ClientCredentials { public CustomCredentials() { } protected CustomCredentials(CustomCredentials cc) : base(cc) { } public override System.IdentityModel.Selectors.SecurityTokenManager CreateSecurityTokenManager() { return new CustomSecurityTokenManager(this); } protected override ClientCredentials CloneCore() { return new CustomCredentials(this); } } public class CustomSecurityTokenManager : ClientCredentialsSecurityTokenManager { public CustomSecurityTokenManager(CustomCredentials cred) : base(cred) { } public override System.IdentityModel.Selectors.SecurityTokenSerializer CreateSecurityTokenSerializer(System.IdentityModel.Selectors.SecurityTokenVersion version) { return new CustomTokenSerializer(System.ServiceModel.Security.SecurityVersion.WSSecurity11); } } public class CustomTokenSerializer : WSSecurityTokenSerializer { public CustomTokenSerializer(SecurityVersion sv) : base(sv) { } protected override void WriteTokenCore(System.Xml.XmlWriter writer, System.IdentityModel.Tokens.SecurityToken token) { UserNameSecurityToken userToken = token as UserNameSecurityToken; string tokennamespace = "o"; DateTime created = DateTime.Now; string createdStr = created.ToString("yyyy-MM-ddThh:mm:ss.fffZ"); // unique Nonce value - encode with SHA-1 for 'randomness' // in theory the nonce could just be the GUID by itself string phrase = Guid.NewGuid().ToString(); var nonce = GetSHA1String(phrase); // in this case password is plain text // for digest mode password needs to be encoded as: // PasswordAsDigest = Base64(SHA-1(Nonce + Created + Password)) // and profile needs to change to //string password = GetSHA1String(nonce + createdStr + userToken.Password); string password = userToken.Password; writer.WriteRaw(string.Format( "<{0}:UsernameToken u:Id=\"" + token.Id + "\" xmlns:u=\"http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd\">" + "<{0}:Username>" + userToken.UserName + "</{0}:Username>" + "<{0}:Password Type=\"http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-username-token-profile-1.0#PasswordText\">" + password + "</{0}:Password>" + "<{0}:Nonce EncodingType=\"http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-soap-message-security-1.0#Base64Binary\">" + nonce + "</{0}:Nonce>" + "<u:Created>" + createdStr + "</u:Created></{0}:UsernameToken>", tokennamespace)); } protected string GetSHA1String(string phrase) { SHA1CryptoServiceProvider sha1Hasher = new SHA1CryptoServiceProvider(); byte[] hashedDataBytes = sha1Hasher.ComputeHash(Encoding.UTF8.GetBytes(phrase)); return Convert.ToBase64String(hashedDataBytes); } } Realistically only the CustomTokenSerializer has any significant code in. The code there deals with actually serializing the custom credentials using low level XML semantics by writing output into an XML writer. I can't take credit for this code - most of the code comes from the MSDN forum post mentioned earlier - I made a few adjustments to simplify the nonce generation and also added some notes to allow for PasswordDigest generation. Per spec the nonce is nothing more than a unique value that's supposed to be 'random'. I'm thinking that this value can be any string that's unique and a GUID on its own probably would have sufficed. Comments on other posts that GUIDs can be potentially guessed are highly exaggerated to say the least IMHO. 
To satisfy even that aspect though I added the SHA1 encryption and binary decoding to give a more random value that would be impossible to 'guess'. The original example from the forum post used another level of encoding and decoding to string in between - but that really didn't accomplish anything but extra overhead. The header output generated from this looks like this:<s:Header> <o:Security s:mustUnderstand="1" xmlns:o="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd"> <o:UsernameToken u:Id="uuid-f43d8b0d-0ebb-482e-998d-f544401a3c91-1" xmlns:u="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd"> <o:Username>TheUsername</o:Username> <o:Password Type="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-username-token-profile-1.0#PasswordText">ThePassword</o:Password> <o:Nonce EncodingType="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-soap-message-security-1.0#Base64Binary" >PjVE24TC6HtdAnsf3U9c5WMsECY=</o:Nonce> <u:Created>2012-11-23T07:10:04.670Z</u:Created> </o:UsernameToken> </o:Security> </s:Header> which is exactly as it should be. Password Digest? In my case the password is passed in plain text over an SSL connection, so there's no digest required so I was done with the code above. Since I don't have a service handy that requires a password digest,  I had no way of testing the code for the digest implementation, but here is how this is likely to work. If you need to pass a digest encoded password things are a little bit trickier. The password type namespace needs to change to: http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-username-token-profile-1.0#Digest and then the password value needs to be encoded. The format for password digest encoding is this: Base64(SHA-1(Nonce + Created + Password)) and it can be handled in the code above with this code (that's commented in the snippet above): string password = GetSHA1String(nonce + createdStr + userToken.Password); The entire WriteTokenCore method for digest code looks like this:protected override void WriteTokenCore(System.Xml.XmlWriter writer, System.IdentityModel.Tokens.SecurityToken token) { UserNameSecurityToken userToken = token as UserNameSecurityToken; string tokennamespace = "o"; DateTime created = DateTime.Now; string createdStr = created.ToString("yyyy-MM-ddThh:mm:ss.fffZ"); // unique Nonce value - encode with SHA-1 for 'randomness' // in theory the nonce could just be the GUID by itself string phrase = Guid.NewGuid().ToString(); var nonce = GetSHA1String(phrase); string password = GetSHA1String(nonce + createdStr + userToken.Password); writer.WriteRaw(string.Format( "<{0}:UsernameToken u:Id=\"" + token.Id + "\" xmlns:u=\"http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd\">" + "<{0}:Username>" + userToken.UserName + "</{0}:Username>" + "<{0}:Password Type=\"http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-username-token-profile-1.0#Digest\">" + password + "</{0}:Password>" + "<{0}:Nonce EncodingType=\"http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-soap-message-security-1.0#Base64Binary\">" + nonce + "</{0}:Nonce>" + "<u:Created>" + createdStr + "</u:Created></{0}:UsernameToken>", tokennamespace)); } I had no service to connect to to try out Digest auth - if you end up needing it and get it to work please drop a comment… How to use the custom Credentials The easiest way to use the custom credentials is to create the client in code. 
Here's a factory method I use to create an instance of my service client:  public static RealTimeOnlineClient CreateRealTimeOnlineProxy(string url, string username, string password) { if (string.IsNullOrEmpty(url)) url = "https://notrealurl.com:443/cows/services/RealTimeOnline"; CustomBinding binding = new CustomBinding(); var security = TransportSecurityBindingElement.CreateUserNameOverTransportBindingElement(); security.IncludeTimestamp = false; security.DefaultAlgorithmSuite = SecurityAlgorithmSuite.Basic256; security.MessageSecurityVersion = MessageSecurityVersion.WSSecurity10WSTrustFebruary2005WSSecureConversationFebruary2005WSSecurityPolicy11BasicSecurityProfile10; var encoding = new TextMessageEncodingBindingElement(); encoding.MessageVersion = MessageVersion.Soap11; var transport = new HttpsTransportBindingElement(); transport.MaxReceivedMessageSize = 20000000; // 20 megs binding.Elements.Add(security); binding.Elements.Add(encoding); binding.Elements.Add(transport); RealTimeOnlineClient client = new RealTimeOnlineClient(binding, new EndpointAddress(url)); // to use full client credential with Nonce uncomment this code: // it looks like this might not be required - the service seems to work without it client.ChannelFactory.Endpoint.Behaviors.Remove<System.ServiceModel.Description.ClientCredentials>(); client.ChannelFactory.Endpoint.Behaviors.Add(new CustomCredentials()); client.ClientCredentials.UserName.UserName = username; client.ClientCredentials.UserName.Password = password; return client; } This returns a service client that's ready to call other service methods. The key item in this code is the ChannelFactory endpoint behavior modification that that first removes the original ClientCredentials and then adds the new one. The ClientCredentials property on the client is read only and this is the way it has to be added.   Summary It's a bummer that WCF doesn't suport WSE Security authentication with nonce values out of the box. From reading the comments in posts/articles while I was trying to find a solution, I found that this feature was omitted by design as this protocol is considered unsecure. While I agree that plain text passwords are rarely a good idea even if they go over secured SSL connection as WSE Security does, there are unfortunately quite a few services (mosly Java services I suspect) that use this protocol. I've run into this twice now and trying to find a solution online I can see that this is not an isolated problem - many others seem to have struggled with this. It seems there are about a dozen questions about this on StackOverflow all with varying incomplete answers. Hopefully this post provides a little more coherent content in one place. Again I marvel at WCF and its breadth of support for protocol features it has in a single tool. And even when it can't handle something there are ways to get it working via extensibility. But at the same time I marvel at how freaking difficult it is to arrive at these solutions. I mean there's no way I could have ever figured this out on my own. It takes somebody working on the WCF team or at least being very, very intricately involved in the innards of WCF to figure out the interconnection of the various objects to do this from scratch. Luckily this is an older problem that has been discussed extensively online and I was able to cobble together a solution from the online content. 
I'm glad it worked out that way, but it feels dirty and incomplete in that there's a whole learning path that was omitted to get here… Man am I glad I'm not dealing with SOAP services much anymore. REST service security - even when using some sort of federation - is a piece of cake by comparison :-) I'm sure once standards bodies get involved we'll be right back in security standard hell… © Rick Strahl, West Wind Technologies, 2005-2012. Posted in WCF, Web Services.

    Read the article

  • Fixing Robocopy for SQL Server Jobs

    - by Most Valuable Yak (Rob Volk)
    Robocopy is one of, if not the, best life-saving/greatest-thing-since-sliced-bread command line utilities ever to come from Microsoft.  If you're not using it already, what are you waiting for? Of course, being a Microsoft product, it's not exactly perfect. ;)  Specifically, it sets the ERRORLEVEL to a non-zero value even if the copy is successful.  This causes a problem in SQL Server job steps, since non-zero ERRORLEVELs report as failed. You can work around this by having your SQL job go to the next step on failure, but then you can't determine if there was a genuine error.  Plus you still see annoying red X's in your job history.  One way I've found to avoid this is to use a batch file that runs Robocopy, and I add some commands after it:

        robocopy d:\backups \\BackupServer\BackupFolder *.bak
        rem suppress successful robocopy exit statuses, only report genuine errors (bitmask 16 and 8 settings)
        set /A errlev="%ERRORLEVEL% & 24"
        rem exit batch file with errorlevel so SQL job can succeed or fail appropriately
        exit /B %errlev%

    (The REM statements are simply comments and don't need to be included in the batch file.) The SET command lets you use expressions when you use the /A switch.  So I set an environment variable "errlev" to a bitwise AND with the ERRORLEVEL value. Robocopy's exit codes use a bitmap/bitmask to specify its exit status.  The bits for 1, 2, and 4 do not indicate any kind of failure, but 8 and 16 do.  So by adding 16 + 8 to get 24, and doing a bitwise AND, I suppress any of the other bits that might be set, and allow either or both of the error bits to pass. The next step is to use the EXIT command with the /B switch to set a new ERRORLEVEL value, using the "errlev" variable.  This will now return zero (unless Robocopy had real errors) and allow your SQL job step to report success. This technique should also work for other command-line utilities.  The only issue I've found is that it requires the commands to be part of a batch file, so if you use Robocopy directly in your SQL job step you'd need to place it in a batch.  If you also have multiple Robocopy calls, you'll need to place the SET /A command ONLY after the last one.  You'd therefore lose any errors from previous calls, unless you use multiple "errlev" variables and combine them. (I'll leave this as an exercise for the reader - see the sketch below.) The SET /A syntax also permits other kinds of expressions to be calculated.  You can get a full list by running "SET /?" at a command prompt.
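    For that multiple-call case, a sketch (with hypothetical paths) is to mask each call's exit code separately and then combine the masked values, for example with a bitwise OR so a genuine failure in either copy survives to the final exit code:

        robocopy d:\backups \\BackupServer\BackupFolder *.bak
        set /A errlev1="%ERRORLEVEL% & 24"
        robocopy d:\logs \\BackupServer\LogFolder *.trn
        set /A errlev2="%ERRORLEVEL% & 24"
        rem combine the masked codes; non-zero if either call had a real error
        set /A errlev="%errlev1% | %errlev2%"
        exit /B %errlev%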

    Read the article

  • Disabling Minimize and Maximize buttons in a WPF Window

    - by marianor
    In WPF there is no built-in way to disable the Minimize and Maximize buttons when the WindowStyle is SingleBorderWindow or ThreeDBorderWindow. In Windows Forms there are properties like ControlBox, MinimizeBox and MaximizeBox that allow you to do that. Because the WPF window internally has an hWnd, we can do this using the Windows API (GetWindowLong and SetWindowLong will do the trick). I wrote three attached properties applicable to Window that use the internal API in order to disable...(read more)
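    The interop involved looks roughly like this (a simplified sketch rather than the article's attached-property implementation):

        using System;
        using System.Runtime.InteropServices;
        using System.Windows;
        using System.Windows.Interop;

        public static class WindowButtons
        {
            private const int GWL_STYLE = -16;
            private const int WS_MAXIMIZEBOX = 0x00010000;
            private const int WS_MINIMIZEBOX = 0x00020000;

            [DllImport("user32.dll")]
            private static extern int GetWindowLong(IntPtr hWnd, int nIndex);

            [DllImport("user32.dll")]
            private static extern int SetWindowLong(IntPtr hWnd, int nIndex, int dwNewLong);

            // Call from the window's SourceInitialized handler, once the hWnd exists.
            public static void DisableMinimizeAndMaximize(Window window)
            {
                IntPtr hWnd = new WindowInteropHelper(window).Handle;
                int style = GetWindowLong(hWnd, GWL_STYLE);
                SetWindowLong(hWnd, GWL_STYLE, style & ~WS_MAXIMIZEBOX & ~WS_MINIMIZEBOX);
            }
        }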

    Read the article

  • VS 2010 SP1 and SQL CE

    - by ScottGu
    Last month we released the Beta of VS 2010 Service Pack 1 (SP1).  You can learn more about the VS 2010 SP1 Beta from Jason Zander’s two blog posts about it, and from Scott Hanselman’s blog post that covers some of the new capabilities enabled with it.   You can download and install the VS 2010 SP1 Beta here. Last week I blogged about the new Visual Studio support for IIS Express that we are adding with VS 2010 SP1. In today’s post I’m going to talk about the new VS 2010 SP1 tooling support for SQL CE, and walkthrough some of the cool scenarios it enables.  SQL CE – What is it and why should you care? SQL CE is a free, embedded, database engine that enables easy database storage. No Database Installation Required SQL CE does not require you to run a setup or install a database server in order to use it.  You can simply copy the SQL CE binaries into the \bin directory of your ASP.NET application, and then your web application can use it as a database engine.  No setup or extra security permissions are required for it to run. You do not need to have an administrator account on the machine. Just copy your web application onto any server and it will work. This is true even of medium-trust applications running in a web hosting environment. SQL CE runs in-memory within your ASP.NET application and will start-up when you first access a SQL CE database, and will automatically shutdown when your application is unloaded.  SQL CE databases are stored as files that live within the \App_Data folder of your ASP.NET Applications. Works with Existing Data APIs SQL CE 4 works with existing .NET-based data APIs, and supports a SQL Server compatible query syntax.  This means you can use existing data APIs like ADO.NET, as well as use higher-level ORMs like Entity Framework and NHibernate with SQL CE.  This enables you to use the same data programming skills and data APIs you know today. Supports Development, Testing and Production Scenarios SQL CE can be used for development scenarios, testing scenarios, and light production usage scenarios.  With the SQL CE 4 release we’ve done the engineering work to ensure that SQL CE won’t crash or deadlock when used in a multi-threaded server scenario (like ASP.NET).  This is a big change from previous releases of SQL CE – which were designed for client-only scenarios and which explicitly blocked running in web-server environments.  Starting with SQL CE 4 you can use it in a web-server as well. There are no license restrictions with SQL CE.  It is also totally free. Easy Migration to SQL Server SQL CE is an embedded database – which makes it ideal for development, testing, and light-usage scenarios.  For high-volume sites and applications you’ll probably want to migrate your database to use SQL Server Express (which is free), SQL Server or SQL Azure.  These servers enable much better scalability, more development features (including features like Stored Procedures – which aren’t supported with SQL CE), as well as more advanced data management capabilities. We’ll ship migration tools that enable you to optionally take SQL CE databases and easily upgrade them to use SQL Server Express, SQL Server, or SQL Azure.  You will not need to change your code when upgrading a SQL CE database to SQL Server or SQL Azure.  Our goal is to enable you to be able to simply change the database connection string in your web.config file and have your application just work. 
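    As a concrete illustration of the "works with existing data APIs" point above, here is a small, hedged ADO.NET sketch (not from the original post) that opens a SQL CE database sitting in \App_Data and reads a few rows. The file name, table name and column names are assumptions made for the example; the only requirement is a reference to the SQL CE provider assembly.

        using System;
        using System.Data.SqlServerCe;   // SQL CE ADO.NET provider (System.Data.SqlServerCe.dll)

        public static class SqlCeQuickStart
        {
            // Hypothetical example; in an ASP.NET app |DataDirectory| resolves to ~/App_Data.
            public static void PrintProducts()
            {
                var connectionString = @"Data Source=|DataDirectory|\Store.sdf";

                using (var connection = new SqlCeConnection(connectionString))
                using (var command = new SqlCeCommand("SELECT ProductName, UnitPrice FROM Products", connection))
                {
                    connection.Open();
                    using (var reader = command.ExecuteReader())
                    {
                        while (reader.Read())
                        {
                            Console.WriteLine("{0}: {1}", reader.GetString(0), reader.GetDecimal(1));
                        }
                    }
                }
            }
        }

    The same connection string is all that higher-level APIs such as Entity Framework need, which is what the walkthroughs below rely on.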
New Tooling Support for SQL CE in VS 2010 SP1 VS 2010 SP1 includes much improved tooling support for SQL CE, and adds support for using SQL CE within ASP.NET projects for the first time.  With VS 2010 SP1 you can now: Create new SQL CE Databases Edit and Modify SQL CE Database Schema and Indexes Populate SQL CE Databases within Data Use the Entity Framework (EF) designer to create model layers against SQL CE databases Use EF Code First to define model layers in code, then create a SQL CE database from them, and optionally edit the DB with VS Deploy SQL CE databases to remote servers using Web Deploy and optionally convert them to full SQL Server databases You can take advantage of all of the above features from within both ASP.NET Web Forms and ASP.NET MVC based projects. Download You can enable SQL CE tooling support within VS 2010 by first installing VS 2010 SP1 (beta). Once SP1 is installed, you’ll also then need to install the SQL CE Tools for Visual Studio download.  This is a separate download that enables the SQL CE tooling support for VS 2010 SP1. Walkthrough of Two Scenarios In this blog post I’m going to walkthrough how you can take advantage of SQL CE and VS 2010 SP1 using both an ASP.NET Web Forms and an ASP.NET MVC based application. Specifically, we’ll walkthrough: How to create a SQL CE database using VS 2010 SP1, then use the EF4 visual designers in Visual Studio to construct a model layer from it, and then display and edit the data using an ASP.NET GridView control. How to use an EF Code First approach to define a model layer using POCO classes and then have EF Code-First “auto-create” a SQL CE database for us based on our model classes.  We’ll then look at how we can use the new VS 2010 SP1 support for SQL CE to inspect the database that was created, populate it with data, and later make schema changes to it.  We’ll do all this within the context of an ASP.NET MVC based application. You can follow the two walkthroughs below on your own machine by installing VS 2010 SP1 (beta) and then installing the SQL CE Tools for Visual Studio download (which is a separate download that enables SQL CE tooling support for VS 2010 SP1). Walkthrough 1: Create a SQL CE Database, Create EF Model Classes, Edit the Data with a GridView This first walkthrough will demonstrate how to create and define a SQL CE database within an ASP.NET Web Form application.  We’ll then build an EF model layer for it and use that model layer to enable data editing scenarios with an <asp:GridView> control. Step 1: Create a new ASP.NET Web Forms Project We’ll begin by using the File->New Project menu command within Visual Studio to create a new ASP.NET Web Forms project.  We’ll use the “ASP.NET Web Application” project template option so that it has a default UI skin implemented: Step 2: Create a SQL CE Database Right click on the “App_Data” folder within the created project and choose the “Add->New Item” menu command: This will bring up the “Add Item” dialog box.  Select the “SQL Server Compact 4.0 Local Database” item (new in VS 2010 SP1) and name the database file to create “Store.sdf”: Note that SQL CE database files have a .sdf filename extension. Place them within the /App_Data folder of your ASP.NET application to enable easy deployment. When we clicked the “Add” button above a Store.sdf file was added to our project: Step 3: Adding a “Products” Table Double-clicking the “Store.sdf” database file will open it up within the Server Explorer tab.  
Since it is a new database there are no tables within it: Right click on the “Tables” icon and choose the “Create Table” menu command to create a new database table.  We’ll name the new table “Products” and add 4 columns to it.  We’ll mark the first column as a primary key (and make it an identify column so that its value will automatically increment with each new row): When we click “ok” our new Products table will be created in the SQL CE database. Step 4: Populate with Data Once our Products table is created it will show up within the Server Explorer.  We can right-click it and choose the “Show Table Data” menu command to edit its data: Let’s add a few sample rows of data to it: Step 5: Create an EF Model Layer We have a SQL CE database with some data in it – let’s now create an EF Model Layer that will provide a way for us to easily query and update data within it. Let’s right-click on our project and choose the “Add->New Item” menu command.  This will bring up the “Add New Item” dialog – select the “ADO.NET Entity Data Model” item within it and name it “Store.edmx” This will add a new Store.edmx item to our solution explorer and launch a wizard that allows us to quickly create an EF model: Select the “Generate From Database” option above and click next.  Choose to use the Store.sdf SQL CE database we just created and then click next again.  The wizard will then ask you what database objects you want to import into your model.  Let’s choose to import the “Products” table we created earlier: When we click the “Finish” button Visual Studio will open up the EF designer.  It will have a Product entity already on it that maps to the “Products” table within our SQL CE database: The VS 2010 SP1 EF designer works exactly the same with SQL CE as it does already with SQL Server and SQL Express.  The Product entity above will be persisted as a class (called “Product”) that we can programmatically work against within our ASP.NET application. Step 6: Compile the Project Before using your model layer you’ll need to build your project.  Do a Ctrl+Shift+B to compile the project, or use the Build->Build Solution menu command. Step 7: Create a Page that Uses our EF Model Layer Let’s now create a simple ASP.NET Web Form that contains a GridView control that we can use to display and edit the our Products data (via the EF Model Layer we just created). Right-click on the project and choose the Add->New Item command.  Select the “Web Form from Master Page” item template, and name the page you create “Products.aspx”.  Base the master page on the “Site.Master” template that is in the root of the project. Add an <h2>Products</h2> heading the new Page, and add an <asp:gridview> control within it: Then click the “Design” tab to switch into design-view. Select the GridView control, and then click the top-right corner to display the GridView’s “Smart Tasks” UI: Choose the “New data source…” drop down option above.  This will bring up the below dialog which allows you to pick your Data Source type: Select the “Entity” data source option – which will allow us to easily connect our GridView to the EF model layer we created earlier.  This will bring up another dialog that allows us to pick our model layer: Select the “StoreEntities” option in the dropdown – which is the EF model layer we created earlier.  
Then click next – which will allow us to pick which entity within it we want to bind to: Select the “Products” entity in the above dialog – which indicates that we want to bind against the “Product” entity class we defined earlier.  Then click the “Enable automatic updates” checkbox to ensure that we can both query and update Products.  When you click “Finish” VS will wire-up an <asp:EntityDataSource> to your <asp:GridView> control: The last two steps we’ll do will be to click the “Enable Editing” checkbox on the Grid (which will cause the Grid to display an “Edit” link on each row) and (optionally) use the Auto Format dialog to pick a UI template for the Grid. Step 8: Run the Application Let’s now run our application and browse to the /Products.aspx page that contains our GridView.  When we do so we’ll see a Grid UI of the Products within our SQL CE database. Clicking the “Edit” link for any of the rows will allow us to edit their values: When we click “Update” the GridView will post back the values, persist them through our EF Model Layer, and ultimately save them within our SQL CE database. Learn More about using EF with ASP.NET Web Forms Read this tutorial series on the http://asp.net site to learn more about how to use EF with ASP.NET Web Forms.  The tutorial series uses SQL Express as the database – but the nice thing is that all of the same steps/concepts can also now also be done with SQL CE.   Walkthrough 2: Using EF Code-First with SQL CE and ASP.NET MVC 3 We used a database-first approach with the sample above – where we first created the database, and then used the EF designer to create model classes from the database.  In addition to supporting a designer-based development workflow, EF also enables a more code-centric option which we call “code first development”.  Code-First Development enables a pretty sweet development workflow.  It enables you to: Define your model objects by simply writing “plain old classes” with no base classes or visual designer required Use a “convention over configuration” approach that enables database persistence without explicitly configuring anything Optionally override the convention-based persistence and use a fluent code API to fully customize the persistence mapping Optionally auto-create a database based on the model classes you define – allowing you to start from code first I’ve done several blog posts about EF Code First in the past – I really think it is great.  The good news is that it also works very well with SQL CE. The combination of SQL CE, EF Code First, and the new VS tooling support for SQL CE, enables a pretty nice workflow.  Below is a simple example of how you can use them to build a simple ASP.NET MVC 3 application. Step 1: Create a new ASP.NET MVC 3 Project We’ll begin by using the File->New Project menu command within Visual Studio to create a new ASP.NET MVC 3 project.  We’ll use the “Internet Project” template so that it has a default UI skin implemented: Step 2: Use NuGet to Install EFCodeFirst Next we’ll use the NuGet package manager (automatically installed by ASP.NET MVC 3) to add the EFCodeFirst library to our project.  We’ll use the Package Manager command shell to do this.  Bring up the package manager console within Visual Studio by selecting the View->Other Windows->Package Manager Console menu command.  
Then type: install-package EFCodeFirst within the package manager console to download the EFCodeFirst library and have it be added to our project: When we enter the above command, the EFCodeFirst library will be downloaded and added to our application: Step 3: Build Some Model Classes Using a “code first” based development workflow, we will create our model classes first (even before we have a database).  We create these model classes by writing code. For this sample, we will right click on the “Models” folder of our project and add the below three classes to our project: The “Dinner” and “RSVP” model classes above are “plain old CLR objects” (aka POCO).  They do not need to derive from any base classes or implement any interfaces, and the properties they expose are standard .NET data-types.  No data persistence attributes or data code has been added to them.   The “NerdDinners” class derives from the DbContext class (which is supplied by EFCodeFirst) and handles the retrieval/persistence of our Dinner and RSVP instances from a database. Step 4: Listing Dinners We’ve written all of the code necessary to implement our model layer for this simple project.  Let’s now expose and implement the URL: /Dinners/Upcoming within our project.  We’ll use it to list upcoming dinners that happen in the future. We’ll do this by right-clicking on our “Controllers” folder and select the “Add->Controller” menu command.  We’ll name the Controller we want to create “DinnersController”.  We’ll then implement an “Upcoming” action method within it that lists upcoming dinners using our model layer above.  We will use a LINQ query to retrieve the data and pass it to a View to render with the code below: We’ll then right-click within our Upcoming method and choose the “Add-View” menu command to create an “Upcoming” view template that displays our dinners.  We’ll use the “empty” template option within the “Add View” dialog and write the below view template using Razor: Step 4: Configure our Project to use a SQL CE Database We have finished writing all of our code – our last step will be to configure a database connection-string to use. We will point our NerdDinners model class to a SQL CE database by adding the below <connectionString> to the web.config file at the top of our project: EF Code First uses a default convention where context classes will look for a connection-string that matches the DbContext class name.  Because we created a “NerdDinners” class earlier, we’ve also named our connectionstring “NerdDinners”.  Above we are configuring our connection-string to use SQL CE as the database, and telling it that our SQL CE database file will live within the \App_Data directory of our ASP.NET project. Step 5: Running our Application Now that we’ve built our application, let’s run it! We’ll browse to the /Dinners/Upcoming URL – doing so will display an empty list of upcoming dinners: You might ask – but where did it query to get the dinners from? We didn’t explicitly create a database?!? One of the cool features that EF Code-First supports is the ability to automatically create a database (based on the schema of our model classes) when the database we point it at doesn’t exist.  Above we configured  EF Code-First to point at a SQL CE database in the \App_Data\ directory of our project.  When we ran our application, EF Code-First saw that the SQL CE database didn’t exist and automatically created it for us. 
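    The original post shows the Dinner, RSVP and NerdDinners classes (and the DinnersController) as screenshots, which don't survive in this excerpt. The sketch below is a hedged reconstruction: the class names, the DbContext base class and the /Dinners/Upcoming action come from the text above, while the individual property names are assumptions added for illustration.

        using System;
        using System.Collections.Generic;
        using System.Data.Entity;   // DbContext/DbSet from the EFCodeFirst package
        using System.Linq;
        using System.Web.Mvc;

        public class Dinner
        {
            public int DinnerID { get; set; }          // property names are assumed, not from the post
            public string Title { get; set; }
            public DateTime EventDate { get; set; }
            public string Address { get; set; }
            public virtual ICollection<RSVP> RSVPs { get; set; }
        }

        public class RSVP
        {
            public int RsvpID { get; set; }
            public int DinnerID { get; set; }
            public string AttendeeEmail { get; set; }
            public virtual Dinner Dinner { get; set; }
        }

        // By convention the context class name ("NerdDinners") doubles as the connection-string name.
        public class NerdDinners : DbContext
        {
            public DbSet<Dinner> Dinners { get; set; }
            public DbSet<RSVP> RSVPs { get; set; }
        }

        public class DinnersController : Controller
        {
            private readonly NerdDinners db = new NerdDinners();

            // GET: /Dinners/Upcoming
            public ActionResult Upcoming()
            {
                var upcoming = from d in db.Dinners
                               where d.EventDate > DateTime.Now
                               orderby d.EventDate
                               select d;
                return View(upcoming.ToList());
            }
        }

    With only this code and a connection string named "NerdDinners" in web.config, EF Code First can create the NerdDinners.sdf file on first use, exactly as described above.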
Step 6: Using VS 2010 SP1 to Explore our newly created SQL CE Database Click the “Show all Files” icon within the Solution Explorer and you’ll see the “NerdDinners.sdf” SQL CE database file that was automatically created for us by EF code-first within the \App_Data\ folder: We can optionally right-click on the file and “Include in Project" to add it to our solution: We can also double-click the file (regardless of whether it is added to the project) and VS 2010 SP1 will open it as a database we can edit within the “Server Explorer” tab of the IDE. Below is the view we get when we double-click our NerdDinners.sdf SQL CE file.  We can drill in to see the schema of the Dinners and RSVPs tables in the tree explorer.  Notice how two tables - Dinners and RSVPs – were automatically created for us within our SQL CE database.  This was done by EF Code First when we accessed the NerdDinners class by running our application above: We can right-click on a Table and use the “Show Table Data” command to enter some upcoming dinners in our database: We’ll use the built-in editor that VS 2010 SP1 supports to populate our table data below: And now when we hit “refresh” on the /Dinners/Upcoming URL within our browser we’ll see some upcoming dinners show up: Step 7: Changing our Model and Database Schema Let’s now modify the schema of our model layer and database, and walkthrough one way that the new VS 2010 SP1 Tooling support for SQL CE can make this easier.  With EF Code-First you typically start making database changes by modifying the model classes.  For example, let’s add an additional string property called “UrlLink” to our “Dinner” class.  We’ll use this to point to a link for more information about the event: Now when we re-run our project, and visit the /Dinners/Upcoming URL we’ll see an error thrown: We are seeing this error because EF Code-First automatically created our database, and by default when it does this it adds a table that helps tracks whether the schema of our database is in sync with our model classes.  EF Code-First helpfully throws an error when they become out of sync – making it easier to track down issues at development time that you might otherwise only find (via obscure errors) at runtime.  Note that if you do not want this feature you can turn it off by changing the default conventions of your DbContext class (in this case our NerdDinners class) to not track the schema version. Our model classes and database schema are out of sync in the above example – so how do we fix this?  There are two approaches you can use today: Delete the database and have EF Code First automatically re-create the database based on the new model class schema (losing the data within the existing DB) Modify the schema of the existing database to make it in sync with the model classes (keeping/migrating the data within the existing DB) There are a couple of ways you can do the second approach above.  Below I’m going to show how you can take advantage of the new VS 2010 SP1 Tooling support for SQL CE to use a database schema tool to modify our database structure.  We are also going to be supporting a “migrations” feature with EF in the future that will allow you to automate/script database schema migrations programmatically. Step 8: Modify our SQL CE Database Schema using VS 2010 SP1 The new SQL CE Tooling support within VS 2010 SP1 makes it easy to modify the schema of our existing SQL CE database.  
To do this we’ll right-click on our “Dinners” table and choose the “Edit Table Schema” command: This will bring up the below “Edit Table” dialog.  We can rename, change or delete any of the existing columns in our table, or click at the bottom of the column listing and type to add a new column.  Below I’ve added a new “UrlLink” column of type “nvarchar” (since our property is a string): When we click ok our database will be updated to have the new column and our schema will now match our model classes. Because we are manually modifying our database schema, there is one additional step we need to take to let EF Code-First know that the database schema is in sync with our model classes.  As i mentioned earlier, when a database is automatically created by EF Code-First it adds a “EdmMetadata” table to the database to track schema versions (and hash our model classes against them to detect mismatches between our model classes and the database schema): Since we are manually updating and maintaining our database schema, we don’t need this table – and can just delete it: This will leave us with just the two tables that correspond to our model classes: And now when we re-run our /Dinners/Upcoming URL it will display the dinners correctly: One last touch we could do would be to update our view to check for the new UrlLink property and render a <a> link to it if an event has one: And now when we refresh our /Dinners/Upcoming we will see hyperlinks for the events that have a UrlLink stored in the database: Summary SQL CE provides a free, embedded, database engine that you can use to easily enable database storage.  With SQL CE 4 you can now take advantage of it within ASP.NET projects and applications (both Web Forms and MVC). VS 2010 SP1 provides tooling support that enables you to easily create, edit and modify SQL CE databases – as well as use the standard EF designer against them.  This allows you to re-use your existing skills and data knowledge while taking advantage of an embedded database option.  This is useful both for small applications (where you don’t need the scalability of a full SQL Server), as well as for development and testing scenarios – where you want to be able to rapidly develop/test your application without having a full database instance.  SQL CE makes it easy to later migrate your data to a full SQL Server or SQL Azure instance if you want to – without having to change any code in your application.  All we would need to change in the above two scenarios is the <connectionString> value within the web.config file in order to have our code run against a full SQL Server.  This provides the flexibility to scale up your application starting from a small embedded database solution as needed. Hope this helps, Scott P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • Top things web developers should know about the Visual Studio 2013 release

    - by Jon Galloway
    ASP.NET and Web Tools for Visual Studio 2013 Release NotesASP.NET and Web Tools for Visual Studio 2013 Release NotesSummary for lazy readers: Visual Studio 2013 is now available for download on the Visual Studio site and on MSDN subscriber downloads) Visual Studio 2013 installs side by side with Visual Studio 2012 and supports round-tripping between Visual Studio versions, so you can try it out without committing to a switch Visual Studio 2013 ships with the new version of ASP.NET, which includes ASP.NET MVC 5, ASP.NET Web API 2, Razor 3, Entity Framework 6 and SignalR 2.0 The new releases ASP.NET focuses on One ASP.NET, so core features and web tools work the same across the platform (e.g. adding ASP.NET MVC controllers to a Web Forms application) New core features include new templates based on Bootstrap, a new scaffolding system, and a new identity system Visual Studio 2013 is an incredible editor for web files, including HTML, CSS, JavaScript, Markdown, LESS, Coffeescript, Handlebars, Angular, Ember, Knockdown, etc. Top links: Visual Studio 2013 content on the ASP.NET site are in the standard new releases area: http://www.asp.net/vnext ASP.NET and Web Tools for Visual Studio 2013 Release Notes Short intro videos on the new Visual Studio web editor features from Scott Hanselman and Mads Kristensen Announcing release of ASP.NET and Web Tools for Visual Studio 2013 post on the official .NET Web Development and Tools Blog Scott Guthrie's post: Announcing the Release of Visual Studio 2013 and Great Improvements to ASP.NET and Entity Framework Okay, for those of you who are still with me, let's dig in a bit. Quick web dev notes on downloading and installing Visual Studio 2013 I found Visual Studio 2013 to be a pretty fast install. According to Brian Harry's release post, installing over pre-release versions of Visual Studio is supported.  I've installed the release version over pre-release versions, and it worked fine. If you're only going to be doing web development, you can speed up the install if you just select Web Developer tools. Of course, as a good Microsoft employee, I'll mention that you might also want to install some of those other features, like the Store apps for Windows 8 and the Windows Phone 8.0 SDK, but they do download and install a lot of other stuff (e.g. the Windows Phone SDK sets up Hyper-V and downloads several GB's of VM's). So if you're planning just to do web development for now, you can pick just the Web Developer Tools and install the other stuff later. If you've got a fast internet connection, I recommend using the web installer instead of downloading the ISO. The ISO includes all the features, whereas the web installer just downloads what you're installing. Visual Studio 2013 development settings and color theme When you start up Visual Studio, it'll prompt you to pick some defaults. These are totally up to you -whatever suits your development style - and you can change them later. As I said, these are completely up to you. I recommend either the Web Development or Web Development (Code Only) settings. The only real difference is that Code Only hides the toolbars, and you can switch between them using Tools / Import and Export Settings / Reset. Web Development settings Web Development (code only) settings Usually I've just gone with Web Development (code only) in the past because I just want to focus on the code, although the Standard toolbar does make it easier to switch default web browsers. More on that later. Color theme Sigh. 
Okay, everyone's got their favorite colors. I alternate between Light and Dark depending on my mood, and I personally like how the low contrast on the window chrome in those themes puts the emphasis on my code rather than the tabs and toolbars. I know some people got pretty worked up over that, though, and wanted the blue theme back. I personally don't like it - it reminds me of ancient versions of Visual Studio that I don't want to think about anymore. So here's the thing: if you install Visual Studio Ultimate, it defaults to Blue. The other versions default to Light. If you use Blue, I won't criticize you - out loud, that is. You can change themes really easily - either Tools / Options / Environment / General, or the smart way: ctrl+q for quick launch, then type Theme and hit enter. Signing in During the first run, you'll be prompted to sign in. You don't have to - you can click the "Not now, maybe later" link at the bottom of that dialog. I recommend signing in, though. It's not hooked in with licensing or tracking the kind of code you write to sell you components. It is doing good things, like  syncing your Visual Studio settings between computers. More about that here. So, you don't have to, but I sure do. Overview of shiny new things in ASP.NET land There are a lot of good new things in ASP.NET. I'll list some of my favorite here, but you can read more on the ASP.NET site. One ASP.NET You've heard us talk about this for a while. The idea is that options are good, but choice can be a burden. When you start a new ASP.NET project, why should you have to make a tough decision - with long-term consequences - about how your application will work? If you want to use ASP.NET Web Forms, but have the option of adding in ASP.NET MVC later, why should that be hard? It's all ASP.NET, right? Ideally, you'd just decide that you want to use ASP.NET to build sites and services, and you could use the appropriate tools (the green blocks below) as you needed them. So, here it is. When you create a new ASP.NET application, you just create an ASP.NET application. Next, you can pick from some templates to get you started... but these are different. They're not "painful decision" templates, they're just some starting pieces. And, most importantly, you can mix and match. I can pick a "mostly" Web Forms template, but include MVC and Web API folders and core references. If you've tried to mix and match in the past, you're probably aware that it was possible, but not pleasant. ASP.NET MVC project files contained special project type GUIDs, so you'd only get controller scaffolding support in a Web Forms project if you manually edited the csproj file. Features in one stack didn't work in others. Project templates were painful choices. That's no longer the case. Hooray! I just did a demo in a presentation last week where I created a new Web Forms + MVC + Web API site, built a model, scaffolded MVC and Web API controllers with EF Code First, add data in the MVC view, viewed it in Web API, then added a GridView to the Web Forms Default.aspx page and bound it to the Model. In about 5 minutes. Sure, it's a simple example, but it's great to be able to share code and features across the whole ASP.NET family. Authentication In the past, authentication was built into the templates. So, for instance, there was an ASP.NET MVC 4 Intranet Project template which created a new ASP.NET MVC 4 application that was preconfigured for Windows Authentication. 
All of that authentication stuff was built into each template, so they varied between the stacks, and you couldn't reuse them. You didn't see a lot of changes to the authentication options, since they required big changes to a bunch of project templates. Now, the new project dialog includes a common authentication experience. When you hit the Change Authentication button, you get some common options that work the same way regardless of the template or reference settings you've made. These options work on all ASP.NET frameworks, and all hosting environments (IIS, IIS Express, or OWIN for self-host) The default is Individual User Accounts: This is the standard "create a local account, using username / password or OAuth" thing; however, it's all built on the new Identity system. More on that in a second. The one setting that has some configuration to it is Organizational Accounts, which lets you configure authentication using Active Directory, Windows Azure Active Directory, or Office 365. Identity There's a new identity system. We've taken the best parts of the previous ASP.NET Membership and Simple Identity systems, rolled in a lot of feedback and made big enhancements to support important developer concerns like unit testing and extensiblity. I've written long posts about ASP.NET identity, and I'll do it again. Soon. This is not that post. The short version is that I think we've finally got just the right Identity system. Some of my favorite features: There are simple, sensible defaults that work well - you can File / New / Run / Register / Login, and everything works. It supports standard username / password as well as external authentication (OAuth, etc.). It's easy to customize without having to re-implement an entire provider. It's built using pluggable pieces, rather than one large monolithic system. It's built using interfaces like IUser and IRole that allow for unit testing, dependency injection, etc. You can easily add user profile data (e.g. URL, twitter handle, birthday). You just add properties to your ApplicationUser model and they'll automatically be persisted. Complete control over how the identity data is persisted. By default, everything works with Entity Framework Code First, but it's built to support changes from small (modify the schema) to big (use another ORM, store your data in a document database or in the cloud or in XML or in the EXIF data of your desktop background or whatever). It's configured via OWIN. More on OWIN and Katana later, but the fact that it's built using OWIN means it's portable. You can find out more in the Authentication and Identity section of the ASP.NET site (and lots more content will be going up there soon). New Bootstrap based project templates The new project templates are built using Bootstrap 3. Bootstrap (formerly Twitter Bootstrap) is a front-end framework that brings a lot of nice benefits: It's responsive, so your projects will automatically scale to device width using CSS media queries. For example, menus are full size on a desktop browser, but on narrower screens you automatically get a mobile-friendly menu. The built-in Bootstrap styles make your standard page elements (headers, footers, buttons, form inputs, tables etc.) look nice and modern. Bootstrap is themeable, so you can reskin your whole site by dropping in a new Bootstrap theme. Since Bootstrap is pretty popular across the web development community, this gives you a large and rapidly growing variety of templates (free and paid) to choose from. 
Bootstrap also includes a lot of very useful things: components (like progress bars and badges), useful glyphicons, and some jQuery plugins for tooltips, dropdowns, carousels, etc.). Here's a look at how the responsive part works. When the page is full screen, the menu and header are optimized for a wide screen display: When I shrink the page down (this is all based on page width, not useragent sniffing) the menu turns into a nice mobile-friendly dropdown: For a quick example, I grabbed a new free theme off bootswatch.com. For simple themes, you just need to download the boostrap.css file and replace the /content/bootstrap.css file in your project. Now when I refresh the page, I've got a new theme: Scaffolding The big change in scaffolding is that it's one system that works across ASP.NET. You can create a new Empty Web project or Web Forms project and you'll get the Scaffold context menus. For release, we've got MVC 5 and Web API 2 controllers. We had a preview of Web Forms scaffolding in the preview releases, but they weren't fully baked for RTM. Look for them in a future update, expected pretty soon. This scaffolding system wasn't just changed to work across the ASP.NET frameworks, it's also built to enable future extensibility. That's not in this release, but should also hopefully be out soon. Project Readme page This is a small thing, but I really like it. When you create a new project, you get a Project_Readme.html page that's added to the root of your project and opens in the Visual Studio built-in browser. I love it. A long time ago, when you created a new project we just dumped it on you and left you scratching your head about what to do next. Not ideal. Then we started adding a bunch of Getting Started information to the new project templates. That told you what to do next, but you had to delete all of that stuff out of your website. It doesn't belong there. Not ideal. This is a simple HTML file that's not integrated into your project code at all. You can delete it if you want. But, it shows a lot of helpful links that are current for the project you just created. In the future, if we add new wacky project types, they can create readme docs with specific information on how to do appropriately wacky things. Side note: I really like that they used the internal browser in Visual Studio to show this content rather than popping open an HTML page in the default browser. I hate that. It's annoying. If you're doing that, I hope you'll stop. What if some unnamed person has 40 or 90 tabs saved in their browser session? When you pop open your "Thanks for installing my Visual Studio extension!" page, all eleventy billion tabs start up and I wish I'd never installed your thing. Be like these guys and pop stuff Visual Studio specific HTML docs in the Visual Studio browser. ASP.NET MVC 5 The biggest change with ASP.NET MVC 5 is that it's no longer a separate project type. It integrates well with the rest of ASP.NET. In addition to that and the other common features we've already looked at (Bootstrap templates, Identity, authentication), here's what's new for ASP.NET MVC. Attribute routing ASP.NET MVC now supports attribute routing, thanks to a contribution by Tim McCall, the author of http://attributerouting.net. With attribute routing you can specify your routes by annotating your actions and controllers. This supports some pretty complex, customized routing scenarios, and it allows you to keep your route information right with your controller actions if you'd like. 
Here's a controller that includes an action whose method name is Hiding, but I've used AttributeRouting to configure it to /spaghetti/with-nesting/where-is-waldo public class SampleController : Controller { [Route("spaghetti/with-nesting/where-is-waldo")] public string Hiding() { return "You found me!"; } } I enable that in my RouteConfig.cs, and I can use that in conjunction with my other MVC routes like this: public class RouteConfig { public static void RegisterRoutes(RouteCollection routes) { routes.IgnoreRoute("{resource}.axd/{*pathInfo}"); routes.MapMvcAttributeRoutes(); routes.MapRoute( name: "Default", url: "{controller}/{action}/{id}", defaults: new { controller = "Home", action = "Index", id = UrlParameter.Optional } ); } } You can read more about Attribute Routing in ASP.NET MVC 5 here. Filter enhancements There are two new additions to filters: Authentication Filters and Filter Overrides. Authentication filters are a new kind of filter in ASP.NET MVC that run prior to authorization filters in the ASP.NET MVC pipeline and allow you to specify authentication logic per-action, per-controller, or globally for all controllers. Authentication filters process credentials in the request and provide a corresponding principal. Authentication filters can also add authentication challenges in response to unauthorized requests. Override filters let you change which filters apply to a given action method or controller. Override filters specify a set of filter types that should not be run for a given scope (action or controller). This allows you to configure filters that apply globally but then exclude certain global filters from applying to specific actions or controllers. ASP.NET Web API 2 ASP.NET Web API 2 includes a lot of new features. Attribute Routing ASP.NET Web API supports the same attribute routing system that's in ASP.NET MVC 5. You can read more about the Attribute Routing features in Web API in this article. OAuth 2.0 ASP.NET Web API picks up OAuth 2.0 support, using security middleware running on OWIN (discussed below). This is great for features like authenticated Single Page Applications. OData Improvements ASP.NET Web API now has full OData support. That required adding in some of the most powerful operators: $select, $expand, $batch and $value. You can read more about OData operator support in this article by Mike Wasson. Lots more There's a huge list of other features, including CORS (cross-origin request sharing), IHttpActionResult, IHttpRequestContext, and more. I think the best overview is in the release notes. OWIN and Katana I've written about OWIN and Katana recently. I'm a big fan. OWIN is the Open Web Interfaces for .NET. It's a spec, like HTML or HTTP, so you can't install OWIN. The benefit of OWIN is that it's a community specification, so anyone who implements it can plug into the ASP.NET stack, either as middleware or as a host. Katana is the Microsoft implementation of OWIN. It leverages OWIN to wire up things like authentication, handlers, modules, IIS hosting, etc., so ASP.NET can host OWIN components and Katana components can run in someone else's OWIN implementation. Howard Dierking just wrote a cool article in MSDN magazine describing Katana in depth: Getting Started with the Katana Project. He had an interesting example showing an OWIN based pipeline which leveraged SignalR, ASP.NET Web API and NancyFx components in the same stack. If this kind of thing makes sense to you, that's great. If it doesn't, don't worry, but keep an eye on it. 
You're going to see some cool things happen as a result of ASP.NET becoming more and more pluggable. Visual Studio Web Tools Okay, this stuff's just crazy. Visual Studio has been adding some nice web dev features over the past few years, but they've really cranked it up for this release. Visual Studio is by far my favorite code editor for all web files: CSS, HTML, JavaScript, and lots of popular libraries. Stop thinking of Visual Studio as a big editor that you only use to write back-end code. Stop editing HTML and CSS in Notepad (or Sublime, Notepad++, etc.). Visual Studio starts up in under 2 seconds on a modern computer with an SSD. Misspelling HTML attributes or your CSS classes or jQuery or Angular syntax is stupid. It doesn't make you a better developer, it makes you a silly person who wastes time. Browser Link Browser Link is a real-time, two-way connection between Visual Studio and all connected browsers. It's only attached when you're running locally, in debug, but it applies to any and all connected browsers, including emulators. You may have seen demos that showed the browsers refreshing based on changes in the editor, and I'll agree that's pretty cool. But it's really just the start. It's a two-way connection, and it's built for extensibility. That means you can write extensions that push information from your running application (in IE, Chrome, a mobile emulator, etc.) back to Visual Studio. Mads and team have shown off some demonstrations where they enabled edit mode in the browser which updated the source HTML back on the browser. It's also possible to look at how the rendered HTML performs, check for compatibility issues, watch for unused CSS classes, the sky's the limit. New HTML editor The previous HTML editor had a lot of old code that didn't allow for improvements. The team rewrote the HTML editor to take advantage of the new(ish) extensibility features in Visual Studio, which then allowed them to add in all kinds of features - things like CSS Class and ID IntelliSense (so you type style="" and get a list of classes and ID's for your project), smart indent based on how your document is formatted, JavaScript reference auto-sync, etc. Here's a 3 minute tour from Mads Kristensen. Lots more Visual Studio web dev features That's just a sampling - there's a ton of great features for JavaScript editing, CSS editing, publishing, and Page Inspector (which shows real-time rendering of your page inside Visual Studio). Here are some more short videos showing those features. Lots, lots more Okay, that's just a summary, and it's still quite a bit. Head on over to http://asp.net/vnext for more information, and download Visual Studio 2013 now to get started!

    Read the article

  • Parallelism in .NET – Part 10, Cancellation in PLINQ and the Parallel class

    - by Reed
    Many routines are parallelized because they are long running processes.  When writing an algorithm that will run for a long period of time, it's typically a good practice to allow that routine to be cancelled.  I previously discussed terminating a parallel loop from within, but have not demonstrated how a routine can be cancelled from the caller's perspective.  Cancellation in PLINQ and the Task Parallel Library is handled through a new, unified cooperative cancellation model introduced with .NET 4.0. Cancellation in .NET 4 is based around a new, lightweight struct called CancellationToken.  A CancellationToken is a small, thread-safe value type which is generated via a CancellationTokenSource.  There are many goals which led to this design.  For our purposes, we will focus on a couple of specific design decisions: Cancellation is cooperative.  A calling method can request a cancellation, but it's up to the processing routine to terminate – it is not forced. Cancellation is consistent.  A single method call requests a cancellation on every copied CancellationToken in the routine. Let's begin by looking at how we can cancel a PLINQ query.  Suppose we wanted to provide the option to cancel our query from Part 6:

        double min = collection
            .AsParallel()
            .Min(item => item.PerformComputation());

    We would rewrite this to allow for cancellation by adding a call to ParallelEnumerable.WithCancellation as follows:

        var cts = new CancellationTokenSource();

        // Pass cts here to a routine that could,
        // in parallel, request a cancellation

        try
        {
            double min = collection
                .AsParallel()
                .WithCancellation(cts.Token)
                .Min(item => item.PerformComputation());
        }
        catch (OperationCanceledException e)
        {
            // Query was cancelled before it finished
        }

    Here, if the user calls cts.Cancel() before the PLINQ query completes, the query will stop processing, and an OperationCanceledException will be raised.  Be aware, however, that cancellation will not be instantaneous.  When cts.Cancel() is called, the query will only stop after the current item.PerformComputation() elements all finish processing.  cts.Cancel() will prevent PLINQ from scheduling a new task for a new element, but will not stop items which are currently being processed.
This goes back to the first goal I mentioned – Cancellation is cooperative.  Here, we're requesting the cancellation, but it's up to PLINQ to terminate. If we wanted to allow cancellation to occur within our routine, we would need to change our routine to accept a CancellationToken, and modify it to handle this specific case:

        public void PerformComputation(CancellationToken token)
        {
            for (int i = 0; i < this.iterations; ++i)
            {
                // Add a check to see if we've been canceled
                // If a cancel was requested, we'll throw here
                token.ThrowIfCancellationRequested();

                // Do our processing now
                this.RunIteration(i);
            }
        }

With this overload of PerformComputation, each internal iteration checks to see if a cancellation request was made, and will throw an OperationCanceledException at that point, instead of waiting until the method returns.  This is good, since it allows us, as developers, to plan for cancellation, and terminate our routine in a clean, safe state. This is handled by changing our PLINQ query to:

        try
        {
            double min = collection
                .AsParallel()
                .WithCancellation(cts.Token)
                .Min(item => item.PerformComputation(cts.Token));
        }
        catch (OperationCanceledException e)
        {
            // Query was cancelled before it finished
        }

PLINQ is very good about handling this exception, as well.  There is a very good chance that multiple items will raise this exception, since the entire purpose of PLINQ is to have multiple items be processed concurrently.  PLINQ will take all of the OperationCanceledException instances raised within these methods, and merge them into a single OperationCanceledException in the call stack.  This is done internally because we added the call to ParallelEnumerable.WithCancellation. If, however, a different exception is raised by any of the elements, the OperationCanceledException as well as the other Exception will be merged into a single AggregateException. The Task Parallel Library uses the same cancellation model, as well.  Here, we supply our CancellationToken as part of the configuration.  The ParallelOptions class contains a property for the CancellationToken.  This allows us to cancel a Parallel.For or Parallel.ForEach routine in a very similar manner to our PLINQ query.  As an example, we could rewrite our Parallel.ForEach loop from Part 2 to support cancellation by changing it to:

        try
        {
            var cts = new CancellationTokenSource();
            var options = new ParallelOptions() { CancellationToken = cts.Token };

            Parallel.ForEach(customers, options, customer =>
            {
                // Run some process that takes some time...
                DateTime lastContact = theStore.GetLastContact(customer);
                TimeSpan timeSinceContact = DateTime.Now - lastContact;

                // Check for cancellation here
                options.CancellationToken.ThrowIfCancellationRequested();

                // If it's been more than two weeks, send an email, and update...
                if (timeSinceContact.Days > 14)
                {
                    theStore.EmailCustomer(customer);
                    customer.LastEmailContact = DateTime.Now;
                }
            });
        }
        catch (OperationCanceledException e)
        {
            // The loop was cancelled
        }

Notice that here we use the same approach taken in PLINQ.  The Task Parallel Library will automatically handle our cancellation in the same manner as PLINQ, providing a clean, unified model for cancellation of any parallel routine.  The TPL performs the same aggregation of the cancellation exceptions as PLINQ, as well, which is why a single exception handler for OperationCanceledException will cleanly handle this scenario.  This works because we're using the same CancellationToken provided in the ParallelOptions.
If a different exception was thrown by one thread, or a CancellationToken from a different CancellationTokenSource was used to raise our exception, we would instead receive all of our individual exceptions merged into one AggregateException.
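The article shows the cancellation checks inside the query and the loop, but not the caller that actually requests the cancellation. Below is a minimal, hedged sketch (not from the original post) of that caller side; it reuses the collection and the PerformComputation(CancellationToken) overload described above and assumes .NET 4.0's Task.Factory.StartNew (namespaces System.Linq, System.Threading and System.Threading.Tasks).

        var cts = new CancellationTokenSource();

        // Kick off the PLINQ query on a background task so the caller stays responsive.
        var queryTask = Task.Factory.StartNew(() =>
        {
            try
            {
                double min = collection
                    .AsParallel()
                    .WithCancellation(cts.Token)
                    .Min(item => item.PerformComputation(cts.Token));
                // ... use min ...
            }
            catch (OperationCanceledException)
            {
                // The query was cancelled before it finished.
            }
        });

        // Later - for example from a Cancel button handler - request cooperative cancellation:
        cts.Cancel();

Because the same CancellationToken is passed both to WithCancellation and to PerformComputation, the single Cancel() call above is enough to stop scheduling new elements and to abort the element currently being processed at its next check.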

    Read the article

  • Is it possible to overlay EditText box on a GLSurfaceView on Android?

    - by Ash McConnell
    I am trying to add a "PlayerName" box on top of an OpenGL menu background - is this possible? I've tried various layouts, but they don't seem to allow an EditText box to appear on top. What is the typical way of doing something like this? Do I need to manually render the text and handle input, or is there a better way? It seems like it should be possible to show the EditText on top of the GLSurfaceView somehow.

    Read the article

  • Toorcon 15 (2013)

    - by danx
    The Toorcon gang (senior staff): h1kari (founder), nfiltr8, and Geo Introduction to Toorcon 15 (2013) A Tale of One Software Bypass of MS Windows 8 Secure Boot Breaching SSL, One Byte at a Time Running at 99%: Surviving an Application DoS Security Response in the Age of Mass Customized Attacks x86 Rewriting: Defeating RoP and other Shinanighans Clowntown Express: interesting bugs and running a bug bounty program Active Fingerprinting of Encrypted VPNs Making Attacks Go Backwards Mask Your Checksums—The Gorry Details Adventures with weird machines thirty years after "Reflections on Trusting Trust" Introduction to Toorcon 15 (2013) Toorcon 15 is the 15th annual security conference held in San Diego. I've attended about a third of them and blogged about previous conferences I attended here starting in 2003. As always, I've only summarized the talks I attended and interested me enough to write about them. Be aware that I may have misrepresented the speaker's remarks and that they are not my remarks or opinion, or those of my employer, so don't quote me or them. Those seeking further details may contact the speakers directly or use The Google. For some talks, I have a URL for further information. A Tale of One Software Bypass of MS Windows 8 Secure Boot Andrew Furtak and Oleksandr Bazhaniuk Yuri Bulygin, Oleksandr ("Alex") Bazhaniuk, and (not present) Andrew Furtak Yuri and Alex talked about UEFI and Bootkits and bypassing MS Windows 8 Secure Boot, with vendor recommendations. They previously gave this talk at the BlackHat 2013 conference. MS Windows 8 Secure Boot Overview UEFI (Unified Extensible Firmware Interface) is interface between hardware and OS. UEFI is processor and architecture independent. Malware can replace bootloader (bootx64.efi, bootmgfw.efi). Once replaced can modify kernel. Trivial to replace bootloader. Today many legacy bootkits—UEFI replaces them most of them. MS Windows 8 Secure Boot verifies everything you load, either through signatures or hashes. UEFI firmware relies on secure update (with signed update). You would think Secure Boot would rely on ROM (such as used for phones0, but you can't do that for PCs—PCs use writable memory with signatures DXE core verifies the UEFI boat loader(s) OS Loader (winload.efi, winresume.efi) verifies the OS kernel A chain of trust is established with a root key (Platform Key, PK), which is a cert belonging to the platform vendor. Key Exchange Keys (KEKs) verify an "authorized" database (db), and "forbidden" database (dbx). X.509 certs with SHA-1/SHA-256 hashes. Keys are stored in non-volatile (NV) flash-based NVRAM. Boot Services (BS) allow adding/deleting keys (can't be accessed once OS starts—which uses Run-Time (RT)). Root cert uses RSA-2048 public keys and PKCS#7 format signatures. SecureBoot — enable disable image signature checks SetupMode — update keys, self-signed keys, and secure boot variables CustomMode — allows updating keys Secure Boot policy settings are: always execute, never execute, allow execute on security violation, defer execute on security violation, deny execute on security violation, query user on security violation Attacking MS Windows 8 Secure Boot Secure Boot does NOT protect from physical access. Can disable from console. Each BIOS vendor implements Secure Boot differently. There are several platform and BIOS vendors. It becomes a "zoo" of implementations—which can be taken advantage of. Secure Boot is secure only when all vendors implement it correctly. 
Allow only UEFI firmware signed updates protect UEFI firmware from direct modification in flash memory protect FW update components program SPI controller securely protect secure boot policy settings in nvram protect runtime api disable compatibility support module which allows unsigned legacy Can corrupt the Platform Key (PK) EFI root certificate variable in SPI flash. If PK is not found, FW enters setup mode wich secure boot turned off. Can also exploit TPM in a similar manner. One is not supposed to be able to directly modify the PK in SPI flash from the OS though. But they found a bug that they can exploit from User Mode (undisclosed) and demoed the exploit. It loaded and ran their own bootkit. The exploit requires a reboot. Multiple vendors are vulnerable. They will disclose this exploit to vendors in the future. Recommendations: allow only signed updates protect UEFI fw in ROM protect EFI variable store in ROM Breaching SSL, One Byte at a Time Yoel Gluck and Angelo Prado Angelo Prado and Yoel Gluck, Salesforce.com CRIME is software that performs a "compression oracle attack." This is possible because the SSL protocol doesn't hide length, and because SSL compresses the header. CRIME requests with every possible character and measures the ciphertext length. Look for the plaintext which compresses the most and looks for the cookie one byte-at-a-time. SSL Compression uses LZ77 to reduce redundancy. Huffman coding replaces common byte sequences with shorter codes. US CERT thinks the SSL compression problem is fixed, but it isn't. They convinced CERT that it wasn't fixed and they issued a CVE. BREACH, breachattrack.com BREACH exploits the SSL response body (Accept-Encoding response, Content-Encoding). It takes advantage of the fact that the response is not compressed. BREACH uses gzip and needs fairly "stable" pages that are static for ~30 seconds. It needs attacker-supplied content (say from a web form or added to a URL parameter). BREACH listens to a session's requests and responses, then inserts extra requests and responses. Eventually, BREACH guesses a session's secret key. Can use compression to guess contents one byte at-a-time. For example, "Supersecret SupersecreX" (a wrong guess) compresses 10 bytes, and "Supersecret Supersecret" (a correct guess) compresses 11 bytes, so it can find each character by guessing every character. To start the guess, BREACH needs at least three known initial characters in the response sequence. Compression length then "leaks" information. Some roadblocks include no winners (all guesses wrong) or too many winners (multiple possibilities that compress the same). The solutions include: lookahead (guess 2 or 3 characters at-a-time instead of 1 character). Expensive rollback to last known conflict check compression ratio can brute-force first 3 "bootstrap" characters, if needed (expensive) block ciphers hide exact plain text length. Solution is to align response in advance to block size Mitigations length: use variable padding secrets: dynamic CSRF tokens per request secret: change over time separate secret to input-less servlets Future work eiter understand DEFLATE/GZIP HTTPS extensions Running at 99%: Surviving an Application DoS Ryan Huber Ryan Huber, Risk I/O Ryan first discussed various ways to do a denial of service (DoS) attack against web services. One usual method is to find a slow web page and do several wgets. Or download large files. 
Apache is not well suited to handling a large number of connections, but one can put something in front of it, or use Apache alternatives such as nginx. How to identify malicious hosts:
- short, sudden web requests
- the user-agent is obvious (curl, python)
- the same URL requested repeatedly
- no web page referer (not normal)
- hidden links: hide a link and see if a bot follows it
- restricted access if not your geo IP (unless the website is global)
- missing common headers in the request
- regular timing
- IP first seen at the beginning of the attack
- count requests per host (usually a very large number)
Use of a captcha can mitigate attacks, but you'll lose a lot of genuine users.

Bouncer, goo.gl/c2vyEc and www.github.com/rawdigits/Bouncer

Bouncer is software written by Ryan that uses netflow. Bouncer has a small, unobtrusive footprint and detects DoS attempts. It closes blacklisted sockets immediately (it is not nice about it, with no proper connection close). An aggregator collects requests and controls your web proxies. You need NTP on the front-end web servers to get clean data for use by Bouncer. Bouncer is also useful for a popularity storm ("Slashdotting") and scraper storms. Future features: gzip collection data, documentation, consumer library, multitask, logging destroyed connections. Takeaways: DoS mitigation is easier with a complete picture; Bouncer is designed to make it easier to detect and defend against DoS—not a complete cure.

Security Response in the Age of Mass Customized Attacks

Peleus Uhley and Karthik Raman, Adobe ASSET, blogs.adobe.com/asset/

Peleus and Karthik talked about response to mass-customized exploits. Attackers behave much like a business. "Mass customization" refers to a concept discussed in the book Future Perfect by Stan Davis of Harvard Business School. Mass customization is differentiating a product for an individual customer, but at a mass-production price. For example, the same individual with a debit card receives basically the same customized ATM experience around the world; or think of designing your own PC from commodity parts. Exploit kits are another example of mass customization. The kits support multiple browsers and plugins and allow new modules. Exploit kits are cheap and customizable, and organized gangs use them. A group at Berkeley looked at 77,000 malicious websites (Grier et al., "Manufacturing Compromise: The Emergence of Exploit-as-a-Service", 2012). They found 10,000 distinct binaries among them, but derived from only a dozen or so exploit kits. Characteristics of mass malware: potent, resilient, relatively low cost. Technical characteristics: multiple OS, multiple payloads, multiple scenarios, multiple languages, obfuscation. Response time for 0-day exploits has gone down from ~40 days 5 years ago to about ~10 days now, so the drive with malware is towards mass-customized exploits, to avoid detection. There's plenty of evidence that exploit development has Project Manager bureaucracy. From the malware, they infer edicts to:
- support all versions of Reader
- support all versions of Windows
- support all versions of Flash
- support all browsers
- write large, complex, difficult-to-maintain code (8,750 lines of JavaScript, for example)
Exploits have "loose coupling" of multiple versions of software (Adobe), OS, and browser. This allows specific attacks against specific versions of multiple pieces of software, and also allows exploits of more obscure software/OS/browsers and obscure versions. They gave examples of exploits that exploited 2, 3, 6, or 14 separate bugs.
However, these complete exploits are more likely to be buggy or fragile in themselves and easier to defeat. Future research includes normalizing malware and JavaScript. Conclusion: the coming trend is that mass malware with mass zero-day attacks will result in mass customization of attacks.

x86 Rewriting: Defeating RoP and other Shinanighans

Richard Wartell

The attack vector we are addressing here is:
- First, some malware causes a buffer overflow. The malware has no program access, but it has input access and overflows code onto the stack.
- Later, the stack became non-executable. The workaround malware used was to write a bogus return address to the stack, jumping to the malware.
- Later came ASLR (Address Space Layout Randomization) to randomize the memory layout and make addresses non-deterministic. The workaround malware used was to jump to existing code segments in the program that can be used in bad ways.
"RoP" stands for Return-oriented Programming attacks. RoP attacks use your own code, writing return addresses on the stack that point to (existing) exploitable code found in the program ("gadgets"). Pinkie Pie was paid $60K last year for a RoP attack. One solution is using anti-RoP compilers that compile source code with NO return instructions. ASLR does not really randomize what is in the address space; the "gadgets" are still there. IPR/ILR ("Instruction Location Randomization") randomizes each instruction with a virtual machine.

Richard's goal was to randomize a binary with no source code access. He created "STIR" (Self-Transforming Instruction Relocation). STIR disassembles the binary and operates on "basic blocks" of code; the STIR disassembler is conservative in what it disassembles. Each basic block is moved to a random location in memory. Next, STIR writes new code sections with copies of the "basic blocks" of code in randomized locations. The old code is copied and rewritten with jumps to the new code, and the original code sections in the file are marked non-executable. STIR has better entropy than ASLR in the location of code, which makes brute-force attacks much harder. STIR runs on MS Windows (PE) and Linux (ELF). It eliminated 99.96% or more of "gadgets" (i.e., moved their addresses). Overhead is usually 5-10% on MS Windows and about 1.5-4% on Linux (but some code actually runs faster!). The unique thing about STIR is that it requires no source access and the modified binary fully works! Current work is to rewrite code to enforce security policies. For example, don't create a *.{exe,msi,bat} file, or don't connect to the network after reading from the disk.

Clowntown Express: interesting bugs and running a bug bounty program

Collin Greene, Facebook

Collin talked about Facebook's bug bounty program. Background at FB: FB has good security frameworks, such as security teams, external audits, and cc'ing on diffs. But there are lots of "deep, dark, forgotten" parts of legacy FB code. Collin gave several examples of bountied bugs. Some bounty submissions were on software purchased from a third party (but bounty claimers don't know and don't care). We use security questions, as does everyone else, but they are basically insecure (often easily discoverable). Collin didn't expect many bugs from the bounty program, but they ended up getting 20+ good bugs in the first 24 hours, and good submissions continue to come in. Bug bounties bring people in with different perspectives, and are paid only for success. A bug bounty is a better use of a fixed amount of time and money than just code review or static code analysis. The bounty program started July 2011 and has paid out $1.5 million to date.
14% of the submissions have been high-priority problems that needed to be fixed immediately. The best bugs come from a small percentage of submitters (as with everything else)—the top submitters are paid six figures a year. Spammers like to backstab competitors. The youngest submitter was 13. Some submitters have been hired. Bug bounties also let you see bugs that were missed by tools or reviews, allowing improvement in the process. Bug bounties might not work for traditional software companies where the product has a release cycle or is not on the Internet.

Active Fingerprinting of Encrypted VPNs

Anna Shubina, Dartmouth Institute for Security, Technology, and Society

(I missed the start of her talk because another track went overtime. But I have the DVD of the talk, so I'll expand later.) IPsec leaves fingerprints:
- Using netcat, one can easily visually distinguish various crypto chaining modes just from packet timing on a chart (for example, DES-CBC versus AES-CBC).
- One can tell a lot about VPNs just from ping round trips (such as what router is used).
- Delayed packets are not informative about a network, especially if you are far away from the network.
- More exploration is needed of how TCP works in real life with respect to timing.

Making Attacks Go Backwards

FuzzyNop, Mandiant

This talk is not about threat attribution (finding who), product solutions, politics, or sales pitches. But who is making these malware threats? It's not a single person or group—they have diverse skill levels, and there are a lot of fat-fingered fumblers out there. Always look for low-hanging fruit first:
- "hiding" malware in the temp, recycle, or root directories
- creation of unnamed scheduled tasks
- obvious names of files and syscalls ("ClearEventLog")
- uncleared event logs. Clearing the event log itself, and the time of clearing, is a red flag and a good first clue to look for on a suspect system.
Reverse engineering is hard. Disassembler use takes practice and skill. A popular tool is IDA Pro, but it takes multiple interactive iterations to get a clean disassembly. Key loggers are used a lot in targeted attacks. They are typically custom code or built into a backdoor. A big tip-off is that non-printable characters need to be printed out (such as "[Ctrl]" or "[RightShift]"), or timestamp printf strings. Look for these in files. Presence is not proof they are used; absence is not proof they are not used. Java exploits: one can parse a jar file with idxparser.py and decompile the Java file. Java is typically used to target tech companies. Backdoors are the main persistence mechanism (provided externally) for malware, and malware typically also needs command and control.

Application of Artificial Intelligence in Ad-Hoc Static Code Analysis

John Ashaman, Security Innovation

Initially John tried to analyze open source files with open source static analysis tools, but these showed thousands of false positives. He also tried using grep, but this fails to find anything even mildly complex. So next John decided to write his own tool. His approach was to first generate a call graph, then analyze the graph. However, the problem is that making a call graph is really hard. For example, one problem is "evil" coding techniques, such as passing function pointers. First the tool generated an Abstract Syntax Tree (AST) with the nodes created from method declarations and edges created from method use. Then the tool generated a control flow graph with the goal of finding a path through the AST (a maze) from source to sink.
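A minimal sketch of that kind of source-to-sink reachability search is below. The graph, the node names, and the idea of a fixed set of "scary" sink methods are invented for illustration; this is not the actual Scat implementation (which is described next).

    import java.util.*;

    // Toy source-to-sink reachability over a call/flow graph, in the spirit of
    // the AST/control-flow approach described above. Nodes are just method
    // names here, an assumption made for the sketch.
    final class SourceToSink {
        private final Map<String, List<String>> edges = new HashMap<>();

        void addEdge(String from, String to) {
            edges.computeIfAbsent(from, k -> new ArrayList<>()).add(to);
        }

        // Depth-first search for any path from a taint source to a "scary" sink.
        Optional<List<String>> findPath(String source, Set<String> sinks) {
            Deque<List<String>> work = new ArrayDeque<>();
            work.push(List.of(source));
            Set<String> seen = new HashSet<>();
            while (!work.isEmpty()) {
                List<String> path = work.pop();
                String node = path.get(path.size() - 1);
                if (sinks.contains(node)) return Optional.of(path);
                if (!seen.add(node)) continue;
                for (String next : edges.getOrDefault(node, List.of())) {
                    List<String> extended = new ArrayList<>(path);
                    extended.add(next);
                    work.push(extended);
                }
            }
            return Optional.empty();
        }

        public static void main(String[] args) {
            SourceToSink g = new SourceToSink();
            g.addEdge("HttpHandler.getParam", "buildQuery");
            g.addEdge("buildQuery", "Db.execute");        // hypothetical "scary" sink
            g.addEdge("HttpHandler.getParam", "htmlEscape");
            System.out.println(g.findPath("HttpHandler.getParam",
                    Set.of("Db.execute")));
        }
    }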
The algorithm is to look at adjacent nodes to see if any are "scary" (a vulnerability), using heuristics for the search order. The tool, called "Scat" (Static Code Analysis Tool), currently looks for C# vulnerabilities and some simple PHP. Later, he plans to add more PHP, then JSP and Java. For more information see his posts on the Security Innovation blog and NRefactory on GitHub.

Mask Your Checksums—The Gorry Details

Eric (XlogicX) Davisson

Sometimes when emailing or posting TCP/IP packets to analyze problems, you may want to mask the IP address. But to do this correctly, you need to mask the checksum too, or you'll leak information about the IP. Problem reports found on stackoverflow.com, sans.org, and pastebin.org are usually not masked, but a few companies do care. If only the IP is masked, the IP may be guessed from the checksum (that is, it leaks data). Other parts of the packet may leak more data about the IP. The TCP and IP checksums both cover the same data, so one can get more bits of information out of using both checksums than just one. Also, one can usually determine the OS from the TTL field and the ports in a packet header. If we get hundreds of possible results (16x for each masked nibble that is unknown), one can do other things to narrow the results, such as look at packet contents for domain or geo information. With hundreds of results, one can import them as CSV into a spreadsheet, correlate with geo data, and see where each possibility is located. Eric then demoed a real email report with a masked IP packet attached. He was able to find the exact IP address, given the geo and university of the sender. The point is: if you're going to mask a packet, do it right. Eric wouldn't usually bother, but do it correctly if at all, so as not to create a false impression of security. (A small sketch of the checksum arithmetic involved appears at the end of these notes.)

Adventures with weird machines thirty years after "Reflections on Trusting Trust"

Sergey Bratus, Dartmouth College (and Julian Bangert and Rebecca Shapiro, not present)

"Reflections on Trusting Trust" refers to Ken Thompson's classic 1984 paper: "You can't trust code that you did not totally create yourself." There are invisible links in the chain of trust, such as "well-installed microcode bugs" or bugs in the compiler, and other planted bugs. Thompson showed how a compiler can introduce and propagate bugs in unmodified source. But suppose there are no bugs and you trust the author, can you trust the code? Hell no! There are too many factors—it's Babylonian in nature. Why not?
- Input is not well defined/recognized (the code's assumptions about "checked" input will be violated, i.e., a bug/vulnerability). For example, HTML is recursive, but regex checking is not recursive.
- Input is well formed but so complex there's no telling what it does. For example, ELF file parsing is complex and has multiple ways of parsing.
- Input is seen differently by different pieces of the program or toolchain.
Any input is a program:
- input executes on input handlers (it drives state changes and transitions)
- only a well-defined execution model can be trusted (regex/DFA, PDA, CFG)
- an input handler is either a "recognizer" for the inputs as a well-defined language (see langsec.org) or a "virtual machine" for inputs to drive into pwn-age
ELF ABI (the UNIX/Linux executable file format) case study: problems can arise from these steps (without planting bugs): compiler, linker, loader, ld.so/rtld relocator, DWARF (debugger info), exceptions. The problem is you can't really automatically analyze code (it's the "halting problem" and undecidable).
The only solution is to freeze code and sign it. But you can't freeze everything! You can't freeze ASLR or loading—you must have tables and metadata. Any sufficiently complex input data is the same as VM byte code. For example, ELF relocation entries + dynamic symbols == a Turing-complete machine (TM). @bxsays created a Turing machine in Linux from relocation data (not code) in an ELF file. For more information, see Rebecca "bx" Shapiro's presentation from last year's Toorcon, "Programming Weird Machines with ELF Metadata". @bxsays did the same thing with Mach-O bytecode. Or take DWARF exception handling data: .eh_frame + glibc == a Turing machine. The x86 MMU (IDT, GDT, TSS) can be abused too: they used address translation to create a Turing machine. The page handler reads and writes memory (on page fault), using a page table, which can be used as Turing machine byte code. There is an example on GitHub using this TM that flies a glider across the screen.

Next Sergey talked about "parser differentials": having one input format but two parsers creates confusion and opportunity for exploitation. For example, CSRs are parsed during creation by the cert requestor and again by another parser at the CA. Another example is ELF—there are several parsers in the OS toolchain, and they are all different. You can have two different Program Headers (PHDRs) because ld.so parses multiple PHDRs; the second PHDR can completely transform the executable. This is described in a paper in the first issue of the International Journal of PoC.

Conclusions:
- trusting computers is not only about bugs! Bugs are part of the problem, but by no means all of it
- complex data formats mean bugs
- there is no "chain of trust" in Babylon! (that is, with parser differentials)
- we need to squeeze complexity out of data until data stops being "code equivalent"
Further information: see langsec.org, and USENIX WOOT 2013 (Workshop on Offensive Technologies) for "weird machines" papers and videos.
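As promised above, here is a small sketch of the checksum arithmetic behind the "Mask Your Checksums" talk. The header bytes are synthetic (not from a real capture), and the scenario is the simplest one (a single masked source octet with the checksum left unmasked), just to show how the checksum constrains the hidden bytes; with a whole address masked you get the larger candidate sets (16 per masked nibble) mentioned in the notes.

    // Sketch of the Internet (ones'-complement) checksum and of why leaving it
    // unmasked leaks bits about a masked IP address. All header bytes below are
    // synthetic values invented for the example.
    final class ChecksumLeak {
        // Standard Internet checksum over 16-bit words (RFC 1071 style).
        static int checksum(int[] bytes) {
            long sum = 0;
            for (int i = 0; i < bytes.length; i += 2) {
                int hi = bytes[i] & 0xFF;
                int lo = (i + 1 < bytes.length) ? bytes[i + 1] & 0xFF : 0;
                sum += (hi << 8) | lo;
            }
            while ((sum >> 16) != 0) {
                sum = (sum & 0xFFFF) + (sum >> 16);   // fold carries back in
            }
            return (int) (~sum & 0xFFFF);
        }

        public static void main(String[] args) {
            // Synthetic 20-byte IPv4 header with the checksum field (bytes 10-11) zeroed.
            int[] header = {
                0x45, 0x00, 0x00, 0x3c, 0x1c, 0x46, 0x40, 0x00,
                0x40, 0x06, 0x00, 0x00,
                192, 168, 1, 42,    // source IP (last octet will be "masked")
                10, 0, 0, 7         // destination IP
            };
            int published = checksum(header);   // what an unmasked report leaks

            // If the report shows 192.168.1.xxx but leaves the checksum intact,
            // count how many values of the hidden octet remain consistent with it.
            int candidates = 0;
            for (int x = 0; x < 256; x++) {
                header[15] = x;
                if (checksum(header) == published) {
                    candidates++;
                }
            }
            // With only one unknown octet, the checksum pins it down uniquely.
            System.out.println("consistent values for the masked octet: " + candidates);
        }
    }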

    Read the article

  • Scott Guthrie in Glasgow

    - by Martin Hinshelwood
    Last week Scott Guthrie was in Glasgow for his new Guathon tour, which was a roaring success. Scott did talks on the new features in Visual Studio 2010, Silverlight 4, ASP.NET MVC 2 and Windows Phone 7. Scott talked from 10am till 4pm, so this can only contain what I remember; I am sure lots of things he discussed just went in one ear and out the other. However, I have tried to capture at least all of my Ohh's and Ahh's.

Visual Studio 2010

Right now you can download and install the Visual Studio 2010 Release Candidate, but soon we will have the final product in our hands. With it there are some amazing improvements, and not just in the IDE. New versions of VB and C# come out of the box, as well as Silverlight 4 and SharePoint 2010 integration. The new IntelliSense features allow inline support for Types and Dictionaries, as well as being able to type just part of a name and have the list filter accordingly. Even better, and my personal favourite, is one that Scott did not mention: it is not case sensitive, so I can actually find things in C# with its reasonless case sensitivity (Scott, can we please have an option to turn that off?). Another nice feature is that the Routing engine that was created for ASP.NET MVC is now available for WebForms, which is good news for all those that just imported the MVC DLLs to get at it anyway. Another fantastic feature that will need some exploring is the ability to add validation rules to your entities and have them validated automatically on the front end. This removes the need to add your own validators and means that you can control an object's validation rules from a single location, the object. A simple command "GridView.EnableDynamicData(gettype(product))" will enable this feature on controls. What was not clear was whether there would be support for this in WPF and WinForms as well. If there is, we can write our validation rules once and use them everywhere. I was disappointed to hear that there would be no inbuilt support for the Dynamic Language Runtime (DLR) with VS2010, but I think it will be there for .vNext. Because I have been concentrating on the Visual Studio ALM enhancements to VS2010, I found this section invaluable as I now know at least some of what I missed.

Silverlight 4

I am not a big fan of Silverlight. There, I said it, and I will probably get lynched for it. My big problem with Silverlight is that most of the really useful things I learned from WPF do not work. I am only going to mention one thing and that is "x:Type". If you are a WPF developer you will know how much power these 6 little letters provide; the ability to target templates at object types being the most magical and useful. But, and this is a massive but, if you are developing applications that MUST run on platforms other than Windows then Silverlight is your only choice (well, that and Flash, but let's just not go there). And Silverlight has a huge install base as well: 60% of all internet-connected devices have Silverlight. Can Adobe say that? Even though I am not a fan of it, my current project is a Silverlight one. If you start your XAML experience with Silverlight you will not be disappointed, and neither will the users of the applications you build. Scott showed us a fantastic application called "Silverface" that is a Silverlight 4 Out of Browser application. I have looked for a link and can't find one, but true to form, here is a fantastic WPF version called Fish Bowl from Microsoft.
ASP.NET MVC 2

ASP.NET MVC is something I have played with but never used in anger. It is definitely the way forward, but WebForms is not dead yet; there are still circumstances when WebForms are better. If you are starting from a greenfield and you are using TDD, then MVC is ultimately the only way you can go. New in version 2 are Dynamic Scaffolding helpers that let you control how data is presented in the UI from the Entities. Adding validation rules and other options that make sense there can help improve the overall ease of developing the UI. Also, the Microsoft team have heard the cries for help from the larger site builders and provided "Areas", which allow a level of categorisation for your Controllers and Views. These work just like add-ins and have their own folder, but also have sub Controllers and Views. Areas are totally pluggable and can be dropped onto existing sites, giving the ability to have boxed products in MVC, although what you do with all of those views is anyone's guess. They have been listening to everyone again with the new option to encapsulate UI using Html.Action or Html.RenderAction. This uses the existing .ascx functionality in ASP.NET to render partial views to the screen in certain areas. While this was possible before, it makes the method official, thereby opening it up to the masses and making it a standard. At the end of the session Scott pulled out some IIS goodies including the IIS SEO Toolkit, which can be used to verify your own site is "good" for search engine consumption. Better yet, he suggested that you run it against your friends' sites and shame them with how bad they are. Note: make sure you have fixed yours first.

Windows Phone 7 Series

I had already seen the new UI for WP7 and heard about the developer story, but Scott brought that home by building a twitter application in about 20 minutes using the emulator. Scott's only mistake was loading @plip's tweets into the app… And guess what, it was written in Silverlight. When Windows Phone 7 launches you will be able to use about 90% of the codebase of your existing Silverlight application and use it on the phone! There are two downsides to the new WP7 architecture:
- No, your existing application WILL NOT work without being converted to either a Silverlight or XNA UI.
- No, you will not be able to get your applications onto the phone any other way but through the Marketplace.
Do I think these are problems? No, not even slightly. This phone is aimed at consumers who have probably never tried to install an application directly onto a device. There will be support for enterprise apps in the future, but for now enterprises should stay on Windows Phone 6.5.x devices.

Post Event drinks

At the after-event drinks gathering Scott was checking out my HTC HD2 (released to the US this month on T-Mobile) and liked the Windows Phone 6.5.5 build I have on it. We discussed why Microsoft were not going to allow Windows Phone 7 Series onto it, with my understanding being that it had 5 buttons and not 3, while Scott was sure that there was more to it from a hardware standpoint. I think he is right, and although the HTC HD2 has a DX9-compatible processor, it was never built with WP7 in mind. However, as if by magic, Saturday brought fantastic news for all those that have already bought an HD2: yes, this appears to be Windows Phone 7 running on a HTC HD2. The HD2 itself won't be getting an official upgrade to Windows Phone 7 Series, so all eyes are on the ROM chefs at the moment.
The rather massive photos have been posted by Tom Codon on HTCPedia and they've apparently got WiFi, GPS, Bluetooth and other bits working. The ROM isn't online yet but according to the post there's a beta version coming soon. Leigh Geary - http://www.coolsmartphone.com/news5648.html  What was Scott working on on his flight back to the US?   Technorati Tags: VS2010,MVC2,WP7S,WP7 Follow: @CAMURPHY, @ColinMackay, @plip and of course @ScottGu

    Read the article

  • JMX Based Monitoring - Part Two - JVM Monitoring

    - by Anthony Shorten
    This is the second article in the series focusing on the JMX-based monitoring capabilities possible with the Oracle Utilities Application Framework. In all versions of the Oracle Utilities Application Framework, it is possible to use the basic JMX-based monitoring available with the Java Virtual Machine to provide basic statistics about the JVM. In Java 5 and above, the JVM automatically allows local monitoring of the JVM statistics from an appropriate console. When I say local, I mean the monitoring tool must be executed from the same machine (and in some cases by the same user that is running the JVM) to connect to the JVM directly. If you are using jconsole, for example, then you must have access to a GUI (X-Windows or Windows) to display the jconsole output. This is the easiest way of monitoring without doing too much configuration, but it is not always practical.

Java offers a remote monitoring capability to allow you to connect to a remotely executing JVM from a console (like jconsole). To use this facility, additional JVM options must be added to the command line that started the JVM. Details of the additional options for the version of Java you are running are located at the JMX information site. Typically, to remotely connect to a running JVM, that JVM must be configured with the following categories of options:
- JMX Port - The JVM must allow connections on a listening port specified on the command line.
- Connection security - The connection to the JVM can be secured. This is recommended, as JMX is not just a monitoring protocol, it is a management protocol. It is possible to change values in a running JVM using JMX and there are NO "Are you sure?" safeguards.

For an Oracle Utilities Application Framework based application there are a few guidelines when configuring and using this JMX-based remote monitoring of the JVMs:
- Online JVM - The JVM used to run the online system is embedded within the J2EE Web Application Server. To enable JMX monitoring on this JVM you can either change the startup script that starts the Web Application Server or check whether your J2EE Web Application Server natively supports JVM statistics collection.
- Child JVMs (COBOL only) - The Child JVMs should not be monitored using this method as they are recycled regularly by the configuration and therefore the statistics collected are of little value.
- Batch Threadpools - Batch already has a JMX interface (which will be covered in another article). Additional monitoring can be enabled, but the base supported monitoring is sufficient for most needs.

If you are an Oracle Utilities Application Framework site, then you can specify the additional options for JMX Java monitoring on the OPTS parameters supported for each component of the architecture. Just ensure the port numbers used are unique for each JVM running on any machine.
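As a concrete illustration (and not something specific to the Oracle Utilities Application Framework), once a target JVM has been started with the remote JMX options (typically the com.sun.management.jmxremote.port system property plus the authentication and SSL settings appropriate to your site), you can read its statistics from another machine with a few lines of standard Java. The host name and port below are placeholders for your environment.

    import java.lang.management.ManagementFactory;
    import java.lang.management.MemoryMXBean;
    import javax.management.MBeanServerConnection;
    import javax.management.remote.JMXConnector;
    import javax.management.remote.JMXConnectorFactory;
    import javax.management.remote.JMXServiceURL;

    // Minimal remote JMX client: connects to a JVM started with the
    // com.sun.management.jmxremote.* options and prints its heap usage.
    public class JvmHeapProbe {
        public static void main(String[] args) throws Exception {
            JMXServiceURL url = new JMXServiceURL(
                    "service:jmx:rmi:///jndi/rmi://appserver.example.com:9999/jmxrmi");
            try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
                MBeanServerConnection mbsc = connector.getMBeanServerConnection();
                MemoryMXBean memory = ManagementFactory.newPlatformMXBeanProxy(
                        mbsc, ManagementFactory.MEMORY_MXBEAN_NAME, MemoryMXBean.class);
                System.out.println("Heap used (bytes): "
                        + memory.getHeapMemoryUsage().getUsed());
            }
        }
    }

The same connection can be pointed at any of the JVMs mentioned above, as long as each one listens on its own unique port.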

    Read the article

  • Analyze your IIS Log Files - Favorite Log Parser Queries

    - by The Official Microsoft IIS Site
    The other day I was asked if I knew about a tool that would allow users to easily analyze the IIS log files, to process them and look for specific data in a way that could easily be automated. My recommendation was that if they were comfortable using a SQL-like language, they should use Log Parser. Log Parser is a very powerful tool that provides a generic SQL-like language on top of many types of data like IIS logs, Event Viewer entries, XML files, CSV files, the file system and others; and it allows you...(read more)

    Read the article

  • dasBlog

    - by Daniel Moth
    Some people like blogging on a site that is completely managed by someone else (e.g. http://wordpress.com/) and others, like me, prefer hosting their own blog at their own domain. In the latter case you need to decide what blog engine to install on your web space to power your blog. There are many free blog engines to choose from (e.g. the one from http://wordpress.org/). If, like me, you want to use a blog engine that is based on the .NET platform you have many choices including BlogEngine.NET, Subtext and the one I picked: dasBlog. In this post I'll describe the steps I took to get going with the open source dasBlog (home page, source page). A. Installing First I installed dasBlog on my local Windows 7 machine where I have IIS7 installed. To install dasBlog, I started by clicking the "Install" button on its web gallery page. After that I went through configuration, theming and adding content as described below. Once I was happy that everything was working correctly on the local machine, I set this up on a hosting service. I went for a Windows IIS7 shared hosting 3 month Economy plan from GoDaddy. The dasBlog site lists a bunch of other hosts. You can read the installation instructions for dasBlog, and with GoDaddy I just had to click one button since it is available as part of their quick-install apps. With GoDaddy I had a previewdns option that allowed me to play around and preview my site before going live. B. Configuring After it was installed (on local machine and/or hosting provider), I followed the obvious steps to create an admin user and logged in. This displays an admin navigation bar with the following options: 1. Navigator Links: I decided I was not going to use this feature. I manage links on the side of my blog manually elsewhere as part of the theme. So, I deleted every entry on this page and ignored it thereafter. 2. Blogroll: Ditto - same comment as for Navigator Links. 3. Content Filters: I did not delete (or add) these, but I did ensure both checkboxes are not checked. I.e. I am not using this feature now, but I may return to it in the future. 4. Activity: This is a read-only view of various statistics. So nothing to configure here, but useful to come back to for complementary statistics to whatever other statistical package you use (e.g. free stats as part of the hosting and I also use feedburner for syndication stats). 5. Cross-posting: I did not need that, so I turned it off via the Configuration Settings discussed next. 6. Configuration Settings: This is where the bulk of the configuration for the blog takes place and they are stored in a single XML file: Site.Config file. There are truly self-explanatory options to pick for Basic Settings, Services Settings and Services to Ping, Syndication Settings (this is where you link to your feedburner name if you have one) and Mail to Weblog Settings (I keep this turned off). There are also "Xml Storage System Settings" (I keep this turned off), "OpenId Settings" (I allow OpenID commenters), "Spammer Settings" (Enable captcha, never show email addresses) and "Comment settings" (Enable comments, don't allow on older posts, don't allow html). There are also Appearance Settings (I checked the "Use Post Title for Permalink", replaced spaces with hyphen and unchecked the "Use Unique Title"). Finally, there are also Notification Settings, but they are a bit of hit and miss in my case, in that I don’t always get the emails (still investigating this). C. 
Adding Content You can add content via the "Add Entry" link on the admin navigation bar or by configuring the "Mail to Weblog" settings and sending email or, do what I've started doing, use Live Writer (also the team has a blog). Another way to add content is programmatically if, for example, you are migrating content from another blog (and I'll cover that in a separate post sharing the code). What you should know is that all blog content (posts and comments) lives in XML files in a folder called "content" under your dasBlog installation. D. Theming There is a very good guide about themes for dasBlog; there is also a similar guide with screenshots (scroll down to "So how do I create a theme") and the dasBlog macro reference. When you install dasBlog, there are many themes available; each theme is in its own folder (representing the folder name) under the themes folder. You may have noticed that you can switch between these via the "Appearance Settings" described above (look for the combobox after the Default Theme label). I created my own theme by copy-pasting an existing theme folder, renaming it and then switching to it as the default. I then opened the folder in Visual Studio and hacked around the HTML in the 3 files (itemTemplate, homeTemplate and dayTemplate). These files have a blogtemplate file extension, which I temporarily renamed to HTML as I was editing them. There is no more advice I can offer here as this is a matter of taste and the aforementioned links are all I used. Personally, I had salvaged the CSS (and structure) from my previous blog and wanted to make this one match it as closely as possible - I think I have succeeded. E. If you run into any issue with dasBlog... ...use your favorite search engine to find answers. Many bloggers have been using this engine for a while and have documented issues and workarounds over time. One such example is ScottHa's dasBlog category; another example is therightstuff where I "borrowed" the idea/macro for the outlook-style on-page navigation. If you don't find what you want through searching, try posting a question to the forums. Comments about this post welcome at the original blog.

    Read the article

  • The Partner Perspective from Oracle OpenWorld 2012 - IDC’s Darren Bibby report

    - by Richard Lefebvre
    Below is IDC’s Darren Bibby report on ‘The Partner Perspective from Oracle OpenWorld 2012’. If you missed the 2012 edition, I trust this will give you the willingness to attend next year one! October 26, 2012 I attended my fourth Oracle OpenWorld earlier in October. I always go in with the lens of, "What's in it for partners this year?" Although it's primarily thought of as a customer event - and yes, the bulk of the almost 50,000 attendees are customers - this year's conference was clearly the largest and most important partner event Oracle has ever run. Oracle PartnerNetwork (OPN) Exchange There were more partner attendees than ever, with Oracle citing somewhere around 5000. But the format for partners this year was different. And it was better. Traditionally, Oracle hosts a one-day only Partner Forum on the Sunday before the customer-focused conference begins. This year, the partner content still began on the Sunday, but the worldwide alliances and channels group created an exclusive track throughout the week, just for partners. It featured content specifically targeted towards partners, and was anchored at a nearby hotel. This was a great move for Oracle. The Oracle PartnerNetwork (OPN) team has been in a tricky position for years in that they have enough partners that they need a landmark event in the year, but perhaps not enough to justify a separate, worldwide, large, partner-only event. Coinciding a four day event with Oracle OpenWorld, where anybody who's anybody in the Oracle world attends anyway, is a good solution. The channels leadership team can build from this success for an even better conference next year. It's expected that they will follow a similar strategy. Cloud Announcements for Partners As for the content, it was primarily about the Cloud. For customers, for VARs, for ISVs, for everyone. There were five key Cloud related announcements for partners at the event: Cloud Builder Specialization. This is one of the first broader Specializations that isn't focused on one unique product. It is a designation for partners that offer design and implementation services for private cloud solutions. As such, it will surely be something that nearly every partner will consider, and many will pursue. New Specializations for Cloud Services. Unlike the broad, almost "strategy-level" Specialization above, there are a group of new product-based "merit badges" for many of the new Cloud offerings. Think about a Specialization for the Cloud version of HCM, for instance. Each of these particular specializations will also have Rapid Start implementation methodologies that allow a partner to offer a fixed scope and fixed price bid to customers. Based on the learnings from Oracle Consulting, this means a partner might be able to deliver Cloud HCM in six weeks for a fixed price. In the end, this means more consistent experiences for Oracle customers. Cloud Resale Program. For those partners who achieve one of these Cloud Specializations, it will mean they can actually resell the subscription-based Cloud product. This is important because it has been somewhat of a rarity in the emerging Cloud channel for partners to be able to "take the paper", take the revenue, do the billing, be first line of support etc. This is an important step for Oracle and one the partners will be happy to see. Cloud Referral Program. 
For those partners who are not as engaged with these specific Cloud products that the Specializations revolve around, there is a new referral program that provides an incentive to recommend Oracle Cloud products. This one-two punch of referral and resale programs is similar in many ways to other vendors who allow more committed partners to resell, while more casual partners can collect fees. It's the model that seems to work. The key to allow a company to resell a subscription product - something that is inherently delivered directly between the vendor and customer - is trust. Achieving a specialization is a good bar to have to meet. Platform as a Service for ISVs. Leveraging some of the overall announcements made by CEO Larry Ellison around a cloud version of its famous database, Oracle also outlined a new ability for ISVs to build cloud services on its new PaaS offering. Details were less available for this announcement, though it's an expected and fitting play for ISVs comfortable with Oracle technology who can now more easily build out cloud applications. There wasn't much talk of an app store to go along with this, but surely it's in the works. Specializations And "The Gap" Coming back to Specializations, Oracle PartnerNetwork (OPN) has 4600 partners worldwide that hold 20,000 Specializations. These are impressive numbers just three years into the new OPN framework. The actual number of Specializations has also grown significantly, up to 111 today and soon around 125 or so with the new Cloud designations. Oracle may need to look at grouping some of these and creating higher level, broader designations that partners could achieve by earning several Specializations in that group. At 125 and growing, this is a lot. On the top of the pyramid, Hitachi Ltd. successfully became the eleventh partner to make it to the highly prestigious Diamond level. Partner programs partially exist in order to recognize capable partners. And it's more than abundantly clear that the Diamond level does this. But I think Oracle has a gap. Specializations show capability in a very specific product area, and all sizes of partners can achieve these. The next level at which to show a level of expertise is the Advanced Specialization. However, this is a massive step up from the regular Specialization. The advanced level requires 50 people to have certification in that particular product area. Most other industry programs have similar higher level statuses, but none are even close to that number. Whereas a customer who sees an Oracle partner with an advanced specialization can be very sure of capability, there is a gap in that there are hundreds or even thousands of 20-50 person solution providers who are top notch in their area of expertise. They will never get to Advanced due to numbers alone. These boutique partners don't really have a way of showing off their talents in the current program. Advanced may not need to be so high to really show that a company has deep expertise. Overall it was a very successful Oracle OpenWorld for Oracle partners of all sizes. There was progress made on making it a bigger and more relevant event. And also on catching up and maybe even leading in some cases with cloud opportunities for partners.

    Read the article

  • LINQ to SQL Profiler

    In this article we will be taking a look at the new LINQ to SQL Profiler from HibernatingRhinos. This tool gives you a view into the goings-on of LINQ to SQL. Not only does it allow you to see the SQL that is generated by your LINQ queries, but it also shows you information about your connections and queries, as well as alerting you to all sorts of things that you might otherwise not know about.

    Read the article

< Previous Page | 140 141 142 143 144 145 146 147 148 149 150 151  | Next Page >