Search Results

Search found 9478 results on 380 pages for 'rails routing'.


  • Cannot start `Routing and Remote Access Service` and its dependencies

    - by ahmadali shafiee
    I tried to start the Routing and Remote Access service, but I got an error saying the dependency service or group failed to start. Then I tried to start Remote Access Connection Manager (one of RRAS's dependencies) and the error was the same. Then I tried to start the Secure Socket Tunneling Protocol Service, but an error said that the service started and then stopped. The errors from the event log are:

        The Remote Access Connection Manager service depends on the Secure Socket Tunneling Protocol Service service which failed to start because of the following error: The operation completed successfully.

        The Secure Socket Tunneling Protocol Service service entered the stopped state.

        The Routing and Remote Access service depends on the Remote Access Connection Manager service which failed to start because of the following error: The dependency service or group failed to start.

    Does anyone know how I can resolve the problem?

    Read the article

  • Ubuntu: Multiple NICs, one used only for Wake-On-LAN

    - by jcwx86
    This is similar to some other questions, but I have a specific need which is not covered in them. I have an Ubuntu server (11.10) with two NICs. One is built into the motherboard and the other is a PCI Express card. I want my server connected to the internet via my NAT router and also able to wake from suspend using a Magic Packet (henceforth referred to as Wake-On-LAN, WOL). I can't do this with just one of the NICs because each has an issue: the built-in NIC will crash the system if it is placed under heavy load (typically downloading data), whilst the PCI Express NIC will crash the system if it is used for WOL. I have spent some time investigating these individual problems, to no avail. My plan is thus: use the built-in NIC solely for WOL, and use the PCI Express card for all other network communication. Since I send the WOL Magic Packet to a specific MAC address, there is no danger of hitting the wrong NIC, but there is a danger of using the built-in NIC for general network access, overloading it and crashing the system. Both NICs are wired to the same LAN with address space 192.168.0.0/24. The built-in ethernet card is set to interface name eth1 and the PCI Express card to eth0 in Ubuntu's udev persistent rules (so they stay the same upon reboot). I have been trying to set this up with the /etc/network/interfaces file. Here is where I am currently:

        auto lo
        iface lo inet loopback

        auto eth0
        iface eth0 inet static
            address 192.168.0.3
            netmask 255.255.255.0
            network 192.168.0.0
            broadcast 192.168.0.255
            gateway 192.168.0.1

        auto eth1
        iface eth1 inet static
            address 192.168.0.254
            netmask 255.255.255.0

    I think that by not specifying a gateway for eth1, I prevent it from being used for outgoing requests. I don't mind if it can be reached on 192.168.0.254 on the LAN, e.g. via SSH -- its IP is irrelevant to WOL, which is based on MAC addresses -- I just don't want it to be used to access internet resources. My kernel routing table (from route -n) is:

        Kernel IP routing table
        Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
        0.0.0.0         192.168.0.1     0.0.0.0         UG    100    0        0 eth0
        169.254.0.0     0.0.0.0         255.255.0.0     U     1000   0        0 eth0
        192.168.0.0     0.0.0.0         255.255.255.0   U     0      0        0 eth0
        192.168.0.0     0.0.0.0         255.255.255.0   U     0      0        0 eth1

    My question is this: is this sufficient for what I want to achieve? My research has thrown up the idea of using static routing to specify that eth1 should only be used for WOL on the local network, but I'm not sure this is necessary. I have been monitoring the activity of the interfaces using iptraf and it seems like eth0 takes the vast majority of the packets, though I am not sure this will be consistent given my configuration. Since a configuration mistake will likely crash the system, it is important to me to have this set up correctly!

    Read the article

  • NIC bonding with two uplinks

    - by Karolis T.
    Is bonding the preferred way of implementing ISP redundancy? In the texts I've seen, the bond device has a netmask and gateway of its own. How can this be obtained if there are two different gateways from the two uplinks, and which one should I choose? Do I need any special routing rules to go with it, or does simply configuring the separate interfaces (on Debian, in /etc/network/interfaces), i.e. eth1 and eth2 for their corresponding uplinks, and bonding them to bond0 handle routing automatically? If I want to NAT client machines, do they use the bond device's IP as their gateway? Is bond0 the device that goes into the iptables NAT rules? Thanks

    Read the article

  • HAProxy NGInx SSL setup

    - by Niclas
    I've been looking at different setups for a server cluster supporting SSL and I would like to benchmark my idea with you. Requirements: all servers in the cluster should be under the same full domain name (http and https); routing to subsystems is done on URI matching in HAProxy; all URIs must support SSL. Wish: centralized routing rules.

                ---<----http-----<--
                |                  |
        Inet -->HA--+---https--->NGInx_SSL_1..N
                    |
                    +---http---> Apache_1..M
                    |
                    +---http---> NodeJS

    Idea: configure HAProxy to route all SSL traffic (mode=tcp, algorithm=Source) to an NGInx cluster that turns the https traffic into http. Re-pass the http traffic from NGInx to HAProxy for normal load balancing, which is performed based on the HAProxy config. My question is simply: is this the best way to configure this based on the requirements above?

    Read the article

  • Dialing into multiple PPP connections on Ubuntu

    - by sharjeel
    I have multiple 3G USB-based modems. I would like to keep them connected simultaneously, NOT necessarily aggregating their bandwidth; a separate application would manage their utilization effectively. However, I am running into problems setting up proper routes for the ppp0 and ppp1 interfaces: when one of them connects, the other's entries in the routing table get updated, so it is no longer usable. If I reconnect the second one, it overrides the first one's routing entries. If I do this over and over, sometimes both sets of entries disappear, while in rare cases the two work well. I have tried this using both NetworkManager and WVDial, but the issue appears with both. Perhaps both of them use the same PPP dialer on the backend and that's why this happens. What is the proper solution to make them work together? In the long run, I'd also like them to dial in automatically once the USB modem is connected.

    Read the article

  • port redirection on solaris 11

    - by mo3lyana
    I'm trying to set up port forwarding on Solaris 11. I have a machine behind a server that runs Solaris 11. I'm trying to access that machine from outside on an external port, with the Solaris 11 machine forwarding the connection to it using IP Filter. My ipnat.conf configuration looks like this:

        rdr net0 0.0.0.0/0 port 1428 -> 10.1.18.178 port 22

    However, the connection times out when I try to connect remotely; if I instead redirect to the Solaris 11 machine itself, the configuration works fine. I've enabled IP forwarding on the system:

        root@solaris11:/etc/ipf# ndd -get /dev/ip ip_forwarding
        1
        root@solaris11:/etc/ipf# routeadm
                      Configuration   Current              Current
                             Option   Configuration        System State
        ---------------------------------------------------------------
                       IPv4 routing   enabled              enabled
                       IPv6 routing   disabled             disabled
                    IPv4 forwarding   enabled              enabled
                    IPv6 forwarding   disabled             disabled
        root@solaris11:/etc/ipf# ipadm show-prop
        PROTO PROPERTY    PERM CURRENT PERSISTENT DEFAULT POSSIBLE
        ipv4  forwarding  rw   on      on         off     on,off

    Is there any configuration that I missed?

    Read the article

  • How does IPv4 Subnetting Work?

    - by Kyle Brandt
    This is a Canonical Question about IPv4 subnets. How does subnetting work, and how do you do it by hand or in your head? Can someone explain it both conceptually and with several examples? Server Fault gets lots of subnetting homework questions, so we could use an answer to point them to on Server Fault itself. What is classless routing, and why is class-based routing obsolete? If I have a network, how do I figure out how to split it up? If I am given a netmask, how do I know what the network range is for it? Sometimes there is a slash followed by a number; what is that number? Sometimes there is a subnet mask but also a wildcard mask; they seem like the same thing, but how are they different? Someone mentioned something about knowing binary for this? What is NAT (Network Address Translation)?
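
    To make the netmask arithmetic concrete, here is a small worked sketch (using an arbitrary example address, 192.168.0.77/26, which is not from the question) showing how the prefix length turns into a mask and how the network and broadcast addresses fall out of two bitwise operations:

        using System;
        using System.Linq;
        using System.Net;

        class SubnetDemo
        {
            static void Main()
            {
                // Example only: any IPv4 address and prefix length would do.
                var ip = IPAddress.Parse("192.168.0.77");
                int prefix = 26;                        // the "/26" after the slash

                // Convert the address to a 32-bit integer. Assumes a little-endian host
                // (x86/ARM), which is why the bytes are reversed before BitConverter.
                uint addr = BitConverter.ToUInt32(ip.GetAddressBytes().Reverse().ToArray(), 0);

                // A /26 mask is 26 one-bits followed by 6 zero-bits.
                uint mask = prefix == 0 ? 0u : uint.MaxValue << (32 - prefix);

                uint network   = addr & mask;           // zero out the host bits
                uint broadcast = network | ~mask;       // set all host bits to one

                Console.WriteLine($"Mask:      {ToIp(mask)}");             // 255.255.255.192
                Console.WriteLine($"Network:   {ToIp(network)}/{prefix}"); // 192.168.0.64/26
                Console.WriteLine($"Broadcast: {ToIp(broadcast)}");        // 192.168.0.127
                Console.WriteLine($"Hosts:     {(1u << (32 - prefix)) - 2}"); // 62 usable
            }

            static string ToIp(uint value) =>
                new IPAddress(BitConverter.GetBytes(value).Reverse().ToArray()).ToString();
        }

    The same two operations (AND with the mask for the network, OR with the inverted mask for the broadcast) are what you do "in your head" once the mask is written out in binary.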

    Read the article

  • Deliver email to Gmail AND Office 365?

    - by gbegley
    We moved our Office app hosting from Google Apps to Office 365. Many of us miss Google Apps, especially its superior search functionality. The pressure to use Office 365 has disappeared; many (but not all) of us would like to go back to Google Apps. Is it possible to configure our domain's mail delivery so that messages are delivered to both Google Apps' Gmail and Office 365, allowing users to choose which platform they prefer? If so, what are the options? The Google Apps documentation describes the ability to deliver messages to a secondary mail server using a routing configuration. Currently our MX records point to Office 365. If I change the MX records to point to the Google Apps mail servers, is the "Office 365 MX record address" the address I would want to use for a Google Apps routing target?

    Read the article

  • OpenVPN Configuration - Windows 7 client & debian server

    - by Guillaume
    I recently formatted my Windows 7 computer and lost my client's config files for OpenVPN. I recovered the certificates and default config that were left on the server but I haven't managed to make the whole thing work again. I assume the server's config and routing table are OK because it was working before (although quite some time ago). Would any of you experts be able to help? server.conf # Serveur TCP/666 mode server proto udp port 666 dev tun # Cles et certificats ca ca.crt cert server.crt key server.key dh dh1024.pem tls-auth ta.key 0 cipher AES-256-CBC # Reseau server 10.8.0.0 255.255.255.0 #push "redirect-gateway def1 bypass-dhcp" push "dhcp-option DNS 208.67.222.222" push "dhcp-option DNS 208.67.220.220" push "redirect-gateway def1" keepalive 10 120 # Securite user nobody group nogroup chroot /etc/openvpn/jail persist-key persist-tun comp-lzo # Log verb 3 mute 20 status openvpn-status.log log-append /var/log/openvpn.log client.conf # Client client dev tun proto udp remote *my server's ip address*:666 cipher AES-256-CBC # Cles ca ca.crt cert client1.crt key client1.key tls-auth ta.key 1 # Securite nobind persist-key persist-tun comp-lzo verb 3 Routing table on debian server when OpenVPN server is running: Destination Gateway Genmask Indic Metric Ref Use Iface 10.8.0.2 * 255.255.255.255 UH 0 0 0 tun0 10.8.0.0 10.8.0.2 255.255.255.0 UG 0 0 0 tun0 my server's ip * 255.255.255.0 U 0 0 0 eth0 default 72815.trg.dedic 0.0.0.0 UG 0 0 0 eth0 Routing table on Windows 7 client (OpenVPN not working) =========================================================================== Interface List 19...00 f0 8a 1b 6e 5c ......TAP-Win32 Adapter V9 12...90 2e 34 33 84 7b ......Atheros AR8151 PCI-E Gigabit Ethernet Controller ( NDIS 6.20) 1...........................Software Loopback Interface 1 12...00 00 00 00 00 00 00 e0 Microsoft ISATAP Adapter 13...00 00 00 00 00 00 00 e0 Teredo Tunneling Pseudo-Interface 16...00 00 00 00 00 00 00 e0 Microsoft ISATAP Adapter #2 =========================================================================== IPv4 Route Table =========================================================================== Active Routes: Network Destination Netmask Gateway Interface Metric 0.0.0.0 0.0.0.0 192.168.1.1 192.168.1.11 20 127.0.0.0 255.0.0.0 On-link 127.0.0.1 306 127.0.0.1 255.255.255.255 On-link 127.0.0.1 306 127.255.255.255 255.255.255.255 On-link 127.0.0.1 306 192.168.1.0 255.255.255.0 On-link 192.168.1.11 276 192.168.1.11 255.255.255.255 On-link 192.168.1.11 276 192.168.1.255 255.255.255.255 On-link 192.168.1.11 276 224.0.0.0 240.0.0.0 On-link 127.0.0.1 306 224.0.0.0 240.0.0.0 On-link 192.168.1.11 276 255.255.255.255 255.255.255.255 On-link 127.0.0.1 306 255.255.255.255 255.255.255.255 On-link 192.168.1.11 276 =========================================================================== Persistent Routes: None IPv6 Route Table =========================================================================== Active Routes: [...] =========================================================================== Persistent Routes: None And when the link is established between my client and the server: The server's routing table stays the same. 
The client's becomes: =========================================================================== Interface List 19...00 f0 8a 1b 6e 5c ......TAP-Win32 Adapter V9 12...90 2e 34 33 84 7b ......Atheros AR8151 PCI-E Gigabit Ethernet Controller ( NDIS 6.20) 1...........................Software Loopback Interface 1 12...00 00 00 00 00 00 00 e0 Microsoft ISATAP Adapter 13...00 00 00 00 00 00 00 e0 Teredo Tunneling Pseudo-Interface 16...00 00 00 00 00 00 00 e0 Microsoft ISATAP Adapter #2 =========================================================================== IPv4 Route Table =========================================================================== Active Routes: Network Destination Netmask Gateway Interface Metric 0.0.0.0 0.0.0.0 192.168.1.1 192.168.1.11 20 0.0.0.0 128.0.0.0 10.8.0.5 10.8.0.6 30 10.8.0.1 255.255.255.255 10.8.0.5 10.8.0.6 30 10.8.0.4 255.255.255.252 On-link 10.8.0.6 286 10.8.0.6 255.255.255.255 On-link 10.8.0.6 286 10.8.0.7 255.255.255.255 On-link 10.8.0.6 286 my server's ip 255.255.255.255 192.168.1.1 192.168.1.11 20 127.0.0.0 255.0.0.0 On-link 127.0.0.1 306 127.0.0.1 255.255.255.255 On-link 127.0.0.1 306 127.255.255.255 255.255.255.255 On-link 127.0.0.1 306 128.0.0.0 128.0.0.0 10.8.0.5 10.8.0.6 30 192.168.1.0 255.255.255.0 On-link 192.168.1.11 276 192.168.1.11 255.255.255.255 On-link 192.168.1.11 276 192.168.1.255 255.255.255.255 On-link 192.168.1.11 276 224.0.0.0 240.0.0.0 On-link 127.0.0.1 306 224.0.0.0 240.0.0.0 On-link 192.168.1.11 276 224.0.0.0 240.0.0.0 On-link 10.8.0.6 286 255.255.255.255 255.255.255.255 On-link 127.0.0.1 306 255.255.255.255 255.255.255.255 On-link 192.168.1.11 276 255.255.255.255 255.255.255.255 On-link 10.8.0.6 286 =========================================================================== Persistent Routes: None What's working: Server and client do connect to each other, SSL certificates are OK. The client gets an IP (10.8.0.6) from the server OpenVPN client is started as an administrator. But: I cannot ping the other one on either side. 'Gateway' value is empty on client's side (in the adapter's "status" window). Client has got no internet access when the link is up. Ideal configuration: I only want the client to be able to use the server's Internet access and access its resources (MySQL server in particular). I do not need or want the server to access the client's local network. The client needs to be able to access it's local network, although all Internet traffic should be redirected to the VPN link. I spent a considerable amount of time on this but it's still not working, any help would be much appreciated. Thanks :)

    Read the article

  • Port forwarding with DNAT and SNAT without touching other packets

    - by w00t
    I have a Linux gateway with iptables which does routing and port forwarding. I want the port forwarding to happen independently of the routing. To forward a port, I add this to the nat table:

        iptables -t nat -A "$PRE" -p tcp -d $GW --dport $fromPort -j DNAT --to-destination $toHost:$toPort
        iptables -t nat -A "$POST" -p tcp -d $toHost --dport $toPort -j SNAT --to $SRC

    $PRE and $POST are actually destination-specific chains that I jump to from the PREROUTING and POSTROUTING chains respectively, so I can keep the iptables rules clean. $SRC is the IP address I'm SNATing to, which is different from the gateway IP $GW. The problem with this setup is that regular routed packets that were not DNATed but happen to go to the same $toHost:$toPort combo will also be SNATed. I wish to avoid this. Any clever things I can do?

    Read the article

  • Conflicting ip routes with local table on attaching a virtual network interface

    - by user1071840
    I have an EC2 instance with these ip rules:

        $ sudo ip rule show
        0:      from all lookup local
        32766:  from all lookup main
        32767:  from all lookup default

    I can attach an elastic network interface to it with a private IP. Say the IP of my machine is 10.1.3.12 and the IP of the interface is 10.1.1.190. As soon as I attach the interface to my machine, a new entry is added to the routing policy and to the local routing table:

        sudo ip rule show
        0:      from all lookup local
        32765:  from 10.1.1.190 lookup 10003
        32766:  from all lookup main
        32767:  from all lookup default

        $ sudo ip route show table local
        broadcast 10.1.1.0 dev eth3 proto kernel scope link src 10.1.1.190
        local 10.1.1.190 dev eth3 proto kernel scope host src 10.1.1.190
        broadcast 10.1.1.255 dev eth3 proto kernel scope link src 10.1.1.190
        broadcast 10.1.3.0 dev eth0 proto kernel scope link src 10.1.3.12
        local 10.1.3.12 dev eth0 proto kernel scope host src 10.1.3.12
        broadcast 10.1.3.255 dev eth0 proto kernel scope link src 10.1.3.12
        broadcast 127.0.0.0 dev lo proto kernel scope link src 127.0.0.1
        local 127.0.0.0/8 dev lo proto kernel scope host src 127.0.0.1
        local 127.0.0.1 dev lo proto kernel scope host src 127.0.0.1
        broadcast 127.255.255.255 dev lo proto kernel scope link src 127.0.0.1

    I can send traffic to this ENI directly from a host that can have the same IP as the host the ENI is attached to. This is where the problem starts. I ran tcpdump on the port in question and saw multiple SYNs going to the ENI with src 10.1.3.12 and destination 10.1.1.190, but didn't see even a single ACK. My understanding is that if ACKs were being sent from the ENI they would have 10.1.3.12 as their destination, i.e. the same as the local machine's IP, and such packets would be routed as local packets matching this local routing policy entry:

        local 10.1.3.12 dev eth0 proto kernel scope host src 10.1.3.12

    I'd like all the packets originating from 10.1.1.190 (my ENI) to go back out on the same interface, i.e. eth3 in this case. The contents of the new table 10003 are:

        $ sudo ip route show table 10003
        default via 10.1.1.1 dev eth3

    I think I can do the following: (1) I don't know if it's possible, but decrease the priority of the local table so the packets match table 10003; (2) use iptables to mangle these packets and update the local table route to include the mark information. But I'm not sure if these are the right approaches.

    Read the article

  • Windows Server 2003: Nat Port Forwarding Not Working

    - by jM2.me
    The setup is as follows: Internet (108.99.XXX.XX) <- Windows Server 2003 (10.0.0.1) <- Switch <- Office computers (10.0.0.100-200, some addresses static, some manual, some automatic). The Windows Server has NAT installed on it and both network interfaces are configured properly. The problem is that whenever I try to forward port 80 (or any other port) to an office computer (let's say 10.0.0.100), it fails.

    NIC #1 settings: all settings are obtained from the ISP.
    NIC #2 settings: set manually; IP: 10.0.0.1, Mask: 255.255.255.0.
    The NAT server is configured to automatically assign IP addresses to the private network. The settings are: IP: 10.0.0.0, Mask: 255.255.255.0.
    Forwarding was done in Routing and Remote Access (local) -> IP Routing -> NAT/Basic Firewall -> Local Area Connection (right click -> Properties) -> Services and Ports -> Web Server (HTTP) -> Private Address: 10.0.0.100.

    So what is causing the failure to forward any port to a computer inside the private network?

    Read the article

  • How to set up VLAN network

    - by Paddington
    I'm changing my network from having every device on a flat network to using VLANs. My problem is that we already have a lot of devices on this network (192.168.20.0/24). From the theory I've read, each VLAN has to be a different subnet, and I then need to configure virtual interfaces on my Cisco router to cater for inter-VLAN routing. 1) How can I segment this network with minimum downtime for the devices already on the network? 2) Can I just create VLANs and leave them all in the same layer 3 network so that they can get out of the network (I am not too concerned about inter-VLAN routing), or do I have to create subnets, which means reconfiguring the existing devices (something I do not want)?

    Read the article

  • Windows Server VPN: Error 720

    - by Nikita Zernov
    I want to create a VPN server on Windows Server 2012. First I installed Active Directory Domain Services, then the Remote Access server role. I opened the Getting Started Wizard and chose to configure just VPN. Then in Routing and Remote Access I selected Configure and Enable Routing and Remote Access, and there selected a custom configuration with VPN. I then created a user in Active Directory and allowed it network access permission. After this I tried to connect to the VPN from Windows 8 and get the following error: "Error 720: A connection to the remote computer could not be established. You might need to change the network settings for this connection." What should I do?

    Read the article

  • RedirectToAction help: or better suggestion

    - by Dean Lunz
    Still getting my feet wet with asp.net mvc. I have a working Action and httppost action but I want to replace the "iffy" code with a RedirectToAction call because the code is rather large for what it does. A call using RedirectToAction would clean it up more. Every way I've tried it fails to work for me in that the drop down list fails to have the proper item selected. The code below works fine but calling RedirectToAction the was I have been does not work for me. So how can i rework the code below to use RedirectToAction ? I find this line of code particularly troubling because there is no garentee that the "this.Url.RequestContext.RouteData.Route" property will be of type "System.Web.Routing.Route". // get url request var urlValue = "/" + ((System.Web.Routing.Route)(this.Url.RequestContext.RouteData.Route)).Url; I also find the second piece of code rather bloated ... // build the url template urlValue = urlValue.Replace("{realm}", realm); urlValue = urlValue.Replace("{guild}", guild); urlValue = urlValue.Replace("{date}", date.ToShortDateString().Replace("/", "-")); urlValue = urlValue.Replace("{pageIndex}", pageIndex.ToString()); urlValue = urlValue.Replace("{itemCount}", itemCountToDisplay.ToString()); The route I have setup is routes.MapRoute( "GuildOverview Realm", // Route name "GuildMembers/{realm}/{guild}/{date}/{pageIndex}/{itemCount}", // URL with parameters new { controller = "GuildMembers", action = "Index" }); // Parameter defaults The code for my controller actions is below ... [HttpPost] public ActionResult Index(string realm, string guild, DateTime date, int pageIndex, int itemCount, FormCollection formCollection) { // get form data if it's there and try parse num items to display var cnt = this.Request.Form["ddlDisplayCount"]; int itemCountToDisplay = 10; if (!string.IsNullOrEmpty(cnt)) int.TryParse(cnt, out itemCountToDisplay); // get url request var urlValue = "/" + ((System.Web.Routing.Route)(this.Url.RequestContext.RouteData.Route)).Url; // build the url template urlValue = urlValue.Replace("{realm}", realm); urlValue = urlValue.Replace("{guild}", guild); urlValue = urlValue.Replace("{date}", date.ToShortDateString().Replace("/", "-")); urlValue = urlValue.Replace("{pageIndex}", pageIndex.ToString()); urlValue = urlValue.Replace("{itemCount}", itemCountToDisplay.ToString()); return this.Redirect(urlValue); } public ActionResult Index(string realm, string guild, DateTime date, int pageIndex, int itemCount) { // get the page index ViewData["pageIndex"] = pageIndex; // validate item count var pageItemCountItems = new[] { 10, 20, 50, 100 }; if (!pageItemCountItems.Contains(itemCount)) itemCount = pageItemCountItems[0]; // calc the number of pages there are var numPages = (this._repository.GetGuildMemberCount(date, realm, guild) / itemCount) + 1; this.ViewData["pageCount"] = numPages; // get url request var urlValue = "/" + ((System.Web.Routing.Route)(this.Url.RequestContext.RouteData.Route)).Url; // build the url template urlValue = urlValue.Replace("{realm}", realm); urlValue = urlValue.Replace("{guild}", guild); urlValue = urlValue.Replace("{date}", date.ToShortDateString().Replace("/", "-")); urlValue = urlValue.Replace("{pageIndex}", "{0}"); urlValue = urlValue.Replace("{itemCount}", itemCount.ToString()); // set url template ViewData["UrlTemplate"] = urlValue; // set list of items for the display count dropdown var itemCounts = new SelectList(pageItemCountItems, itemCount); ViewData["DisplayCount"] = itemCounts; return View(_repository.GetGuildCharacters(date, 
realm, guild, (pageIndex - 1) * itemCount, itemCount)); } and my Index view contains the following <%=Html.SimplePager(int.Parse(ViewData["pageIndex"].ToString()), int.Parse(ViewData["pageCount"].ToString()), ViewData["urlTemplate"].ToString(), "nav-menu")%> <% using (Html.BeginForm()) { %> <%= Html.DropDownList("ddlDisplayCount", (SelectList)ViewData["DisplayCount"], new { onchange = "this.form.submit();" })%> <% }%>
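
    As a rough illustration of the API being asked about (a sketch only, not necessarily the right fix for this controller), a RedirectToAction call with an anonymous route-value object would typically look something like the following, assuming the "GuildOverview Realm" route registered above:

        // Hedged sketch: replaces the manual string.Replace URL building in the POST
        // action with RedirectToAction and route values, and lets routing build the URL.
        using System;
        using System.Web.Mvc;

        public class GuildMembersController : Controller
        {
            [HttpPost]
            public ActionResult Index(string realm, string guild, DateTime date,
                                      int pageIndex, int itemCount, FormCollection formCollection)
            {
                // Same default/parse behaviour as the original POST action.
                var cnt = Request.Form["ddlDisplayCount"];
                int itemCountToDisplay = 10;
                if (!string.IsNullOrEmpty(cnt))
                    int.TryParse(cnt, out itemCountToDisplay);

                // The route values mirror the "{realm}/{guild}/{date}/{pageIndex}/{itemCount}" template.
                return RedirectToAction("Index", new
                {
                    realm,
                    guild,
                    date = date.ToShortDateString().Replace("/", "-"),
                    pageIndex,
                    itemCount = itemCountToDisplay
                });
            }
        }

    Whether this fully matches the existing GET action's expectations is something the asker would still need to verify; the point is only that RedirectToAction avoids touching Url.RequestContext.RouteData.Route at all.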

    Read the article

  • Deploy ASP.Net MVC 2 Application to Windows 2008 R2

    - by user325320
    Hi, I have a ASP.Net MVC 2 web site, which can be visited by http://localhost/Admin/ContentMgr/ in ASP.Net Development Server from Visual Studio 2010(RTM Retail). When I try to deploy the site to Windows 2008 R2 , IIS 7.5 , the url always return 404. First, my application pool is running on .Net 4.0, and Integration mode. Second, my IIS do have "HTTP ERROR" and "HTTP Redirection" features on And this is my web.config. <?xml version="1.0" encoding="UTF-8"?> <configuration> <system.web> <compilation debug="true" defaultLanguage="c#" targetFramework="4.0"> <assemblies> <add assembly="System.Web.Abstractions, Version=4.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" /> <add assembly="System.Web.Routing, Version=4.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" /> <add assembly="System.Web.Mvc, Version=2.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" /> </assemblies> </compilation> <!-- <authentication mode="Forms"> <forms loginUrl="~/Account/LogOn" timeout="2880" /> </authentication> --> <pages> <namespaces> <add namespace="System.Web.Mvc" /> <add namespace="System.Web.Mvc.Ajax" /> <add namespace="System.Web.Mvc.Html" /> <add namespace="System.Web.Routing" /> </namespaces> </pages> </system.web> <system.webServer> <validation validateIntegratedModeConfiguration="false" /> <modules runAllManagedModulesForAllRequests="true" > <remove name="UrlRoutingModule"/> <add name="UrlRoutingModule" type="System.Web.Routing.UrlRoutingModule, System.Web.Routing, Version=4.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" /> </modules> <handlers> <remove name="MvcHttpHandler" /> <add name="MvcHttpHandler" preCondition="integratedMode" verb="*" path="*.mvc" type="System.Web.Mvc.MvcHttpHandler" /> <add name="UrlRoutingHandler" preCondition="integratedMode" verb="*" path="UrlRouting.axd" type="System.Web.HttpForbiddenHandler, System.Web, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" /> </handlers> <httpErrors errorMode="Detailed" /> </system.webServer> <runtime> <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1"> <dependentAssembly> <assemblyIdentity name="System.Web.Mvc" publicKeyToken="31bf3856ad364e35" /> <bindingRedirect oldVersion="1.0.0.0" newVersion="2.0.0.0" /> </dependentAssembly> </assemblyBinding> </runtime> </configuration>

    Read the article

  • Elevating Customer Experience through Enterprise Social Networking

    - by john.brunswick
    I am not sure about most people, but I really dislike automated call center routing systems. They are impersonal and convey a sense that the company I am dealing with does not see the value of providing customer service that increases positive perception of their brand. By the time I am connected with a live support representative I am actually more frustrated than before I originally dialed. Each time a company interacts with its customers or prospects there is an opportunity to enhance that relationship. Technical enablers like call center routing systems can be a double edged sword - providing process efficiencies, but removing the human context of some interactions that can build a lot of long term value and create substantial repeat business. Certain web systems, available through "chat with a representative" now links on some web sites, provide a quick and easy way to get in touch with someone and cut down on help desk calls, but miss the opportunity to deliver an even more personal experience to customers and prospects. As more and more users head to the web for self-service and product information, the quality of this interaction becomes critical to supporting a company's brand image and viability. It takes very little effort to go a step further and elevate customer experience, without adding significant cost through social enterprise software technologies. Enterprise Social Networking Social networking technologies have slowly gained footholds in the enterprise, evolving from something that people may have been simply curious about, to tools that have started to provide tangible value in the enterprise. Much like instant messaging, once considered a toy in the enterprise, expertise search, blogs as communications tools, wikis for tacit knowledge sharing are all seeing adoption in a way that is directly applicable to the business and quickly adding value. So where does social networking come in when trying to enhance customer experience?

    Read the article

  • No endpoint listening at.........

    - by Michael Stephenson
    I was having some very frustrating behaviour on our build server, and while I found a number of articles online with similar error messages, none of them helped me. I thought I would just explain this here in case it helps me or anyone else in the future. The error message we were getting is: "There was no endpoint listening at http://localhostStubs.ExternalApplication/SampleService.svc that could accept the message. This is often caused by an incorrect address or SOAP action. See InnerException, if present, for more details." Our scenario is as follows: we have a solution where a WCF service application hosting the WCF routing service is listening to the Windows Azure Service Bus Relay. We have an acceptance test project in the solution which sends a message to the service bus; the message is then received by the WCF routing service and routed to SampleService.svc, which is hosted in another IIS application on the same box. A response is flowed back through to the test. In the tests there are 5 scenarios simulating a successful message and various error conditions. On my developer machine it was working absolutely fine every time, and a clean build on my developer machine worked fine. On the build server, however, one or more of the tests would fail each time with the above error message. There didn't seem to be any pattern to which test would fail. The solution was building on a Windows 2008 R2 machine with IIS 7 and AppFabric Server installed, with auto-start configured for the IIS application that listens to the service bus. After lots of searching online and looking at logs, it turned out to be a simple solution: just restart the WAS service (Windows Process Activation Service) and the services it advises you to restart with it. Hope this helps someone else.

    Read the article

  • Hardening network with sysctl settings made Wi-fi downloading speed extremely slow

    - by Rohit Bansal
    I just followed the steps below to harden network security. The /etc/sysctl.conf file contains all the sysctl settings. To prevent source routing of incoming packets and log malformed IPs, enter the following in a terminal window:

        sudo vi /etc/sysctl.conf

    Edit the `/etc/sysctl.conf` file and un-comment or add the following lines:

        # IP Spoofing protection
        net.ipv4.conf.all.rp_filter = 1
        net.ipv4.conf.default.rp_filter = 1
        # Ignore ICMP broadcast requests
        net.ipv4.icmp_echo_ignore_broadcasts = 1
        # Disable source packet routing
        net.ipv4.conf.all.accept_source_route = 0
        net.ipv6.conf.all.accept_source_route = 0
        net.ipv4.conf.default.accept_source_route = 0
        net.ipv6.conf.default.accept_source_route = 0
        # Ignore send redirects
        net.ipv4.conf.all.send_redirects = 0
        net.ipv4.conf.default.send_redirects = 0
        # Block SYN attacks
        net.ipv4.tcp_syncookies = 1
        net.ipv4.tcp_max_syn_backlog = 2048
        net.ipv4.tcp_synack_retries = 2
        net.ipv4.tcp_syn_retries = 5
        # Log Martians
        net.ipv4.conf.all.log_martians = 1
        net.ipv4.icmp_ignore_bogus_error_responses = 1
        # Ignore ICMP redirects
        net.ipv4.conf.all.accept_redirects = 0
        net.ipv6.conf.all.accept_redirects = 0
        net.ipv4.conf.default.accept_redirects = 0
        net.ipv6.conf.default.accept_redirects = 0
        # Ignore Directed pings
        net.ipv4.icmp_echo_ignore_all = 1

    To reload sysctl with the latest changes, enter:

        sudo sysctl -p

    But after applying the changes, I found that the Wi-Fi downloading speed and terminal downloading speed became extremely slow (less than 1 KB/s), although surfing speed through the browser was still good, and using a direct ethernet cable also gave good speed. I then reverted the above changes and things fell back into line once again. Could you please let me know what in the above settings is affecting this behaviour (and why)? How can I still keep the network hardening without disturbing the Wi-Fi downloading speed?

    Read the article

  • Patterns & Practices: Composite Services CTP2 is Public

    - by HernanDL
    Finally the last CTP and pre-release version for the Composite Services is out. There were quite a lot of changes since CTP1. We added many new samples and many enhancements to the repository (DB) which is now called Inventory in sync with SOA Patterns. Here is a brief list of the main changes according to the included documentations.   Changes and additions in this release This CTP release contains reusable source code and samples to illustrate implementation for the following patterns and scenarios: Repair and Resubmit – this pattern is implemented in ESB Toolkit 2.0 as part of Exception Management Framework (EMF). This code drop provides code sample how to implement this pattern for Windows AppFabric workflow service, using Exceptions Web Service and workflow activities to create fault message, which will be created in EMF database.  Analytic Tracing – this code drop contains reusable code and samples for implementing ETW tracing: event collector service and database that store collected events. This capability may be used for scenarios that need flexibility on how collected events are decoded and processed via extensibility points you can configure and implement:  plugins and event decoders with leveraging ETW tracing capabilities provided by the event collector service.   Inventory Centralization – this code drop contains service catalog database, web services and samples to show how to implement Metadata Centralization, Schema Centralization and Policy Centralization patterns.  Service Virtualization – we included sample for implementing this pattern using WCF routing service( which is part of .NET framework) and service metadata centralization capabilities to define routing service metadata in service catalog. Termination Notification – we included sample for implementing this pattern using sample WCF service and policy centralization capabilities provided by this CTP release.   You will also find many new videos that will be uploaded to the home page any time soon. Stay tunned for new posts regarding implemetation details and advanced customizations for custom policy exporters/importers and monitoring.

    Read the article

  • Unit testing ASP.NET Web API controllers that rely on the UrlHelper

    - by cibrax
    UrlHelper is the class you can use in ASP.NET Web API to automatically infer links from the routing table without hardcoding anything. For example, the following code uses the helper to infer the location url for a new resource,public HttpResponseMessage Post(User model) { var response = Request.CreateResponse(HttpStatusCode.Created, user); var link = Url.Link("DefaultApi", new { id = id, controller = "Users" }); response.Headers.Location = new Uri(link); return response; } That code uses a previously defined route “DefaultApi”, which you might configure in the HttpConfiguration object (This is the route generated by default when you create a new Web API project). The problem with UrlHelper is that it requires from some initialization code before you can invoking it from a unit test (for testing the Post method in this example). If you don’t initialize the HttpConfiguration and Request instances associated to the controller from the unit test, it will fail miserably. After digging into the ASP.NET Web API source code a little bit, I could figure out what the requirements for using the UrlHelper are. It relies on the routing table configuration, and a few properties you need to add to the HttpRequestMessage. The following code illustrates what’s needed,var controller = new UserController(); controller.Configuration = new HttpConfiguration(); var route = controller.Configuration.Routes.MapHttpRoute( name: "DefaultApi", routeTemplate: "api/{controller}/{id}", defaults: new { id = RouteParameter.Optional } ); var routeData = new HttpRouteData(route, new HttpRouteValueDictionary { { "id", "1" }, { "controller", "Users" } } ); controller.Request = new HttpRequestMessage(HttpMethod.Post, "http://localhost:9091/"); controller.Request.Properties.Add(HttpPropertyKeys.HttpConfigurationKey, controller.Configuration); controller.Request.Properties.Add(HttpPropertyKeys.HttpRouteDataKey, routeData);  The HttpRouteData instance should be initialized with the route values you will use in the controller method (“id” and “controller” in this example). Once you have correctly setup all those properties, you shouldn’t have any problem to use the UrlHelper. There is no need to mock anything else. Enjoy!!.
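
    As a follow-up, here is a minimal sketch of a test that uses this setup end to end. It assumes the UserController with the Post(User) action from the first snippet, and a User type with Id and Name properties (those property names are an assumption, not from the post); MSTest is used only as an example test framework:

        using System.Net;
        using System.Net.Http;
        using System.Web.Http;
        using System.Web.Http.Hosting;
        using System.Web.Http.Routing;
        using Microsoft.VisualStudio.TestTools.UnitTesting;

        [TestClass]
        public class UserControllerTests
        {
            [TestMethod]
            public void Post_SetsLocationHeader_FromRouteConfiguration()
            {
                // Arrange: the same initialization walked through above.
                var controller = new UserController();
                controller.Configuration = new HttpConfiguration();

                var route = controller.Configuration.Routes.MapHttpRoute(
                    name: "DefaultApi",
                    routeTemplate: "api/{controller}/{id}",
                    defaults: new { id = RouteParameter.Optional });

                var routeData = new HttpRouteData(route,
                    new HttpRouteValueDictionary { { "id", "1" }, { "controller", "Users" } });

                controller.Request = new HttpRequestMessage(HttpMethod.Post, "http://localhost:9091/");
                controller.Request.Properties.Add(HttpPropertyKeys.HttpConfigurationKey, controller.Configuration);
                controller.Request.Properties.Add(HttpPropertyKeys.HttpRouteDataKey, routeData);

                // Act: UrlHelper now has everything it needs, so the action can run as-is.
                var response = controller.Post(new User { Id = 1, Name = "John" });

                // Assert: the Location header was inferred from the "DefaultApi" route.
                Assert.AreEqual(HttpStatusCode.Created, response.StatusCode);
                StringAssert.Contains(response.Headers.Location.AbsoluteUri, "/api/Users/1");
            }
        }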

    Read the article

  • what's the difference between Routed Events and Attached Events?

    - by vverma01
    I have searched through various sources but am still unable to understand the difference between routed events and attached events in WPF. In most references for attached events the following example is used:

        <StackPanel Button.Click="StackPanel_Click">
            <Button Content="Click Me!" Height="35" Width="150" Margin="5" />
        </StackPanel>

    This is explained as: the StackPanel does not contain a Click event, and hence the Button.Click event is attached to the StackPanel. Whereas MSDN says: "You can also name any event from any object that is accessible through the default namespace by using a typename.event partially qualified name; this syntax supports attaching handlers for routed events where the handler is intended to handle events routing from child elements, but the parent element does not also have that event in its members table. This syntax resembles an attached event syntax, but the event here is not a true attached event. Instead, you are referencing an event with a qualified name." According to that MSDN information, the Button/StackPanel example above is actually a routed event example and not a true attached event example. If the above example really is a usage of an attached event (Button.Click="StackPanel_Click"), then it contradicts the MSDN documentation, which says: "Another syntax usage that resembles typename.eventname attached event syntax but is not strictly speaking an attached event usage is when you attach handlers for routed events that are raised by child elements. You attach the handlers to a common parent, to take advantage of event routing, even though the common parent might not have the relevant routed event as a member." A similar question was raised in this Stack Overflow post, but unfortunately it was closed before it could collect any responses. Please help me understand how attached events are different from routed events, and clarify the ambiguity pointed out above.
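
    For comparison, here is a small code-behind sketch (not from the question) of the same wiring: UIElement.AddHandler lets the StackPanel listen for Button.ClickEvent even though the StackPanel does not declare a Click event itself, which is exactly the "qualified event name" routed-event case the MSDN text describes.

        // Illustrative sketch: the code-behind equivalent of Button.Click="StackPanel_Click".
        using System.Windows;
        using System.Windows.Controls;

        public class MainWindow : Window
        {
            public MainWindow()
            {
                var panel = new StackPanel();
                var button = new Button { Content = "Click Me!", Height = 35, Width = 150 };
                panel.Children.Add(button);
                Content = panel;

                // The handler is attached to the StackPanel, but the event that actually
                // bubbles up to it is the Button's routed Click event.
                panel.AddHandler(Button.ClickEvent, new RoutedEventHandler(StackPanel_Click));
            }

            private void StackPanel_Click(object sender, RoutedEventArgs e)
            {
                // e.OriginalSource is the Button that raised the event; 'sender' is the StackPanel.
                MessageBox.Show("Click bubbled from: " + e.OriginalSource.GetType().Name);
            }
        }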

    Read the article

  • Using the West Wind Web Toolkit to set up AJAX and REST Services

    - by Rick Strahl
    I frequently get questions about which option to use for creating AJAX and REST backends for ASP.NET applications. There are many solutions out there to do this actually, but when I have a choice - not surprisingly - I fall back to my own tools in the West Wind West Wind Web Toolkit. I've talked a bunch about the 'in-the-box' solutions in the past so for a change in this post I'll talk about the tools that I use in my own and customer applications to handle AJAX and REST based access to service resources using the West Wind West Wind Web Toolkit. Let me preface this by saying that I like things to be easy. Yes flexible is very important as well but not at the expense of over-complexity. The goal I've had with my tools is make it drop dead easy, with good performance while providing the core features that I'm after, which are: Easy AJAX/JSON Callbacks Ability to return any kind of non JSON content (string, stream, byte[], images) Ability to work with both XML and JSON interchangeably for input/output Access endpoints via POST data, RPC JSON calls, GET QueryString values or Routing interface Easy to use generic JavaScript client to make RPC calls (same syntax, just what you need) Ability to create clean URLS with Routing Ability to use standard ASP.NET HTTP Stack for HTTP semantics It's all about options! In this post I'll demonstrate most of these features (except XML) in a few simple and short samples which you can download. So let's take a look and see how you can build an AJAX callback solution with the West Wind Web Toolkit. Installing the Toolkit Assemblies The easiest and leanest way of using the Toolkit in your Web project is to grab it via NuGet: West Wind Web and AJAX Utilities (Westwind.Web) and drop it into the project by right clicking in your Project and choosing Manage NuGet Packages from anywhere in the Project.   When done you end up with your project looking like this: What just happened? Nuget added two assemblies - Westwind.Web and Westwind.Utilities and the client ww.jquery.js library. It also added a couple of references into web.config: The default namespaces so they can be accessed in pages/views and a ScriptCompressionModule that the toolkit optionally uses to compress script resources served from within the assembly (namely ww.jquery.js and optionally jquery.js). Creating a new Service The West Wind Web Toolkit supports several ways of creating and accessing AJAX services, but for this post I'll stick to the lower level approach that works from any plain HTML page or of course MVC, WebForms, WebPages. There's also a WebForms specific control that makes this even easier but I'll leave that for another post. So, to create a new standalone AJAX/REST service we can create a new HttpHandler in the new project either as a pure class based handler or as a generic .ASHX handler. Both work equally well, but generic handlers don't require any web.config configuration so I'll use that here. In the root of the project add a Generic Handler. I'm going to call this one StockService.ashx. Once the handler has been created, edit the code and remove all of the handler body code. Then change the base class to CallbackHandler and add methods that have a [CallbackMethod] attribute. 
Here's the modified base handler implementation now looks like with an added HelloWorld method: using System; using Westwind.Web; namespace WestWindWebAjax { /// <summary> /// Handler implements CallbackHandler to provide REST/AJAX services /// </summary> public class SampleService : CallbackHandler { [CallbackMethod] public string HelloWorld(string name) { return "Hello " + name + ". Time is: " + DateTime.Now.ToString(); } } } Notice that the class inherits from CallbackHandler and that the HelloWorld service method is marked up with [CallbackMethod]. We're done here. Services Urlbased Syntax Once you compile, the 'service' is live can respond to requests. All CallbackHandlers support input in GET and POST formats, and can return results as JSON or XML. To check our fancy HelloWorld method we can now access the service like this: http://localhost/WestWindWebAjax/StockService.ashx?Method=HelloWorld&name=Rick which produces a default JSON response - in this case a string (wrapped in quotes as it's JSON): (note by default JSON will be downloaded by most browsers not displayed - various options are available to view JSON right in the browser) If I want to return the same data as XML I can tack on a &format=xml at the end of the querystring which produces: <string>Hello Rick. Time is: 11/1/2011 12:11:13 PM</string> Cleaner URLs with Routing Syntax If you want cleaner URLs for each operation you can also configure custom routes on a per URL basis similar to the way that WCF REST does. To do this you need to add a new RouteHandler to your application's startup code in global.asax.cs one for each CallbackHandler based service you create: protected void Application_Start(object sender, EventArgs e) { CallbackHandlerRouteHandler.RegisterRoutes<StockService>(RouteTable.Routes); } With this code in place you can now add RouteUrl properties to any of your service methods. For the HelloWorld method that doesn't make a ton of sense but here is what a routed clean URL might look like in definition: [CallbackMethod(RouteUrl="stocks/HelloWorld/{name}")] public string HelloWorld(string name) { return "Hello " + name + ". Time is: " + DateTime.Now.ToString(); } The same URL I previously used now becomes a bit shorter and more readable with: http://localhost/WestWindWebAjax/HelloWorld/Rick It's an easy way to create cleaner URLs and still get the same functionality. Calling the Service with $.getJSON() Since the result produced is JSON you can now easily consume this data using jQuery's getJSON method. First we need a couple of scripts - jquery.js and ww.jquery.js in the page: <!DOCTYPE html> <html> <head> <link href="Css/Westwind.css" rel="stylesheet" type="text/css" /> <script src="scripts/jquery.min.js" type="text/javascript"></script> <script src="scripts/ww.jquery.min.js" type="text/javascript"></script> </head> <body> Next let's add a small HelloWorld example form (what else) that has a single textbox to type a name, a button and a div tag to receive the result: <fieldset> <legend>Hello World</legend> Please enter a name: <input type="text" name="txtHello" id="txtHello" value="" /> <input type="button" id="btnSayHello" value="Say Hello (POST)" /> <input type="button" id="btnSayHelloGet" value="Say Hello (GET)" /> <div id="divHelloMessage" class="errordisplay" style="display:none;width: 450px;" > </div> </fieldset> Then to call the HelloWorld method a little jQuery is used to hook the document startup and the button click followed by the $.getJSON call to retrieve the data from the server. 
<script type="text/javascript"> $(document).ready(function () { $("#btnSayHelloGet").click(function () { $.getJSON("SampleService.ashx", { Method: "HelloWorld", name: $("#txtHello").val() }, function (result) { $("#divHelloMessage") .text(result) .fadeIn(1000); }); });</script> .getJSON() expects a full URL to the endpoint of our service, which is the ASHX file. We can either provide a full URL (SampleService.ashx?Method=HelloWorld&name=Rick) or we can just provide the base URL and an object that encodes the query string parameters for us using an object map that has a property that matches each parameter for the server method. We can also use the clean URL routing syntax, but using the object parameter encoding actually is safer as the parameters will get properly encoded by jQuery. The result returned is whatever the result on the server method is - in this case a string. The string is applied to the divHelloMessage element and we're done. Obviously this is a trivial example, but it demonstrates the basics of getting a JSON response back to the browser. AJAX Post Syntax - using ajaxCallMethod() The previous example allows you basic control over the data that you send to the server via querystring parameters. This works OK for simple values like short strings, numbers and boolean values, but doesn't really work if you need to pass something more complex like an object or an array back up to the server. To handle traditional RPC type messaging where the idea is to map server side functions and results to a client side invokation, POST operations can be used. The easiest way to use this functionality is to use ww.jquery.js and the ajaxCallMethod() function. ww.jquery wraps jQuery's AJAX functions and knows implicitly how to call a CallbackServer method with parameters and parse the result. Let's look at another simple example that posts a simple value but returns something more interesting. Let's start with the service method: [CallbackMethod(RouteUrl="stocks/{symbol}")] public StockQuote GetStockQuote(string symbol) { Response.Cache.SetExpires(DateTime.UtcNow.Add(new TimeSpan(0, 2, 0))); StockServer server = new StockServer(); var quote = server.GetStockQuote(symbol); if (quote == null) throw new ApplicationException("Invalid Symbol passed."); return quote; } This sample utilizes a small StockServer helper class (included in the sample) that downloads a stock quote from Yahoo's financial site via plain HTTP GET requests and formats it into a StockQuote object. 
Lets create a small HTML block that lets us query for the quote and display it: <fieldset> <legend>Single Stock Quote</legend> Please enter a stock symbol: <input type="text" name="txtSymbol" id="txtSymbol" value="msft" /> <input type="button" id="btnStockQuote" value="Get Quote" /> <div id="divStockDisplay" class="errordisplay" style="display:none; width: 450px;"> <div class="label-left">Company:</div> <div id="stockCompany"></div> <div class="label-left">Last Price:</div> <div id="stockLastPrice"></div> <div class="label-left">Quote Time:</div> <div id="stockQuoteTime"></div> </div> </fieldset> The final result looks something like this:   Let's hook up the button handler to fire the request and fill in the data as shown: $("#btnStockQuote").click(function () { ajaxCallMethod("SampleService.ashx", "GetStockQuote", [$("#txtSymbol").val()], function (quote) { $("#divStockDisplay").show().fadeIn(1000); $("#stockCompany").text(quote.Company + " (" + quote.Symbol + ")"); $("#stockLastPrice").text(quote.LastPrice); $("#stockQuoteTime").text(quote.LastQuoteTime.formatDate("MMM dd, HH:mm EST")); }, onPageError); }); So we point at SampleService.ashx and the GetStockQuote method, passing a single parameter of the input symbol value. Then there are two handlers for success and failure callbacks.  The success handler is the interesting part - it receives the stock quote as a result and assigns its values to various 'holes' in the stock display elements. The data that comes back over the wire is JSON and it looks like this: { "Symbol":"MSFT", "Company":"Microsoft Corpora", "OpenPrice":26.11, "LastPrice":26.01, "NetChange":0.02, "LastQuoteTime":"2011-11-03T02:00:00Z", "LastQuoteTimeString":"Nov. 11, 2011 4:20pm" } which is an object representation of the data. JavaScript can evaluate this JSON string back into an object easily and that's the reslut that gets passed to the success function. The quote data is then applied to existing page content by manually selecting items and applying them. There are other ways to do this more elegantly like using templates, but here we're only interested in seeing how the data is returned. The data in the object is typed - LastPrice is a number and QuoteTime is a date. Note about the date value: JavaScript doesn't have a date literal although the JSON embedded ISO string format used above  ("2011-11-03T02:00:00Z") is becoming fairly standard for JSON serializers. However, JSON parsers don't deserialize dates by default and return them by string. This is why the StockQuote actually returns a string value of LastQuoteTimeString for the same date. ajaxMethodCallback always converts dates properly into 'real' dates and the example above uses the real date value along with a .formatDate() data extension (also in ww.jquery.js) to display the raw date properly. Errors and Exceptions So what happens if your code fails? For example if I pass an invalid stock symbol to the GetStockQuote() method you notice that the code does this: if (quote == null) throw new ApplicationException("Invalid Symbol passed."); CallbackHandler automatically pushes the exception message back to the client so it's easy to pick up the error message. Regardless of what kind of error occurs: Server side, client side, protocol errors - any error will fire the failure handler with an error object parameter. The error is returned to the client via a JSON response in the error callback. 
In the previous examples I called onPageError which is a generic routine in ww.jquery that displays a status message on the bottom of the screen. But of course you can also take over the error handling yourself: $("#btnStockQuote").click(function () { ajaxCallMethod("SampleService.ashx", "GetStockQuote", [$("#txtSymbol").val()], function (quote) { $("#divStockDisplay").fadeIn(1000); $("#stockCompany").text(quote.Company + " (" + quote.Symbol + ")"); $("#stockLastPrice").text(quote.LastPrice); $("#stockQuoteTime").text(quote.LastQuoteTime.formatDate("MMM dd, hh:mmt")); }, function (error, xhr) { $("#divErrorDisplay").text(error.message).fadeIn(1000); }); }); The error object has a isCallbackError, message and  stackTrace properties, the latter of which is only populated when running in Debug mode, and this object is returned for all errors: Client side, transport and server side errors. Regardless of which type of error you get the same object passed (as well as the XHR instance optionally) which makes for a consistent error retrieval mechanism. Specifying HttpVerbs You can also specify HTTP Verbs that are allowed using the AllowedHttpVerbs option on the CallbackMethod attribute: [CallbackMethod(AllowedHttpVerbs=HttpVerbs.GET | HttpVerbs.POST)] public string HelloWorld(string name) { … } If you're building REST style API's this might be useful to force certain request semantics onto the client calling. For the above if call with a non-allowed HttpVerb the request returns a 405 error response along with a JSON (or XML) error object result. The default behavior is to allow all verbs access (HttpVerbs.All). Passing in object Parameters Up to now the parameters I passed were very simple. But what if you need to send something more complex like an object or an array? Let's look at another example now that passes an object from the client to the server. Keeping with the Stock theme here lets add a method called BuyOrder that lets us buy some shares for a stock. Consider the following service method that receives an StockBuyOrder object as a parameter: [CallbackMethod] public string BuyStock(StockBuyOrder buyOrder) { var server = new StockServer(); var quote = server.GetStockQuote(buyOrder.Symbol); if (quote == null) throw new ApplicationException("Invalid or missing stock symbol."); return string.Format("You're buying {0} shares of {1} ({2}) stock at {3} for a total of {4} on {5}.", buyOrder.Quantity, quote.Company, quote.Symbol, quote.LastPrice.ToString("c"), (quote.LastPrice * buyOrder.Quantity).ToString("c"), buyOrder.BuyOn.ToString("MMM d")); } public class StockBuyOrder { public string Symbol { get; set; } public int Quantity { get; set; } public DateTime BuyOn { get; set; } public StockBuyOrder() { BuyOn = DateTime.Now; } } This is a contrived do-nothing example that simply echoes back what was passed in, but it demonstrates how you can pass complex data to a callback method. 
On the client side we now have a very simple form that captures the three values on a form: <fieldset> <legend>Post a Stock Buy Order</legend> Enter a symbol: <input type="text" name="txtBuySymbol" id="txtBuySymbol" value="GLD" />&nbsp;&nbsp; Qty: <input type="text" name="txtBuyQty" id="txtBuyQty" value="10" style="width: 50px" />&nbsp;&nbsp; Buy on: <input type="text" name="txtBuyOn" id="txtBuyOn" value="<%= DateTime.Now.ToString("d") %>" style="width: 70px;" /> <input type="button" id="btnBuyStock" value="Buy Stock" /> <div id="divStockBuyMessage" class="errordisplay" style="display:none"></div> </fieldset> The completed form and demo then looks something like this:   The client side code that picks up the input values and assigns them to object properties and sends the AJAX request looks like this: $("#btnBuyStock").click(function () { // create an object map that matches StockBuyOrder signature var buyOrder = { Symbol: $("#txtBuySymbol").val(), Quantity: $("#txtBuyQty").val() * 1, // number Entered: new Date() } ajaxCallMethod("SampleService.ashx", "BuyStock", [buyOrder], function (result) { $("#divStockBuyMessage").text(result).fadeIn(1000); }, onPageError); }); The code creates an object and attaches the properties that match the server side object passed to the BuyStock method. Each property that you want to update needs to be included and the type must match (ie. string, number, date in this case). Any missing properties will not be set but also not cause any errors. Pass POST data instead of Objects In the last example I collected a bunch of values from form variables and stuffed them into object variables in JavaScript code. While that works, often times this isn't really helping - I end up converting my types on the client and then doing another conversion on the server. If lots of input controls are on a page and you just want to pick up the values on the server via plain POST variables - that can be done too - and it makes sense especially if you're creating and filling the client side object only to push data to the server. Let's add another method to the server that once again lets us buy a stock. But this time let's not accept a parameter but rather send POST data to the server. Here's the server method receiving POST data: [CallbackMethod] public string BuyStockPost() { StockBuyOrder buyOrder = new StockBuyOrder(); buyOrder.Symbol = Request.Form["txtBuySymbol"]; ; int qty; int.TryParse(Request.Form["txtBuyQuantity"], out qty); buyOrder.Quantity = qty; DateTime time; DateTime.TryParse(Request.Form["txtBuyBuyOn"], out time); buyOrder.BuyOn = time; // Or easier way yet //FormVariableBinder.Unbind(buyOrder,null,"txtBuy"); var server = new StockServer(); var quote = server.GetStockQuote(buyOrder.Symbol); if (quote == null) throw new ApplicationException("Invalid or missing stock symbol."); return string.Format("You're buying {0} shares of {1} ({2}) stock at {3} for a total of {4} on {5}.", buyOrder.Quantity, quote.Company, quote.Symbol, quote.LastPrice.ToString("c"), (quote.LastPrice * buyOrder.Quantity).ToString("c"), buyOrder.BuyOn.ToString("MMM d")); } Clearly we've made this server method take more code than it did with the object parameter. We've basically moved the parameter assignment logic from the client to the server. As a result the client code to call this method is now a bit shorter since there's no client side shuffling of values from the controls to an object. 
$("#btnBuyStockPost").click(function () { ajaxCallMethod("SampleService.ashx", "BuyStockPost", [], // Note: No parameters - function (result) { $("#divStockBuyMessage").text(result).fadeIn(1000); }, onPageError, // Force all page Form Variables to be posted { postbackMode: "Post" }); }); The client simply calls the BuyStockQuote method and pushes all the form variables from the page up to the server which parses them instead. The feature that makes this work is one of the options you can pass to the ajaxCallMethod() function: { postbackMode: "Post" }); which directs the function to include form variable POST data when making the service call. Other options include PostNoViewState (for WebForms to strip out WebForms crap vars), PostParametersOnly (default), None. If you pass parameters those are always posted to the server except when None is set. The above code can be simplified a bit by using the FormVariableBinder helper, which can unbind form variables directly into an object: FormVariableBinder.Unbind(buyOrder,null,"txtBuy"); which replaces the manual Request.Form[] reading code. It receives the object to unbind into, a string of properties to skip, and an optional prefix which is stripped off form variables to match property names. The component is similar to the MVC model binder but it's independent of MVC. Returning non-JSON Data CallbackHandler also supports returning non-JSON/XML data via special return types. You can return raw non-JSON encoded strings like this: [CallbackMethod(ReturnAsRawString=true,ContentType="text/plain")] public string HelloWorldNoJSON(string name) { return "Hello " + name + ". Time is: " + DateTime.Now.ToString(); } Calling this method results in just a plain string - no JSON encoding with quotes around the result. This can be useful if your server handling code needs to return a string or HTML result that doesn't fit well for a page or other UI component. Any string output can be returned. You can also return binary data. Stream, byte[] and Bitmap/Image results are automatically streamed back to the client. Notice that you should set the ContentType of the request either on the CallbackMethod attribute or using Response.ContentType. This ensures the Web Server knows how to display your binary response. Using a stream response makes it possible to return any of data. Streamed data can be pretty handy to return bitmap data from a method. The following is a method that returns a stock history graph for a particular stock over a provided number of years: [CallbackMethod(ContentType="image/png",RouteUrl="stocks/history/graph/{symbol}/{years}")] public Stream GetStockHistoryGraph(string symbol, int years = 2,int width = 500, int height=350) { if (width == 0) width = 500; if (height == 0) height = 350; StockServer server = new StockServer(); return server.GetStockHistoryGraph(symbol,"Stock History for " + symbol,width,height,years); } I can now hook this up into the JavaScript code when I get a stock quote. At the end of the process I can assign the URL to the service that returns the image into the src property and so force the image to display. 
Here's the changed code:

    $("#btnStockQuote").click(function () {
        var symbol = $("#txtSymbol").val();

        ajaxCallMethod("SampleService.ashx", "GetStockQuote",
            [symbol],
            function (quote) {
                $("#divStockDisplay").fadeIn(1000);
                $("#stockCompany").text(quote.Company + " (" + quote.Symbol + ")");
                $("#stockLastPrice").text(quote.LastPrice);
                $("#stockQuoteTime").text(quote.LastQuoteTime.formatDate("MMM dd, hh:mmt"));

                // display a stock chart
                $("#imgStockHistory").attr("src", "stocks/history/graph/" + symbol + "/2");
            }, onPageError);
    });

The resulting output then looks like this: [screenshot]

The charting code uses the new ASP.NET 4.0 Chart components in code to display a bar chart of the two-year stock data; it's part of the StockServer class, which you can find in the sample download. As you can see, the ability to return arbitrary data from a service is useful - in this case the chart is clearly associated with the service, and it's nice that the graph generation can happen off a handler rather than through a page. Images are common resources, but the output can also be PDF reports, zip files for download and so on, which are increasingly common results from REST endpoints and other applications.

Why reinvent?

Obviously the examples I've shown here are pretty basic in terms of functionality, but I hope they demonstrate the core features of AJAX callbacks that you need in most applications: return data, send back data and potentially retrieve data in various formats. While there are other solutions for making AJAX callbacks and servicing REST-like requests, I like the flexibility my home-grown solution provides. Simply put, it's still the easiest solution I've found that addresses my common use cases:

- AJAX JSON RPC-style callbacks
- URL-based access
- XML and JSON output from a single method endpoint
- XML and JSON POST support, querystring input, routing parameter mapping
- UrlEncoded POST data support on callbacks
- Ability to return stream/raw string data
- Essentially the ability to return anything from a service and pass anything to it

All these features are available in various solutions, but not together in one place. I've been using this code base for over 4 years now in a number of projects, both for myself and in commercial work, and it has served me extremely well. Besides the AJAX functionality CallbackHandler provides, it's also an easy way to create any kind of output endpoint I need. Need a few simple routines that spit back some data, but don't want to create a Page, View or full-blown handler for them? Create a CallbackHandler, add a method or several, and you have your generic endpoints. It's a quick and easy way to add small code pieces that are pretty efficient, since they run through a fairly small handler implementation. I can have this up and running in a couple of minutes, literally, without any setup, and returning just about any kind of data.

Resources

- Download the Sample
- NuGet: Westwind Web and AJAX Utilities (Westwind.Web)
- ajaxCallMethod() Documentation
- Using the AjaxMethodCallback WebForms Control
- West Wind Web Toolkit Home Page
- West Wind Web Toolkit Source Code

© Rick Strahl, West Wind Technologies, 2005-2011. Posted in ASP.NET, jQuery, AJAX

    Read the article

  • Error loading Mongrel in Aptana Ruby Application on Vista

    - by floatingfrisbee
I'm brand new to Ruby and am trying to set up my first application/project using Aptana Studio. Here are my ruby and gem versions:

    c:\>ruby -v
    ruby 1.9.1p378 (2010-01-10 revision 26273) [i386-mingw32]

    c:\>gem -v
    1.3.6

I am seeing the error below while starting my ruby application. I'm developing on Vista (sucks, I know, but am working on changing that).

    C:/Ruby/lib/ruby/gems/1.9.1/gems/activesupport-2.3.4/lib/active_support/dependencies.rb:156:in `require': 126: The specified module could not be found. - C:/Ruby/lib/ruby/gems/1.9.1/gems/mongrel-1.1.5-x86-mingw32/lib/http11.so (LoadError)
        from C:/Ruby/lib/ruby/gems/1.9.1/gems/activesupport-2.3.4/lib/active_support/dependencies.rb:156:in `block in require'
        from C:/Ruby/lib/ruby/gems/1.9.1/gems/activesupport-2.3.4/lib/active_support/dependencies.rb:521:in `new_constants_in'
        from C:/Ruby/lib/ruby/gems/1.9.1/gems/activesupport-2.3.4/lib/active_support/dependencies.rb:156:in `require'
        from C:/Ruby/lib/ruby/gems/1.9.1/gems/mongrel-1.1.5-x86-mingw32/lib/mongrel.rb:12:in `<top (required)>'
        from C:/Ruby/lib/ruby/gems/1.9.1/gems/activesupport-2.3.4/lib/active_support/dependencies.rb:156:in `require'
        from C:/Ruby/lib/ruby/gems/1.9.1/gems/activesupport-2.3.4/lib/active_support/dependencies.rb:156:in `block in require'
        from C:/Ruby/lib/ruby/gems/1.9.1/gems/activesupport-2.3.4/lib/active_support/dependencies.rb:521:in `new_constants_in'
        from C:/Ruby/lib/ruby/gems/1.9.1/gems/activesupport-2.3.4/lib/active_support/dependencies.rb:156:in `require'
        from C:/Ruby/lib/ruby/gems/1.9.1/gems/rack-1.0.0/lib/rack/handler/mongrel.rb:1:in `<top (required)>'
        from C:/Ruby/lib/ruby/gems/1.9.1/gems/rack-1.0.0/lib/rack/handler.rb:17:in `const_get'
        from C:/Ruby/lib/ruby/gems/1.9.1/gems/rack-1.0.0/lib/rack/handler.rb:17:in `block in get'
        from C:/Ruby/lib/ruby/gems/1.9.1/gems/rack-1.0.0/lib/rack/handler.rb:17:in `each'
        from C:/Ruby/lib/ruby/gems/1.9.1/gems/rack-1.0.0/lib/rack/handler.rb:17:in `get'
        from C:/Ruby/lib/ruby/gems/1.9.1/gems/rails-2.3.4/lib/commands/server.rb:45:in `<top (required)>'
        from C:/Users/Me - Admin/My Documents/Aptana RadRails Workspace/EventBuzz/script/server:3:in `require'
        from C:/Users/Me - Admin/My Documents/Aptana RadRails Workspace/EventBuzz/script/server:3:in `<top (required)>'
        from -e:2:in `load'
        from -e:2:in `<main>'

As part of trying to fix this issue, I've installed the following gems and updates:

    c:\>gem update --system
    Updating RubyGems
    Nothing to update

    c:\>gem install rails capistrano mongrel mongrel_cluster
    Successfully installed rails-2.3.5
    Successfully installed net-ssh-2.0.21
    Successfully installed net-sftp-2.0.4
    Successfully installed net-scp-1.0.2
    Successfully installed net-ssh-gateway-1.0.1
    Successfully installed highline-1.5.2
    Successfully installed capistrano-2.5.18
    Successfully installed mongrel-1.1.5-x86-mingw32
    Successfully installed mongrel_cluster-1.0.5
    9 gems installed
    Installing ri documentation for rails-2.3.5...
    Installing ri documentation for net-ssh-2.0.21...
    Installing ri documentation for net-sftp-2.0.4...
    Installing ri documentation for net-scp-1.0.2...
    Installing ri documentation for net-ssh-gateway-1.0.1...
    Installing ri documentation for highline-1.5.2...
    Installing ri documentation for capistrano-2.5.18...
    Installing ri documentation for mongrel-1.1.5-x86-mingw32...
    Installing ri documentation for mongrel_cluster-1.0.5...
    Updating class cache with 1380 classes...
    Installing RDoc documentation for rails-2.3.5...
    Installing RDoc documentation for net-ssh-2.0.21...
    Installing RDoc documentation for net-sftp-2.0.4...
    Installing RDoc documentation for net-scp-1.0.2...
    Installing RDoc documentation for net-ssh-gateway-1.0.1...
    Installing RDoc documentation for highline-1.5.2...
    Installing RDoc documentation for capistrano-2.5.18...
    Installing RDoc documentation for mongrel-1.1.5-x86-mingw32...
    Installing RDoc documentation for mongrel_cluster-1.0.5...

    c:\>gem install mysql
    Successfully installed mysql-2.8.1-x86-mingw32
    1 gem installed
    Installing ri documentation for mysql-2.8.1-x86-mingw32...
    Updating class cache with 1641 classes...
    Installing RDoc documentation for mysql-2.8.1-x86-mingw32...

Ideas as to what is going on?

    Read the article

  • The ugly evolution of running a background operation in the context of an ASP.NET app

    - by Jeff
If you're one of the two people who have followed my blog for many years, you know that I've been going at POP Forums for almost 15 years now. Publishing it as an open source app has been a big help, because it helps me understand how people want to use it, and having it translated to six languages is pretty sweet. Despite this warm and fuzzy group hug, there has been an ugly hack hiding in there for years.

One of the things we find ourselves wanting to do is hide some kind of regular process inside of an ASP.NET application that runs periodically. The motivation for this has always been that a lot of people simply don't have a choice, because they're running the app on shared hosting, or don't otherwise have access to a box that can run some kind of regular background service. In POP Forums, I "solved" this problem years ago by hiding some static timers in an HttpModule. Truthfully, this works well as long as you don't run multiple instances of the app, which, in the cloud world, is always a possibility. With the arrival of WebJobs in Azure, I'm going to solve this problem. This post isn't about that.

The other little hacky problem that I "solved" was spawning a background thread to queue emails to subscribed users of the forum. This evolved quite a bit over the years, starting with a long-running page to mail users in real time, back when I had only a few hundred. By the time it got into the thousands, or tens of thousands, I needed a better way. What I did was launch a new thread that read all of the user data in, then wrote a queued email to the database (as in, the entire body of the email, every time), with the properly formatted opt-out link. It was super inefficient, but it worked.

Then I moved my biggest site using it, CoasterBuzz, to an Azure Website, and it stopped working. So let's start with the first stupid thing I was doing. The new thread was simply created with delegate code inline. As best I can tell, Azure Websites are more aggressive about garbage collection, because that thread didn't queue even one message. When the calling server response went out of scope, so went the magic background thread. Duh - all I had to do was move the thread to a private static variable in the class. That's the way I was able to keep stuff running from the HttpModule. (And yes, I know this is still prone to failure, particularly if the app recycles. For as infrequently as it's used, I have not, however, experienced this.)

It was still failing, but this time I wasn't sure why. It would queue a few dozen messages, then die. Running in Azure, I had to turn on application logging and FTP in to see what was going on. That led me to a helper method I was using as a delegate to build the unsubscribe links. The idea here is that I didn't want yet another config entry to describe the base URL, appended with the right path that would match the routing table. No, I wanted the app to figure it out for you, so I came up with this little thing:

    public static string FullUrlHelper(this Controller controller, string actionName, string controllerName, object routeValues = null)
    {
        var helper = new UrlHelper(controller.Request.RequestContext);
        var requestUrl = controller.Request.Url;
        if (requestUrl == null)
            return String.Empty;

        var url = requestUrl.Scheme + "://";
        url += requestUrl.Host;
        url += (requestUrl.Port != 80 ? ":" + requestUrl.Port : "");
        url += helper.Action(actionName, controllerName, routeValues);
        return url;
    }

And yes, that should have been done with a string builder.
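As an aside, one way to avoid the string gluing entirely is to let System.Uri do the composition instead of a StringBuilder. Here's a quick sketch of that idea - the class and method names are made up for illustration, and this isn't the code that ships in POP Forums:

    using System;
    using System.Web.Mvc;

    public static class UrlExtensions
    {
        // Builds an absolute URL for an action by combining the current request's
        // scheme/host/port with the route-generated relative path.
        public static string FullUrl(this Controller controller, string actionName, string controllerName, object routeValues = null)
        {
            var requestUrl = controller.Request.Url;
            if (requestUrl == null)
                return String.Empty;

            var helper = new UrlHelper(controller.Request.RequestContext);
            var relativePath = helper.Action(actionName, controllerName, routeValues);

            // new Uri(baseUri, relative) takes care of the port and separators
            return new Uri(requestUrl, relativePath).ToString();
        }
    }

It still depends on the controller's request context, though, so it wouldn't have fixed the null-context problem described next.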
The helper is useful for sending out the email verification messages, too. As clever as I thought I was with this, I was using a delegate in the admin controller to format these unsubscribe links for tens of thousands of users. I passed that delegate into a service class that did the email work:

    Func<User, string> unsubscribeLinkGenerator =
        user => this.FullUrlHelper("Unsubscribe", AccountController.Name, new { id = user.UserID, key = _profileService.GetUnsubscribeHash(user) });
    _mailingListService.MailUsers(subject, body, htmlBody, unsubscribeLinkGenerator);

Cool, right? Actually, not so much. If you look back at the helper, this delegate depends on the controller context to learn the routing and the format for the URL. As you might have guessed, those things were turning null after a few dozen formatted links, once the original request to the admin controller went away. That this wasn't already happening on my dedicated server is surprising, but again, I understand why the Azure environment might be eager to reclaim a thread after servicing the request.

It's already inefficient that I'm building the entire email for every user, but going back to check the routing table for the right link every time isn't a win either. I put together a little hack to look up one generic URL and use that as the basis for a string format. If you're wondering why I didn't just use the curly braces up front, it's because they get URL encoded:

    var baseString = this.FullUrlHelper("Unsubscribe", AccountController.Name, new { id = "--id--", key = "--key--" });
    baseString = baseString.Replace("--id--", "{0}").Replace("--key--", "{1}");
    Func<User, string> unsubscribeLinkGenerator =
        user => String.Format(baseString, user.UserID, _profileService.GetUnsubscribeHash(user));
    _mailingListService.MailUsers(subject, body, htmlBody, unsubscribeLinkGenerator);

And wouldn't you know it, the new solution works just fine. It's still kind of hacky and inefficient, but it will work until this somehow breaks too.

    Read the article

< Previous Page | 306 307 308 309 310 311 312 313 314 315 316 317  | Next Page >