Search Results

Search found 15578 results on 624 pages for 'place and route'.


  • Switching the layout in Orchard CMS

    - by Bertrand Le Roy
    The UI composition in Orchard is extremely flexible, thanks in no small part to the usage of dynamic Clay shapes. Every notable UI construct in Orchard is built as a shape that other parts of the system can then party on and modify any way they want. Case in point today: modifying the layout (which is a shape) on the fly to provide custom page structures for different parts of the site. This might actually end up being built into Orchard 1.0 but for the moment it’s not in there. Plus, it’s quite interesting to see how it’s done. We are going to build a little extension that allows for specialized layouts in addition to the default layout.cshtml that Orchard understands out of the box. The extension will make it possible to add the module name (or, in MVC terms, area name) to the template name, or module and controller names, or module, controller and action names. For example, the home page is served by the HomePage module, so with this extension you’ll be able to add an optional layout-homepage.cshtml file to your theme to specialize the look of the home page while leaving all other pages using the regular layout.cshtml. I decided to implement this sample as a theme with code. This way, the new overrides are only enabled when the theme is activated, which makes a lot of sense as this is going to be where you’ll be creating those additional layouts. The first thing I did was to create my own theme, derived from the default TheThemeMachine with this command: codegen theme CustomLayoutMachine /CreateProject:true /IncludeInSolution:true /BasedOn:TheThemeMachine Once that was done, I worked around a known bug and moved the new project from the Modules solution folder into Themes (the code was already physically in the right place, this is just about Visual Studio editing). The CreateProject flag in the command-line created a project file for us in the theme’s folder. This is only necessary if you want to run code outside of views from that theme.
The code that we want to add is the following LayoutFilter.cs: using System.Linq; using System.Web.Mvc; using System.Web.Routing; using Orchard; using Orchard.Mvc.Filters; namespace CustomLayoutMachine.Filters { public class LayoutFilter : FilterProvider, IResultFilter { private readonly IWorkContextAccessor _wca; public LayoutFilter(IWorkContextAccessor wca) { _wca = wca; } public void OnResultExecuting(ResultExecutingContext filterContext) { var workContext = _wca.GetContext(); var routeValues = filterContext.RouteData.Values; workContext.Layout.Metadata.Alternates.Add( BuildShapeName(routeValues, "area")); workContext.Layout.Metadata.Alternates.Add( BuildShapeName(routeValues, "area", "controller")); workContext.Layout.Metadata.Alternates.Add( BuildShapeName(routeValues, "area", "controller", "action")); } public void OnResultExecuted(ResultExecutedContext filterContext) { } private static string BuildShapeName( RouteValueDictionary values, params string[] names) { return "Layout__" + string.Join("__", names.Select(s => ((string)values[s] ?? "").Replace(".", "_"))); } } } This filter is intercepting ResultExecuting, which is going to provide a context object out of which we can extract the route data. We are also injecting an IWorkContextAccessor dependency that will give us access to the current Layout object, so that we can add alternate shape names to its metadata. We are adding three possible shape names to the default, with different combinations of area, controller and action names. For example, a request to a blog post is going to be routed to the “Orchard.Blogs” module’s “BlogPost” controller’s “Item” action. Our filters will then add the following shape names to the default “Layout”: Layout__Orchard_Blogs Layout__Orchard_Blogs__BlogPost Layout__Orchard_Blogs__BlogPost__Item Those template names get mapped into the following file names by the system (assuming the Razor view engine): Layout-Orchard_Blogs.cshtml Layout-Orchard_Blogs-BlogPost.cshtml Layout-Orchard_Blogs-BlogPost-Item.cshtml This works for any module/controller/action of course, but in the sample I created Layout-HomePage.cshtml (a specific layout for the home page), Layout-Orchard_Blogs.cshtml (a layout for all the blog views) and Layout-Orchard_Blogs-BlogPost-Item.cshtml (a layout that is specific to blog posts). Of course, this is just an example, and this kind of dynamic extension of shapes that you didn’t even create in the first place is highly encouraged in Orchard. You don’t have to do it from a filter, we only did it this way because that was a good place where we could get the context that we needed. And of course, you can base your alternate shape names on something completely different from route values if you want. For example, you might want to create your own part that modifies the layout for a specific content item, or you might want to do it based on the raw URL (like it’s done in widget rules) or who knows what crazy custom rule. The point of all this is to show that extending or modifying shapes is easy, and the layout just happens to be a shape. In other words, you can do whatever you want. Ain’t that nice? The custom theme can be found here: Orchard.Theme.CustomLayoutMachine.1.0.nupkg Many thanks to Louis, who showed me how to do this.
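
    To make that last point concrete, here is a small sketch (not from the original post) of the same filter adding an alternate based on the raw URL instead of route values. The print flag and the Layout__Print name are invented for illustration, but they follow the shape-name-to-file-name convention described above (Layout__Print maps to an optional Layout-Print.cshtml in the theme):

        public void OnResultExecuting(ResultExecutingContext filterContext)
        {
            var workContext = _wca.GetContext();

            // Hypothetical rule: use a dedicated layout whenever the raw URL carries a print flag.
            var rawUrl = filterContext.HttpContext.Request.RawUrl ?? "";
            if (rawUrl.Contains("print=1"))
            {
                workContext.Layout.Metadata.Alternates.Add("Layout__Print");
            }
        }

    The same Alternates.Add call works for any rule you can evaluate from the filter context, which is the point the post is making: the layout is just a shape, so any criterion can feed its alternate names.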

    Read the article

  • Fastest pathfinding for static node matrix

    - by Sean Martin
    I'm programming a route finding routine in VB.NET for an online game I play, and I'm searching for the fastest route finding algorithm for my map type. The game takes place in space, with thousands of solar systems connected by jump gates. The game devs have provided a DB dump containing a list of every system and the systems it can jump to. The map isn't quite a node tree, since some branches can jump to other branches - more of a matrix. What I need is a fast pathfinding algorithm. I have already implemented an A* routine and a Dijkstra's, both find the best path but are too slow for my purposes - a search that considers about 5000 nodes takes over 20 seconds to compute. A similar program on a website can do the same search in less than a second. This website claims to use D*, which I have looked into. That algorithm seems more appropriate for dynamic maps rather than one that does not change - unless I misunderstand its premise. So is there something faster I can use for a map that is not your typical tile/polygon base? GBFS? Perhaps a DFS? Or have I likely got some problem with my A* - maybe poorly chosen heuristics or movement cost? Currently my movement cost is the length of the jump (the DB dump has solar system coordinates as well), and the heuristic is a quick euclidean calculation from the node to the goal. In case anyone has some optimizations for my A*, here is the routine that consumes about 60% of my processing time, according to my profiler. The coordinateData table contains a list of every system's coordinates, and neighborNode.distance is the distance of the jump. Private Function findDistance(ByVal startSystem As Integer, ByVal endSystem As Integer) As Integer 'hCount += 1 'If hCount Mod 0 = 0 Then 'Return hCache 'End If 'Initialize variables to be filled Dim x1, x2, y1, y2, z1, z2 As Integer 'LINQ queries for solar system data Dim systemFromData = From result In jumpDataDB.coordinateDatas Where result.systemId = startSystem Select result.x, result.y, result.z Dim systemToData = From result In jumpDataDB.coordinateDatas Where result.systemId = endSystem Select result.x, result.y, result.z 'LINQ execute 'Fill variables with solar system data for from and to system For Each solarSystem In systemFromData x1 = (solarSystem.x) y1 = (solarSystem.y) z1 = (solarSystem.z) Next For Each solarSystem In systemToData x2 = (solarSystem.x) y2 = (solarSystem.y) z2 = (solarSystem.z) Next Dim x3 = Math.Abs(x1 - x2) Dim y3 = Math.Abs(y1 - y2) Dim z3 = Math.Abs(z1 - z2) 'Calculate distance and round 'Dim distance = Math.Round(Math.Sqrt(Math.Abs((x1 - x2) ^ 2) + Math.Abs((y1 - y2) ^ 2) + Math.Abs((z1 - z2) ^ 2))) Dim distance = firstConstant * Math.Min(secondConstant * (x3 + y3 + z3), Math.Max(x3, Math.Max(y3, z3))) 'Dim distance = Math.Abs(x1 - x2) + Math.Abs(z1 - z2) + Math.Abs(y1 - y2) 'hCache = distance Return distance End Function And the main loop, the other 30% 'Begin search While openList.Count() <> 0 'Set current system and move node to closed currentNode = lowestF() move(currentNode.id) For Each neighborNode In neighborNodes If Not onList(neighborNode.toSystem, 0) Then If Not onList(neighborNode.toSystem, 1) Then Dim newNode As New nodeData() newNode.id = neighborNode.toSystem newNode.parent = currentNode.id newNode.g = currentNode.g + neighborNode.distance newNode.h = findDistance(newNode.id, endSystem) newNode.f = newNode.g + newNode.h newNode.security = neighborNode.security openList.Add(newNode) shortOpenList(OLindex) = newNode.id OLindex += 1 Else Dim proposedG As Integer 
= currentNode.g + neighborNode.distance If proposedG < gValue(neighborNode.toSystem) Then changeParent(neighborNode.toSystem, currentNode.id, proposedG) End If End If End If Next 'Check to see if done If currentNode.id = endSystem Then Exit While End If End While If clarification is needed on my spaghetti code, I'll try to explain.
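
    For readers less familiar with the bookkeeping the question describes (g, h and f values, an open list, and a lowestF() scan), here is a compact C# sketch of the same A* loop. It is an illustration of the algorithm, not a translation of the VB.NET code, and the heap-backed open list (PriorityQueue, .NET 6+) is an assumption about one common way to avoid scanning the whole open list for the lowest f value.

        using System;
        using System.Collections.Generic;

        static class AStarSketch
        {
            // neighbours(id) yields (neighbour id, jump length); heuristic(a, b) should not
            // overestimate the real distance, otherwise the result may stop being optimal.
            public static List<int> FindPath(int start, int goal,
                Func<int, IEnumerable<(int To, double Cost)>> neighbours,
                Func<int, int, double> heuristic)
            {
                var open = new PriorityQueue<int, double>();            // ordered by f = g + h
                var gScore = new Dictionary<int, double> { [start] = 0 };
                var parent = new Dictionary<int, int>();
                var closed = new HashSet<int>();

                open.Enqueue(start, heuristic(start, goal));
                while (open.TryDequeue(out int current, out _))
                {
                    if (current == goal) break;
                    if (!closed.Add(current)) continue;                 // stale queue entry, skip

                    foreach (var (to, cost) in neighbours(current))
                    {
                        double proposedG = gScore[current] + cost;
                        if (!gScore.TryGetValue(to, out double oldG) || proposedG < oldG)
                        {
                            gScore[to] = proposedG;                     // relax the edge
                            parent[to] = current;
                            open.Enqueue(to, proposedG + heuristic(to, goal));
                        }
                    }
                }

                if (goal != start && !parent.ContainsKey(goal)) return new List<int>(); // unreachable
                var path = new List<int>();
                for (int node = goal; node != start; node = parent[node]) path.Add(node);
                path.Add(start);
                path.Reverse();
                return path;
            }
        }

    The structure mirrors the question's loop; the main difference is that the priority queue replaces the linear search for the cheapest open node, which is usually where an A* over thousands of nodes loses its time.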

    Read the article

  • Can't get bonding and bridging to work for KVM

    - by user9546
    Hi everyone. I can't for the life of me get bonding and bridging to work for the KVM setup I'm building. I'm using a fresh install (not an upgrade) of Ubuntu Server 10.10. I have 4 NICs on the same subnet (two intended for each of my two VMs). I'm trying to achieve the setup that Uthark describes here. But following his guidelines didn't work for me. My eth0 and eth1 did not come up, and "brctl show" showed that br0 didn't have any interfaces (the bond). I assumed it didn't work because he's using 10.4, and this article says there's a recent change in bonding: [I can't post more than one hyperlink per post because I'm a newbie.] I had to use this article to get my interfaces to work at all on the same subnet, which is why I have the post-up lines on some of my interfaces: [I can't post more than one hyperlink per post because I'm a newbie.] I installed ifenslave and ethtool. I also created /etc/modprobe.d/aliases.conf with the following content: alias bond0 bonding options bonding mode=6 miimon=100 downdelay=200 updelay=200 And I included "bonding" in /etc/modules So, after several approaches, here is my latest interfaces file: auto lo iface lo inet loopback auto eth5 iface eth5 inet manual auto br5 iface br5 inet static post-up /sbin/ip rule add from [network].79 lookup 10 post-up /sbin/ip route add table 10 default via [network].1 src [network].79 dev br5 address [network].79 netmask 255.255.255.0 network [network].0 broadcast [network].255 gateway [network].1 bridge_ports eth5 bridge_stp off bridge_fd 0 bridge_maxwait 0 auto eth2 iface eth2 inet manual auto br2 iface br2 inet static post-up /sbin/ip rule add from [network].78 lookup 11 post-up /sbin/ip route add table 11 default via [network].1 src [network].78 dev br2 address [network].78 netmask 255.255.255.0 network [network].0 broadcast [network].255 gateway [network].1 bridge_ports eth2 bridge_stp off bridge_fd 0 bridge_maxwait 0 iface eth0 inet manual iface eth1 inet manual auto bond0 iface bond0 inet static bond_miimon 100 bond_mode balance-alb up /sbin/ifenslave bond0 eth0 eth1 down /sbin/ifenslave -d bond0 eth0 eth1 auto br0 iface br0 inet static address [network].60 netmask 255.255.255.0 network [network].0 broadcast [network].255 gateway [network].1 bridge_ports bond0 eth2, eth5, br2, and br5 all seem to be working fine. The only other thing I could find that looked suspicious is an error regarding bonding in /var/log/messages: kernel: [ 3.828684] bonding: Warning: either miimon or arp_interval and arp_ip_target module parameters must be specified, otherwise bonding will not detect link failures! see bonding.txt for details. even though there is a bond-miimon line in /etc/network/interfaces (if that's what they're talking about). Also, the bond seems to go in and out of promiscuous mode several times on boot: Jan 20 14:19:02 kvmhost kernel: [ 3.902378] device bond0 entered promiscuous mode Jan 20 14:19:02 kvmhost kernel: [ 3.902390] device bond0 left promiscuous mode Jan 20 14:19:02 kvmhost kernel: [ 3.902393] device bond0 entered promiscuous mode Jan 20 14:19:02 kvmhost kernel: [ 3.902397] device bond0 left promiscuous mode Jan 20 14:19:03 kvmhost kernel: [ 4.998990] device bond0 entered promiscuous mode Jan 20 14:19:03 kvmhost kernel: [ 4.999005] device bond0 left promiscuous mode Jan 20 14:19:03 kvmhost kernel: [ 4.999008] device bond0 entered promiscuous mode Jan 20 14:19:03 kvmhost kernel: [ 4.999012] device bond0 left promiscuous mode Any advice would be greatly appreciated. 
It seems that this must be possible, based on other posts, but I can't see what I'm doing wrong. Thanks.

    Read the article

  • Wireless is detected, but not connecting. Ethernet works. How to correct the wireless address?

    - by Lucas
    I am running Ubuntu 14.04 with cable internet, and my wireless is detected and connected, but I cannot connect to the internet. I know the problem is with my machine because other machines are connecting to the same router just fine. I can connect via ethernet just fine as well. Here are some notable tests: ping 192.168.0.105 works with 0% packet loss, but ping 192.168.0.1 has 100% packet loss. When I plug in my ethernet, ping 192.168.0.1 works with 0% packet loss. My wireless name is tg, and the router ip is 192.168.0.1 (where I can enter username and password). I suspect that I need to change my wireless address from 192.168.0.105 to 192.168.0.1. Any suggestions on how to proceed? extra info: [lucas@lucas-ThinkPad-W520]/home/lucas$ iwconfig eth0 no wireless extensions. lo no wireless extensions. wlan0 IEEE 802.11abgn ESSID:"tg" Mode:Managed Frequency:2.462 GHz Access Point: 00:02:6F:83:F8:F4 Bit Rate=1 Mb/s Tx-Power=15 dBm Retry long limit:7 RTS thr:off Fragment thr:off Power Management:off Link Quality=62/70 Signal level=-48 dBm Rx invalid nwid:0 Rx invalid crypt:0 Rx invalid frag:0 Tx excessive retries:52 Invalid misc:166 Missed beacon:0 [lucas@lucas-ThinkPad-W520]/home/lucas$ ifconfig eth0 Link encap:Ethernet HWaddr f0:de:f1:b2:53:53 inet addr:192.168.0.100 Bcast:192.168.0.255 Mask:255.255.255.0 inet6 addr: fe80::f2de:f1ff:feb2:5353/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:980003 errors:0 dropped:0 overruns:0 frame:0 TX packets:498384 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:1320506168 (1.3 GB) TX bytes:59780591 (59.7 MB) Interrupt:20 Memory:f3a00000-f3a20000 lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:65536 Metric:1 RX packets:21927 errors:0 dropped:0 overruns:0 frame:0 TX packets:21927 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:1781719 (1.7 MB) TX bytes:1781719 (1.7 MB) wlan0 Link encap:Ethernet HWaddr 24:77:03:29:8f:dc inet addr:192.168.0.105 Bcast:192.168.0.255 Mask:255.255.255.0 inet6 addr: fe80::2677:3ff:fe29:8fdc/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:11828 errors:0 dropped:0 overruns:0 frame:0 TX packets:15444 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:4855662 (4.8 MB) TX bytes:2250585 (2.2 MB) [lucas@lucas-ThinkPad-W520]/home/lucas$ lspci -nn | grep 0280 03:00.0 Network controller [0280]: Intel Corporation Centrino Ultimate-N 6300 [8086:4238] (rev 3e) [lucas@lucas-ThinkPad-W520]/home/lucas$ rfkill list 0: hci0: Bluetooth Soft blocked: no Hard blocked: no 1: tpacpi_bluetooth_sw: Bluetooth Soft blocked: no Hard blocked: no 2: phy0: Wireless LAN Soft blocked: no Hard blocked: no with ethernet unplugged: [lucas@lucas-ThinkPad-W520]/home/lucas$ route -n | grep UG 0.0.0.0 192.168.0.1 0.0.0.0 UG 0 0 0 wlan0 with ethernet plugged in: [lucas@lucas-ThinkPad-W520]/home/lucas$ route -n | grep UG 0.0.0.0 192.168.0.1 0.0.0.0 UG 0 0 0 eth0 [lucas@lucas-ThinkPad-W520]/home/lucas$ nm-tool NetworkManager Tool State: connected (global) - Device: wlan0 [tg] ---------------------------------------------------------- Type: 802.11 WiFi Driver: iwlwifi State: connected Default: no HW Address: 24:77:03:29:8F:DC Capabilities: Speed: 52 Mb/s Wireless Properties WEP Encryption: yes WPA Encryption: yes WPA2 Encryption: yes Wireless Access Points (* = current AP) tatum: Infra, 40:8B:07:D8:A5:04, Freq 2437 MHz, Rate 54 Mb/s, Strength 42 W PA WPA2 ums: Infra, 
00:20:A6:72:52:BF, Freq 2437 MHz, Rate 54 Mb/s, Strength 59 Alpha 40: Infra, 28:CF:E9:86:59:5D, Freq 5260 MHz, Rate 54 Mb/s, Strength 30 W PA WPA2 thepromiselan: Infra, 58:6D:8F:51:E5:54, Freq 2452 MHz, Rate 54 Mb/s, Strength 34 $ PA WPA2 xfinitywifi: Infra, 06:1D:D5:84:27:A0, Freq 2437 MHz, Rate 54 Mb/s, Strength 52 *tg: Infra, 00:02:6F:83:F8:F4, Freq 2462 MHz, Rate 54 Mb/s, Strength 73 W PA2 ums: Infra, 00:20:A6:A1:9F:25, Freq 2452 MHz, Rate 54 Mb/s, Strength 44 BRIAN-PC_Network:Infra, 20:AA:4B:DD:93:D6, Freq 2462 MHz, Rate 54 Mb/s, Strength 35 W PA2 HOME-C0F8: Infra, 44:32:C8:D2:C0:F8, Freq 2412 MHz, Rate 54 Mb/s, Strength 40 W PA WPA2 abcsexy: Infra, 28:28:5D:27:5D:85, Freq 2412 MHz, Rate 54 Mb/s, Strength 27 W PA WPA2 IPv4 Settings: Address: 192.168.0.105 Prefix: 24 (255.255.255.0) Gateway: 192.168.0.1 DNS: 192.168.0.1 - Device: eth0 [Wired connection 1] ------------------------------------------- Type: Wired Driver: e1000e State: connected Default: yes HW Address: F0:DE:F1:B2:53:53 Capabilities: Carrier Detect: yes Speed: 100 Mb/s Wired Properties Carrier: on IPv4 Settings: Address: 192.168.0.100 Prefix: 24 (255.255.255.0) Gateway: 192.168.0.1 DNS: 192.168.0.1

    Read the article

  • Breaking up the Workday – Overcoming the Workaholic Syndrome

    - by dwahlin
    Hi, my name’s Dan Wahlin and I’m a workaholic – I admit it. It’s good from the standpoint that I get a lot done but it also has a lot of cons associated with it as well that I’m not proud of. I literally can’t watch TV without feeling like I should be doing something more productive (although I have no problem going to see movies at a theater or watching sporting events – that’s my escape I guess). On vacation it’s sometimes difficult the first few days to just “let go” of work and enjoy the time with my family. I always feel like I should be checking email and following up with different business projects. Fortunately, my wife knows me really well after 17 years of marriage and “gently” restricts my usage of laptops and other gadgets while we’re out. She also reminds me that constantly burying my face in gadgets just isn’t cool and shows a distinct lack of self control. On a given day I typically put in between 12 (at a minimum) up to 16-18 hours working on projects. My company does .NET consulting (ASP.NET/jQuery, SharePoint and Silverlight) but we also do a lot in the training space so there’s always a client project, some new courseware or some other deliverable that has to be worked on. My normal process for handling that is to just work my butt off and see how much I can get done. That process has worked well for a long time but when you start realizing that your happiness comes from how much work you accomplished that day then you have a problem. That’s especially true if you have kids (which I do….two awesome boys). It’s almost as if working more hours feels like I’m more successful or something which is of course ridiculous. It may actually mean that I’m too distracted or disorganized. Lately I’ve realized that while I’m still productive and always meet my deadlines, I’m really burnt out by the afternoon and have lost some of the excitement I used to have. Part of that’s normal I think given that I’ve been doing this for close to 15 years now, but in thinking through it more I realized that I just need to get away from the desk and take a break. By far, the happiest time of my life was my childhood. Part of that was due to having awesome parents, having far less responsibility (a big factor I suspect), being able to hang-out with friends at school, playing sports, games, etc. but I think a big part of the overall happiness came from being outside a lot. I lived on my bike as a little kid and as I grew up I shared time between riding an ATV all over the place, shooting hoops on the basketball court, playing golf and working on a golf course (all outside work of course).  Being a software developer and trainer I generally spend 95% or more of my day indoors and only see the sun when driving from place to place or by looking out the window (that’s sad because I live in a suburb of Phoenix, AZ where it’s nearly always sunny). I haven’t looked into any scientific studies on the matter, but I’d be willing to bet there’s a direct correlation between overall productivity/happiness and being outside some throughout the day (sunny or not). But, I wasn’t sure what to do about it since I do have a lot of deadlines I need to meet after all. While talking with my wife last night I mentioned how I feel like I’m in a rut and want to get the “fun” back that I used to have. She immediately said that I need to start making time for breaks (a real quick fact – she’s a lot smarter than me and nearly always right). Of course my first thought was that I’d be less productive taking breaks. 
If I spend 2 hours just relaxing then I’m losing 2 hours of work. But, I thought about it more and realized that I’m probably less productive when I work 10+ hours and only take less than 30 minutes for a lunch break to relax a little. I bet my brain is screaming, “Please let me relax a little so I can figure out these problems you’re trying to resolve!”. So, starting today I’m going to try to break the workaholic habit and spend time outside of the office. That could mean sitting around outside, working out, golfing, or whatever. I’ve decided that no gadgets are allowed during that time and that I shouldn’t work for more than 4 hours straight without taking a break. I have no idea how my little “break the workaholic syndrome” experiment will go or how long it will last, but I’d be very interested in hearing from others on how they keep fresh and focused without working yourself to death. If you have any specific ideas, techniques or practices you follow please share them. There’s a lot more to life than work and some of us (and I’m thinking of myself specifically) need to take a long, hard look at what kind of balance we currently have. I’d hate to look back at my life when I’m 80 years old and say, “The only thing I did was work – I missed out on life!”.

    Read the article

  • MVC Portable Areas – Static Files as Embedded Resources

    - by Steve Michelotti
    This is the third post in a series related to build and deployment considerations as I’ve been exploring MVC Portable Areas: #1 – Using Web Application Project to build portable areas #2 – Conventions for deploying portable area static files #3 – Portable area static files as embedded resources In the last post, I walked through a convention for managing static files. In this post I’ll discuss another approach to manage static files (e.g., images, css, js, etc.). With this approach, you *also* compile the static files as embedded resources into the assembly similar to the *.aspx pages. Once again, you can set this to happen automatically by simply modifying your *.csproj file to include the desired extensions so you don’t have to remember every time you add a file: 1: <Target Name="BeforeBuild"> 2: <ItemGroup> 3: <EmbeddedResource Include="**\*.aspx;**\*.ascx;**\*.gif;**\*.css;**\*.js" /> 4: </ItemGroup> 5: </Target> We now need a reliable way to serve up these static files that are embedded in the assembly. There are a couple of ways to do this but one way is to simply create a Resource controller whose job is dedicated to doing this. 1: public class ResourceController : Controller 2: { 3: public ActionResult Index(string resourceName) 4: { 5: var contentType = GetContentType(resourceName); 6: var resourceStream = Assembly.GetExecutingAssembly().GetManifestResourceStream(resourceName); 7:   8: return this.File(resourceStream, contentType); 9:   10: } 11:   12: private static string GetContentType(string resourceName) 13: { 14: var extention = resourceName.Substring(resourceName.LastIndexOf('.')).ToLower(); 15: switch (extention) 16: { 17: case ".gif": 18: return "image/gif"; 19: case ".js": 20: return "text/javascript"; 21: case ".css": 22: return "text/css"; 23: default: 24: return "text/html"; 25: } 26: } 27: } In order to use this controller, we need to make sure we’ve registered the route in our portable area registration (shown in lines 5-6): 1: public class WidgetAreaRegistration : PortableAreaRegistration 2: { 3: public override void RegisterArea(System.Web.Mvc.AreaRegistrationContext context, IApplicationBus bus) 4: { 5: context.MapRoute("ResourceRoute", "widget1/resource/{resourceName}", 6: new { controller = "Resource", action = "Index" }); 7:   8: context.MapRoute("Widget1", "widget1/{controller}/{action}", new 9: { 10: controller = "Home", 11: action = "Index" 12: }); 13:   14: RegisterTheViewsInTheEmbeddedViewEngine(GetType()); 15: } 16:   17: public override string AreaName 18: { 19: get { return "Widget1"; } 20: } 21: } In my previous post, we relied on a custom Url helper method to find the actual physical path to the static file like this: 1: <img src="<%: Url.AreaContent("/images/arrow.gif") %>" /> Hello World! However, since we are now embedding the files inside the assembly, we no longer have to worry about the physical path. We can change this line of code to this: 1: <img src="<%: Url.Resource("Widget1.images.arrow.gif") %>" /> Hello World! Note that I had to fully qualify the resource name (with namespace and physical location) since that is how .NET assemblies store embedded resources. 
    I also created my own Url helper method called Resource which looks like this: 1: public static string Resource(this UrlHelper urlHelper, string resourceName) 2: { 3: var areaName = (string)urlHelper.RequestContext.RouteData.DataTokens["area"]; 4: return urlHelper.Action("Index", "Resource", new { resourceName = resourceName, area = areaName }); 5: } This method gives us the convenience of not having to know how to construct the URL – but just allowing us to refer to the resource name. The resulting html for the image tag is: 1: <img src="/widget1/resource/Widget1.images.arrow.gif" /> so we can always request any image from the browser directly. This is almost analogous to the WebResource.axd file but for MVC. What is interesting though is that we can encapsulate each one of these so that each area can have its own set of resources and they are easily distinguished because the area name is the first segment of the route. This makes me wonder if something like this ResourceController should be baked into portable areas itself. I'm definitely interested if anyone has any opinions on it or has taken alternative approaches.

    Read the article

  • Creating a new naming context in OUD

    - by Sylvain Duloutre
    A naming context (also known as a directory suffix) is a DN that identifies the top entry in a locally held directory hierarchy. A new naming context can be created using ODSM, the OUD GUI admin console, as described in http://docs.oracle.com/cd/E29407_01/admin.111200/e22648/server_config.htm#CBDGCJGF It can also be created using the dsconfig command line as described below. Creation of a new naming context consists of 3 steps: First create a Local Backend Workflow element (myNewDb in this example), responsible for the naming context base dn, e.g. o=example. dsconfig create-workflow-element \ --set base-dn:o=example \ --set enabled:true \ --type db-local-backend \ --element-name myNewDb \ --hostname <your host> \ --port <admin port> \ --bindDN cn=Directory\ Manager \ --bindPasswordFile ****** \ --no-prompt Second, create a Workflow element (workFlowForMyNewDb in this example) associated with the Local Backend Workflow element. WorkFlow elements are used to route LDAP requests to the appropriate database, based on the target base dn. dsconfig create-workflow \ --set base-dn:o=example \ --set enabled:true \ --set workflow-element:myNewDb \ --type generic \ --workflow-name workFlowForMyNewDb \ --hostname <your host name> \ --port <admin port> \ --bindDN cn=Directory\ Manager \ --bindPasswordFile ****** \ --no-prompt Then, the workflow element must be made visible outside of the directory, i.e. added to the internal "routing table". This is done by adding the Workflow to the appropriate Network Group. A Network group is used to classify incoming client connections and route requests to workflows. dsconfig set-network-group-prop \ --group-name network-group \ --add workflow:workFlowForMyNewDb \ --hostname <your hostname> \ --port <admin port> \ --bindDN cn=Directory\ Manager \ --bindPasswordFile ****** \ --no-prompt At that stage, it is possible to import entries to the new naming context o=example.

    Read the article

  • Computer Networks UNISA - Chap 14 – Insuring Integrity & Availability

    - by MarkPearl
    After reading this section you should be able to Identify the characteristics of a network that keep data safe from loss or damage Protect an enterprise-wide network from viruses Explain network and system level fault tolerance techniques Discuss issues related to network backup and recovery strategies Describe the components of a useful disaster recovery plan and the options for disaster contingencies What are integrity and availability? Integrity – the soundness of a network's programs, data, services, devices, and connections Availability – How consistently and reliably a file or system can be accessed by authorized personnel A number of phenomena can compromise both integrity and availability including… security breaches natural disasters malicious intruders power flaws human error users etc Although you cannot predict every type of vulnerability, you can take measures to guard against the most damaging events. The following are some guidelines… Allow only network administrators to create or modify NOS and application system users. Monitor the network for unauthorized access or changes Record authorized system changes in a change management system Install redundant components Perform regular health checks on the network Check system performance, error logs, and the system log book regularly Keep backups Implement and enforce security and disaster recovery policies These are just some of the basics… Malware Malware refers to any program or piece of code designed to intrude upon or harm a system or its resources. Types of Malware… Boot sector viruses Macro viruses File infector viruses Worms Trojan Horse Network Viruses Bots Malware characteristics Some common characteristics of Malware include… Encryption Stealth Polymorphism Time dependence Malware Protection There are various tools available to protect you from malware called anti-malware software. These monitor your system for indications that a program is performing potential malware operations. A number of techniques are used to detect malware including… Signature Scanning Integrity Checking Monitoring unexpected file changes or virus-like behaviours It is important to decide where anti-malware tools will be installed and find a balance between performance and protection. There are several general purpose malware policies that can be implemented to protect your network including… Every computer in an organization should be equipped with malware detection and cleaning software that regularly runs Users should not be allowed to alter or disable the anti-malware software Users should know what to do in case the anti-malware program detects a malware virus Users should be prohibited from installing any unauthorized software on their systems System wide alerts should be issued to network users notifying them if a serious malware virus has been detected. Fault Tolerance Besides guarding against malware, another key factor in maintaining the availability and integrity of data is fault tolerance. Fault tolerance is the ability for a system to continue performing despite an unexpected hardware or software malfunction. Fault tolerance can be realized in varying degrees; the optimal level of fault tolerance for a system depends on how critical its services and files are to productivity. Generally the more fault tolerant the system, the more expensive it is. The following describes some of the areas that need to be considered for fault tolerance. 
    Environment (Temperature and humidity) Power Topology and Connectivity Servers Storage Power Typical power flaws include Surges – a brief increase in voltage due to lightning strikes, solar flares or some idiot at City Power Noise – Fluctuation in voltage levels caused by other devices on the network or electromagnetic interference Brownout – A sag in voltage for just a moment Blackout – A complete power loss There are various alternate power sources to consider including UPS's and Generators. UPS's are found in two categories… Standby UPS – provides continuous power when mains goes down (brief period of switching over) Online UPS – is online all the time and the device receives power from the UPS all the time (the UPS is charged continuously) Servers There are various techniques for fault tolerance with servers. Server mirroring is an option where one device or component duplicates the activities of another. It is generally an expensive process. Clustering is a fault tolerance technique that links multiple servers together to appear as a single server. They share processing and storage responsibilities and if one unit in the cluster goes down, another unit can be brought in to replace it. Storage There are various techniques available including the following… RAID Arrays NAS (Network Attached Storage) SANs (Storage Area Networks) Data Backup A backup is a copy of data or program files created for archiving or safekeeping. Many different options for backups exist, varying in cost and speed, including… Optical Media Tape Backup External Disk Drives Network Backups Backup Strategy After selecting the appropriate tool for performing your server backups, devise a backup strategy to guide you through performing reliable backups that provide maximum data protection. Questions that should be answered include… What data must be backed up At what time of day or night will the backups occur How will you verify the accuracy of the backups Where and for how long will backup media be stored Who will take responsibility for ensuring that backups occurred How long will you save backups Where will backup and recovery documentation be stored Different backup methods provide varying levels of certainty and corresponding labour cost. There are also different ways to determine which files should be backed up including… Full backup – all data on all servers is copied to storage media Incremental backup – Only data that has changed since the last full or incremental backup is copied to a storage medium Differential backup – Only data that has changed since the last full backup is copied to a storage medium Disaster Recovery Disaster recovery is the process of restoring your critical functionality and data after an enterprise-wide outage has occurred. A disaster recovery plan is for extreme scenarios (i.e. fire, line fault, etc). A cold site is a place where the computers, devices, and connectivity necessary to rebuild a network exist but they are not appropriately configured. A warm site is a place where the computers, devices, and connectivity necessary to rebuild a network exist with some appropriately configured devices. A hot site is a place where the computers, devices, and connectivity necessary to rebuild a network exist and all are appropriately configured.
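
    The incremental versus differential distinction above is easy to make concrete in code. The sketch below is an illustration only (real backup software usually relies on archive bits or backup catalogs rather than a raw timestamp walk), and the reference-time bookkeeping is assumed rather than taken from the chapter:

        using System;
        using System.IO;
        using System.Linq;

        static class BackupSelection
        {
            // Incremental: copy what changed since the last backup of any kind, then advance
            // that marker. Differential: always compare against the last FULL backup, so each
            // differential grows until the next full run, but only two sets are needed to restore.
            public static string[] FilesToCopy(string root, DateTime lastFullBackupUtc,
                                               DateTime lastAnyBackupUtc, bool differential)
            {
                DateTime reference = differential ? lastFullBackupUtc : lastAnyBackupUtc;
                return Directory.EnumerateFiles(root, "*", SearchOption.AllDirectories)
                                .Where(f => File.GetLastWriteTimeUtc(f) > reference)
                                .ToArray();
            }
        }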

    Read the article

  • Ubiquitous BIP

    - by Tim Dexter
    The last number I heard from Mike and the PM team was that BIP is now embedded in more than 40 Oracle products. That's a lot of products to keep track of and to help out with new releases, etc. It's interesting to see how internal Oracle product groups have integrated BIP into their products. Just as you might integrate BIP, they have had to make a choice about how to integrate. 1. Library level - BIP is a pure Java app and at the bottom of the architecture is a group of Java libraries that expose APIs that you can use. They fall into three main areas: data extraction, template processing and formatting, and delivery. There are post processing capabilities but those APIs are embedded within the template processing libraries. Taking this integration route you are going to need to manage templates, data extraction and processing. You'll have your own UI to allow users to control all of this for themselves. Ultimate control but some effort to build and maintain. I have been trawling some of the products during a coffee break. I found a great post on the reporting capabilities provided by BIP in the records management product within WebCenter Content 11g. This integration falls into the first category: content manager looks after the report artifacts itself and provides you the UI to manage and run the reports. 2. Web Service level - further up in the stack is the web service layer. This is sitting on the BI Publisher server as a set of services; runReport and scheduleReport are the main protagonists. However, you can also manage the reports and users (locally managed) on the server and the catalog itself via the services layer. Taking this route, you still need to provide the user interface to choose reports and run them but the creation and management of the reports is all handled by the Publisher server. I have worked with a few customers on this approach. The web services provide the ability to retrieve a list of reports the user can access; then the parameters and LOVs for the selected report and finally a service to submit the report on the server. 3. Embedded BIP server UI - the final level is not so well supported yet. You can currently embed a report and its various levels of surrounding 'chrome' inside another HTML-based application using a URL. Check the docs here. The look and feel can be customized but again, not easy, nor documented. I have messed with running the server pages inside an IFRAME, not bad, but not great. Taking this path should present the least amount of effort on your part to get BIP integrated but there are a few gotchas you need to get around. So a reasonable amount of choices with varying amounts of effort involved. There is another option coming soon for all you ADF developers out there, the ability to drop a BIP report into your application pages. But that's for another post.
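
    For the web service route (option 2 above), the calling pattern from a .NET client might look roughly like the sketch below. The type and member names (PublicReportServiceClient, ReportRequest, RunReport and so on) are placeholders for whatever your WSDL-generated proxy actually exposes, and the catalog path and credentials are invented; treat this as the shape of the integration, not as the documented API.

        // Option 2 in outline: your own UI gathers the report path and output format,
        // the BIP server runs the report. Every identifier below is a stand-in for the
        // proxy classes generated from the server's WSDL, not a verified signature.
        var client = new PublicReportServiceClient();

        var request = new ReportRequest
        {
            ReportAbsolutePath = "/HR/Salary Report.xdo",   // hypothetical catalog path
            AttributeFormat = "pdf"
        };

        var response = client.RunReport(request, "webservice_user", "secret");
        System.IO.File.WriteAllBytes(@"C:\temp\salary.pdf", response.ReportBytes);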

    Read the article

  • Soft lockup after upgrade - cannot install from live CD

    - by nbm
    I dual-boot MacIntel Core 2 duo. nVidia graphics. Ran upgrade from ubuntu 13.10 to 14.04 (64 bit). On restart ran into {numbers} Bug: soft lockup - CPU#0 stuck for 22s! [swapper/0:1] Tried loading earlier kernel: same problem Tried re-installing ubuntu from a liveCD that has worked in the past: version 13.04. Same problem. Tried re-partitioning hard drive using Mac OS X disk utility and then installing ubuntu 14.04LTS from liveCD. Same problem. Not possible to verify liveCD disk (creates same "soft lockup" bug.) Tried installing from the liveCD with version 13.04 that I know works (that's how I got Ubuntu on this machine in the first place.) Same problem. I know this is not a hardware problem as OS X works just fine, I am using it right now on the same machine. I have been using various versions of Ubuntu for 2 years. Things I cannot do: Open a terminal Verify CD image Start ubuntu from CD (same soft lockup problem) This problem is similar to some other questions, none of which have been satisfactorily answered: Ubuntu 14.04 soft lockup on Vostro 3500 Cannot do fresh install of Ubuntu 13.04 while booting from DVD: "soft lockup" bug Live CD stalls when installing Ubuntu 13.10 UPDATE 6/11/14: Following some much-appreciated advice from bain (see below) I burned a 12.04LTS disk and started with kernel parameters: noapic, no1apic, acpi=off, nomodeset, elevator=deadline, and clocksource=jiffies. With all of these parameters I was able to load the 12.04LTS CD ("Try without installing"). It worked fine. However, as soon as I tried to install Ubuntu from the CD, my wired ethernet (eth0) connection would hang. There are already various askubuntu questions and bug reports about this problem, none of which had answers for me. (E.g., dhclient eth0 does nothing, none of the various reset commands does anything, manually setting IP &etc does nothing. I could reliably kill the ethernet connection by clicking "install ubuntu" every single time.) I could go ahead and install 12.04 without an internet connection, but the install would freeze after mostly completing (I tried several times.) There were some relevant error messages in the details of the install output script that, IIRC, had to do with searching for missing files and not being able to access eth0 (internet) to get them. To be honest I gave up at that point and I'm not sure I wrote those down. If I find some notes I will post them. At this point I no longer have Ubuntu on my system. I wiped the partitions and am using exclusively OS X. I am leaving this question in case it helps anyone else with similar problems. I love open source and I love Linux, and the next machine I get I will probably just build from Arch. At the moment I miss repositories and a lot of other things about Ubuntu, but the OS X terminal is 'nix, I can pretty much use all the open source apps I like, and while I am not a fan of the Apple software it gets the job done for me. Unlike Ubuntu, which can't even install. I realize this isn't necessarily a place for a soapbox speech, but when I first installed 12.04 several years ago there were already people in the community complaining that Canonical was going too "commercial". But I loved it. Several years later and all I've seen is Canonical adding more not-so-useful bells and whistles to Ubuntu while continually failing to fix basic problems on upgrades. With a dual-boot (and sometimes triple-boot) system it always took me some tweaking to get an upgrade to work, and to some extent that is okay. 
But at this point I feel like Canonical ought to just put a price tag on Ubuntu. All I see is more commercialism and advertising and product tie-ins, and ongoing problems do not get fixed. I am a big fan of open-source, not-for profit enterprise. I am also a big fan of for-profit enterprise, which certainly has its place and usefulness. I am not a fan of companies who pretend to be in favor of open source but really are just out to make a buck, and IMNSHO that is what Canonical has become. This is a great community and I wish you all the best, but my next install of Linux will not be Ubuntu.

    Read the article

  • BeansBinding Across Modules in a NetBeans Platform Application

    - by Geertjan
    Here's two TopComponents, each in a different NetBeans module. Let's use BeansBinding to synchronize the JTextField in TC2TopComponent with the data published by TC1TopComponent and received in TC2TopComponent by listening to the Lookup. The key to getting to the solution is to have the following in TC2TopComponent, which implements LookupListener: private BindingGroup bindingGroup = null; private AutoBinding binding = null; @Override public void resultChanged(LookupEvent le) { if (bindingGroup != null && binding != null) { bindingGroup.getBinding("customerNameBinding").unbind(); } if (!result.allInstances().isEmpty()){ Customer c = result.allInstances().iterator().next(); // put the customer into the lookup of this topcomponent, // so that it will remain in the lookup when focus changes // to this topcomponent: ic.set(Collections.singleton(c), null); bindingGroup = new BindingGroup(); binding = Bindings.createAutoBinding( // a two-way binding, i.e., a change in // one will cause a change in the other: AutoBinding.UpdateStrategy.READ_WRITE, // source: c, BeanProperty.create("name"), // target: jTextField1, BeanProperty.create("text"), // binding name: "customerNameBinding"); bindingGroup.addBinding(binding); bindingGroup.bind(); } } I must say that this solution is preferable over what I've been doing prior to getting to this solution: I would get the customer from the resultChanged, set a class-level field to that customer, add a document listener (or action listener, which is invoked when Enter is pressed) on the text field and, when a change is detected, set the new value on the customer. All that is not needed with the above bit of code. Then, in the node, make sure to use canRename, setName, and getDisplayName, so that when the user presses F2 on a node, the display name can be changed. In other words, when the user types something different in the node display name after pressing F2, the underlying customer name is changed, which happens, in the first place, because the customer name is bound to the text field's value, so that the text field's value will also change once enter is pressed on the changed node display name. Also set a PropertyChangeListener on the node (which implies you need to add property change support to the customer object), so that when the customer object changes (which happens, in the second place, via a change in the value of the text field, as defined in the binding defined above), the node display name is updated. In other words, there's still a bit of plumbing you need to include. But less than before and the nasty class-level field for storing the customer in the TC2TopComponent is no longer needed. And a listener on the text field, with a property change listener implented on the TC2TopComponent, isn't needed either. On the other hand, it's more code than I was using before and I've had to include the BeansBinding JAR, which adds a bit of overhead to my application, without much additional functionality over what I was doing originally. I'd lean towards not doing things this way. Seems quite expensive for essentially replacing a listener on a text field and a property change listener implemented on the TC2TopComponent for being notified of changes to the customer so that the text field can be updated. On the other other hand, it's kind of nice that all this listening-related code is centralized in one place now. So, here's a nice improvement over the above. 
Instead of listening for a customer, listen for a node, from which the customer can be obtained. Then, bind the node display name to the text field's value, so that when the user types in the text field, the node display name is updated. That saves you from having to listen in the node for changes to the customer's name. In addition to that binding, keep the previous binding, because the previous binding connects the customer name to the text field, so that when the customer display name is changed via F2 on the node, the text field will be updated. private BindingGroup bindingGroup = null; private AutoBinding nodeUpdateBinding; private AutoBinding textFieldUpdateBinding; @Override public void resultChanged(LookupEvent le) { if (bindingGroup != null && textFieldUpdateBinding != null) { bindingGroup.getBinding("textFieldUpdateBinding").unbind(); } if (bindingGroup != null && nodeUpdateBinding != null) { bindingGroup.getBinding("nodeUpdateBinding").unbind(); } if (!result.allInstances().isEmpty()) { Node n = result.allInstances().iterator().next(); Customer c = n.getLookup().lookup(Customer.class); ic.set(Collections.singleton(n), null); bindingGroup = new BindingGroup(); nodeUpdateBinding = Bindings.createAutoBinding( AutoBinding.UpdateStrategy.READ_WRITE, n, BeanProperty.create("name"), jTextField1, BeanProperty.create("text"), "nodeUpdateBinding"); bindingGroup.addBinding(nodeUpdateBinding); textFieldUpdateBinding = Bindings.createAutoBinding( AutoBinding.UpdateStrategy.READ_WRITE, c, BeanProperty.create("name"), jTextField1, BeanProperty.create("text"), "textFieldUpdateBinding"); bindingGroup.addBinding(textFieldUpdateBinding); bindingGroup.bind(); } } Now my node has no property change listener, while the customer has no property change support. As in the first bit of code, the text field doesn't have a listener either. All that listening is taken care of by the BeansBinding code.  Thanks to Toni for help with this, though he can't be blamed for anything that is wrong with it, only thanked for anything that is right with it. 

    Read the article

  • ASP.NET MVC 3: RouteExistingFiles = true seems to have no effect

    - by Oleksandr Kaplun
    I'm trying to understand how RouteExistingFiles works. So I've created a new MVC 3 internet project (MVC 4 behaves the same way) and put a HTMLPage.html file to the Content folder of my project. Now I went to the Global.Asax file and edited the RegisterRoutes function so it looks like this: public static void RegisterRoutes(RouteCollection routes) { routes.RouteExistingFiles = true; //Look for routes before looking if a static file exists routes.MapRoute( "Default", // Route name "{controller}/{action}/{id}", // URL with parameters new {controller = "Home", action = "Index", id = UrlParameter.Optional} // Parameter defaults ); } Now it should give me an error when I'm requesting a localhost:XXXX/Content/HTMLPage.html since there's no "Content" controller and the request definitely hits the default pattern. But instead I'm seeing my HTMLPage. What am I doing wrong here? Update: I think I'll have to give up. Even if I'm adding a route like this one: routes.MapRoute("", "Content/{*anything}", new {controller = "Home", action = "Index"}); it still shows me the content of the HTMLPage. When I request a url like ~/Content/HTMLPage I'm getting the Index page as expected, but when I add a file extenstion like .html or .txt the content is shown (or a 404 error if the file does not exist). If anyone can check this in VS2012 please let me know what result you're getting. Thank you.

    Read the article

  • ASP.NET MVC Routing - Redirect to aspx?

    - by bmoeskau
    This seems like it should be easy, but for some reason I'm having no luck. I'm migrating an existing WebForms app to MVC, so I need to keep the root of the site pointing to my existing aspx pages for now and only apply routing to named routes. Here's what I have: public static void RegisterRoutes(RouteCollection routes) { routes.IgnoreRoute("{resource}.axd/{*pathInfo}"); routes.IgnoreRoute("{resource}.aspx/{*pathInfo}"); RouteTable.Routes.Add( "Root", new Route("", new DefaultRouteHandler()) ); routes.MapRoute( "Default", // Route name "{controller}/{action}/{id}", // URL with parameters new { controller = "Calendar2", action = "Index", id = "" } // Parameter defaults ); } So aspx pages should be ignored, and the default root url should be handled by this handler: public class DefaultRouteHandler : IRouteHandler { public IHttpHandler GetHttpHandler(RequestContext requestContext) { return System.Web.Compilation.BuildManager.CreateInstanceFromVirtualPath( "~/Dashboard/default.aspx", typeof(Page)) as IHttpHandler; } } This seems to work OK, but the resulting YPOD gives me this: Multiple controls with the same ID '__Page' were found. Trace requires that controls have unique IDs. which seems to imply that the page is somehow getting rendered twice. If I simply type in the url to my dashboard page directly it works fine (no routing, no error). I have no idea why the handler code would be doing anything differently. Bottom line -- I'd like to simply redirect the root url path to an aspx of my choosing -- can anyone shed some light?

    Read the article

  • Passenger, Nginx, and Capistrano - Passenger not launching Rails app at all

    - by Throlkim
    Essentially, my route is working perfectly, Passenger seems to be loading - all is hunky-dory. Except that nothing Railsy happens. Here's my Nginx log from starting the server to the first request (ignore the different domain/route - it's because I haven't moved the new domain over yet, and it's returning a 403 error because there's no index file in the public folder): [ pid=24559 file=ext/nginx/HelperServer.cpp:826 time=2009-11-10 00:49:13.227 ]: Passenger helper server started on PID 24559 [ pid=24559 file=ext/nginx/HelperServer.cpp:831 time=2009-11-10 00:49:13.227 ]: Password received. 2009/11/10 00:49:53 [error] 24578#0: *1 directory index of "/var/www/***/current/public/" is forbidden, client: 188.221.195.27, server: ***, request: "GET / HTTP/1.1", host: "***" 2009/11/10 00:49:54 [error] 24578#0: *1 open() "/var/www/***/current/public/favicon.ico" failed (2: No such file or directory), client: 188.221.195.27, server: ***, request: "GET /favicon.ico HTTP/1.1", host: "***", referrer: "***" Someone on the RubyOnRails IRC channel suggested that it might be a webserver permissions problem. I had a suspicion that it might be a filesystem permission problem, but then Nginx runs as www-data and Passenger as root. I can load static files from within the public directory fine, but no Rails application is being launched. Does anyone have an idea? My head is gradually melting away figuring this one out! Edit: Here's the vhost file: server { listen 80; server_name ***; passenger_enabled on; location / { root /var/www/***/current/public; index index.html index.htm; } #error_page 404 /404.html; # redirect server error pages to the static page /50x.html # error_page 500 502 503 504 /50x.html; location = /50x.html { root html; } }

    Read the article

  • MVC controller is being called twice

    - by rboarman
    Hello, I have a controller that is being called twice from an ActionLink call. My home page has a link, that when clicked calls the Index method on the Play controller. An id of 100 is passed into the method. I think this is what is causing the issue. More on this below. Here are some code snippets: Home page: <%= Html.ActionLink(“Click Me”, "Index", "Play", new { id = 100 }, null) %> Play Controller: public ActionResult Index(int? id) { var settings = new Dictionary<string, string>(); settings.Add("Id", id.ToString()); ViewData["InitParams"] = settings.ToInitParams(); return View(); } Play view: <%@ Page Title="" Language="C#" Inherits="System.Web.Mvc.ViewPage" %> (html <head> omitted for brevity) <body> <form id="form1" runat="server" style="height:100%"> Hello </form> </body> If I get rid of the parameter to the Index method, everything is fine. If I leave the parameter in place, then the Index method is called with 100 as the id. After returning the View, the method is called a second time with a parameter of null. I can’t seem to figure out what is triggering the second call. My first thought was to add a specific route like this: routes.MapRoute( "Play", // Route name "Play/{id}", // URL with parameters new {controller = "Play", action = "Index"} // Parameter defaults ); This had no effect other than making a prettier looking link. I am not sure where to go from here. Thank you in advance. Rick

    Read the article

  • Creating ViewResults outside of Controllers in ASP.NET MVC

    - by Craig Walker
    Several of my controller actions have a standard set of failure-handling behavior. In general, I want to: Load an object based on the Route Data (IDs and the like) If the Route Data does not point to a valid object (ex: through URL hacking) then inform the user of the problem and return an HTTP 404 Not Found Validate that the current user has the proper permissions on the object If the user doesn't have permission, inform the user of the problem and return an HTTP 403 Forbidden If the above is successful, then do something with that object that's action-specific (ie: render it in a view). These steps are so standardized that I want to have reusable code to implement the behavior. My current plan of attack was to have a helper method to do something like this: public static ActionResult HandleMyObject(this Controller controller, Func<MyObject,ActionResult> onSuccess) { var myObject = MyObject.LoadFrom(controller.RouteData); if ( myObject == null ) return NotFound(controller); if ( myObject.IsNotAllowed(controller.User)) return NotAllowed(controller); return onSuccess(myObject); } // NotAllowed() is pretty much the same as this public static ViewResult NotFound(Controller controller){ controller.HttpContext.Response.StatusCode = 404; // NotFound.aspx is a shared view. ViewResult result = controller.View("NotFound"); return result; } The problem here is that Controller.View() is a protected method and so is not accessible from a helper. I've looked at creating a new ViewResult instance explicitly, but there's enough properties to set that I'm wary about doing so without knowing the pitfalls first. What's the best way to create a ViewResult from outside a particular Controller?
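
    Since the question is about what Controller.View() actually wires up, here is a minimal sketch of doing the same thing by hand. It assumes the target view can be found by the globally registered view engines (the default ViewEngineCollection fallback) and is one possible shape of an answer rather than an established best practice:

        // Roughly what controller.View("NotFound") would have produced, built without
        // access to the protected View() method. ViewData/TempData are carried over so
        // the shared NotFound view still sees the controller's state.
        public static ViewResult NotFound(Controller controller)
        {
            controller.HttpContext.Response.StatusCode = 404;
            return new ViewResult
            {
                ViewName = "NotFound",
                ViewData = controller.ViewData,
                TempData = controller.TempData
            };
        }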

    Read the article

  • Creating an Ajax.ActionLink that avoids all caching issues

    - by Richard Ev
    I am using an Ajax.ActionLink to display a partial view that shows a settings dialog (the modality of which is arranged using jQuery UI dialog). The issue I am running into is around browser caching. It is important that the user is never shown a cached settings dialog. In an attempt to achieve this I have written the following extension method that has the same method signature as the ActionLink method overload that I am using:

        /// <summary>
        /// Defines an AJAX ActionLink that effectively bypasses browser caching issues
        /// by adding an additional route value that contains a unique (actually DateTime.Now.Ticks) value.
        /// </summary>
        public static MvcHtmlString NonCachingActionLink(this AjaxHelper helper,
            string linkText, string actionName, string controllerName,
            System.Web.Routing.RouteValueDictionary routeValues, AjaxOptions ajaxOptions)
        {
            routeValues.Add("rnd", DateTime.Now.Ticks);
            return helper.ActionLink(linkText, actionName, controllerName, routeValues, ajaxOptions);
        }

    This works well between browser sessions (as the rnd route value gets re-calculated on page load), but not if the user is on the page, makes settings changes, saves them (which is done with another ajax call) and then re-displays the settings dialog. My next step is to look into creating my own ActionLink equivalent that re-calculates a random query string component as part of the onclick JavaScript event handler. Thoughts please.
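
    A server-side alternative worth considering (a sketch, not the approach described above) is to mark the action that returns the settings partial view as non-cacheable, so no cache-busting route value is needed at all. Assuming a hypothetical SettingsDialog action:

        [OutputCache(NoStore = true, Duration = 0, VaryByParam = "None")]
        public ActionResult SettingsDialog()
        {
            // NoStore plus a zero duration asks the browser and any proxies not
            // to cache the response, so each Ajax call re-executes the action.
            return PartialView("SettingsDialog");
        }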

    Read the article

  • Focussing on Style Sheets and Cross Browser Compatibility.

    - by Sam
    Hello everyone, let me begin this topic by explaining my background experience with web design. I have always been more of a back-end programmer, with PHP and SQL and things, but I do have a shallow background with HTML and CSS. The problem is, I don't know it all. When it comes to designing (not back-end dirty work) I understand basic CSS properties, I understand HTML, and I can usually throw together a sloppy web page with the two and a couple bazillion DIV tags. Anyways, the problem I have always encountered is that when I design a website against a browser such as IE7 (so it looks perfect in IE7) and then look at it in IE8, IE6, or Mozilla (etc.), it gets all spacey and ugly and looks totally different from the way it should look in IE7.

    Question one: Basically, what route should I take to learn how to properly build a website? Build as in put it together with CSS and HTML standards that will make my site look the same in every browser. (Not only learning standards, but where can I learn to properly write my code?) Where is a strong free resource I can use to learn how to do these things?

    Question two: How do I properly code my website? Do I use all external style sheets to keep dynamic page design simple, or do I hard-code some things into the DIV tags on each page? What is proper?

    Oh, and if anyone has any tutorials on how to properly design a complete layout, feel free to throw it in a response somewhere. Thank you for taking the time to read my questions, and hopefully you will understand what I am trying to get at. I need to get on the right route on the design side of web programming so that I will know how to create successful websites in the future. Thank you, Sam Pardee

    Read the article

  • Kohana multi language website

    - by Sobek
    I'm trying to set up a multi-language website with Kohana v3, following this tutorial: http://kerkness.ca/wiki/doku.php?id=example_of_a_multi-language_website Routing to a controller or action such as website/controller/action seems to work, as the URL is properly redirected to website/lang/controller/action. However, this is not working for Ajax request calls; I have to manually edit the URL with the appropriate language to successfully retrieve the data. This also applies to anchors on the HTML page. In addition to this problem, the overflow parameter 'id' also doesn't work - it takes the 'lang' variable as its parameter. I have set up my default route just like in the tutorial, i.e.:

        Route::set('default', '((<lang>)(/)(<controller>)(/<action>(/<id>)))',
                array('lang' => "({$langs_abr})", 'id' => '.+'))
            ->defaults(array('lang' => $default_lang, 'controller' => 'welcome', 'action' => 'index'));

    Any help is much appreciated! Cheers

    Read the article

  • Passing a string in Html.ActionLink in MVC 2

    - by 109221793
    I have completed the MVC Music Store application, and now I am making my own modifications for practice, and also to prepare me to do similar tasks in my job. What I have done is create a view that displays the current list of users. I have done this as I want to be able to change the passwords of users should they forget them. Here is a snippet from my view:

        <% foreach (MembershipUser user in Model) { %>
            <tr>
                <td><%: Html.ActionLink(user.UserName, "changeUserPassword", "StoreManager", new { username = user.UserName }) %></td>
                <td><%: user.LastActivityDate %></td>
                <td><%: user.IsLockedOut %></td>
            </tr>
        <% } %>

    What I want to know is whether it is possible to pass the username through the ActionLink as I have done above. I have registered the following route in the Global.asax.cs file, however I am unsure if I have done it correctly:

        routes.MapRoute(
            "AdminPassword",                     // Route name
            "{controller}/{action}/{username}",  // URL with parameters
            new { controller = "StoreManager", action = "changeUserPassword", username = UrlParameter.Optional });

    Here is my GET action:

        public ActionResult changeUserPassword(string username)
        {
            ViewData["username"] = username;
            return View();
        }

    I have debugged through the code and found that ViewData["username"] never gets populated, so it looks like either I have not registered the routes properly, or I simply cannot pass the username using the ActionLink like I have. I'd be grateful if someone could point me in the right direction.
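
    Two things may be worth double-checking here (suggestions, not a confirmed diagnosis). First, the four-argument ActionLink overload treats its third argument as route values rather than a controller name, so passing an explicit htmlAttributes argument (null is fine) selects the controller-name overload. Second, custom routes only take effect if they are registered before the catch-all Default route in Global.asax.cs. A sketch of both, with a narrower URL pattern so the admin route cannot swallow other URLs:

        <%-- Trailing null picks the (linkText, actionName, controllerName, routeValues, htmlAttributes) overload. --%>
        <%: Html.ActionLink(user.UserName, "changeUserPassword", "StoreManager",
                new { username = user.UserName }, null) %>

        public static void RegisterRoutes(RouteCollection routes)
        {
            routes.IgnoreRoute("{resource}.axd/{*pathInfo}");

            // The specific route is mapped before the default one; order matters.
            routes.MapRoute(
                "AdminPassword",
                "StoreManager/changeUserPassword/{username}",
                new { controller = "StoreManager", action = "changeUserPassword",
                      username = UrlParameter.Optional });

            routes.MapRoute(
                "Default",
                "{controller}/{action}/{id}",
                new { controller = "Home", action = "Index", id = UrlParameter.Optional });
        }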

    Read the article

  • ASP.NET MVC: Html.Actionlink() generates empty link.

    - by wh0emPah
    Okay, I'm experiencing some problems with the ActionLink HTML helper. I have some complicated routing as follows:

        routes.MapRoute("Groep_Dashboard_Route",                  // Route name
            "{EventName}/{GroupID}/Dashboard",                     // URL with parameters
            new { controller = "Group", action = "Dashboard" });

        routes.MapRoute("Event_Groep_Route",                       // Route name
            "{EventName}/{GroupID}/{controller}/{action}/{id}",
            new { controller = "Home", action = "Index" });

    My problem is generating action links that match these patterns. The EventName parameter is really just there to make the link user friendly; it doesn't do anything. Now, when I try to generate a link that shows the dashboard of a certain group, like mysite.com/testevent/20/Dashboard, I use the following ActionLink:

        <%: Html.ActionLink("Show dashboard", "Group", "Dashboard",
                new { EventName_Url = "test", GroepID = item.groepID }, null) %>

    The actual HTML this produces is:

        <a href="">Show Dashboard</a>

    Please bear with me; I'm still new to ASP.NET MVC. Could someone tell me what I'm doing wrong? Help would be appreciated!
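
    An empty href from Html.ActionLink usually means no route could be satisfied by the values supplied. Two things stand out (suggestions, not a confirmed diagnosis): in this overload the action name comes before the controller name, and the route value keys have to match the tokens in the route pattern ({EventName}, {GroupID}) exactly. A sketch of what the call would look like under those assumptions:

        <%-- Action first, then controller; route value keys match the route tokens. --%>
        <%: Html.ActionLink("Show dashboard", "Dashboard", "Group",
                new { EventName = "testevent", GroupID = item.groepID }, null) %>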

    Read the article

  • Reloading Sinatra app on every request on Windows

    - by Darth
    I've set up Rack::Reload according to this thread:

        # config.ru
        require 'rubygems'
        require 'sinatra'

        set :environment, :development
        require 'app'
        run Sinatra::Application

        # app.rb
        class Sinatra::Reloader < Rack::Reloader
          def safe_load(file, mtime, stderr = $stderr)
            if file == Sinatra::Application.app_file
              ::Sinatra::Application.reset!
              stderr.puts "#{self.class}: reseting routes"
            end
            super
          end
        end

        configure(:development) { use Sinatra::Reloader }

        get '/' do
          'foo'
        end

    Running with thin via thin start -R config.ru, but it only reloads newly added routes. When I change an already existing route, it still runs the old code. When I add a new route, it correctly reloads it, so it is accessible, but it doesn't reload anything else. For example, if I changed the routes to:

        get '/' do
          'bar'
        end

        get '/foo' do
          'baz'
        end

    then / would still serve foo, even though it has changed, but /foo would correctly reload and serve baz. Is this normal behavior, or am I missing something? I'd expect the whole source file to be reloaded. The only way around it I can think of right now is restarting the whole webserver when the filesystem changes. I'm running on Windows Vista x64, so I can't use shotgun because of fork().

    Read the article

  • Installing Grails with Apache Camel plugin

    - by Abdullah Jibaly
    I'm having trouble getting the Apache Camel plugin to run in grails-1.1.1. Here are the steps I took:

        $ grails create-app camelapp
        Welcome to Grails 1.1.1 - http://grails.org/
        ...
        $ cd camelapp
        $ grails run-app
        ...
        Running Grails application..
        Server running. Browse to http://localhost:8080/camelapp
        $ grails install-plugin camel
        ...
        Camel Route directory was created.
        Plugin camel-0.2 installed
        Plug-in provides the following new scripts:
        ------------------------------------------
        grails create-route
        $ grails run-app
        ...
        [groovyc] org.codehaus.groovy.control.MultipleCompilationErrorsException: startup failed, Compile error during compilation with javac.
        [groovyc] /Users/abdullah/.grails/1.1.1/projects/camelapp/plugins/camel-0.2/src/java/org/ix/grails/plugins/camel/ClosureProcessor.java:22: method does not override a method from its superclass
        [groovyc] @Override
        [groovyc] ^
        ...
        : Compilation Failed
            at org.codehaus.groovy.ant.Groovyc.compile(Groovyc.java:807)
            at org.codehaus.groovy.ant.Groovyc.execute(Groovyc.java:540)
            at org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:105)
            at org.apache.tools.ant.Task.perform(Task.java:348)
            at _GrailsCompile_groovy$_run_closure3_closure7.doCall(_GrailsCompile_groovy:102)
            at _GrailsCompile_groovy$_run_closure3_closure7.doCall(_GrailsCompile_groovy)
            at _GrailsSettings_groovy$_run_closure10.doCall(_GrailsSettings_groovy:274)
            at _GrailsSettings_groovy$_run_closure10.call(_GrailsSettings_groovy)
            at _GrailsCompile_groovy$_run_closure3.doCall(_GrailsCompile_groovy:89)
            at _GrailsCompile_groovy$_run_closure2.doCall(_GrailsCompile_groovy:55)
            at _GrailsPackage_groovy$_run_closure2_closure9.doCall(_GrailsPackage_groovy:79)
            at _GrailsPackage_groovy$_run_closure2_closure9.doCall(_GrailsPackage_groovy)
            at _GrailsSettings_groovy$_run_closure10.doCall(_GrailsSettings_groovy:274)
            at _GrailsSettings_groovy$_run_closure10.call(_GrailsSettings_groovy)
            at _GrailsPackage_groovy$_run_closure2.doCall(_GrailsPackage_groovy:78)
            at RunApp$_run_closure1.doCall(RunApp.groovy:28)
            at gant.Gant$_dispatch_closure4.doCall(Gant.groovy:324)
            at gant.Gant$_dispatch_closure6.doCall(Gant.groovy:334)
            at gant.Gant$_dispatch_closure6.doCall(Gant.groovy)
            at gant.Gant.withBuildListeners(Gant.groovy:344)
            at gant.Gant.this$2$withBuildListeners(Gant.groovy)
            at gant.Gant$this$2$withBuildListeners.callCurrent(Unknown Source)
            at gant.Gant.dispatch(Gant.groovy:334)
            at gant.Gant.this$2$dispatch(Gant.groovy)
            at gant.Gant.invokeMethod(Gant.groovy)
            at gant.Gant.processTargets(Gant.groovy:495)
            at gant.Gant.processTargets(Gant.groovy:480)
        Caused by: org.codehaus.groovy.control.MultipleCompilationErrorsException: startup failed, Compile error during compilation with javac.
        /Users/abdullah/.grails/1.1.1/projects/camelapp/plugins/camel-0.2/src/java/org/ix/grails/plugins/camel/ClosureProcessor.java:22: method does not override a method from its superclass
        @Override
        ^
        ...
        Compilation error: Compilation Failed
        $ java -version
        java version "1.6.0_07"
        Java(TM) SE Runtime Environment (build 1.6.0_07-b06-153)
        Java HotSpot(TM) 64-Bit Server VM (build 1.6.0_07-b06-57, mixed mode)

    Read the article

  • WCF, ASMX Basic HTTP binding and IIS

    - by Brennan Mann
    Hello, I have been doing a lot of work with WCF "self" hosted applications. I was recently asked to write a web service where the calling client is a Linux-based program named "WGET". I would like to use WCF instead of a traditional ASMX web service. The web service returns a standard XML response. I am not sure of the underlying details between the two technologies, but I know WCF is the proper route. I created a WCF service to be hosted in IIS (using basicHttpBinding).

    1.) Did classic ASMX web services (standard HTTP POST/GET) use SOAP to return responses? I created a class from XSD for the web service response. What is really going on behind the scenes? Are there just special XML HTTP headers that know how to handle the response? Is the response not wrapped in SOAP? The traditional ASMX web service worked perfectly with the class I generated using the .NET "XSD" program.

    2.) I want to use WCF for this service. Will using basicHttpBinding work? As I have read, that is the correct binding to use for ASMX clients. Does this use SOAP, standard HTTP headers, or something else?

    3.) This is a dumb question because I have not done a lot of web service programming. I noticed that the ASMX default landing page had examples for responses and code to invoke the functionality. When I create the same service using WCF, I had to create a client application to perform these tasks. Is there a way to expose the WCF endpoint like a classic ASMX service, or is the WSDL the only route?

    As always, I really appreciate the feedback. Thanks, Brennan
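
    On question 3, enabling the metadata behavior is what gives an IIS-hosted WCF service an ASMX-style landing page with a link to the WSDL. A minimal web.config sketch (service and contract names are placeholders, not from the question):

        <system.serviceModel>
          <services>
            <service name="MyApp.ResponseService" behaviorConfiguration="MetadataBehavior">
              <!-- basicHttpBinding speaks SOAP 1.1 over plain HTTP, the same wire
                   format classic ASMX clients expect. -->
              <endpoint address="" binding="basicHttpBinding" contract="MyApp.IResponseService" />
              <endpoint address="mex" binding="mexHttpBinding" contract="IMetadataExchange" />
            </service>
          </services>
          <behaviors>
            <serviceBehaviors>
              <behavior name="MetadataBehavior">
                <serviceMetadata httpGetEnabled="true" />
              </behavior>
            </serviceBehaviors>
          </behaviors>
        </system.serviceModel>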

    Read the article

  • When I use a form to call a method on the controller, I want the page to refresh, and the url to sho

    - by MedicineMan
    Using ASP.NET MVC, I have set up a web page at localhost/Dinner/100 to show the details for the dinner with ID = 100. On the page there is a dropdown that shows Dinner 1, Dinner 2, etc. The user should select the dinner of interest (Dinner 2, ID = 102) from the form and press submit. The page should then refresh, show the URL localhost/Dinner/102, and display the details of Dinner 2. My code is working except for the URL: it still shows localhost/Dinner/100 even though the page is correctly displaying the details of Dinner 2 (ID = 102). My controller method is pretty simple:

        public ActionResult Index(string id)
        {
            int Id = 0;
            if (!IsValidFacilityId(id) || !int.TryParse(id, out Id))
            {
                return Redirect("/");
            }
            return View(CreateViewModel(Id));
        }

    Can you help me figure out how to get this all working? P.S. I did create a custom route for the method:

        routes.MapRoute(
            "DinnerDefault",                                           // Route name
            "Dinner/{id}",                                             // URL with parameters
            new { controller = "Dinner", action = "Index", id = "" }   // Parameter defaults
        );
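
    A common way to make the address bar follow the selection is the Post/Redirect/Get pattern: the form posts the chosen id to an action that redirects to the GET action, so the browser issues a fresh request for localhost/Dinner/102. A sketch under the assumption that the form posts to a separate action (names here are hypothetical):

        [HttpPost]
        public ActionResult SelectDinner(string selectedDinnerId)
        {
            // Redirecting causes the browser to GET /Dinner/{selectedDinnerId},
            // so the URL in the address bar matches the dinner being displayed.
            return RedirectToAction("Index", new { id = selectedDinnerId });
        }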

    Read the article
