Search Results

Search found 12038 results on 482 pages for 'controller actions'.


  • Ubuntu 13.10, kernel 3.11 blank screen issue with hybrid graphics

    - by Lagerbaer
    On my HP Envy, which has both an Intel on-chip graphics card and an Nvidia GeForce, lshw reports:

        *-display UNCLAIMED
             description: 3D controller
             product: GK208M [GeForce GT 740M]
             vendor: NVIDIA Corporation
             physical id: 0
             bus info: pci@0000:01:00.0
             version: a1
             width: 64 bits
             clock: 33MHz
             capabilities: cap_list
             configuration: latency=0
             resources: memory:d2000000-d2ffffff memory:a0000000-afffffff memory:b0000000-b1ffffff ioport:5000(size=128) memory:b2000000-b207ffff
        *-display
             description: VGA compatible controller
             product: 4th Gen Core Processor Integrated Graphics Controller
             vendor: Intel Corporation
             physical id: 2
             bus info: pci@0000:00:02.0
             version: 06
             width: 64 bits
             clock: 33MHz
             capabilities: vga_controller bus_master cap_list rom
             configuration: driver=i915 latency=0
             resources: irq:46 memory:d3000000-d33fffff memory:c0000000-cfffffff ioport:6000(size=64)

    I have trouble with all newer kernels. I basically had to install 12.04 LTS and use its 3.5 kernel family to get the system to boot. The 3.8 kernel from 12.10 and the newest 3.11 from Ubuntu 13.10 leave me with a black screen on boot. On one occasion I did hear the "log in" sound, but the screen did not display anything. I have purged all Nvidia drivers, so I assume it should just use the Intel driver, but apparently this is all messed up with newer kernel versions.

    This is different from the other "Nvidia boots into blank screen" bug in that I don't rely solely on an Nvidia card. Surely the Intel on-chip card should be supported and give me something other than a blank screen? Again, it only works with kernel version 3.5.0-41-generic, not with the 3.11.0-12 one that ships with Ubuntu 13.10.

    When I go into the GRUB menu and change the boot options from 'quiet splash' to 'nomodeset', I am able to boot the system, but then I don't get any graphics, and 'sudo service lightdm start' doesn't succeed (I get 100% CPU for apport, which doesn't do anything either, so I kill it). Help, I'm all out of ideas.

    EDIT: Let me add that I'm using the EFI boot system and have a dual-boot installation with Windows 8.
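    Since 'nomodeset' at least gets the system to boot, a stopgap (not a fix) is to make that option persistent until the kernel/driver issue is resolved. A minimal sketch of the usual GRUB procedure; the exact option string is an assumption and may need adjusting for this machine:

        # Edit the default GRUB configuration
        sudo nano /etc/default/grub
        # change: GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"
        # to:     GRUB_CMDLINE_LINUX_DEFAULT="nomodeset"
        sudo update-grub    # regenerate /boot/grub/grub.cfg
        sudo reboot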


  • Wireless keeps disabling or stays disconnected (Realtek RTL8191SEvB)

    - by jindrichm
    I have a Realtek RTL8191SEvB wireless card on Ubuntu 10.10:

        $ lspci -v | grep Network
        03:00.0 Network controller: Realtek Semiconductor Co., Ltd. RTL8191SEvB Wireless LAN Controller (rev 10)

    When I load its driver, the Network Manager sometimes briefly shows a list of available networks, but the card keeps disabling itself or stays disconnected. So I can't connect to any wi-fi network (which results in frustration). The driver is loaded:

        $ lsmod
        Module                 Size  Used by
        r8192se_pci          509932  0

    Looks normal:

        $ sudo lshw -C network
        *-network
             description: Wireless interface
             product: RTL8191SEvB Wireless LAN Controller
             vendor: Realtek Semiconductor Co., Ltd.
             physical id: 0
             bus info: pci@0000:03:00.0
             logical name: wlan0
             version: 10
             serial: 1c:65:9d:60:c7:7a
             width: 32 bits
             clock: 33MHz
             capabilities: pm msi pciexpress bus_master cap_list ethernet physical wireless
             configuration: broadcast=yes driver=rtl819xSE driverversion=0019.1207.2010 firmware=63 latency=0 link=no multicast=yes wireless=802.11bgn
             resources: irq:17 ioport:2000(size=256) memory:f0500000-f0503fff

    Configured:

        $ sudo iwconfig wlan0
        wlan0     802.11bgn  Nickname:"rtl8191SEVA2"
                  Mode:Managed  Frequency=2.412 GHz  Access Point: Not-Associated
                  Bit Rate:130 Mb/s   Retry:on   RTS thr:off   Fragment thr:off
                  Encryption key:off
                  Power Management:off
                  Link Quality=10/100  Signal level=0 dBm  Noise level=-100 dBm
                  Rx invalid nwid:0  Rx invalid crypt:0  Rx invalid frag:0
                  Tx excessive retries:0  Invalid misc:0   Missed beacon:0

    Is not blocked:

        $ rfkill list all
        0: tpacpi_bluetooth_sw: Bluetooth
                Soft blocked: no
                Hard blocked: yes

    However, something's happening with it:

        $ dmesg
        [ 6485.948668] InitializeAdapter8190(): ==++==> Turn off RF for RfOffReason(1073741824) ----------
        [ 6486.062666] rtl8192_SetWirelessMode(), wireless_mode:10, bEnableHT = 1
        [ 6486.062671] InitializeAdapter8192SE(): Set MRC settings on as default!!
        [ 6486.062675] HW_VAR_MRC: Turn on 1T1R MRC!
        [ 6486.064091] ADDRCONF(NETDEV_UP): wlan0: link is not ready
        [ 6486.248761] rtl8192_SetWirelessMode(), wireless_mode:10, bEnableHT = 1
        [ 6486.248771] InitializeAdapter8192SE(): Set MRC settings on as default!!
        [ 6486.248776] HW_VAR_MRC: Turn on 1T1R MRC!
        [ 6486.580083] GPIOChangeRF - HW Radio OFF
        [ 6486.610085] ============>sync_scan_hurryup out
        [ 6486.623814] ================>r8192_wx_set_scan(): hwradio off
        [ 6486.830484] =========>r8192_wx_set_essid():hw radio off,or Rf state is eRfOff, return

    So, does anyone know where the problem might be?
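    The dmesg lines above ("GPIOChangeRF - HW Radio OFF", "hw radio off") report the radio as switched off in hardware, so one check worth illustrating is the rfkill state of the wireless device itself; the listing above only shows a Bluetooth switch, and it is hard-blocked. A sketch; note that a hard block can only be cleared with the physical wireless switch or function key, not from software:

        rfkill list all                  # look for a wlan entry and its blocked flags
        sudo rfkill unblock all          # clears soft blocks only
        sudo modprobe -r r8192se_pci     # reload the driver afterwards...
        sudo modprobe r8192se_pci        # ...so it re-reads the radio state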


  • Nvidia GeForce GT-520M-CN on Intel DH61WW, Ubuntu 12.04

    - by j goseeped
    Hi people, I hope you can help a little bit; I appreciate your time. I have this desktop: i7 2600, 8GB DDR3 RAM, Intel DH61WW board, Nvidia GeForce GT520-CN 2GB DDR3. I just installed Ubuntu 12.04 64-bit, kernel 3.2.0-23-generic. I want to set up two Samsung 22" LED monitors and get my video card started.

    1) I downloaded and installed the Nvidia driver 295.59 (and also tried 302.17):

        sudo apt-get update && sudo apt-get upgrade
        sudo apt-get install build-essential linux-headers-$(uname -r)
        sudo apt-get remove --purge nvidia*
        sudo apt-get remove --purge xserver-xorg-video-nouveau

    Then I blacklisted the conflicting modules in /etc/modprobe.d/blacklist.conf:

        blacklist vga16fb
        blacklist nouveau
        blacklist rivafb
        blacklist nvidiafb
        blacklist rivatv

    and ran:

        sh NVIDIA.run
        sudo service lightdm start
        sudo reboot
        nvidia-xconfig

    2) After rebooting I get 800x600, and nvidia-settings says:

        You do not appear to be using the NVIDIA X driver. Please edit your X configuration file (just run nvidia-xconfig as root), and restart the X server.

    3) I changed xorg.conf a little to set up a resolution that works properly.

    4) I don't get any image on the monitor, and I don't have any options in Nvidia X Server Settings.

        lspci | grep VGA
        00:02.0 VGA compatible controller: Intel Corporation 2nd Generation Core Processor Family Integrated Graphics Controller (rev 09)
        01:00.0 VGA compatible controller: NVIDIA Corporation GF119 [GeForce GT 520] (rev a1)

        egrep -i 'glx|nvidia' /var/log/Xorg.0.log
        [    12.005] (II) LoadModule: "glx"
        [    12.005] (II) Loading /usr/lib/xorg/modules/extensions/libglx.so
        [    12.575] (II) Module glx: vendor="NVIDIA Corporation"
        [    12.585] (II) NVIDIA GLX Module  302.17  Tue Jun 12 16:22:45 PDT 2012
        [    12.585] (II) Loading extension GLX
        [    13.037] (EE) Failed to initialize GLX extension (Compatible NVIDIA X driver not found)
        [    13.044] (II) config/udev: Adding input device HDA NVidia HDMI/DP,pcm=3 (/dev/input/event10)
        [    13.044] (II) config/udev: Adding input device HDA NVidia HDMI/DP,pcm=7 (/dev/input/event9)

        glxinfo | grep direct
        Xlib:  extension "GLX" missing on display ":0.0".
        Error: couldn't find RGB GLX visual or fbconfig

    Sorry, my English is not very good. Thanks, guys.
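    Since the Xorg log shows the Nvidia GLX module loading while no compatible kernel driver binds, one commonly suggested cleanup on 12.04 is to remove the manual .run installation and switch to the packaged driver. A sketch, not a guaranteed fix; the installer filename is illustrative:

        sudo sh NVIDIA-Linux-x86_64-302.17.run --uninstall   # remove the manual installer's files
        sudo apt-get install nvidia-current                  # Ubuntu-packaged Nvidia driver
        sudo nvidia-xconfig                                  # regenerate xorg.conf
        sudo reboot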


  • Do you write unit tests all the time in TDD?

    - by mcaaltuntas
    I have been designing and developing code TDD-style for a long time. What disturbs me about TDD is writing tests for code that does not contain any business logic or interesting behaviour. I know TDD is a design activity more than a testing activity, but sometimes I feel it's useless to write tests in these scenarios. For example, I have a simple scenario like "When the user clicks the check button, it should check the file's validity". For this scenario I usually start by writing a test for the presenter/controller class like the one below:

        @Test
        public void when_user_clicks_check_it_should_check_selected_file_validity() {
            MediaService service = mock(MediaService.class);
            View view = mock(View.class);
            when(view.getSelectedFile()).thenReturn("c:\\Dir\\file.avi");

            MediaController controller = new MediaController(service, view);
            controller.check();

            verify(service).check("c:\\Dir\\file.avi");
        }

    As you can see, there is no design decision or interesting behaviour to verify; I am only testing that the value from the view is passed to MediaService. I usually write these kinds of tests but don't like them. What do you do in these situations? Do you write tests every time?

    UPDATE: I have changed the test name and code after complaints. Some users said that you should write tests for trivial cases like this so that in the future someone might add interesting behaviour. But what about "Code for today, design for tomorrow."? If someone, including myself, adds more interesting code in the future, the test can be created for it then. Why should I do it now for the trivial cases?


  • What patterns book for iOS development contains this specific information? [closed]

    - by Brett Ryan
    I've read several books on iOS development and Objective-C; however, a lot of them teach how to work with interfaces and keep the model inside the view controller, i.e. a UITableViewController-based view will simply have an NSArray as its model. I'm interested in the best practices for designing the structure of an application. Specifically, I'm interested in best practices for the following:

    How to separate a model from the view controller. I think I know how to do this by simply replacing the NSArray-style example with a specific model object; however, what I do not know how to do is alert the view when the model changes. For example, in .NET I would solve this by conforming to INotifyPropertyChanged and databinding, and similarly in Java I would use PropertyChangeListener.

    How to create a service model for my domain objects. For example, I want to learn the best way to create a service for a hypothetical Widget object to manage an internal DB, and also services for communicating with remote endpoints. I need to learn the best ways to do this so that interface components can subscribe to events such as widgetUpdated. These services should be singleton classes and somehow dependency-injected into model/controller objects.

    Books I've read so far:

      Programming in Objective-C (4th Edition)
      Beginning iOS 5 Development: Exploring the iOS SDK
      The iOS 5 Developer's Cookbook: Expanded Electronic Edition: Essentials and Advanced Recipes for iOS Programmers
      Learn Objective-C on the Mac: For OS X and iOS

    I've also purchased the following updated books but not yet read them:

      The Core iOS 6 Developer's Cookbook (4th Edition)
      Programming in Objective-C (5th Edition)

    I come from a Java and C# background with 15 years of experience, and I understand that many of the ways I would do things in these languages may not fit the Objective-C way of developing applications. Could someone point me to a book on this topic containing this specific subject matter?
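    The closest Cocoa analog to INotifyPropertyChanged / PropertyChangeListener is key-value observing (KVO). A minimal pre-ARC sketch matching the era of the books listed; the Widget class, its name property, and the widget/tableView accessors are all hypothetical:

        // Hypothetical observable model
        @interface Widget : NSObject
        @property (nonatomic, copy) NSString *name;
        @end
        @implementation Widget
        @end

        // In the view controller: subscribe to model changes
        - (void)viewDidLoad {
            [super viewDidLoad];
            [self.widget addObserver:self
                          forKeyPath:@"name"
                             options:NSKeyValueObservingOptionNew
                             context:NULL];
        }

        // Called whenever the observed property changes
        - (void)observeValueForKeyPath:(NSString *)keyPath
                              ofObject:(id)object
                                change:(NSDictionary *)change
                               context:(void *)context {
            [self.tableView reloadData]; // refresh the view from the model
        }

        // Balance the observation (pre-ARC)
        - (void)dealloc {
            [self.widget removeObserver:self forKeyPath:@"name"];
            [super dealloc];
        }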


  • Kubuntu not showing eth0

    - by Laurbert515
    I just installed Kubuntu 12.04.2 and do not have an internet connection. I also cannot access the desktop and only have the terminal; when I enter xclock I get "Error: Can't open display:". I believe this is because I do not have the correct drivers and need to download them ... using the internet, which I can't access. So here's what's going on.

    ifconfig gives:

        lo        Link encap: Local Loopback
                  inet addr:127.0.0.1  Mask:255.0.0.0
                  inet6 addr: ::1/128 Scope:Host
                  UP LOOPBACK RUNNING  MTU:16436  Metric:1
                  RX packets:16 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:16 errors:0 dropped:0 overruns:0 frame:0
                  collisions:0 txqueuelen:0
                  RX bytes:1296 (1.2 KB)  TX bytes:1296 (1.2 KB)

        wlan0     Link encap: Ethernet  HWaddr 68:17:29:58:49:4a
                  UP BROADCAST MULTICAST  MTU:1500  Metric:1
                  RX packets:16 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:16 errors:0 dropped:0 overruns:0 frame:0
                  collisions:0 txqueuelen:1000
                  RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

    lspci -nnk gives:

        07:00.0 Network controller [0280]: Intel Corporation Centrino Wireless-N 2230 [8086:0887] (rev c4)
                Subsystem: Intel Corporation Centrino Wireless-N 2230 BGN [8086:4062]
                Kernel driver in use: iwlwifi
                Kernel modules: iwlwifi
        0d:00.0 Ethernet controller [0200]: Atheros Communications Inc. AR8161 Gigabit Ethernet [1969:1091] (rev 10)
                Subsystem: Toshiba America Info Systems Device [1179:fa77]

    I believe this means that it is using the ethernet connection but thinks it is a wireless connection. sudo lshw -class network gives:

        *-network
             description: Wireless interface
             product: Centrino Wireless-N 2230
             ...
        *-network UNCLAIMED
             description: Ethernet controller
             product: AR8161 Gigabit Ethernet

    I want to get the internet working so I can fix the GPU driver, etc., but I can't seem to get it working even though my ethernet cable is plugged in (and I'm sure it is working).
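    The "UNCLAIMED" flag in the lshw output means no kernel driver has bound to the AR8161 ethernet chip, which is why no eth0 appears. A sketch of how to check for and manually load a candidate module; the module name alx is an assumption (the AR8161 driver landed in kernels newer than 12.04's default, so it may simply not be present and would then need to be installed from another machine):

        lsmod | grep -i alx      # is a candidate driver already loaded?
        sudo modprobe alx        # try loading it manually
        dmesg | tail             # did the module claim the NIC?
        sudo dhclient eth0       # if eth0 appeared, request an address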


  • One of my VMs went boom using VirtualBox, and how it got fixed

    - by Enrique Lima
    I am running an HP Envy 15 with 16GB RAM and a 500GB (7200 RPM) hard drive. I had a VM configured from another environment and created the virtual machine config file in VirtualBox; everything seemed OK. I fired it up, and it was  s   l   o   w: it took close to 10 minutes to load, and about 5 more before Windows was visibly loading, and then came the BSOD. I thought, maybe, just maybe it would not happen again ... oh, was I wrong. Frustration had already hit an all-time high with this configuration and the number of issues I've had.

    How I did the troubleshooting: the best thing to do (IMO) is to step back and gather your tools to debug the situation. Tools: the VirtualBox command-line tools and Windows Debug. VirtualBox comes with a pretty good set of tools to examine and migrate VMs and handle overall VM tasks. The first step was to use VBoxManage to prevent the VM from rebooting after the error, to get enough time to really dig into the BSOD:

        VBoxManage setextradata VMNAME "VBoxInternal/PDM/HaltOnReset" 1

    Once this was done, the error reported was an "Inaccessible boot device" coming from a Stop 0x7B BSOD. The issue: my VM was configured to use a virtual SATA controller, and I thought Windows 2008 R2 would handle that fine ... wrong again! The integration tools from the other product were trying to take effect, and that threw everything off.

    The fix was almost handed to me: I edited the VM configuration, removed the SATA controller, added the virtual hard drive under an IDE controller, booted up, and voilà ... it works! I was then able to install the VirtualBox guest tools and such, but I have decided to favor "keep on working" over "let's try SATA again".
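    For reference, the same controller swap can be scripted with VBoxManage instead of the GUI. A rough sketch, assuming the VM is named "VMNAME"; the controller names and disk path are illustrative:

        # Detach the disk from the SATA controller and drop the controller
        VBoxManage storageattach "VMNAME" --storagectl "SATA" --port 0 --device 0 --medium none
        VBoxManage storagectl "VMNAME" --name "SATA" --remove

        # Add an IDE controller and reattach the same disk under it
        VBoxManage storagectl "VMNAME" --name "IDE" --add ide
        VBoxManage storageattach "VMNAME" --storagectl "IDE" --port 0 --device 0 --type hdd --medium /path/to/disk.vdi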


  • Wireless does not work on 12.10

    - by superkoop
    My primary issue is that my wireless does not work after I installed 12.10. The output of rfkill list all:

        5: hci0: Bluetooth
                Soft blocked: no
                Hard blocked: no

    The output of lshw -class network:

        *-network
             description: Ethernet interface
             product: 88E8040 PCI-E Fast Ethernet Controller
             vendor: Marvell Technology Group Ltd.
             physical id: 0
             bus info: pci@0000:09:00.0
             logical name: eth0
             version: 12
             serial: 00:21:9b:d6:46:51
             size: 100Mbit/s
             capacity: 100Mbit/s
             width: 64 bits
             clock: 33MHz
             capabilities: pm msi pciexpress bus_master cap_list ethernet physical tp 10bt 10bt-fd 100bt 100bt-fd autonegotiation
             configuration: autonegotiation=on broadcast=yes driver=sky2 driverversion=1.30 duplex=full ip=192.168.1.102 latency=0 link=yes multicast=yes port=twisted pair speed=100Mbit/s
             resources: irq:44 memory:fe8fc000-fe8fffff ioport:de00(size=256)
        *-network
             description: Network controller
             product: BCM4312 802.11b/g LP-PHY
             vendor: Broadcom Corporation
             physical id: 0
             bus info: pci@0000:0b:00.0
             version: 01
             width: 64 bits
             clock: 33MHz
             capabilities: pm msi pciexpress bus_master cap_list
             configuration: driver=b43-pci-bridge latency=0
             resources: irq:17 memory:fe7fc000-fe7fffff

    The pertinent output of lspci -nn:

        0b:00.0 Network controller [0280]: Broadcom Corporation BCM4312 802.11b/g LP-PHY [14e4:4315] (rev 01)

    Thus, it seems the solution would be to run:

        sudo apt-get install linux-headers-generic
        sudo apt-get install --reinstall bcmwl-kernel-source
        sudo modprobe wl

    However, I do not currently have access to an ethernet connection, as I am only able to use Verizon Wireless 3G internet. So, is there a way to set up ICS with a Vista machine so that I can access the internet by using the Vista machine as the host? Or is it possible to fix this by downloading the important packages in Vista and moving them to Ubuntu via USB drive?
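    On the USB-drive route: .deb packages can be fetched on any machine with a browser and installed offline with dpkg. A sketch, assuming the packages and their dependencies are all copied onto the drive (the mount path is illustrative):

        # On the Vista machine: download the .deb files for bcmwl-kernel-source,
        # dkms, and linux-headers-generic (plus dependencies) from
        # http://packages.ubuntu.com and copy them to a USB drive.

        # On the Ubuntu machine:
        cd /media/usb-drive
        sudo dpkg -i *.deb      # install everything copied over
        sudo modprobe wl        # load the Broadcom driver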


  • ASP.NET MVC 2 DropDownListFor in EditorTemplate

    - by tschreck
    I have a view model that looks like this:

        namespace AutoForm.Models
        {
            public class ProductViewModel
            {
                [UIHint("DropDownList")]
                public String Category { get; set; }

                [ScaffoldColumn(false)]
                public IEnumerable<SelectListItem> CategoryList { get; set; }
                ...
            }
        }

    It has Category and CategoryList properties. CategoryList is the source data for the Category dropdown UI element. I have an EditorTemplate that looks like this:

        <%@ Control Language="C#" Inherits="System.Web.Mvc.ViewUserControl<ProductViewModel>" %>
        <%@ Import Namespace="AutoForm.Models" %>
        <%= Html.DropDownListFor(m => m.Category, Model.CategoryList) %>

    NOTE: this EditorTemplate is strongly typed to ProductViewModel.

    My controller is populating the CategoryList property with data from a database, but I cannot get the DropDownListFor template to render a drop-down list with data from CategoryList. I know CategoryList is getting populated because I see the data when I debug and step through the controller. Here's my error message in the browser:

        Server Error in '/' Application.
        Object reference not set to an instance of an object.
        Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code.
        Exception Details: System.NullReferenceException: Object reference not set to an instance of an object.
        Source Error:
            Line 2:  <%@ Import Namespace="AutoForm.Models" %>
            Line 3:
            Line 4:  <%= Html.DropDownListFor(m => m.Category, Model.CategoryList) %>
        Source File: c:\ProjectStore\AutoForm\AutoForm\Views\Shared\EditorTemplates\DropDownList.ascx    Line: 4

    Any ideas? Thanks, Tom

    As a follow-up, I noticed that ViewData.Model is null when I'm stepping through the code in the EditorTemplate. I have the EditorTemplate strongly typed to ProductViewModel, which is also the type that's passed to the view in the controller. I'm perplexed as to why ViewData.Model is null even though it's getting populated in the controller before getting passed to the view.
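    One mechanism worth noting as a sketch rather than a diagnosis: when a template is selected via [UIHint] and rendered with Html.EditorFor(m => m.Category), the template's Model is the value of the Category property (a string, possibly null), not the containing ProductViewModel, which would explain the null Model seen in the follow-up. One hedged workaround is to render the named template against the whole view model explicitly:

        <%-- In the parent view: pass the entire ProductViewModel to the named template,
             so the template's strongly typed Model really is a ProductViewModel --%>
        <%= Html.EditorFor(m => m, "DropDownList") %>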


  • NSFetchedResultsController fetch request - updating predicate and UITableView

    - by Macatomy
    In my iPhone Core Data app I have a master-detail view setup. The master view is a UITableView that lists objects of the List entity. The List entity has a to-many relationship with the Task entity (called "tasks"), and the Task entity has an inverse to-one relationship with List called "list". When a List object is selected in the master view, I want the detail view (another UITableView) to list the Task objects that correspond to that List object.

    What I've done so far is this: in the detail view controller I've declared a property for a List object:

        @property (nonatomic, retain) List *list;

    Then in the master view controller I use this table view delegate method to set the list property of the detail view controller when a list is selected:

        - (void)tableView:(UITableView *)aTableView didSelectRowAtIndexPath:(NSIndexPath *)indexPath {
            NSManagedObject *selectedObject = [[self fetchedResultsController] objectAtIndexPath:indexPath];
            detailViewController.list = (List *)selectedObject;
        }

    Then I've overridden the setter for the list property in the detail view controller like this:

        - (void)setList:(List *)newList {
            if (list != newList) {
                [list release];
                list = [newList retain];

                NSPredicate *newPredicate = [NSPredicate predicateWithFormat:@"(list == %@)", list];
                [NSFetchedResultsController deleteCacheWithName:@"Root"];
                [[[self fetchedResultsController] fetchRequest] setPredicate:newPredicate];

                NSError *error = nil;
                if (![[self fetchedResultsController] performFetch:&error]) {
                    NSLog(@"Unresolved error %@, %@", error, [error userInfo]);
                    abort();
                }
            }
        }

    What I'm doing here is setting a predicate on the fetched results to filter out the objects so that I only get the ones that belong to the selected List object. The fetchedResultsController getter for the detail view controller looks like this:

        - (NSFetchedResultsController *)fetchedResultsController {
            if (fetchedResultsController == nil) {
                NSFetchRequest *fetchRequest = [[NSFetchRequest alloc] init];
                NSEntityDescription *entity = [NSEntityDescription entityForName:@"Task"
                                                          inManagedObjectContext:managedObjectContext];
                [fetchRequest setEntity:entity];

                NSPredicate *predicate = [NSPredicate predicateWithFormat:@"FALSEPREDICATE"];
                [fetchRequest setPredicate:predicate];

                NSSortDescriptor *sortDescriptor = [[NSSortDescriptor alloc] initWithKey:@"name" ascending:YES];
                NSArray *sortDescriptors = [[NSArray alloc] initWithObjects:sortDescriptor, nil];
                [fetchRequest setSortDescriptors:sortDescriptors];

                NSFetchedResultsController *aFetchedResultsController =
                    [[NSFetchedResultsController alloc] initWithFetchRequest:fetchRequest
                                                        managedObjectContext:managedObjectContext
                                                          sectionNameKeyPath:nil
                                                                   cacheName:@"Root"];
                aFetchedResultsController.delegate = self;
                self.fetchedResultsController = aFetchedResultsController;

                [aFetchedResultsController release];
                [fetchRequest release];
                [sortDescriptor release];
                [sortDescriptors release];
            }
            return fetchedResultsController;
        }

    It's almost unchanged from the default in the Core Data project template. The change I made is to add a predicate that always returns false; the reason is that when no List is selected I don't want any items displayed in the detail view (when a list is selected, the predicate is changed in the setter for the list property). However, when I select a list item, nothing really happens: nothing in the table view changes, and it stays empty. I'm sure my logic is flawed in several places; advice is appreciated. Thanks.
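    One detail worth illustrating: performFetch: refreshes the controller's fetched objects, but it does not redraw the table by itself (the NSFetchedResultsController delegate only reports subsequent changes), so a common pattern is to reload the table view right after a successful re-fetch. A sketch, assuming the detail controller has a tableView outlet:

        if (![[self fetchedResultsController] performFetch:&error]) {
            NSLog(@"Unresolved error %@, %@", error, [error userInfo]);
            abort();
        }
        [self.tableView reloadData]; // redraw using the newly fetched Task objects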


  • Using Durandal to Create Single Page Apps

    - by Stephen.Walther
    A few days ago, I gave a talk on building Single Page Apps on the Microsoft Stack. In that talk, I recommended that people use Knockout, Sammy, and RequireJS to build their presentation layer and use the ASP.NET Web API to expose data from their server. After I gave the talk, several people contacted me and suggested that I investigate a new open-source JavaScript library named Durandal. Durandal stitches together Knockout, Sammy, and RequireJS to make it easier to use these technologies together.

    In this blog entry, I want to provide a brief walkthrough of using Durandal to create a simple Single Page App. I am going to demonstrate how you can create a simple Movies App which contains (virtual) pages for viewing a list of movies, adding new movies, and viewing movie details. The goal of this blog entry is to give you a sense of what it is like to build apps with Durandal.

    Installing Durandal

    First things first. How do you get Durandal? The GitHub project for Durandal is located here: https://github.com/BlueSpire/Durandal The Wiki, located at the GitHub project, contains all of the current documentation for Durandal. Currently, the documentation is a little sparse, but it is enough to get you started.

    Instead of downloading the Durandal source from GitHub, a better option for getting started with Durandal is to install one of the Durandal NuGet packages. I built the Movies App described in this blog entry by first creating a new ASP.NET MVC 4 Web Application with the Basic Template. Next, I executed the following command from the Package Manager Console:

        Install-Package Durandal.StarterKit

    As you can see from the screenshot of the Package Manager Console above, the Durandal Starter Kit package has several dependencies, including:

    · jQuery
    · Knockout
    · Sammy
    · Twitter Bootstrap

    The Durandal Starter Kit package includes a sample Durandal application. You can get to the Starter Kit app by navigating to the Durandal controller. Unfortunately, when I first tried to run the Starter Kit app, I got an error because the Starter Kit is hard-coded to use a particular version of jQuery which is already out of date. You can fix this issue by modifying the App_Start\DurandalBundleConfig.cs file so it is jQuery version agnostic, like this:

        bundles.Add(
            new ScriptBundle("~/scripts/vendor")
                .Include("~/Scripts/jquery-{version}.js")
                .Include("~/Scripts/knockout-{version}.js")
                .Include("~/Scripts/sammy-{version}.js")
                // .Include("~/Scripts/jquery-1.9.0.min.js")
                // .Include("~/Scripts/knockout-2.2.1.js")
                // .Include("~/Scripts/sammy-0.7.4.min.js")
                .Include("~/Scripts/bootstrap.min.js")
        );

    The recommendation is that you create a Durandal app in a folder off your project root named App. The App folder in the Starter Kit contains the following subfolders and files:

    · durandal – This folder contains the actual durandal JavaScript library.
    · viewmodels – This folder contains all of your application's view models.
    · views – This folder contains all of your application's views.
    · main.js – This file contains all of the JavaScript startup code for your app, including the client-side routing configuration.
    · main-built.js – This file contains an optimized version of your application. You need to build this file by using the RequireJS optimizer (unfortunately, before you can run the optimizer, you must first install NodeJS).

    For the purpose of this blog entry, I wanted to start from scratch when building the Movies app, so I deleted all of these files and folders except for the durandal folder, which contains the durandal library.

    Creating the ASP.NET MVC Controller and View

    A Durandal app is built using a single server-side ASP.NET MVC controller and ASP.NET MVC view. A Durandal app is a Single Page App: when you navigate between pages, you are not navigating to new pages on the server. Instead, you are loading new virtual pages into the one-and-only-one server-side view. For the Movies app, I created the following ASP.NET MVC Home controller:

        public class HomeController : Controller
        {
            public ActionResult Index()
            {
                return View();
            }
        }

    There is nothing special about the Home controller – it is as basic as it gets. Next, I created the following server-side ASP.NET view. This is the one-and-only server-side view used by the Movies app:

        @{
            Layout = null;
        }
        <!DOCTYPE html>
        <html>
        <head>
            <title>Index</title>
        </head>
        <body>
            <div id="applicationHost">
                Loading app....
            </div>
            @Scripts.Render("~/scripts/vendor")
            <script type="text/javascript" src="~/App/durandal/amd/require.js" data-main="/App/main"></script>
        </body>
        </html>

    Notice that I set the Layout property for the view to null. If you neglect to do this, then the default ASP.NET MVC layout will be applied to the view and you will get the <!DOCTYPE> and opening and closing <html> tags twice.

    Next, notice that the view contains a DIV element with the id applicationHost. This marks the area where virtual pages are loaded. When you navigate from page to page in a Durandal app, HTML page fragments are retrieved from the server and stuck in the applicationHost DIV element. Inside the applicationHost element, you can place any content which you want to display when a Durandal app is starting up. For example, you can create a fancy splash screen. I opted for simply displaying the text "Loading app...".

    Next, notice the view above includes a call to the Scripts.Render() helper. This helper renders out all of the JavaScript files required by the Durandal library, such as jQuery and Knockout. Remember to fix App_Start\DurandalBundleConfig.cs as described above, or Durandal will attempt to load an old version of jQuery, throw a JavaScript exception, and stop working.

    Your application JavaScript code is not included in the scripts rendered by the Scripts.Render helper. Your application code is loaded dynamically by RequireJS with the help of the following SCRIPT element located at the bottom of the view:

        <script type="text/javascript" src="~/App/durandal/amd/require.js" data-main="/App/main"></script>

    The data-main attribute on the SCRIPT element causes RequireJS to load your /app/main.js JavaScript file to kick off your Durandal app.

    Creating the Durandal Main.js File

    The Durandal main.js JavaScript file, located in your App folder, contains all of the code required to configure the behavior of Durandal. Here's what the main.js file looks like in the case of the Movies app:

        require.config({
            paths: { 'text': 'durandal/amd/text' }
        });

        define(function (require) {
            var app = require('durandal/app'),
                viewLocator = require('durandal/viewLocator'),
                system = require('durandal/system'),
                router = require('durandal/plugins/router');

            //>>excludeStart("build", true);
            system.debug(true);
            //>>excludeEnd("build");

            app.start().then(function () {
                //Replace 'viewmodels' in the moduleId with 'views' to locate the view.
                //Look for partial views in a 'views' folder in the root.
                viewLocator.useConvention();

                //configure routing
                router.useConvention();
                router.mapNav("movies/show");
                router.mapNav("movies/add");
                router.mapNav("movies/details/:id");

                app.adaptToDevice();

                //Show the app by setting the root view model for our application with a transition.
                app.setRoot('viewmodels/shell', 'entrance');
            });
        });

    There are three important things to notice about the main.js file above. First, notice that it contains a section which enables debugging:

        //>>excludeStart("build", true);
        system.debug(true);
        //>>excludeEnd("build");

    This code enables debugging for your Durandal app, which is very useful when things go wrong. When you call system.debug(true), Durandal writes out debugging information to your browser JavaScript console. For example, you can use the debugging information to diagnose issues with your client-side routes. (The funny-looking //> symbols around the system.debug() call are RequireJS optimizer pragmas.)

    The main.js file is also the place where you configure your client-side routes. In the case of the Movies app, the main.js file configures routes for three pages: the movies show, add, and details pages.

        //configure routing
        router.useConvention();
        router.mapNav("movies/show");
        router.mapNav("movies/add");
        router.mapNav("movies/details/:id");

    The route for movie details includes a route parameter named id. Later, we will use the id parameter to look up and display the details for the right movie.

    Finally, the main.js file above contains the following line of code:

        //Show the app by setting the root view model for our application with a transition.
        app.setRoot('viewmodels/shell', 'entrance');

    This line of code causes Durandal to load up a JavaScript file named shell.js and an HTML fragment named shell.html. I'll discuss the shell in the next section.

    Creating the Durandal Shell

    You can think of the Durandal shell as the layout or master page for a Durandal app. The shell is where you put all of the content which you want to remain constant as a user navigates from virtual page to virtual page. For example, the shell is a great place to put your website logo and navigation links. The Durandal shell is composed of two parts: a JavaScript file and an HTML file. Here's what the HTML file looks like for the Movies app:

        <h1>Movies App</h1>
        <div class="container-fluid page-host">
            <!--ko compose: {
                model: router.activeItem, //wiring the router
                afterCompose: router.afterCompose, //wiring the router
                transition: 'entrance', //use the 'entrance' transition when switching views
                cacheViews: true //telling composition to keep views in the dom, and reuse them (only a good idea with singleton view models)
            }--><!--/ko-->
        </div>

    And here is what the JavaScript file looks like:

        define(function (require) {
            var router = require('durandal/plugins/router');
            return {
                router: router,
                activate: function () {
                    return router.activate('movies/show');
                }
            };
        });

    The JavaScript file contains the view model for the shell. This view model returns the Durandal router so you can access the list of configured routes from your shell. Notice that the JavaScript file includes a function named activate(). This function loads the movies/show page as the first page in the Movies app. If you want a different default Durandal page, then pass the name of a different page to the router.activate() method.

    Creating the Movies Show Page

    Durandal pages are created out of a view model and a view. The view model contains all of the data and view logic required for the view. The view contains all of the HTML markup for rendering the view model. Let's start with the movies show page, which displays a list of movies. The view model for the show page looks like this:

        define(function (require) {
            var moviesRepository = require("repositories/moviesRepository");
            return {
                movies: ko.observable(),
                activate: function () {
                    this.movies(moviesRepository.listMovies());
                }
            };
        });

    You create a view model by defining a new RequireJS module (see http://requirejs.org). You create a RequireJS module by placing all of your JavaScript code into an anonymous function passed to the RequireJS define() method. A RequireJS module has two parts: you retrieve all of the modules which your module requires at the top of your module, and then you return the implementation of your module. The code above depends on another RequireJS module named repositories/moviesRepository and returns a JavaScript object which contains a property named movies and a method named activate.

    The activate() method is a magic method which Durandal calls whenever it activates your view model. Your view model is activated whenever you navigate to a page which uses it. In the code above, the activate() method is used to get the list of movies from the movies repository and assign the list to the view model's movies property.

    The HTML for the movies show page looks like this:

        <table>
            <thead>
                <tr>
                    <th>Title</th><th>Director</th>
                </tr>
            </thead>
            <tbody data-bind="foreach: movies">
                <tr>
                    <td data-bind="text: title"></td>
                    <td data-bind="text: director"></td>
                    <td><a data-bind="attr: { href: '#/movies/details/' + id }">Details</a></td>
                </tr>
            </tbody>
        </table>
        <a href="#/movies/add">Add Movie</a>

    Notice that this is an HTML fragment. This fragment is stuffed into the page-host DIV element in the shell.html file, which is stuffed, in turn, into the applicationHost DIV element in the server-side MVC view. The HTML markup above contains data-bind attributes used by Knockout to display the list of movies (to learn more about Knockout, visit http://knockoutjs.com). The list of movies from the view model is displayed in an HTML table.

    Notice that the page includes a link to a page for adding a new movie. The link uses the following URL, which starts with a hash: #/movies/add. Because the link starts with a hash, clicking the link does not cause a request back to the server. Instead, you navigate to the movies/add page virtually.

    Creating the Movies Add Page

    The movies add page also consists of a view model and view. The add page enables you to add a new movie to the movie database. Here's the view model for the add page:

        define(function (require) {
            var app = require('durandal/app');
            var router = require('durandal/plugins/router');
            var moviesRepository = require("repositories/moviesRepository");

            return {
                movieToAdd: {
                    title: ko.observable(),
                    director: ko.observable()
                },
                activate: function () {
                    this.movieToAdd.title("");
                    this.movieToAdd.director("");
                    this._movieAdded = false;
                },
                canDeactivate: function () {
                    if (this._movieAdded == false) {
                        return app.showMessage('Are you sure you want to leave this page?', 'Navigate', ['Yes', 'No']);
                    } else {
                        return true;
                    }
                },
                addMovie: function () {
                    // Add movie to db
                    moviesRepository.addMovie(ko.toJS(this.movieToAdd));
                    // flag new movie
                    this._movieAdded = true;
                    // return to list of movies
                    router.navigateTo("#/movies/show");
                }
            };
        });

    The view model contains one property named movieToAdd which is bound to the add movie form. The view model also has the following three methods:

    1. activate() – This method is called by Durandal when you navigate to the add movie page. The activate() method resets the add movie form by clearing out the movie title and director properties.
    2. canDeactivate() – This method is called by Durandal when you attempt to navigate away from the add movie page. If you return false, then navigation is cancelled.
    3. addMovie() – This method executes when the add movie form is submitted. This code adds the new movie to the movie repository.

    I really like the Durandal canDeactivate() method. In the code above, I use the canDeactivate() method to show a warning to a user if they navigate away from the add movie page – either by clicking the Cancel button or by hitting the browser back button – before submitting the add movie form.

    The view for the add movie page looks like this:

        <form data-bind="submit: addMovie">
            <fieldset>
                <legend>Add Movie</legend>
                <div>
                    <label>
                        Title: <input data-bind="value: movieToAdd.title" required />
                    </label>
                </div>
                <div>
                    <label>
                        Director: <input data-bind="value: movieToAdd.director" required />
                    </label>
                </div>
                <div>
                    <input type="submit" value="Add" />
                    <a href="#/movies/show">Cancel</a>
                </div>
            </fieldset>
        </form>

    I am using Knockout to bind the movieToAdd property from the view model to the INPUT elements of the HTML form. Notice that the FORM element includes a data-bind attribute which invokes the addMovie() method from the view model when the HTML form is submitted.

    Creating the Movies Details Page

    You navigate to the movies details page by clicking the Details link which appears next to each movie in the movies show page. The Details links pass the movie ids to the details page:

        #/movies/details/0
        #/movies/details/1
        #/movies/details/2

    Here's what the view model for the movies details page looks like:

        define(function (require) {
            var router = require('durandal/plugins/router');
            var moviesRepository = require("repositories/moviesRepository");

            return {
                movieToShow: {
                    title: ko.observable(),
                    director: ko.observable()
                },
                activate: function (context) {
                    // Grab movie from repository
                    var movie = moviesRepository.getMovie(context.id);
                    // Add to view model
                    this.movieToShow.title(movie.title);
                    this.movieToShow.director(movie.director);
                }
            };
        });

    Notice that the view model activate() method accepts a parameter named context. You can take advantage of the context parameter to retrieve route parameters such as the movie id. In the code above, the context.id property is used to retrieve the correct movie from the movie repository, and the movie is assigned to a property named movieToShow exposed by the view model. The movie details view displays the movieToShow property by taking advantage of Knockout bindings:

        <div>
            <h2 data-bind="text: movieToShow.title"></h2>
            directed by <span data-bind="text: movieToShow.director"></span>
        </div>

    Summary

    The goal of this blog entry was to walk through building a simple Single Page App using Durandal and to get a feel for what it is like to use this library. I really like how Durandal stitches together Knockout, Sammy, and RequireJS and establishes patterns for using these libraries to build Single Page Apps. Having a standard pattern which developers on a team can use to build new pages is super valuable. Once you get the hang of it, using Durandal to create new virtual pages is dead simple: just define a new route, view model, and view and you are done.

    I also appreciate the fact that Durandal did not attempt to re-invent the wheel and that Durandal leverages existing JavaScript libraries such as Knockout, RequireJS, and Sammy. These existing libraries are powerful, and I have already invested a considerable amount of time in learning how to use them. Durandal makes it easier to use these libraries together without losing any of their power.

    Durandal has some additional interesting features which I have not had a chance to play with yet. For example, you can use the RequireJS optimizer to combine and minify all of a Durandal app's code. Also, Durandal supports a way to create custom widgets (client-side controls) by composing widgets from a controller and view.

    You can download the code for the Movies app by clicking the following link (this is a Visual Studio 2012 project): Durandal Movie App
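    The walkthrough references a repositories/moviesRepository module but never shows it. A minimal in-memory sketch that satisfies the listMovies(), getMovie(), and addMovie() calls used above might look like the following; this is hypothetical, not the original app's code:

        // App/repositories/moviesRepository.js -- hypothetical in-memory store
        define(function (require) {
            var movies = [
                { id: 0, title: "Star Wars", director: "George Lucas" },
                { id: 1, title: "Jaws", director: "Steven Spielberg" }
            ];
            return {
                listMovies: function () { return movies; },
                getMovie: function (id) { return movies[parseInt(id, 10)]; },
                addMovie: function (movie) {
                    movie.id = movies.length; // ids match the route parameter
                    movies.push(movie);
                }
            };
        });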


  • XBRL US Conference Highlights

    - by john.orourke(at)oracle.com
    Back in early November I had an opportunity to attend the XBRL US National Conference in Philadelphia. At the event, XBRL US announced that Oracle had joined the initiative, so I had a chance to participate in a press conference and attend a number of sessions. Oracle joined XBRL US so we can stay ahead of the standard and leverage it in our products, and to help drive awareness with customers and improve adoption of XBRL.

    There were roughly 250 attendees at the event, about half of which were vendors and consultants and the rest financial reporting staff from corporate filers. Event sponsors included Ernst & Young, SWIFT and Fujitsu. There were also a number of XBRL technology and service providers exhibiting at the conference.

    On Monday, Nov. 8th, the XBRL US Steering Committee meetings and the Annual Members meeting and reception were held. At the Annual Members meeting the big news was that the current XBRL US President, Mark Bolgiano, is moving to a new position at Howard Hughes Medical Center. Campbell Pryde, who had led taxonomy development for XBRL US, is taking over as XBRL US President. Other items highlighted at the members meeting included:

      - The US GAAP XBRL taxonomy is being used by over 1500 SEC filers and has now been handed over to the FASB to maintain and enhance
      - 16 filer training events were held in 2010
      - XBRL Global Magazine was launched
      - The Corporate Actions proposal was submitted to the SEC with SWIFT in May
      - XBRL Labs for iPhone and the XBRL US Consistency Suite were launched
      - ISO 20022 Corporate Actions alignment with XBRL was achieved
      - The XBRL Credit Rating taxonomy was accepted

    Tuesday, Nov. 9th included keynotes, general sessions, an Innovation Workshop for Governments and Securities Professionals, and an opening reception. General sessions included:

    Lessons Learned from the SEC's Rollout of XBRL. More than 18,000 errors were identified in reviews of filings between June 2009 and September 2010. Most of these related to negative values being used where they shouldn't have been. Also, the SEC feels there are too many taxonomy extensions being created, mostly in the Cash Flow Statements. They emphasize using existing elements in the US GAAP taxonomy and advise filers not to create extensions to improve the visual formatting of XBRL filings.

    Investors and XBRL - Setting the Standard for Data Quality. In this panel discussion, the key learning was that CFAs, academics and the financial community are not using XBRL as expected. The issues raised include the accuracy and completeness of filings, the number of taxonomy extensions, and the limited number of tools available to help analyze XBRL data. Another big issue raised is the lack of historic results in XBRL: most analysts need 10 quarters of historic data. On the positive side, XBRL has the potential to eliminate re-keying of data and the errors that come with it, and it can improve analytic capabilities for financial analysts once more historic data is available and more companies provide detailed tagging of their filings.

    A US Roadmap for XBRL Financial Reporting. This was a panel discussion featuring Jeff Neumann (SEC), Campbell Pryde (XBRL US), and Louis Matherne (FASB). Key points included the fact that XBRL is currently used by 1500 companies, with 8000 more companies coming in 2011. XBRL for mutual fund reporting will start in 2011 for 8000 funds, and a Credit Rating taxonomy has now been submitted for review. The XBRL tagging/filing process is improving each quarter; more education is helping here. The FASB is looking at extensions to date and potential additions to the US GAAP taxonomy, while the SEC is evaluating filings for accuracy and consistency in tagging, along with tools for analyzing data. The big news is that the FASB 2011 US GAAP Taxonomy has been completed and reviewed by the SEC. The 2011 US GAAP Taxonomy supports new FASB accounting standards issued since 2009, has new taxonomy elements for certain industries (i.e. airlines), and eliminates 500 concepts (meaning they can't be used going forward but are still supported for historical comparison). The 2011 US GAAP Taxonomy will be available for use with Q2 2011 SEC filings. More information can be found on the FASB web site: http://www.fasb.org/home

    Accounting Firms and XBRL. This session covered the role of audit firms, which includes awareness and education, validation of XBRL filings, and in-house transition planning. The main advice provided was that organizations should document the XBRL mapping process, and perform peer comparisons and risk assessments on a regular basis.

    Wednesday, Nov. 10th included more keynotes, general sessions on corporate actions, and XBRL Essentials workshop training for corporate filers. The XBRL Essentials training included:

      - Getting Started
      - Once You Have the Basics
      - Detailed Footnote Tagging and Handling Tables
      - Quality Control and Trust in the XBRL Process
      - Bringing XBRL In-House: What Are the Options, What Should You Consider?
      - The US GAAP Financial Reporting Taxonomy: Overview of the 2011 Release

    The XBRL Essentials training was well attended, with about 80 people. It included a good overview of the SEC's XBRL mandate, the limited-liability issue, tagging levels, the recommended planning process, internal vs. outsourced approaches, and how to manage service providers. I learned a lot from the session on detailed tagging. This is the requirement that kicks in during a company's second year of XBRL filing with the SEC and applies to financial statements, footnotes and disclosures (it does not apply to MD&A, executive communications and other information). The review of the Linkbase model, or dimensional table structure, was very interesting and can be complex to understand. The key takeaway here is that using dimensional tables in XBRL filings can help limit the number of taxonomy extensions that are required. The slides from this session are posted on the XBRL US web site. (http://xbrl.us/events/Pages/archive.aspx)

    For me, the main summary points and takeaways from the XBRL US conference are:

      - XBRL for financial reporting has turned the corner and gone mainstream, with 1500 companies currently using it and 8000 more coming in 2011
      - The expected value is not being achieved by filers or consumers of XBRL data; this will improve when more companies are filing in XBRL, more history is available, and more software tools are available for analysis (hmm, sounds like an opportunity for Oracle)
      - XBRL is becoming the global standard for all business communications beyond just the financials, i.e. adoption for mutual funds, corporate actions and others planned for the future

    If you would like to learn more about XBRL and the various training programs, services and software tools that are available, check out the XBRL US web site and, even better, become a member. Here's a link: http://xbrl.us/Pages/default.aspx


  • Compile classfile issue in Spring 3

    - by Prajith R
    I used Spring Framework 3 for my application. Everything is OK when it is developed and built in NetBeans, but I need a custom build for the same project. The custom build is created without any issue, yet at runtime I get the following error. The error occurs while calling this method:

        @RequestMapping(value = "/security/login", method = RequestMethod.POST)
        public ModelAndView login(@RequestParam String userName,
                                  @RequestParam String password,
                                  HttpServletRequest request) {
        ......................

    There is no issue when the war is created with NetBeans, so I am sure it is a compilation issue. Do you have any experience with this? Is there an additional javac argument needed to compile the same way (NetBeans uses its own custom task for compilation)?

        type: Exception report

        message:

        description: The server encountered an internal error () that prevented it from fulfilling this request.

        exception:

        org.springframework.web.util.NestedServletException: Request processing failed; nested exception is org.springframework.web.bind.annotation.support.HandlerMethodInvocationException: Failed to invoke handler method [public org.springframework.web.servlet.ModelAndView com.mypackage.security.controller.LoginController.login(java.lang.String,java.lang.String,javax.servlet.http.HttpServletRequest)]; nested exception is java.lang.IllegalStateException: No parameter name specified for argument of type [java.lang.String], and no parameter name information found in class file either.
            org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:659)
            org.springframework.web.servlet.FrameworkServlet.doPost(FrameworkServlet.java:563)
            javax.servlet.http.HttpServlet.service(HttpServlet.java:637)
            javax.servlet.http.HttpServlet.service(HttpServlet.java:717)
            com.mypackage.security.controller.AuthFilter.doFilter(Unknown Source)
            org.springframework.web.filter.DelegatingFilterProxy.invokeDelegate(DelegatingFilterProxy.java:237)
            org.springframework.web.filter.DelegatingFilterProxy.doFilter(DelegatingFilterProxy.java:167)

        root cause:

        org.springframework.web.bind.annotation.support.HandlerMethodInvocationException: Failed to invoke handler method [public org.springframework.web.servlet.ModelAndView com.mypackage.security.controller.LoginController.login(java.lang.String,java.lang.String,javax.servlet.http.HttpServletRequest)]; nested exception is java.lang.IllegalStateException: No parameter name specified for argument of type [java.lang.String], and no parameter name information found in class file either.
            org.springframework.web.bind.annotation.support.HandlerMethodInvoker.invokeHandlerMethod(HandlerMethodInvoker.java:171)
            org.springframework.web.servlet.mvc.annotation.AnnotationMethodHandlerAdapter.invokeHandlerMethod(AnnotationMethodHandlerAdapter.java:414)
            org.springframework.web.servlet.mvc.annotation.AnnotationMethodHandlerAdapter.handle(AnnotationMethodHandlerAdapter.java:402)
            org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:771)
            org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:716)
            org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:647)
            org.springframework.web.servlet.FrameworkServlet.doPost(FrameworkServlet.java:563)
            javax.servlet.http.HttpServlet.service(HttpServlet.java:637)
            javax.servlet.http.HttpServlet.service(HttpServlet.java:717)
            com.mypackage.security.controller.AuthFilter.doFilter(Unknown Source)
            org.springframework.web.filter.DelegatingFilterProxy.invokeDelegate(DelegatingFilterProxy.java:237)
            org.springframework.web.filter.DelegatingFilterProxy.doFilter(DelegatingFilterProxy.java:167)

        root cause:

        java.lang.IllegalStateException: No parameter name specified for argument of type [java.lang.String], and no parameter name information found in class file either.
            org.springframework.web.bind.annotation.support.HandlerMethodInvoker.getRequiredParameterName(HandlerMethodInvoker.java:618)
            org.springframework.web.bind.annotation.support.HandlerMethodInvoker.resolveRequestParam(HandlerMethodInvoker.java:417)
            org.springframework.web.bind.annotation.support.HandlerMethodInvoker.resolveHandlerArguments(HandlerMethodInvoker.java:277)
            org.springframework.web.bind.annotation.support.HandlerMethodInvoker.invokeHandlerMethod(HandlerMethodInvoker.java:163)
            org.springframework.web.servlet.mvc.annotation.AnnotationMethodHandlerAdapter.invokeHandlerMethod(AnnotationMethodHandlerAdapter.java:414)
            org.springframework.web.servlet.mvc.annotation.AnnotationMethodHandlerAdapter.handle(AnnotationMethodHandlerAdapter.java:402)
            org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:771)
            org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:716)
            org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:647)
            org.springframework.web.servlet.FrameworkServlet.doPost(FrameworkServlet.java:563)
            javax.servlet.http.HttpServlet.service(HttpServlet.java:637)
            javax.servlet.http.HttpServlet.service(HttpServlet.java:717)
            com.mypackage.security.controller.AuthFilter.doFilter(Unknown Source)
            org.springframework.web.filter.DelegatingFilterProxy.invokeDelegate(DelegatingFilterProxy.java:237)
            org.springframework.web.filter.DelegatingFilterProxy.doFilter(DelegatingFilterProxy.java:167)

        note: The full stack trace of the root cause is available in the Apache Tomcat/6.0.18 logs.
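    The exception text itself points at the two usual remedies: compile with debug information so Spring can read parameter names from the class file, or name the parameters explicitly so the class file does not matter. A sketch of the explicit form:

        @RequestMapping(value = "/security/login", method = RequestMethod.POST)
        public ModelAndView login(@RequestParam("userName") String userName,
                                  @RequestParam("password") String password,
                                  HttpServletRequest request) {
            // ...
        }

    For the build-side fix, the equivalent in an Ant javac task is debug="true"; NetBeans builds pass this by default, which would explain why the NetBeans-built war works.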


  • Refactoring a Single Rails Model with large methods & long join queries trying to do everything

    - by Kelseydh
    I have a working Ruby on Rails model that I suspect is inefficient, hard to maintain, and full of unnecessary SQL join queries. I want to optimize and refactor this model (Quiz.rb) to comply with Rails best practices, but I'm not sure how I should do it.

    The Rails app is a game that has Missions with many Stages. Users complete Stages by answering Questions that have correct or incorrect Answers. When a User tries to complete a stage by answering questions, the User gets a Quiz entry with many Attempts. Each Attempt records an Answer submitted for that Question within the Stage. A user completes a stage or mission by getting every Attempt correct, and their progress is tracked by adding a new entry to the UserMission and UserStage join tables.

    All of these features work, but unfortunately the Quiz.rb model has been twisted to handle almost all of it exclusively. The callbacks began in Quiz.rb, and because I wasn't sure how to leave the Quiz model during a multi-model update, I resorted to having the @quiz instance variable, via self.some_method, do all the heavy lifting to retrieve every data value for the game's business logic, resulting in large extended join queries that "dance" all around the database schema.

    The Quiz.rb model that smells:

        class Quiz < ActiveRecord::Base
          belongs_to :user
          has_many :attempts, dependent: :destroy

          before_save :check_answer
          before_save :update_user_mission_and_stage

          accepts_nested_attributes_for :attempts,
            :reject_if => lambda { |a| a[:answer_id].blank? }, :allow_destroy => true

          # Checks every answer within each quiz, adding +1 for each correct answer
          # within a stage quiz, and -1 for each incorrect answer
          def check_answer
            stage_score = 0
            self.attempts.each do |attempt|
              if attempt.answer.correct? == true
                stage_score += 1
              elsif attempt.answer.correct == false
                stage_score -= 1
              end
            end
            stage_score
          end

          def winner
            return true
          end

          def update_user_mission_and_stage
            #######
            # Step 1: Checks if UserMission exists, finds or creates one.
            # If no UserMission for the current mission exists, creates a new UserMission
            if self.user_has_mission? == false
              @user_mission = UserMission.new(user_id: self.user.id,
                                              mission_id: self.current_stage.mission_id,
                                              available: true)
              @user_mission.save
            else
              @user_mission = self.find_user_mission
            end

            #######
            # Step 2: Checks if current UserStage exists, stops if true to prevent duplicate entry
            if self.user_has_stage?
              @user_mission.save
              return true
            else
              #######
              # Step 3: if step 2 returns false, initiates UserStage creation.
              # Checks for winner (winner actions need to be defined) if they complete
              # the last stage of the last mission for a given orientation
              if self.passed? && self.is_last_stage? && self.is_last_mission?
                create_user_stage_and_update_user_mission
                self.winner
              # NOTE: The rest are the same, but specify conditions for adding badges
              # or other actions when those conditions occur:
              # if user completes first stage of a mission
              elsif self.passed? && self.is_first_stage? && self.is_first_mission?
                create_user_stage_and_update_user_mission
                # creates user badge for finishing first stage of first mission
                self.user.add_badge(5)
                self.user.activity_logs.create(description: "granted first-stage badge",
                                               type_event: "badge", value: "first-stage")
              # If user completes the last stage of a given mission, creates a new UserMission
              elsif self.passed? && self.is_last_stage? && self.is_first_mission?
                create_user_stage_and_update_user_mission
                # creates user badge for finishing first mission
                self.user.add_badge(6)
                self.user.activity_logs.create(description: "granted first-mission badge",
                                               type_event: "badge", value: "first-mission")
              elsif self.passed?
                create_user_stage_and_update_user_mission
              else
                self.passed? == false
                return true
              end
            end
          end

          # Creates a new UserStage record in the database for a successfully passed stage quiz
          def create_user_stage_and_update_user_mission
            @nu_stage = @user_mission.user_stages.new(user_id: self.user.id,
                                                      stage_id: self.current_stage.id)
            @nu_stage.save
            @user_mission.save
            self.user.add_points(50)
          end

          # Boolean that defines passing a stage as answering every question in that stage correctly
          def passed?
            self.check_answer >= self.number_of_questions
          end

          # Returns the number of questions asked for that stage's quiz
          def number_of_questions
            self.attempts.first.answer.question.stage.questions.count
          end

          # Returns the current_stage for the quiz, routing through the 1st attempt in that quiz
          def current_stage
            self.attempts.first.answer.question.stage
          end

          # Gives back the position of the stage relative to its mission
          def stage_position
            self.attempts.first.answer.question.stage.position
          end

          # Finds the user_mission for the current user and stage if it exists
          def find_user_mission
            self.user.user_missions.find_by_mission_id(self.current_stage.mission_id)
          end

          # Returns true if the quiz was for the last stage within that mission;
          # helpful for triggering actions related to a user completing a mission
          def is_last_stage?
            self.stage_position == self.current_stage.mission.stages.last.position
          end

          # Returns true if the quiz was for the first stage within that mission
          def is_first_stage?
            self.stage_position == self.current_stage.mission.stages_ordered.first.position
          end

          # Returns true if the current user has a UserMission for the current stage's mission
          def user_has_mission?
            self.user.missions.ids.include?(self.current_stage.mission.id)
          end

          # Returns true if the current user has a UserStage for the current stage
          def user_has_stage?
            self.user.stages.include?(self.current_stage)
          end

          # Returns true if the current user is on the first mission of a given orientation
          def is_first_mission?
            self.user.missions.first.orientation.missions.by_position.first.position ==
              self.current_stage.mission.position
          end

          # Returns true if the current user is on the last mission of a given orientation
          def is_last_mission?
            self.user.missions.first.orientation.missions.by_position.last.position ==
              self.current_stage.mission.position
          end
        end

    My question: currently my Rails server takes roughly 500 ms to 1 second to process a single @quiz.save action. I am confident that the slowness here is due to sloppy code, not bad database ERD design. What does a better solution look like? And specifically:

    Should I use join queries to retrieve values like I did here, or is it better to instantiate new objects within the model instead? Or am I missing a better solution?

    How should update_user_mission_and_stage be refactored to follow best practices?

    Relevant code for reference — quizzes_controller.rb, with the controller route initiating the callback:

        class QuizzesController < ApplicationController
          before_action :find_stage_and_mission
          before_action :find_orientation
          before_action :find_question

          def show
          end

          def create
            @user = current_user
            @quiz = current_user.quizzes.new(quiz_params)
            if @quiz.save
              if @quiz.passed?
                if @mission.next_mission.nil? && @stage.next_stage.nil?
                  redirect_to root_path, notice: "Congratulations, you have finished the last mission!"
                elsif @stage.next_stage.nil?
                  redirect_to [@mission.next_mission, @mission.first_stage],
                              notice: "Correct! Time for Mission #{@mission.next_mission.position}",
                              info: "Starting next mission"
                else
                  redirect_to [@mission, @stage.next_stage], notice: "Answer Correct! You passed the stage!"
                end
              else
                redirect_to [@mission, @stage], alert: "You didn't get every question right, please try again."
              end
            else
              redirect_to [@mission, @stage],
                          alert: "Sorry. We were unable to save your answer. Please contact the administrator."
            end
            @questions = @stage.questions.all
          end

          private

          def find_stage_and_mission
            @stage = Stage.find(params[:stage_id])
            @mission = @stage.mission
          end

          def find_question
            @question = @stage.questions.find_by_id params[:id]
          end

          def quiz_params
            params.require(:quiz).permit(:user_id, :attempt_id,
              { attempts_attributes: [:id, :quiz_id, :answer_id] })
          end

          def find_orientation
            @orientation = @mission.orientation
            @missions = @orientation.missions.by_position
          end
        end

    Overview of relevant ERD database relationships:

        Mission - Stage - Question - Answer - Attempt <- Quiz <- User
        Mission - UserMission <- User
        Stage - UserStage <- User

    Other models:

        # Mission.rb
        class Mission < ActiveRecord::Base
          belongs_to :orientation
          has_many :stages
          has_many :user_missions, dependent: :destroy
          has_many :users, through: :user_missions

          # SCOPES
          scope :by_position, -> { order(position: :asc) }

          def stages_ordered
            stages.order(:position)
          end

          def next_mission
            self.orientation.missions.find_by_position(self.position.next)
          end

          def first_stage
            next_mission.stages_ordered.first
          end
        end

        # Stage.rb
        class Stage < ActiveRecord::Base
          belongs_to :mission
          has_many :questions, dependent: :destroy
          has_many :user_stages, dependent: :destroy
          has_many :users, through: :user_stages

          accepts_nested_attributes_for :questions, reject_if: :all_blank, allow_destroy: true

          def next_stage
            self.mission.stages.find_by_position(self.position.next)
          end
        end

        # Question.rb
        class Question < ActiveRecord::Base
          belongs_to :stage
          has_many :answers, dependent: :destroy

          accepts_nested_attributes_for :answers,
            :reject_if => lambda { |a| a[:body].blank? }, :allow_destroy => true
        end

        # Answer.rb
        class Answer < ActiveRecord::Base
          belongs_to :question
          has_many :attempts, dependent: :destroy
        end

        # Attempt.rb
        class Attempt < ActiveRecord::Base
          belongs_to :answer
          belongs_to :quiz
        end

        # User.rb
        class User < ActiveRecord::Base
          belongs_to :school
          has_many :activity_logs
          has_many :user_missions, dependent: :destroy
          has_many :missions, through: :user_missions
          has_many :user_stages, dependent: :destroy
          has_many :stages, through: :user_stages
          has_many :orientations, through: :school
          has_many :quizzes, dependent: :destroy
          has_many :attempts, through: :quizzes

          def latest_stage_position
            self.user_missions.last.user_stages.last.stage.position
          end
        end

        # UserMission.rb
        class UserMission < ActiveRecord::Base
          belongs_to :user
          belongs_to :mission
          has_many :user_stages, dependent: :destroy
        end

        # UserStage.rb
        class UserStage < ActiveRecord::Base
          belongs_to :user
          belongs_to :stage
          belongs_to :user_mission
        end

    Read the article

  • BRE (Business Rules Engine) Data Services is out...!!!

    - by Vishal
    A few months ago we at Tellago open sourced BizTalk Data Services. In the meantime we were working on other artifacts that come along with BizTalk Server, like the Business Rules Engine. We are happy to announce the first version of BRE Data Services. BRE Data Services follows the same concept we covered with BTS Data Services, providing a RESTful, OData-based API to interact with the Business Rules Engine via HTTP, using the Atom Publishing Protocol or JSON as the encoding mechanism.

    In this first release, we mainly focused on browsing, querying and searching BRE artifacts via a RESTful interface. Along with that, we provide the functionality to execute Business Rules by inserting the Facts for policies via the IUpdatable implementation of WCF Data Services.

    The BRE Data Services API provides a lightweight interface for managing Business Rules Engine artifacts such as Policies, Rules, Vocabularies, Conditions, Actions, Facts, etc. The following examples detail some of the features available in the current version of the API.

    Basic Querying:

    Querying BRE Policies:
    http://localhost/BREDataServices/BREMananagementService.svc/Policies

    Querying BRE Rules:
    http://localhost/BREDataServices/BREMananagementService.svc/Rules

    Querying BRE Vocabularies:
    http://localhost/BREDataServices/BREMananagementService.svc/Vocabularies

    Navigation:

    The BRE Data Services API also leverages WCF Data Services to enable navigation across related BRE objects.

    Querying a specific Policy:
    http://localhost/BREDataServices/BREMananagementService.svc/Policies('PolicyName')

    Querying a specific Rule:
    http://localhost/BREDataServices/BREMananagementService.svc/Rules('RuleName')

    Querying all Rules under a Policy:
    http://localhost/BREDataServices/BREMananagementService.svc/Policies('PolicyName')/Rules

    Querying all Facts under a Policy:
    http://localhost/BREDataServices/BREMananagementService.svc/Policies('PolicyName')/Facts

    Querying all Actions for a specific Rule:
    http://localhost/BREDataServices/BREMananagementService.svc/Rules('RuleName')/Actions

    Querying all Conditions for a specific Rule:
    http://localhost/BREDataServices/BREMananagementService.svc/Rules('RuleName')/Conditions

    Querying a specific Vocabulary:
    http://localhost/BREDataServices/BREMananagementService.svc/Vocabularies('VocabName')

    Implementation:

    With BRE Data Services, we also provide the functionality of executing a particular policy via HTTP. There are a couple of ways you can do that through the API.

    First is through the Service Operations feature of WCF Data Services, in which you can execute the Facts by passing them in the URL itself. This is a very simple implementation of executing the policies, owing to the current limitations and restrictions of the Service Operations feature of WCF Data Services (only primitive types can be passed as input parameters). Below is a code sample.

    Below is a traced Request/Response message.

    Second is through the IUpdatable interface of WCF Data Services. In this method, you first query the rule which you want to execute, then insert Facts for that particular rule, and finally, when you perform the SaveChanges() call on the IUpdatable interface API, it executes the policy with the facts you inserted at runtime. Below is a sample of client side code.
    The current version of WCF Data Services has a limitation: there is no way to return the updates happening on the service side back to the client via the SaveChanges() method. Here we are executing the rule by passing serialized XML as Facts, and no change is made to any data that we could query back to fetch the results. This is overcome by the first way of executing policies, i.e., executing them as a Service Operation call.

    This actually generates an AtomPub message, shown below:

    POST /Tellago.BRE.REST.ServiceHost/BREMananagementService.svc/$batch HTTP/1.1
    User-Agent: Microsoft ADO.NET Data Services
    DataServiceVersion: 1.0;NetFx
    MaxDataServiceVersion: 2.0;NetFx
    Accept: application/atom+xml,application/xml
    Accept-Charset: UTF-8
    Content-Type: multipart/mixed; boundary=batch_6b9a5ced-5ecb-4585-940a-9d5e704c28c7
    Host: localhost:8080
    Content-Length: 1481
    Expect: 100-continue

    --batch_6b9a5ced-5ecb-4585-940a-9d5e704c28c7
    Content-Type: multipart/mixed; boundary=changeset_184a8c59-a714-4ba9-bb3d-889a88fe24bf

    --changeset_184a8c59-a714-4ba9-bb3d-889a88fe24bf
    Content-Type: application/http
    Content-Transfer-Encoding: binary

    MERGE http://localhost:8080/Tellago.BRE.REST.ServiceHost/BREMananagementService.svc/Facts('TestPolicy') HTTP/1.1
    Content-ID: 4
    Content-Type: application/atom+xml;type=entry
    Content-Length: 927

    <?xml version="1.0" encoding="utf-8" standalone="yes"?>
    <entry xmlns:d="http://schemas.microsoft.com/ado/2007/08/dataservices"
           xmlns:m="http://schemas.microsoft.com/ado/2007/08/dataservices/metadata"
           xmlns="http://www.w3.org/2005/Atom">
      <category scheme="http://schemas.microsoft.com/ado/2007/08/dataservices/scheme" term="Tellago.BRE.REST.Resources.Fact" />
      <title />
      <author>
        <name />
      </author>
      <updated>2011-01-31T20:09:15.0023982Z</updated>
      <id>http://localhost:8080/Tellago.BRE.REST.ServiceHost/BREMananagementService.svc/Facts('TestPolicy')</id>
      <content type="application/xml">
        <m:properties>
          <d:FactInstance>&lt;ns0:LoanStatus xmlns:ns0="http://tellago.com"&gt;&lt;Age&gt;10&lt;/Age&gt;&lt;Status&gt;true&lt;/Status&gt;&lt;/ns0:LoanStatus&gt;</d:FactInstance>
          <d:FactType>TestSchema</d:FactType>
          <d:ID>TestPolicy</d:ID>
        </m:properties>
      </content>
    </entry>
    --changeset_184a8c59-a714-4ba9-bb3d-889a88fe24bf--
    --batch_6b9a5ced-5ecb-4585-940a-9d5e704c28c7--

    Installation:

    The installation of BRE Data Services is pretty straightforward.

    · Create a new IIS website, say BREDataServices.
    · Download the source code from Tellago's Codeplex and copy the content of Tellago.BRE.REST.ServiceHost to the physical location of the above created website.
    · The app pool account running the website should have admin access to the BizTalkRuleEngineDb database.
    · Then right-click the BREManagementService.svc in the IIS Content View for the website, browse to it, and voila!

    Conclusion:

    The BRE Data Services API is an experiment intended to bring the capabilities of RESTful/OData-based services to traditional BTS/BRE solutions. Future releases will target technologies like BAM and the ESB Toolkit. This version has been tested with various versions of BizTalk Server, and we have uploaded the source code to our Tellago DevLabs workspace at Codeplex. I hope you guys enjoy this release. Keep an eye on our new releases at Tellago Codeplex. We are working on various other BizTalk artifacts like BAM and the ESB Toolkit.

    Till then, happy BizzRuling!!!

    Thanks,
    Vishal Mody

    Read the article

  • How do I create a Spring 3 + Tiles 2 webapp using REST-ful URLs?

    - by Ichiro Furusato
    I'm having a heck of a time resolving URLs with Spring 3.0 MVC. I'm just building a HelloWorld to try out how to build a RESTful webapp in Spring, nothing theoretically complicated. All of the examples I've been able to find are based on configurations that pay attention to file extensions ("*.htm" or "*.do"), include an artificial directory name prefix ("/foo"), or even prefix paths with a dot (ugly); all are approaches that use some artificial regex pattern as a signal to the resolver. For a REST approach I want to avoid all that muck and use only the natural URL patterns of my application. I would assume (perhaps incorrectly) that in web.xml I'd set a url-pattern of "/*" and pass everything to the DispatcherServlet for resolution, then just rely on URL patterns in my controller. I can't reliably get my resolver(s) to catch the URL patterns, and in all my trials this results in a resource-not-found error, a stack overflow (loop), or some kind of opaque Spring 3 ServletException stack trace; one of my ongoing frustrations with Spring generally is that the error messages are not often very helpful. I want to work with a Tiles 2 resolver. I've located my *.jsp files in WEB-INF/views/ and have a single-line index.jsp file at the application root redirecting to the index file set by my layout.xml (the Tiles 2 Configurer). I do all the normal Spring 3 high-level configuration:

    <mvc:annotation-driven />
    <mvc:view-controller path="/" view-name="index"/>
    <context:component-scan base-package="com.acme.web.controller" />

    ...followed by all sorts of combinations and configurations of UrlBasedViewResolver, InternalResourceViewResolver, UrlFilenameViewController, etc., with all manner of variations in my Tiles 2 configuration file. Then in my controller I try to pick up my URL patterns. I can't reliably even get the resolver(s) to catch the patterns to send to my controller. This has now stretched to multiple days with no real progress on something I thought would be very simple to implement. I'm perhaps trying to do too much at once, though I would think this should be a simple (almost a default) configuration. I'm just trying to create a simple HelloWorld-type application; I wouldn't expect this is rocket science. Rather than me post my own configurations (which have ranged all over the map), does anyone know of an online example that shows a simple Spring 3 MVC + Tiles 2 web application that uses REST-ful URLs (i.e., avoiding forced URL patterns such as file extensions, added directory names or dots) and relies solely on Spring 3 code/annotations (i.e., nothing outside of Spring MVC itself) to accomplish this? Is there an easy way to do this? Thanks very much for any help.
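    For reference, the extension-free mapping being described usually boils down to a controller like the following minimal sketch; the package, URL and view names here are hypothetical, not taken from the question:

    package com.acme.web.controller;

    import org.springframework.stereotype.Controller;
    import org.springframework.ui.Model;
    import org.springframework.web.bind.annotation.PathVariable;
    import org.springframework.web.bind.annotation.RequestMapping;

    // With the DispatcherServlet mapped in web.xml, a natural URL such as
    // /greetings/42 reaches this handler directly; the returned string is a
    // logical view name for the Tiles 2 view resolver to match against a
    // Tiles definition, so no file extension ever appears in the URL.
    @Controller
    @RequestMapping("/greetings")
    public class GreetingController {

        @RequestMapping("/{id}")
        public String show(@PathVariable("id") long id, Model model) {
            model.addAttribute("greetingId", id);
            return "greeting"; // Tiles definition name, not a JSP path
        }
    }

    One detail worth noting for this kind of setup: mapping the servlet to "/" rather than "/*" leaves the container's own JSP handling intact, which is a common source of the resource-not-found errors and forwarding loops described above.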

    Read the article

  • Introduction to Human Workflow 11g

    - by agiovannetti
    Human Workflow is a component of SOA Suite, just like BPEL, Mediator, Business Rules, etc. The Human Workflow component allows you to incorporate human intervention in a business process. You can use Human Workflow to create a business process that requires a manager to approve purchase orders greater than $10,000, or a business process that handles article reviews in which a group of reviewers needs to vote/approve an article before it gets published. Human Workflow can handle the task assignment and routing as well as the generation of notifications to the participants.

    There are three common patterns or usages of Human Workflow:

    1) Approval scenarios: manage documents and other transactional data through approval chains. For example: approve expense report, vacation approval, hiring approval, etc.

    2) Reviews by multiple users or groups: group collaboration and review of documents or proposals. For example, processing a sales quote which is subject to review by multiple people.

    3) Case management: workflows around work management or case management. For example, processing a service request. This could be routed to various people who all need to modify the task. It may also incorporate ad hoc routing which is unknown at design time.

    SOA 11g Human Workflow includes the following features:

    Assignment and routing of tasks to the correct users or groups.
    Deadlines, escalations, notifications, and other features required for ensuring the timely performance of a task.
    Presentation of tasks to end users through a variety of mechanisms, including a Worklist application.
    Organization, filtering, prioritization and other features required for end users to productively perform their tasks.
    Reports, reassignments, load balancing and other features required by supervisors and business owners to manage the performance of tasks.

    Human Workflow Architecture

    The Human Workflow component is divided into three modules: the service interface, the task definition and the client interface module. The Service Interface handles the interaction with BPEL and other components. The Client Interface handles the presentation of task data through clients like the Worklist application, portals and notification channels. The task definition module is in charge of managing the lifecycle of a task: Who should get the task assigned? What should happen next with the task? When must the task be completed? Should the task be escalated? And so on.

    Stages and Participants

    When you create a Human Task you need to specify how the task is assigned and routed. The first step is to define the stages and participants. A stage is just a logical group. A participant can be a user, a group of users or an application role. The participants indicate the type of assignment and routing that will be performed. Stages can be sequential or in parallel. You can combine them to create any usage you require. See diagram below:

    Assignment and Routing

    There are different ways a task can be assigned and routed:

    Single Approver: the task is assigned to a single user, group or role. For example, a vacation request is assigned to a manager. If the manager approves or rejects the request, the employee is notified of the decision. If the task is assigned to a group, then once one of the managers acts on it, the task is completed.

    Parallel: the task is assigned to a set of people who must work in parallel. This is commonly used for voting. For example, a task gets approved once 50% of the participants approve it. You can also set it up to be a unanimous vote.

    Serial: participants must work in sequence. The most common scenario for this is management chain escalation.

    FYI (For Your Information): the task is assigned to participants who can view it, add comments and attachments, but cannot modify or complete the task.

    Task Actions

    The following is the list of actions that can be performed on a task:

    Claim: if a task is assigned to a group or multiple users, the task must be claimed first to be able to act on it.

    Escalate: if the participant is not able to complete a task, he/she can escalate it. The task is reassigned to his/her manager (up one level in a hierarchy).

    Pushback: the task is sent back to the previous assignee.

    Reassign: if the participant is a manager, he/she can delegate a task to his/her reports.

    Release: if a task is assigned to a group or multiple users, it can be released if the user who claimed the task cannot complete it. Any of the other assignees can claim and complete the task.

    Request Information and Submit Information: used when the participant needs to supply more information, or to request more information from the task creator or any of the previous assignees.

    Suspend and Resume: if a task is not relevant, it can be suspended. A suspension is indefinite; it does not expire until Resume is used to resume working on the task.

    Withdraw: if the creator of a task does not want to continue with it, for example, he wants to cancel a vacation request, he can withdraw the task. The business process determines what happens next.

    Renew: if a task is about to expire, the participant can renew it. The task expiration date is extended one week.

    Notifications

    Human Workflow provides a mechanism for sending notifications to participants to alert them of changes on a task. Notifications can be sent via email, telephone voice message, instant messaging (IM) or short message service (SMS). Notifications can be sent when the task status changes to any of the following:

    Assigned/renewed/delegated/reassigned/escalated
    Completed
    Error
    Expired
    Request Info
    Resume
    Suspended
    Added/updated comments and/or attachments
    Updated outcome
    Withdraw
    Other actions (e.g. acquiring a task)

    Here is an example of an email notification:

    Worklist Application

    Oracle BPM Worklist application is the default user interface included in SOA Suite. It allows users to access and act on tasks that have been assigned to them. For example, from the Worklist application, a loan agent can review loan applications or a manager can approve employee vacation requests.

    Through the Worklist application users can:

    Perform authorized actions on tasks, acquire and check out shared tasks, define personal to-do tasks and define subtasks.
    Filter the tasks view based on various criteria.
    Work with standard work queues, such as high-priority tasks, tasks due soon and so on. Work queues allow users to create a custom view to group a subset of tasks in the worklist, for example, high-priority tasks, tasks due in 24 hours, expense approval tasks and more.
    Define custom work queues.
    Gain proxy access to part of another user's tasks.
    Define custom vacation rules and delegation rules.
    Enable group owners to define task dispatching rules for shared tasks.
    Collect a complete workflow history and audit trail.
    Use digital signatures for tasks.
    Run reports like unattended tasks, task productivity, etc.

    Here is a screenshot of what the Worklist application looks like. On the right-hand side you can see the tasks that have been assigned to the user and the task's details.
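    The same task data the Worklist application surfaces can also be reached programmatically through the Human Workflow APIs listed in the references below. Purely as a rough sketch: the class names come from the oracle.bpel.services.workflow packages, but treat the exact method signatures, as well as the user, password and realm values, as assumptions to verify against the API Javadocs:

    import java.util.List;
    import oracle.bpel.services.workflow.client.IWorkflowServiceClient;
    import oracle.bpel.services.workflow.client.WorkflowServiceClientFactory;
    import oracle.bpel.services.workflow.query.ITaskQueryService;
    import oracle.bpel.services.workflow.task.model.Task;
    import oracle.bpel.services.workflow.verification.IWorkflowContext;

    public class WorklistQuerySketch {
        public static void main(String[] args) throws Exception {
            // Obtain a remote client and its task query service
            IWorkflowServiceClient client =
                    WorkflowServiceClientFactory.getWorkflowServiceClient(
                            WorkflowServiceClientFactory.REMOTE_CLIENT);
            ITaskQueryService query = client.getTaskQueryService();

            // Authenticate as a worklist user (credentials are placeholders)
            IWorkflowContext ctx =
                    query.authenticate("jcooper", "welcome1".toCharArray(), "jazn.com");

            // Fetch the first page of tasks assigned to this user
            List tasks = query.queryTasks(ctx,
                    null,                                   // default display columns
                    null,                                   // no optional info
                    ITaskQueryService.AssignmentFilter.MY,  // "my tasks" view
                    null, null, null,                       // no keywords/predicate/ordering
                    0, 25);                                 // paging: rows 0..25
            for (Object o : tasks) {
                Task t = (Task) o;
                System.out.println(t.getTitle() + " : "
                        + t.getSystemAttributes().getState());
            }
        }
    }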
    References

    Introduction to SOA Suite 11g Human Workflow Webcast
    Note 1452937.2 Human Workflow Information Center
    Using the Human Workflow Service Component (11.1.1.6)
    Human Workflow Samples
    Human Workflow APIs Java Docs

    Read the article

  • Runtime.exec causes duplicate JVM to hang indefinitely until killed (Solaris 10)

    - by John
    All,

    We are running a J2EE application on WebLogic Server 9.2 MP2 with a JRockit 64-bit JVM (27.3.1) on Solaris 10. We use Runtime.exec to call an executable called jfmerge to create PDF documents. We have found that in Solaris, when Runtime.exec is called, a duplicate JVM is temporarily spawned to kick off the jfmerge process. While this is inefficient (our JVM is 5 GB, thus the duplicated shell JVM is also 5 GB), the major problem lies in the fact that when there is heavy load on this functionality (PDF generation) in our application, sometimes the duplicated JVM never exits.

    When the JVM hangs, the servers develop large issues (extreme application slowness and terminated user sessions) as the entire duplicate JVM gets all of its 5 GB of process size written to disk swap.

    We have noted the following hung thread, correlated with a hung JVM process until the process is manually killed:

    "[STUCK] ExecuteThread: '17' for queue: 'weblogic.kernel.Default (self-tuning)'" id=3463 idx=0x158 tid=3460 prio=1 alive, in native, daemon
        at jrockit/io/FileNativeIO.readBytesPinned(Ljava/io/FileDescriptor;[BII)I(Native Method)
        at jrockit/io/FileNativeIO.readBytes(FileNativeIO.java:30)
        at java/io/FileInputStream.readBytes([BII)I(FileInputStream.java)
        at java/io/FileInputStream.read(FileInputStream.java:194)
        at java/lang/UNIXProcess$DeferredCloseInputStream.read(UNIXProcess.java:227)
        at java/io/BufferedInputStream.fill(BufferedInputStream.java:218)
        at java/io/BufferedInputStream.read(BufferedInputStream.java:235)
        ^-- Holding lock: java/io/BufferedInputStream@0xfffffffec6510470[thin lock]
        at gov/v3/common/formgeneration/sessionbean/FormsBean.getProcessStatus(FormsBean.java:809)
        at gov/v3/common/formgeneration/sessionbean/FormsBean.createPDF(FormsBean.java:750)
        at gov/v3/common/formgeneration/sessionbean/FormsBean.getTemplateDetails(FormsBean.java:450)
        at gov/v3/common/formgeneration/sessionbean/FormsBean.generateSinglePDF(FormsBean.java:1371)
        at gov/v3/common/formgeneration/sessionbean/FormsBean.generatePDF(FormsBean.java:263)
        at gov/v3/common/formgeneration/sessionbean/FormsBean.endorseDocument(FormsBean.java:2377)
        at gov/v3/common/formgeneration/sessionbean/Forms_qaco28_EOImpl.endorseDocument(Forms_qaco28_EOImpl.java:214)
        at gov/v3/delegates/common/FormsAndNoticesDelegate.endorseDocument(FormsAndNoticesDelegate.java:128)
        at gov/v3/actions/common/EndorseDocumentAction.executeRequest(EndorseDocumentAction.java:68)
        at gov/v3/fwk/controller/struts/action/V3CommonDispatchAction.dispatchToExecuteMethod(V3CommonDispatchAction.java:532)
        at gov/v3/fwk/controller/struts/action/V3CommonDispatchAction.executeBaseAction(V3CommonDispatchAction.java:336)
        at gov/v3/fwk/controller/struts/action/V3BaseDispatchAction.execute(V3BaseDispatchAction.java:69)
        at org/apache/struts/action/RequestProcessor.processActionPerform(RequestProcessor.java:484)
        at gov/v3/fwk/controller/struts/requestprocessor/V3TilesRequestProcessor.processActionPerform(V3TilesRequestProcessor.java:384)
        at org/apache/struts/action/RequestProcessor.process(RequestProcessor.java:274)
        at org/apache/struts/action/ActionServlet.process(ActionServlet.java:1482)
        at org/apache/struts/action/ActionServlet.doGet(ActionServlet.java:507)
        at gov/v3/fwk/controller/struts/servlet/V3ControllerServlet.doGet(V3ControllerServlet.java:110)
        at javax/servlet/http/HttpServlet.service(HttpServlet.java:743)
        at javax/servlet/http/HttpServlet.service(HttpServlet.java:856)
        at weblogic/servlet/internal/StubSecurityHelper$ServletServiceAction.run(StubSecurityHelper.java:227)
        at weblogic/servlet/internal/StubSecurityHelper.invokeServlet(StubSecurityHelper.java:125)
        at weblogic/servlet/internal/ServletStubImpl.execute(ServletStubImpl.java:283)
        at weblogic/servlet/internal/ServletStubImpl.execute(ServletStubImpl.java:175)
        at weblogic/servlet/internal/WebAppServletContext$ServletInvocationAction.run(WebAppServletContext.java:3231)
        at weblogic/security/acl/internal/AuthenticatedSubject.doAs(AuthenticatedSubject.java:321)
        at weblogic/security/service/SecurityManager.runAs(SecurityManager.java:121)
        at weblogic/servlet/internal/WebAppServletContext.securedExecute(WebAppServletContext.java:2002)
        at weblogic/servlet/internal/WebAppServletContext.execute(WebAppServletContext.java:1908)
        at weblogic/servlet/internal/ServletRequestImpl.run(ServletRequestImpl.java:1362)
        at weblogic/work/ExecuteThread.execute(ExecuteThread.java:209)
        at weblogic/work/ExecuteThread.run(ExecuteThread.java:181)
        at jrockit/vm/RNI.c2java(JJJJJ)V(Native Method)
    -- end of trace

    We would like to do a couple of things:

    1.) Prevent the spawning of a duplicate JVM, as we do not need any of its functions when executing the simple jfmerge executable, and it creates massive overhead.

    2.) In the short term, at least prevent this duplicate JVM from hanging indefinitely.
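    The stuck thread above is blocked in a read() of the child process's output, which points at the classic Runtime.exec pitfall: if the child fills its output pipe and nobody drains it, both sides can wait on each other indefinitely. The sketch below does not address the fork-related duplication of the 5 GB address space, but fully draining the streams is the usual first fix for the indefinite hang; the jfmerge arguments here are hypothetical:

    import java.io.BufferedReader;
    import java.io.IOException;
    import java.io.InputStreamReader;

    public class JfMergeRunner {

        public static int run(String... command) throws IOException, InterruptedException {
            ProcessBuilder pb = new ProcessBuilder(command);
            pb.redirectErrorStream(true);      // merge stderr into stdout: one pipe to drain
            Process p = pb.start();
            try (BufferedReader out = new BufferedReader(
                    new InputStreamReader(p.getInputStream()))) {
                String line;
                while ((line = out.readLine()) != null) {
                    // log or discard; the point is to keep the pipe from filling up
                }
            }
            return p.waitFor();                // safe once the output is fully consumed
        }

        public static void main(String[] args) throws Exception {
            // Hypothetical invocation; real jfmerge flags will differ
            int exit = run("/opt/app/bin/jfmerge", "template.mdf", "data.xml", "out.pdf");
            System.out.println("jfmerge exited with " + exit);
        }
    }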

    Read the article

  • Key Promoter for NetBeans

    - by Geertjan
    Whenever a menu item or toolbar button is clicked, it would be handy if NetBeans were to tell you "hey, did you know, you can actually do this via the following keyboard shortcut", if a keyboard shortcut exists for the invoked action. After all, ultimately, a lot of developers would like to do everything with the keyboard, and a key promoter feature of this kind is a helpful tool in learning the keyboard shortcuts related to the menu items and toolbar buttons you're clicking with your mouse. Above, you see the balloon message that appears for each menu item and toolbar button that you click and, below, you can see a list of all the actions that have been logged in the Notifications window. That happens automatically when an action is invoked (assuming the plugin described in this blog entry is installed), showing the display name of the action together with the keyboard shortcut, which is presented as a hyperlink which, when clicked, re-invokes the action (which might not always be relevant, especially for context-sensitive actions, though for others it is quite useful, e.g., reopening the New Project wizard).

    And here's all the code. Notice that I'm hooking into the 'uigestures' functionality, which was suggested by Tim Boudreau, and I have added my own handler, which was suggested by Jaroslav Tulach, which gets a specific parameter from each new log entry handled by the 'org.netbeans.ui.actions' logger, makes sure that the parameter actually is an action, and then gets the relevant info from the action, if the relevant info exists:

    import java.awt.event.ActionEvent;
    import java.awt.event.ActionListener;
    import java.util.logging.LogRecord;
    import java.util.logging.Logger;
    import java.util.logging.StreamHandler;
    import javax.swing.Action;
    import javax.swing.ImageIcon;
    import javax.swing.JMenuItem;
    import org.openide.awt.Mnemonics;
    import org.openide.awt.NotificationDisplayer;
    import org.openide.windows.OnShowing;

    @OnShowing
    public class Startable implements Runnable {

        @Override
        public void run() {
            Logger logger = Logger.getLogger("org.netbeans.ui.actions");
            logger.addHandler(new StreamHandler() {
                @Override
                public void publish(LogRecord record) {
                    Object[] parameters = record.getParameters();
                    if (parameters[2] instanceof Action) {
                        Action a = (Action) parameters[2];
                        JMenuItem menu = new JMenuItem();
                        Mnemonics.setLocalizedText(
                                menu,
                                a.getValue(Action.NAME).toString());
                        String name = menu.getText();
                        if (a.getValue(Action.ACCELERATOR_KEY) != null) {
                            String accelerator = a.getValue(Action.ACCELERATOR_KEY).toString();
                            NotificationDisplayer.getDefault().notify(
                                    name,
                                    new ImageIcon("/org/nb/kp/car.png"),
                                    accelerator,
                                    new ActionListener() {
                                @Override
                                public void actionPerformed(ActionEvent e) {
                                    a.actionPerformed(e);
                                }
                            });
                        }
                    }
                }
            });
        }
    }

    Indeed, inspired by the Key Promoter in IntelliJ IDEA. Interested in trying it out? If there's interest in it, I'll put it in the NetBeans Plugin Portal.

    Read the article

  • Symfony2 Forms: is it possible to bind a form in an "unconventional way"?

    - by DonCallisto
    Imagine this scenario: in our company there is an employee who "plays" around with graphics, CSS, HTML and so on. Our new project will be born under Symfony2, so we're trying some silly but "real" stuff (like authentication from the DB, submitting data from a form and persisting it to the DB, and so on).

    The problem

    As far as I know, having learnt from the Symfony2 "book" that I found on the site (you can find it here), there is an "automated" way for creating and rendering forms:

    1) Build the form up in a controller this way:

    $form = $this->createFormBuilder($task)
        ->add('task', 'text')
        ->add('dueDate', 'date')
        ->getForm();

    return $this->render('pathToBundle:Controller:templateTwig',
        array('form' => $form->createView()));

    2) In templateTwig, render the form:

    {{ form_widget(form) }} {# or the single-row method #}

    3) In a controller (the same one that has a route where you can submit data), take back the submitted information:

    if ($request->getMethod() == 'POST') {
        $form->bindRequest($request);
        /* and so on */
    }

    Return to scenario

    Our graphics employee doesn't want to access controllers, write PHP and other stuff like that. So he'll write a Twig template with an "unconventional" (from Symfony2's point of view, but conventional from HTML's point of view) method:

    {# in the Twig template #}
    <form action="{{ path('SestanteUserBundle_homepage') }}" method="post" name="userForm">
      <div>
        USERNAME: <input type="text" name="user_name" value="{{ user.username }}"/>
      </div>
      <div>
        EMAIL: <input type="text" name="user_mail" value="{{ user.email }}"/>
      </div>
      <input type="hidden" name="user_id" value="{{ id }}" />
      <input type="submit" value="edit the data">
    </form>

    Now, if in the controller that handles the submission of data we do something like this:

    public function indexAction(Request $request)
    {
        if ($request->getMethod() == 'POST') {
            // we got here via a submit, so we have to modify the data before displaying it
            $defaultData = array('message' => 'saw this in an example, but I do not understand whether I can do without it');
            $form = $this->createFormBuilder($defaultData)
                ->add('user_name', 'text')
                ->add('user_mail', 'email')
                ->add('user_id', 'integer')
                ->getForm();
            $form->bindRequest($request); // bind the form to the request
            $data = $form->getData();     // expecting a key => value array
            /* .... */

    We expected that $data would contain an array of keys and values from the submitted form. We found that this isn't true.

    After googling for a while and trying other "bad" ideas, we're frozen on this. So, if you have a "graphics office" that can't handle PHP code directly, how can we interface from form(s) to controller(s)?

    UPDATE

    It seems that Symfony2 uses a different convention for a form's field names and their lookup once you've submitted the form. In particular, if my form's name is addUser and a field is named userName, the field's name will be addUser[userName], so maybe it has a "dynamic" lookup method that extracts the form's name and the field's name, concatenates them, and looks up the values. Is it possible?

    Read the article

  • How to set default xrandr settings?

    - by echo-flow
    I'm trying to enable dual monitors in Ubuntu. This is working fine, but every time I do it, desktop effects are disabled. I think I've found the reason why, though: https://wiki.ubuntu.com/X/Config/Multihead/

    "As with the GNOME XRandR configuration method, setting Virtual to too large a value may result in a loss of hardware acceleration, and thus an inability to use Compiz and its desktop effects."

    When I use the GNOME monitor applet, or the Monitors configuration in the System menu, the default xrandr settings put the second monitor to the right of the first, and, as I found with this bug, for most monitors this creates a virtual desktop larger than the maximum 2048-pixel horizontal resolution needed for hardware acceleration on my netbook hardware. So, it seems that if I can modify xrandr's default settings so that it places the new desktop above or below (north or south of) the main LVDS display, then hardware acceleration, and therefore Compiz, will continue to work.

    Can anyone tell me, what is the easiest way to achieve this?

    UPDATE: I have confirmed that multihead support with desktop effects and hardware acceleration works when I move the external monitor display north of the main LVDS display. Right now this involves the following process: plugging in the external monitor; starting the Monitors configuration menu; desktop effects are disabled automatically (and all of the windows on my workspaces are moved to the first workspace); repositioning the external display so that it is north of the LVDS display and clicking Apply; and then navigating to the Appearance menu and telling it to re-enable desktop effects. Is there a simpler way to do this?

    UPDATE 2: OK, so I thought that perhaps the GNOME Monitors configuration screen was trying to be clever, and might be disabling desktop effects. So, I just tried using the xrandr command-line client instead, as follows:

    xrandr --output VGA1 --above LVDS1

    When I do that, desktop effects are still disabled, and I need to manually re-enable them. This is despite the fact that hardware acceleration works, and there is never a point where hardware acceleration stops working because the horizontal dimension of the virtual display is too large. So what program is trying to be clever, and is turning off desktop effects when it doesn't need to? And how do I make it stop? If there were a way to re-enable desktop effects from the command line, which I could then put into a script along with the proper xrandr invocation, I would accept that as a workaround.

    UPDATE 3: OK, here's my script to enable a second monitor with desktop effects. It might be evil, I'm not sure:

    second-monitor.sh

    xrandr --output VGA1 --above LVDS1
    sleep 3
    compiz --replace &

    The sleep statement might not be necessary. If there's a better way to do this, please let me know.

    UPDATE 4: This is a Dell Mini Inspiron 1012.
    Here are my system specifications:

    lspci -vv
    00:02.0 VGA compatible controller: Intel Corporation N10 Family Integrated Graphics Controller
            Subsystem: Dell Device 041a
            Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
            Status: Cap+ 66MHz- UDF- FastB2B+ ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
            Latency: 0
            Interrupt: pin A routed to IRQ 29
            Region 0: Memory at f0b00000 (32-bit, non-prefetchable) [size=512K]
            Region 1: I/O ports at 18d0 [size=8]
            Region 2: Memory at d0000000 (32-bit, prefetchable) [size=256M]
            Region 3: Memory at f0900000 (32-bit, non-prefetchable) [size=1M]
            Capabilities: <access denied>
            Kernel driver in use: i915
            Kernel modules: i915

    00:02.1 Display controller: Intel Corporation N10 Family Integrated Graphics Controller
            Subsystem: Dell Device 041a
            Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
            Status: Cap+ 66MHz- UDF- FastB2B+ ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
            Latency: 0
            Region 0: Memory at f0b80000 (32-bit, non-prefetchable) [size=512K]
            Capabilities: <access denied>

    lsmod | grep i915
    i915           287458  2
    drm_kms_helper  29329  1 i915
    drm            162409  3 i915,drm_kms_helper
    intel_agp       24375  2 i915
    i2c_algo_bit     5028  1 i915
    video           17375  1 i915

    Read the article

  • Introducing MySQL for Excel

    - by Javier Treviño
    As part of the new product initiatives of the MySQL on Windows group, we released a tool that makes the task of getting data in and out of a MySQL database very friendly and intuitive, and we paired it with one of the preferred applications for data analysis and manipulation on Windows platforms, MS Excel. Welcome to MySQL for Excel, an add-in that is installed into, and accessed from within, MS Excel's Data tab, offering a wizard-like interface arranged in an elegant yet simple way to help users browse MySQL Schemas, Tables, Views and Procedures and perform data operations against them, using MS Excel as the vehicle to drive the data in and out of MySQL databases.

    One of the coolest features we had in mind designing MySQL for Excel is simplicity. MS Excel is simple and easy to work with, and thus liked by many Windows users, because they don't have to be software gurus to use it. We applied the same principle by targeting MySQL for Excel at any kind of user, so if you are already familiar with Excel's interface you will find yourself working with MySQL data in no time.

    MySQL for Excel ships within the MySQL Installer as one of the tools in the suite; if the prerequisites are already installed (.NET Framework 4.0, Visual Studio Tools for Office 4.0 and of course MS Office), installing the add-in involves just a few clicks and no further setup to use it. Being an Excel add-in, there is no executable file involved after the installation; running MS Excel and opening the add-in from its Data tab is all that is required.

    MySQL for Excel automatically integrates with MySQL Workbench (if installed) to share the same connections to MySQL Server installations; that way, connections are defined just once in either product, saving time.

    Opening the add-in brings up the Welcome Panel at the right side of the Excel main window, in which connections to MySQL Servers are shown grouped as Local vs. Remote connections; users can then open any of those connections by double-clicking it and entering the password of the used account. Additionally, a user can create a connection by clicking on the New Connection action label, or edit connections through MySQL Workbench (if installed) by clicking on the Manage Connections action label.

    Once a connection is opened, the Schema Selection panel is shown, with the selected connection (connection name, hostname/IP and username) at the top. Just below, a list of schemas is displayed, in which User Schemas are grouped first, followed by System Schemas; users can double-click any selected schema to go to the next panel, or select a schema and click the Next > button. Users can alternatively click on the < Back button to go back to the Welcome Panel to close the current connection and open a new one; also, by clicking the Create New Schema action label they can create an empty new schema.

    Once a schema is opened, the DB Object Selection panel is shown; this is actually the place where the fun stuff happens, as from here users are able to perform actions against MySQL Tables, Views and Procedures. The actions available here cover importing data from a MySQL Table, View or Procedure into Excel, exporting Excel data to a new MySQL Table, appending Excel data to an existing MySQL Table, and editing a MySQL Table's data by using an Excel worksheet as a user interface to update data in any row/column, insert new rows or delete existing rows in a very easy and friendly way. More blog posts will follow describing all of these actions, so stay tuned!
    Remember that your feedback is very important for us, so drop us a message:

    · MySQL on Windows (this) Blog - https://blogs.oracle.com/MySqlOnWindows/
    · Forum - http://forums.mysql.com/list.php?172
    · Facebook - http://www.facebook.com/mysql

    Cheers!

    Read the article

  • Data Security Through Structure, Procedures, Policies, and Governance

    Security Structure and Procedures

    One of the easiest ways to implement security is through the use of structure, in particular the structure in which data is stored. The preferred method for this is the use of User Roles; these Roles allow specific access to be granted based on what role a user plays in relation to the data they are manipulating. Typical data access actions are defined by the CRUD principle.

    CRUD Principle:

    Create New Data
    Read Existing Data
    Update Existing Data
    Delete Existing Data

    Based on the actions assigned to a role, users can manipulate data as they need to perform daily business operations. An example of this can be seen in a hospital, where doctors have been assigned Create, Read, Update, and Delete access to their patients' prescriptions, so that a doctor can prescribe new prescriptions and adjust any existing ones as necessary. However, a nurse will only have Read access on the patient's prescriptions, so that they will know what medicines to give to the patients. Notice that nurses do not have access to prescribe new prescriptions, or to update or delete existing prescriptions, because only the patient's doctor has access to perform those actions.

    With User Roles comes responsibility: companies need to constantly monitor data access to ensure that the proper roles have the most appropriate access levels and that users are not exposed to inappropriate data. In addition, this also protects against rogue employees gaining access to critical business information that could be destroyed, altered or stolen. It is important that all data access is monitored because of this threat.

    Security Governance

    Current data governance laws regarding security:

    Health Insurance Portability and Accountability Act (HIPAA)
    Sarbanes-Oxley Act
    Database Breach Notification Act

    The US Department of Health and Human Services defines HIPAA as a Privacy Rule. This legislation protects the privacy of individually identifiable health information. Currently, HIPAA sets the national standards for securing electronically protected health records. Additionally, its confidentiality provisions protect identifiable information being used to analyze patient safety events and improve patient safety.

    In 2002, in the wake of the Enron and WorldCom financial scandals, Senator Paul Sarbanes and Representative Michael Oxley led the creation of the Sarbanes-Oxley Act. This act, administered by the Securities and Exchange Commission (SEC), dramatically altered corporate financial practices and data governance. In addition, it also set specific deadlines for compliance. The Sarbanes-Oxley Act is not a set of standard business rules and does not specify how a company should retain its records; in fact, the act outlines which pieces of data are to be stored as well as the storage duration.

    The Database Breach Notification Act requires companies, in the event of a data breach containing personally identifiable information, to notify all California residents whose information was stored on the compromised system at the time of the event, according to Gregory Manter. He further explains that this act is only California legislation. However, it does affect "any person or business that conducts business in California, and that owns or licenses computerized data that includes personal information," regardless of where the compromised data is located. This means that any business that maintains even limited interactions with California residents will find itself subject to the Act's provisions.
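    To make the doctor/nurse example above concrete, role-based CRUD access often reduces to a small permission table like the following sketch; the role names and the in-memory map are illustrative only, since a real system would enforce this in the database or an access-control layer:

    import java.util.EnumSet;
    import java.util.Map;

    public class PrescriptionAccess {

        enum Action { CREATE, READ, UPDATE, DELETE }

        // Doctors get full CRUD on prescriptions; nurses are read-only
        static final Map<String, EnumSet<Action>> ROLE_PERMISSIONS = Map.of(
                "doctor", EnumSet.allOf(Action.class),
                "nurse",  EnumSet.of(Action.READ));

        // Unknown roles get no access at all
        static boolean allowed(String role, Action action) {
            return ROLE_PERMISSIONS
                    .getOrDefault(role, EnumSet.noneOf(Action.class))
                    .contains(action);
        }

        public static void main(String[] args) {
            System.out.println(allowed("doctor", Action.CREATE)); // true
            System.out.println(allowed("nurse",  Action.READ));   // true
            System.out.println(allowed("nurse",  Action.UPDATE)); // false
        }
    }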
    Security Policies

    All companies must work in accordance with the appropriate city, county, state, and federal laws. One way to ensure that a company is legally compliant is to enforce security policies that adhere to the appropriate legislation in the area or areas that it services. These types of policies need to be mandated by a company's Security Officer. For smaller companies, these policies need to come from executives, directors, and owners.

    Read the article

  • Unable to Sign in to the Microsoft Online Services Signin application from Windows 7 client located behind ISA firewall

    - by Ravindra Pamidi
    A while ago I helped a customer troubleshoot an authentication problem with the Microsoft Online Services Sign In application. This customer was evaluating Microsoft BPOS (Business Productivity Online Services) and was having trouble using the single sign-on application behind an ISA 2004 firewall. The network structure is fairly simple, with a single Windows 2003 Active Directory domain and Windows 7 clients. On a successful logon to the Microsoft Online Services Sign In application, this application provides single sign-on functionality to all of the Microsoft online services in the BPOS package.

    Symptoms: When trying to sign in, it fails with the error "The service is currently unavailable. Please try again later. If problems continue, contact your service administrator". If the ISA 2004 firewall is removed from the picture, the authentication succeeds.

    Troubleshooting: Enabled ISA Server firewall logging along with the Microsoft Network Monitor tool on the Windows 7 client while reproducing the issue. Analysis of the ISA Server firewall logs and the Microsoft Network Monitor capture revealed that the Microsoft Online Services Sign In application, when sending its request to the ISA Server, does not send the domain credentials, and as a result ISA Server responds with an HTTP 407 Proxy Authentication Required error listing the supported authentication mechanisms. The application in question is expected to send the credentials of the domain user in response to this request. However, in this case, it fails to send the logged-on user's domain credentials.

    A bit of researching on the Internet revealed that the Microsoft Online Services Sign In application by default does not support outbound Internet proxy authentication. In order for it to send the logged-on user's domain credentials, we had to make changes to its configuration file "SignIn.exe.config", located under the "Program Files\Microsoft Online Services\Sign In" folder. Step-by-step details on editing the configuration file are documented on the Microsoft TechNet page given below.

    Configure your outbound authenticating proxy server
    http://www.microsoft.com/online/help/en-us/helphowto/cc54100d-d149-45a9-8e96-f248ecb1b596.htm

    After the above problem was addressed, we were still not able to use the Microsoft Online Services Sign In application, and it failed with the same error. Analysis of another network capture revealed that the application in question was now sending the required credentials, but the connection seemed to terminate at a later stage. We enabled verbose logging for the Microsoft Online Services Sign In application and then reproduced the problem. Analysis of the logs revealed a time difference between the local client and the Microsoft Online Services server of around seven minutes, which is above the acceptable time skew of five minutes.

    Excerpt from the Microsoft Online Services Sign In application verbose log:

    1/26/2012 1:57:51 PM Verbose SingleSignOn.GetSSOGenericInterface SSO Interface URL: https://signinservice.apac.microsoftonline.com/ssoservice/UID
    1/26/2012 1:57:52 PM Exception SSOSignIn.SignIn The security timestamp is invalid because its creation time ('2012-01-26T08:34:52.767Z') is in the future. Current time is '2012-01-26T08:27:52.987Z' and allowed clock skew is '00:05:00'.
    1/26/2012 1:57:52 PM Exception SSOSignIn.SignIn

    Although the Windows 7 clients successfully synchronized time to the domain controller for the domain, the domain controller was not configured to synchronize time with external NTP servers. This caused a gradual drift in time on the network, resulting in the above issue. We reconfigured the domain controller holding the PDC FSMO role to synchronize time with an external time source (time.nist.gov) and edited the system policy on the ISA Server firewall to allow NTP traffic to time.nist.gov.

    Configure the time source for the forest: Windows Time Service
    http://technet.microsoft.com/en-us/library/cc794937(WS.10).aspx

    We then forced synchronization of Windows time using the command w32tm /resync on the domain controller, and later on the clients, which corrected the seven-minute difference. This resolved the problem with logon to Microsoft Online Services Sign In.

    Read the article

  • How to make Ubuntu recognize an unknown external display (so I can adjust its resolution)?

    - by WagnerAA
    I have a Dell laptop with an external monitor attached (a Samsung SyncMaster 931c). My laptop display was recognized, and I can adjust its optimum resolution. My external display is still unknown, thus I'm stuck at a lower resolution (1024x768).

    I tried the "Detect Displays" button, but it didn't work; nothing happens. I recently upgraded from Ubuntu 12.04 to 12.10. Things were working before. I don't know if I can actually change this configuration, or if this is a bug. I searched for an answer here and also on Launchpad's website, but found none. I even tried to install Nvidia drivers, and just messed things up. It seems I wasn't even using nvidia before, as I guessed by looking at my additional drivers configuration.

    My laptop has an Intel chipset, I guess:

    $ dpkg --get-selections | grep -i -e nvidia -e intel
    intel-gpu-tools                 install
    libdrm-intel1:amd64             install
    libdrm-intel1:i386              install
    nvidia-common                   install
    xserver-xorg-video-intel        install

    I don't have an xorg.conf file (I think this is nvidia related, am I right?):

    $ cat /etc/X11/xorg.conf
    cat: /etc/X11/xorg.conf: No such file or directory

    $ ls -l /etc/X11/
    total 76
    drwxr-xr-x 2 root root  4096 Out 19 23:41 app-defaults
    drwxr-xr-x 2 root root  4096 Abr 25  2012 cursors
    -rw-r--r-- 1 root root    18 Abr 25  2012 default-display-manager
    drwxr-xr-x 4 root root  4096 Abr 25  2012 fonts
    -rw-r--r-- 1 root root 17394 Dez  3  2009 rgb.txt
    lrwxrwxrwx 1 root root    13 Mai  1 03:33 X -> /usr/bin/Xorg
    drwxr-xr-x 3 root root  4096 Out 19 23:41 xinit
    drwxr-xr-x 2 root root  4096 Jan 23  2012 xkb
    -rw-r--r-- 1 root root     0 Out 24 08:55 xorg.conf.nvidia-xconfig-original
    -rwxr-xr-x 1 root root   709 Abr  1  2010 Xreset
    drwxr-xr-x 2 root root  4096 Out 19 10:08 Xreset.d
    drwxr-xr-x 2 root root  4096 Out 19 10:08 Xresources
    -rwxr-xr-x 1 root root  3730 Jan 20  2012 Xsession
    drwxr-xr-x 2 root root  4096 Out 20 00:11 Xsession.d
    -rw-r--r-- 1 root root   265 Jul  1  2008 Xsession.options
    -rw-r--r-- 1 root root    13 Ago 15 06:43 XvMCConfig
    -rw-r--r-- 1 root root   601 Abr 25  2012 Xwrapper.config

    Here is some information I gathered by looking at other related posts:

    $ sudo lshw -C display; lsb_release -a; uname -a
    *-display:0
         description: VGA compatible controller
         product: Mobile 4 Series Chipset Integrated Graphics Controller
         vendor: Intel Corporation
         physical id: 2
         bus info: pci@0000:00:02.0
         version: 07
         width: 64 bits
         clock: 33MHz
         capabilities: msi pm vga_controller bus_master cap_list rom
         configuration: driver=i915 latency=0
         resources: irq:48 memory:d3000000-d33fffff memory:c0000000-cfffffff ioport:6000(size=64)
    *-display:1 UNCLAIMED
         description: Display controller
         product: Mobile 4 Series Chipset Integrated Graphics Controller
         vendor: Intel Corporation
         physical id: 2.1
         bus info: pci@0000:00:02.1
         version: 07
         width: 64 bits
         clock: 33MHz
         capabilities: pm bus_master cap_list
         configuration: latency=0
         resources: memory:f6100000-f61fffff

    LSB Version: core-2.0-amd64:core-2.0-noarch:core-3.0-amd64:core-3.0-noarch:core-3.1-amd64:core-3.1-noarch:core-3.2-amd64:core-3.2-noarch:core-4.0-amd64:core-4.0-noarch:cxx-3.0-amd64:cxx-3.0-noarch:cxx-3.1-amd64:cxx-3.1-noarch:cxx-3.2-amd64:cxx-3.2-noarch:cxx-4.0-amd64:cxx-4.0-noarch:desktop-3.1-amd64:desktop-3.1-noarch:desktop-3.2-amd64:desktop-3.2-noarch:desktop-4.0-amd64:desktop-4.0-noarch:graphics-2.0-amd64:graphics-2.0-noarch:graphics-3.0-amd64:graphics-3.0-noarch:graphics-3.1-amd64:graphics-3.1-noarch:graphics-3.2-amd64:graphics-3.2-noarch:graphics-4.0-amd64:graphics-4.0-noarch:printing-3.2-amd64:printing-3.2-noarch:printing-4.0-amd64:printing-4.0-noarch:qt4-3.1-amd64:qt4-3.1-noarch
    Distributor ID: Ubuntu
    Description: Ubuntu 12.10
    Release: 12.10
    Codename: quantal

    Linux Batcave 3.5.0-17-generic #28-Ubuntu SMP Tue Oct 9 19:31:23 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux

    $ xrandr -q
    Screen 0: minimum 320 x 200, current 2304 x 800, maximum 32767 x 32767
    LVDS1 connected 1280x800+0+0 (normal left inverted right x axis y axis) 286mm x 1790mm
       1280x800       59.9*+
       1024x768       60.0
       800x600        60.3     56.2
       640x480        59.9
    VGA1 connected 1024x768+1280+32 (normal left inverted right x axis y axis) 0mm x 0mm
       1024x768       60.0*
       800x600        60.3     56.2
       848x480        60.0
       640x480        59.9
    DP1 disconnected (normal left inverted right x axis y axis)

    If there's anything else I can do, any other information I can post here, to help me configure this external display, please let me know. If this is actually a bug, I apologize (I know bugs are not allowed here), but I really wasn't sure. And I will promptly file a bug report in Launchpad if that's the case. Thanks a lot in advance. ;)

    Read the article

< Previous Page | 114 115 116 117 118 119 120 121 122 123 124 125  | Next Page >