Search Results

Search found 9578 results on 384 pages for 'angularjs controller'.


  • Nvidia GeForce GT 520 on Intel DH61WW, Ubuntu 12.04

    - by j goseeped
    Hi, I hope you can help a bit; I appreciate your time. I have this desktop: i7 2600, 8 GB DDR3 RAM, an Intel DH61WW board, and an Nvidia GeForce GT 520 (GT520-CN) with 2 GB DDR3. I just installed Ubuntu 12.04 64-bit, kernel 3.2.0-23-generic, and I want to set up two Samsung 22" LED monitors and get the video card going. 1) I downloaded and installed Nvidia driver 295.59 (and also tried 302.17): apt-get update and upgrade, apt-get install build-essential linux-headers-$(uname -r), apt-get remove --purge nvidia*, apt-get remove --purge xserver-xorg-video-nouveau, then added to /etc/modprobe.d/blacklist.conf: blacklist vga16fb, blacklist nouveau, blacklist rivafb, blacklist nvidiafb, blacklist rivatv; then sh NVIDIA.run, sudo service lightdm start, reboot, nvidia-xconfig. 2) After reboot I get 800x600 and nvidia-settings says: "You do not appear to be using the NVIDIA X driver. Please edit your X configuration file (just run nvidia-xconfig as root), and restart the X server." 3) I changed xorg.conf a little to set up a resolution that works properly. 4) Now I get no image on the monitor and I have no options in Nvidia X Server Settings. lspci | grep VGA: 00:02.0 VGA compatible controller: Intel Corporation 2nd Generation Core Processor Family Integrated Graphics Controller (rev 09) 01:00.0 VGA compatible controller: NVIDIA Corporation GF119 [GeForce GT 520] (rev a1) egrep -i 'glx|nvidia' /var/log/Xorg.0.log: [ 12.005] (II) LoadModule: "glx" [ 12.005] (II) Loading /usr/lib/xorg/modules/extensions/libglx.so [ 12.575] (II) Module glx: vendor="NVIDIA Corporation" [ 12.585] (II) NVIDIA GLX Module 302.17 Tue Jun 12 16:22:45 PDT 2012 [ 12.585] (II) Loading extension GLX [ 13.037] (EE) Failed to initialize GLX extension (Compatible NVIDIA X driver not found) [ 13.044] (II) config/udev: Adding input device HDA NVidia HDMI/DP,pcm=3 (/dev/input/event10) [ 13.044] (II) config/udev: Adding input device HDA NVidia HDMI/DP,pcm=7 (/dev/input/event9) glxinfo | grep direct: Xlib: extension "GLX" missing on display ":0.0". Error: couldn't find RGB GLX visual or fbconfig. Sorry, my English is not very good. Thanks!

    Read the article

  • Wireless keeps disabling or stays disconnected (Realtek RTL8191SEvB)

    - by jindrichm
    I have Realtek RTL8191SEvB wireless card on Ubuntu 10.10: $ lspci -v | grep Network 03:00.0 Network controller: Realtek Semiconductor Co., Ltd. RTL8191SEvB Wireless LAN Controller (rev 10) When I load its driver, according to the Network Manager it sometimes blinks with a list of available networks but it keeps disabling itself or it stays disconnected. So, I can't connect to any wi-fi network (which results in frustration). The driver is loaded: $ lsmod Module Size Used by r8192se_pci 509932 0 Looks normal: $ sudo lshw -C network *-network description: Wireless interface product: RTL8191SEvB Wireless LAN Controller vendor: Realtek Semiconductor Co., Ltd. physical id: 0 bus info: pci@0000:03:00.0 logical name: wlan0 version: 10 serial: 1c:65:9d:60:c7:7a width: 32 bits clock: 33MHz capabilities: pm msi pciexpress bus_master cap_list ethernet physical wireless configuration: broadcast=yes driver=rtl819xSE driverversion=0019.1207.2010 firmware=63 latency=0 link=no multicast=yes wireless=802.11bgn resources: irq:17 ioport:2000(size=256) memory:f0500000-f0503fff Configured: $ sudo iwconfig wlan0 wlan0 802.11bgn Nickname:"rtl8191SEVA2" Mode:Managed Frequency=2.412 GHz Access Point: Not-Associated Bit Rate:130 Mb/s Retry:on RTS thr:off Fragment thr:off Encryption key:off Power Management:off Link Quality=10/100 Signal level=0 dBm Noise level=-100 dBm Rx invalid nwid:0 Rx invalid crypt:0 Rx invalid frag:0 Tx excessive retries:0 Invalid misc:0 Missed beacon:0 Is not blocked: $ rfkill list all 0: tpacpi_bluetooth_sw: Bluetooth Soft blocked: no Hard blocked: yes However something's happening with it: $ dmesg [ 6485.948668] InitializeAdapter8190(): ==++==> Turn off RF for RfOffReason(1073741824) ---------- [ 6486.062666] rtl8192_SetWirelessMode(), wireless_mode:10, bEnableHT = 1 [ 6486.062671] InitializeAdapter8192SE(): Set MRC settings on as default!! [ 6486.062675] HW_VAR_MRC: Turn on 1T1R MRC! [ 6486.064091] ADDRCONF(NETDEV_UP): wlan0: link is not ready [ 6486.248761] rtl8192_SetWirelessMode(), wireless_mode:10, bEnableHT = 1 [ 6486.248771] InitializeAdapter8192SE(): Set MRC settings on as default!! [ 6486.248776] HW_VAR_MRC: Turn on 1T1R MRC! [ 6486.580083] GPIOChangeRF - HW Radio OFF [ 6486.610085] ============>sync_scan_hurryup out [ 6486.623814] ================>r8192_wx_set_scan(): hwradio off [ 6486.830484] =========>r8192_wx_set_essid():hw radio off,or Rf state is eRfOff, return So, does anyone know where the problem might be?

    Read the article

  • What patterns book for iOS development contains this specific information? [closed]

    - by Brett Ryan
    I've read several books on iOS development and Objective-C; however, a lot of them teach how to work with interfaces and keep the model inside the view controller, i.e. a UITableViewController-based view will simply have an NSArray as its model. I'm interested in the best practices for designing the structure of an application. Specifically, I'm interested in best practices for the following: How to separate a model from the view controller. I think I know how to do this by simply replacing the NSArray-style example with a specific model object; however, what I do not know how to do is alert the view when the model changes. For example, in .NET I would solve this by conforming to INotifyPropertyChanged and databinding, and similarly in Java I would use PropertyChangeListener. How to create a service model for my domain objects. For example, I want to learn the best way to create a service for a hypothetical Widget object to manage an internal DB, and also services for communicating with remote endpoints. I need to learn the best ways to do this so that interface components can subscribe to events such as widgetUpdated. These services should be singleton classes and somehow dependency-injected into model/controller objects. Books I've read so far: Programming in Objective-C (4th Edition); Beginning iOS 5 Development: Exploring the iOS SDK; The iOS 5 Developer's Cookbook: Expanded Electronic Edition: Essentials and Advanced Recipes for iOS Programmers; Learn Objective-C on the Mac: For OS X and iOS. I've also purchased, but not yet read, The Core iOS 6 Developer's Cookbook (4th Edition) and Programming in Objective-C (5th Edition). I come from a Java and C# background with 15 years of experience, and I understand that many of the ways I would do things in those languages may not fit the Objective-C way of developing applications. Could someone point me to a book that covers this specific subject matter?

    Read the article

  • Do you write unit tests all the time in TDD?

    - by mcaaltuntas
    I have been designing and developing code TDD-style for a long time. What disturbs me about TDD is writing tests for code that does not contain any business logic or interesting behaviour. I know TDD is a design activity more than a testing activity, but sometimes I feel it's useless to write tests in these scenarios. For example, I have a simple scenario like "When the user clicks the check button, it should check the file's validity". For this scenario I usually start by writing tests for the presenter/controller class, like the one below. @Test public void when_user_clicks_check_it_should_check_selected_file_validity() { MediaService service = mock(MediaService.class); View view = mock(View.class); when(view.getSelectedFile()).thenReturn("c:\\Dir\\file.avi"); MediaController controller = new MediaController(service, view); controller.check(); verify(service).check("c:\\Dir\\file.avi"); } As you can see, there is no design decision or interesting behaviour to verify; I am just testing that the value from the view is passed to MediaService. I usually write these kinds of tests, but I don't like them. What do you do in these situations? Do you write tests every time? UPDATE: I have changed the test name and code after complaints. Some users said that you should write tests for trivial cases like this so that in the future someone might add interesting behaviour. But what about "Code for today, design for tomorrow"? If someone, including myself, adds more interesting code in the future, a test can be created for it then. Why should I do it now for the trivial cases?

    Read the article

  • In MVC, why can't a model create a view?

    - by MUY Belgium
    I have a web application written in Perl with a controller, some "views" and some "models". Each "model" corresponds to one "view". The controller (one file) creates a model object corresponding to each view (the view is a CGI argument) and then retrieves the view from the module it has just created. I gather this is a bad thing, but can you explain why in more detail? My first thought was that since the "model" object depends on the "view", the "model" is actually a view. Also, the fact that ALL the CGI parameters are passed to the model means the "model" is not truly a view either, but loses much of its value, since it is tied to the current implementation of the web app. In other words, the "model" remains a model but loses its comprehensibility (it is not easily understandable). I'm quite new to project analysis, so please don't be too harsh. Why is this bad? I have made a prototype, as short as possible, with the main structures I have understood from this web application. #Model.pm package Model; import { # this requires an attribute called "view" # and this requires an argument which is the cgi params } ... #View1.pm package View1; ... #Model1.pm package ModelView1; use base 'Model'; use View1; sub new { my $class = shift; my $arg = shift; Model::DoSomething($arg); $self->view = new View1($arg); ... } #controller.cgi my $model = 0; ... $model = new Model1( cgi_param => params() ); # there are several models here ... print $model->get_view()->get_html();

    Read the article

  • Kubuntu not showing eth0

    - by Laurbert515
    I just installed Kubuntu 12.04.2 and do not have an internet connection. I also cannot access the desktop and only have the terminal. When I enter xclock I get "Error: Can't open display:". I believe this is because I do not have the correct drivers and need to download them ... using the internet, which I can't access. So here's what's going on. ifconfig gives: lo Link encap: Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:16 errors:0 dropped:0 overruns:0 frame:0 TX packets:16 errors:0 dropped:0 overruns:0 frame:0 collisions:0 txqueuelen:0 RX bytes:1296 (1.2 KB) TX bytes:1296 (1.2 KB) wlan0 Link encap: Ethernet HWaddr 68:17:29:58:49:4a UP BROADCAST MULTICAST MTU:1500 Metric:1 RX packets:16 errors:0 dropped:0 overruns:0 frame:0 TX packets:16 errors:0 dropped:0 overruns:0 frame:0 collisions:0 txqueuelen:1000 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B) lspci -nnk gives: 07:00.0 Network controller [0280]: Intel Corporation Centrino Wireless-N 2230 [8086:0887] (rev c4) Subsystem: Intel Corporation Centrino Wireless-N 2230 BGN [8086:4062] Kernel driver in use: iwlwifi Kernel modules: iwlwifi 0d:00.0 Ethernet controller [0200]: Atheros Communications Inc. AR8161 Gigabit Ethernet [1969:1091] (rev 10) Subsystem: Toshiba America Info Systems Device [1179:fa77] I believe this means that it is using the Ethernet connection but thinks it is a wireless connection. sudo lshw -class network gives: *-network description: Wireless interface product: Centrino Wireless-N 2230 ... *-network UNCLAIMED description: Ethernet controller product: AR8161 Gigabit Ethernet I want to get the internet working so I can fix the GPU driver, etc., but I can't seem to get it working even though my Ethernet cable is plugged in (and I'm sure it is working).

    Read the article

  • One of my VMs went boom using Virtual Box and how it got fixed

    - by Enrique Lima
    I am running an HP Envy 15 with 16 GB of RAM and a 500 GB (7200 RPM) hard drive. I had a VM configured from another environment and created the virtual machine config file in VirtualBox; everything seemed OK. Fired it up, and it was s l o w: it took close to 10 minutes to load, and about 5 more before I could see Windows was in the process of loading, before the BSOD. I thought, maybe, just maybe, it would not happen again … oh, was I wrong. Frustration had already hit an all-time high with this configuration and the number of issues I've had. How I did the troubleshooting: the best thing to do (IMO) is to step back and gather your tools to debug the situation. Tools: the VirtualBox command-line tools and the Windows debugging tools. VirtualBox comes with a pretty good set of tools to examine, migrate, and otherwise manage VMs. The first step: use VBoxManage to prevent the VM from rebooting after the error, to get enough time to really dig into the BSOD issue. Command used: VBoxManage setextradata VMNAME "VBoxInternal/PDM/HaltOnReset" 1 Once this was done, the error reported was an "Inaccessible boot device" coming from a "Stop 7B" type of error on the BSOD. The issue: my VM was configured to use a virtual SATA controller, and I thought Windows Server 2008 R2 would handle that fine … wrong again! The integration tools from the other product were trying to take effect, and that was throwing everything off. The fix was almost handed to me: I edited the VM's configuration, removed the SATA controller, added the virtual hard drive under an IDE controller, booted up and voilà … it works! I was then able to install the VirtualBox guest additions and such, but I have decided to favor "keep on working" over "let's try SATA again".

    Read the article

  • Wireless does not work on 12.10

    - by superkoop
    My primary issue is that my wireless does not work after I installed 12.10. The output to rfkill list all: 5: hci0: Bluetooth Soft blocked: no Hard blocked: no The output to lshw -class network is: *-network description: Ethernet interface product: 88E8040 PCI-E Fast Ethernet Controller vendor: Marvell Technology Group Ltd. physical id: 0 bus info: pci@0000:09:00.0 logical name: eth0 version: 12 serial: 00:21:9b:d6:46:51 size: 100Mbit/s capacity: 100Mbit/s width: 64 bits clock: 33MHz capabilities: pm msi pciexpress bus_master cap_list ethernet physical tp 10bt 10bt-fd 100bt 100bt-fd autonegotiation configuration: autonegotiation=on broadcast=yes driver=sky2 driverversion=1.30 duplex=full ip=192.168.1.102 latency=0 link=yes multicast=yes port=twisted pair speed=100Mbit/s resources: irq:44 memory:fe8fc000-fe8fffff ioport:de00(size=256) *-network description: Network controller product: BCM4312 802.11b/g LP-PHY vendor: Broadcom Corporation physical id: 0 bus info: pci@0000:0b:00.0 version: 01 width: 64 bits clock: 33MHz capabilities: pm msi pciexpress bus_master cap_list configuration: driver=b43-pci-bridge latency=0 resources: irq:17 memory:fe7fc000-fe7fffff The output to lspci -nn for the pertinent information is: 0b:00.0 Network controller [0280]: Broadcom Corporation BCM4312 802.11b/g LP-PHY [14e4:4315] (rev 01) Thus, it seems the solution would be to run: sudo apt-get install linux-headers-generic sudo apt-get install --reinstall bcmwl-kernel-source sudo modprobe wl However, I do not currently have access to an ethernet connection, as I am currently only able to use verizon wireless 3g internet. Thus, is there a way to set up ICS with a Vista machine so that I can access the internet by using the Vista machine as the host? Or, is it possible to fix this by downloading the important packages in vista and moving them to ubuntu via USB drive?

    Read the article

  • ASP.Net MVC 2 DropDownListFor in EditorTemplate

    - by tschreck
    I have a view model that looks like this: namespace AutoForm.Models { public class ProductViewModel { [UIHint("DropDownList")] public String Category { get; set; } [ScaffoldColumn(false)] public IEnumerable<SelectListItem> CategoryList { get; set; } ... } } It has Category and CategoryList properties. CategoryList is the source data for the Category dropdown UI element. I have an EditorTemplate that looks like this: <%@ Control Language="C#" Inherits="System.Web.Mvc.ViewUserControl<ProductViewModel>" %> <%@ Import Namespace="AutoForm.Models"%> <%=Html.DropDownListFor(m => m.Category, Model.CategoryList) %> NOTE: this EditorTemplate is strongly typed to ProductViewModel. My controller populates the CategoryList property with data from a database. I cannot get the DropDownListFor template to render a drop-down list with data from CategoryList. I know CategoryList is getting populated in the controller because I see the data when I debug and step through the controller. Here's the error message in the browser: Server Error in '/' Application. Object reference not set to an instance of an object. Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code. Exception Details: System.NullReferenceException: Object reference not set to an instance of an object. Source Error: Line 2: <%@ Import Namespace="AutoForm.Models"%> Line 3: Line 4: <%=Html.DropDownListFor(m => m.Category, Model.CategoryList) %> Source File: c:\ProjectStore\AutoForm\AutoForm\Views\Shared\EditorTemplates\DropDownList.ascx Line: 4 Any ideas? Thanks, Tom. As a follow-up, I noticed that ViewData.Model is null when I'm stepping through the code in the EditorTemplate. I have the EditorTemplate strongly typed to "ProductViewModel", which is also the type that's passed to the view in the controller. I'm perplexed as to why ViewData.Model is null even though it's getting populated in the controller before getting passed to the view.
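
    One possible cause, offered as a hedged guess since the calling view isn't shown: when the property is rendered with Html.EditorFor(m => m.Category), the editor template's model is the Category string itself rather than the parent ProductViewModel, which would leave ViewData.Model null and make Model.CategoryList throw. A common MVC 2 workaround is to type the template to the property and pass the list through additionalViewData, roughly like the sketch below (the exact wiring is an assumption, not taken from the question):

        <%-- Views/Shared/EditorTemplates/DropDownList.ascx, typed to the property rather than the parent model --%>
        <%@ Control Language="C#" Inherits="System.Web.Mvc.ViewUserControl<string>" %>
        <%@ Import Namespace="System.Collections.Generic" %>
        <%@ Import Namespace="System.Web.Mvc" %>
        <%-- The empty name binds the select element to the current property (Category);
             the list arrives through ViewData thanks to the additionalViewData argument below. --%>
        <%= Html.DropDownList("", (IEnumerable<SelectListItem>)ViewData["CategoryList"]) %>

        <%-- In the view that renders ProductViewModel: --%>
        <%= Html.EditorFor(m => m.Category, new { CategoryList = Model.CategoryList }) %>

    If the template really does receive the full ProductViewModel as its model this doesn't apply, but the null ViewData.Model noted in the follow-up points in this direction.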

    Read the article

  • Announcing the Release of Visual Studio 2013 and Great Improvements to ASP.NET and Entity Framework

    - by ScottGu
    Today we released VS 2013 and .NET 4.5.1. These releases include a ton of great improvements, and include some fantastic enhancements to ASP.NET and the Entity Framework.  You can download and start using them now. Below are details on a few of the great ASP.NET, Web Development, and Entity Framework improvements you can take advantage of with this release.  Please visit http://www.asp.net/vnext for additional release notes, documentation, and tutorials. One ASP.NET With the release of Visual Studio 2013, we have taken a step towards unifying the experience of using the different ASP.NET sub-frameworks (Web Forms, MVC, Web API, SignalR, etc), and you can now easily mix and match the different ASP.NET technologies you want to use within a single application. When you do a File-New Project with VS 2013 you’ll now see a single ASP.NET Project option: Selecting this project will bring up an additional dialog that allows you to start with a base project template, and then optionally add/remove the technologies you want to use in it.  For example, you could start with a Web Forms template and add Web API or Web Forms support for it, or create a MVC project and also enable Web Forms pages within it: This makes it easy for you to use any ASP.NET technology you want within your apps, and take advantage of any feature across the entire ASP.NET technology span. Richer Authentication Support The new “One ASP.NET” project dialog also includes a new Change Authentication button that, when pushed, enables you to easily change the authentication approach used by your applications – and makes it much easier to build secure applications that enable SSO from a variety of identity providers.  For example, when you start with the ASP.NET Web Forms or MVC templates you can easily add any of the following authentication options to the application: No Authentication Individual User Accounts (Single Sign-On support with FaceBook, Twitter, Google, and Microsoft ID – or Forms Auth with ASP.NET Membership) Organizational Accounts (Single Sign-On support with Windows Azure Active Directory ) Windows Authentication (Active Directory in an intranet application) The Windows Azure Active Directory support is particularly cool.  Last month we updated Windows Azure Active Directory so that developers can now easily create any number of Directories using it (for free and deployed within seconds).  It now takes only a few moments to enable single-sign-on support within your ASP.NET applications against these Windows Azure Active Directories.  Simply choose the “Organizational Accounts” radio button within the Change Authentication dialog and enter the name of your Windows Azure Active Directory to do this: This will automatically configure your ASP.NET application to use Windows Azure Active Directory and register the application with it.  Now when you run the app your users can easily and securely sign-in using their Active Directory credentials within it – regardless of where the application is hosted on the Internet. For more information about the new process for creating web projects, see Creating ASP.NET Web Projects in Visual Studio 2013. Responsive Project Templates with Bootstrap The new default project templates for ASP.NET Web Forms, MVC, Web API and SPA are built using Bootstrap. Bootstrap is an open source CSS framework that helps you build responsive websites which look great on different form factors such as mobile phones, tables and desktops. 
For example in a browser window the home page created by the MVC template looks like the following: When you resize the browser to a narrow window to see how it would like on a phone, you can notice how the contents gracefully wrap around and the horizontal top menu turns into an icon: When you click the menu-icon above it expands into a vertical menu – which enables a good navigation experience for small screen real-estate devices: We think Bootstrap will enable developers to build web applications that work even better on phones, tablets and other mobile devices – and enable you to easily build applications that can leverage the rich ecosystem of Bootstrap CSS templates already out there.  You can learn more about Bootstrap here. Visual Studio Web Tooling Improvements Visual Studio 2013 includes a new, much richer, HTML editor for Razor files and HTML files in web applications. The new HTML editor provides a single unified schema based on HTML5. It has automatic brace completion, jQuery UI and AngularJS attribute IntelliSense, attribute IntelliSense Grouping, and other great improvements. For example, typing “ng-“ on an HTML element will show the intellisense for AngularJS: This support for AngularJS, Knockout.js, Handlebars and other SPA technologies in this release of ASP.NET and VS 2013 makes it even easier to build rich client web applications: The screen shot below demonstrates how the HTML editor can also now inspect your page at design-time to determine all of the CSS classes that are available. In this case, the auto-completion list contains classes from Bootstrap’s CSS file. No more guessing at which Bootstrap element names you need to use: Visual Studio 2013 also comes with built-in support for both CoffeeScript and LESS editing support. The LESS editor comes with all the cool features from the CSS editor and has specific Intellisense for variables and mixins across all the LESS documents in the @import chain. Browser Link – SignalR channel between browser and Visual Studio The new Browser Link feature in VS 2013 lets you run your app within multiple browsers on your dev machine, connect them to Visual Studio, and simultaneously refresh all of them just by clicking a button in the toolbar. You can connect multiple browsers (including IE, FireFox, Chrome) to your development site, including mobile emulators, and click refresh to refresh all the browsers all at the same time.  This makes it much easier to easily develop/test against multiple browsers in parallel. Browser Link also exposes an API to enable developers to write Browser Link extensions.  By enabling developers to take advantage of the Browser Link API, it becomes possible to create very advanced scenarios that crosses boundaries between Visual Studio and any browser that’s connected to it. Web Essentials takes advantage of the API to create an integrated experience between Visual Studio and the browser’s developer tools, remote controlling mobile emulators and a lot more. You will see us take advantage of this support even more to enable really cool scenarios going forward. ASP.NET Scaffolding ASP.NET Scaffolding is a new code generation framework for ASP.NET Web applications. It makes it easy to add boilerplate code to your project that interacts with a data model. In previous versions of Visual Studio, scaffolding was limited to ASP.NET MVC projects. With Visual Studio 2013, you can now use scaffolding for any ASP.NET project, including Web Forms. 
When using scaffolding, we ensure that all required dependencies are automatically installed for you in the project. For example, if you start with an ASP.NET Web Forms project and then use scaffolding to add a Web API Controller, the required NuGet packages and references to enable Web API are added to your project automatically.  To do this, just choose the Add->New Scaffold Item context menu: Support for scaffolding async controllers uses the new async features from Entity Framework 6. ASP.NET Identity ASP.NET Identity is a new membership system for ASP.NET applications that we are introducing with this release. ASP.NET Identity makes it easy to integrate user-specific profile data with application data. ASP.NET Identity also allows you to choose the persistence model for user profiles in your application. You can store the data in a SQL Server database or another data store, including NoSQL data stores such as Windows Azure Storage Tables. ASP.NET Identity also supports Claims-based authentication, where the user’s identity is represented as a set of claims from a trusted issuer. Users can login by creating an account on the website using username and password, or they can login using social identity providers (such as Microsoft Account, Twitter, Facebook, Google) or using organizational accounts through Windows Azure Active Directory or Active Directory Federation Services (ADFS). To learn more about how to use ASP.NET Identity visit http://www.asp.net/identity.  ASP.NET Web API 2 ASP.NET Web API 2 has a bunch of great improvements including: Attribute routing ASP.NET Web API now supports attribute routing, thanks to a contribution by Tim McCall, the author of http://attributerouting.net. With attribute routing you can specify your Web API routes by annotating your actions and controllers like this: OAuth 2.0 support The Web API and Single Page Application project templates now support authorization using OAuth 2.0. OAuth 2.0 is a framework for authorizing client access to protected resources. It works for a variety of clients including browsers and mobile devices. OData Improvements ASP.NET Web API also now provides support for OData endpoints and enables support for both ATOM and JSON-light formats. With OData you get support for rich query semantics, paging, $metadata, CRUD operations, and custom actions over any data source. Below are some of the specific enhancements in ASP.NET Web API 2 OData. Support for $select, $expand, $batch, and $value Improved extensibility Type-less support Reuse an existing model OWIN Integration ASP.NET Web API now fully supports OWIN and can be run on any OWIN capable host. With OWIN integration, you can self-host Web API in your own process alongside other OWIN middleware, such as SignalR. For more information, see Use OWIN to Self-Host ASP.NET Web API. More Web API Improvements In addition to the features above there have been a host of other features in ASP.NET Web API, including CORS support Authentication Filters Filter Overrides Improved Unit Testability Portable ASP.NET Web API Client To learn more go to http://www.asp.net/web-api/ ASP.NET SignalR 2 ASP.NET SignalR is library for ASP.NET developers that dramatically simplifies the process of adding real-time web functionality to your applications. Real-time web functionality is the ability to have server-side code push content to connected clients instantly as it becomes available. SignalR 2.0 introduces a ton of great improvements. 
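
The attribute-routing snippet in the original post was a screenshot and is not included in this excerpt. As a rough sketch of the Web API 2 attribute routing described above (the controller name and route templates are invented for the example, not taken from the post):

    using System.Web.Http;

    // Illustrative only: a controller whose routes are declared with attributes.
    [RoutePrefix("api/books")]
    public class BooksController : ApiController
    {
        // GET api/books/5
        [Route("{id:int}")]
        public IHttpActionResult GetBook(int id)
        {
            return Ok("book " + id);
        }

        // GET api/books/5/authors
        [Route("{bookId:int}/authors")]
        public IHttpActionResult GetAuthors(int bookId)
        {
            return Ok(new[] { "author one", "author two" });
        }
    }

    // Attribute routes are switched on once at startup:
    public static class WebApiConfig
    {
        public static void Register(HttpConfiguration config)
        {
            config.MapHttpAttributeRoutes();
        }
    }
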
We’ve added support for Cross-Origin Resource Sharing (CORS) to SignalR 2.0. iOS and Android support for SignalR have also been added using the MonoTouch and MonoDroid components from the Xamarin library (for more information on how to use these additions, see the article Using Xamarin Components from the SignalR wiki). We’ve also added support for the Portable .NET Client in SignalR 2.0 and created a new self-hosting package. This change makes the setup process for SignalR much more consistent between web-hosted and self-hosted SignalR applications. To learn more go to http://www.asp.net/signalr. ASP.NET MVC 5 The ASP.NET MVC project templates integrate seamlessly with the new One ASP.NET experience and enable you to integrate all of the above ASP.NET Web API, SignalR and Identity improvements. You can also customize your MVC project and configure authentication using the One ASP.NET project creation wizard. The MVC templates have also been updated to use ASP.NET Identity and Bootstrap as well. An introductory tutorial to ASP.NET MVC 5 can be found at Getting Started with ASP.NET MVC 5. This release of ASP.NET MVC also supports several nice new MVC-specific features including: Authentication filters: These filters allow you to specify authentication logic per-action, per-controller or globally for all controllers. Attribute Routing: Attribute Routing allows you to define your routes on actions or controllers. To learn more go to http://www.asp.net/mvc Entity Framework 6 Improvements Visual Studio 2013 ships with Entity Framework 6, which bring a lot of great new features to the data access space: Async and Task<T> Support EF6’s new Async Query and Save support enables you to perform asynchronous data access and take advantage of the Task<T> support introduced in .NET 4.5 within data access scenarios.  This allows you to free up threads that might otherwise by blocked on data access requests, and enable them to be used to process other requests whilst you wait for the database engine to process operations. When the database server responds the thread will be re-queued within your ASP.NET application and execution will continue.  This enables you to easily write significantly more scalable server code. Here is an example ASP.NET WebAPI action that makes use of the new EF6 async query methods: Interception and Logging Interception and SQL logging allows you to view – or even change – every command that is sent to the database by Entity Framework. This includes a simple, human readable log – which is great for debugging – as well as some lower level building blocks that give you access to the command and results. Here is an example of wiring up the simple log to Debug in the constructor of an MVC controller: Custom Code-First Conventions The new Custom Code-First Conventions enable bulk configuration of a Code First model – reducing the amount of code you need to write and maintain. Conventions are great when your domain classes don’t match the Code First conventions. For example, the following convention configures all properties that are called ‘Key’ to be the primary key of the entity they belong to. This is different than the default Code First convention that expects Id or <type name>Id. Connection Resiliency The new Connection Resiliency feature in EF6 enables you to register an execution strategy to handle – and potentially retry – failed database operations. 
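
The EF6 async query and logging examples in the original post were also screenshots. A minimal sketch of the idea (the entity, DbContext, and controller below are hypothetical, and the Database.Log line shows the interception/logging hook described above):

    using System.Collections.Generic;
    using System.Data.Entity;          // EF6: async extensions such as ToListAsync
    using System.Linq;
    using System.Threading.Tasks;
    using System.Web.Http;

    public class Book
    {
        public int Id { get; set; }
        public string Title { get; set; }
        public bool InPrint { get; set; }
    }

    public class CatalogContext : DbContext
    {
        public DbSet<Book> Books { get; set; }
    }

    public class CatalogController : ApiController
    {
        // GET api/catalog: the request thread is released while the database works.
        public async Task<IEnumerable<Book>> GetBooks()
        {
            using (var db = new CatalogContext())
            {
                // Send every SQL command EF generates to the debug output.
                db.Database.Log = sql => System.Diagnostics.Debug.WriteLine(sql);

                return await db.Books.Where(b => b.InPrint).ToListAsync();
            }
        }
    }
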
This is especially useful when deploying to cloud environments where dropped connections become more common as you traverse load balancers and distributed networks. EF6 includes a built-in execution strategy for SQL Azure that knows about retryable exception types and has some sensible – but overridable – defaults for the number of retries and time between retries when errors occur. Registering it is simple using the new Code-Based Configuration support: These are just some of the new features in EF6. You can visit the release notes section of the Entity Framework site for a complete list of new features. Microsoft OWIN Components Open Web Interface for .NET (OWIN) defines an open abstraction between .NET web servers and web applications, and the ASP.NET “Katana” project brings this abstraction to ASP.NET. OWIN decouples the web application from the server, making web applications host-agnostic. For example, you can host an OWIN-based web application in IIS or self-host it in a custom process. For more information about OWIN and Katana, see What's new in OWIN and Katana. Summary Today’s Visual Studio 2013, ASP.NET and Entity Framework release delivers some fantastic new features that streamline your web development lifecycle. These feature span from server framework to data access to tooling to client-side HTML development.  They also integrate some great open-source technology and contributions from our developer community. Download and start using them today! Scott P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • NSFetchedResultsController fetch request - updating predicate and UITableView

    - by Macatomy
    In my iPhone Core Data app I have it configured in a master-detail view setup. The master view is a UITableView that lists objects of the List entity. The List entity has a to-many relationship with the Task entity (called "tasks"), and the Task entity has an inverse to-one relationship with List called "list". When a List object is selected in the master view, I want the detail view (another UITableView) to list the Task objects that correspond to that List object. What I've done so far is this: In the detail view controller I've declared a property for a List object: @property (nonatomic, retain) List *list; Then in the master view controller I use this table view delegate method to set the list property of the detail view controller when a list is selected: - (void)tableView:(UITableView *)aTableView didSelectRowAtIndexPath:(NSIndexPath *)indexPath { NSManagedObject *selectedObject = [[self fetchedResultsController] objectAtIndexPath:indexPath]; detailViewController.list = (List*)selectedObject; } Then, I've overriden the setter for the list property in the detail view controller like this: - (void)setList:(List*)newList { if (list != newList) { [list release]; list = [newList retain]; NSPredicate *newPredicate = [NSPredicate predicateWithFormat:@"(list == %@)", list]; [NSFetchedResultsController deleteCacheWithName:@"Root"]; [[[self fetchedResultsController] fetchRequest] setPredicate:newPredicate]; NSError *error = nil; if (![[self fetchedResultsController] performFetch:&error]) { NSLog(@"Unresolved error %@, %@", error, [error userInfo]); abort(); } } } What I'm doing here is setting a predicate on the fetched results to filter out the objects so that I only get the ones that belong to the selected List object. The fetchedResultsController getter for the detail view controller looks like this: - (NSFetchedResultsController *)fetchedResultsController { if (fetchedResultsController == nil) { NSFetchRequest *fetchRequest = [[NSFetchRequest alloc] init]; NSEntityDescription *entity = [NSEntityDescription entityForName:@"Task" inManagedObjectContext:managedObjectContext]; [fetchRequest setEntity:entity]; NSPredicate *predicate = [NSPredicate predicateWithFormat:@"FALSEPREDICATE"]; [fetchRequest setPredicate:predicate]; NSSortDescriptor *sortDescriptor = [[NSSortDescriptor alloc] initWithKey:@"name" ascending:YES]; NSArray *sortDescriptors = [[NSArray alloc] initWithObjects:sortDescriptor, nil]; [fetchRequest setSortDescriptors:sortDescriptors]; NSFetchedResultsController *aFetchedResultsController = [[NSFetchedResultsController alloc] initWithFetchRequest:fetchRequest managedObjectContext:managedObjectContext sectionNameKeyPath:nil cacheName:@"Root"]; aFetchedResultsController.delegate = self; self.fetchedResultsController = aFetchedResultsController; [aFetchedResultsController release]; [fetchRequest release]; [sortDescriptor release]; [sortDescriptors release]; } return fetchedResultsController; } Its almost unchanged from the default in the Core Data project template, the change I made is to add a predicate that always returns false, the reason being that when there is no List selected I don't want any items to be displayed in the detail view (if a list is selected the predicate is changed in the setter for the list property). However, when I select a list item, nothing really happens. Nothing in the table view changes, it stays empty. I'm sure my logic is flawed in several places, advice is appreciated Thanks

    Read the article

  • Using Durandal to Create Single Page Apps

    - by Stephen.Walther
    A few days ago, I gave a talk on building Single Page Apps on the Microsoft Stack. In that talk, I recommended that people use Knockout, Sammy, and RequireJS to build their presentation layer and use the ASP.NET Web API to expose data from their server. After I gave the talk, several people contacted me and suggested that I investigate a new open-source JavaScript library named Durandal. Durandal stitches together Knockout, Sammy, and RequireJS to make it easier to use these technologies together. In this blog entry, I want to provide a brief walkthrough of using Durandal to create a simple Single Page App. I am going to demonstrate how you can create a simple Movies App which contains (virtual) pages for viewing a list of movies, adding new movies, and viewing movie details. The goal of this blog entry is to give you a sense of what it is like to build apps with Durandal. Installing Durandal First things first. How do you get Durandal? The GitHub project for Durandal is located here: https://github.com/BlueSpire/Durandal The Wiki — located at the GitHub project — contains all of the current documentation for Durandal. Currently, the documentation is a little sparse, but it is enough to get you started. Instead of downloading the Durandal source from GitHub, a better option for getting started with Durandal is to install one of the Durandal NuGet packages. I built the Movies App described in this blog entry by first creating a new ASP.NET MVC 4 Web Application with the Basic Template. Next, I executed the following command from the Package Manager Console: Install-Package Durandal.StarterKit As you can see from the screenshot of the Package Manager Console above, the Durandal Starter Kit package has several dependencies including: · jQuery · Knockout · Sammy · Twitter Bootstrap The Durandal Starter Kit package includes a sample Durandal application. You can get to the Starter Kit app by navigating to the Durandal controller. Unfortunately, when I first tried to run the Starter Kit app, I got an error because the Starter Kit is hard-coded to use a particular version of jQuery which is already out of date. You can fix this issue by modifying the App_Start\DurandalBundleConfig.cs file so it is jQuery version agnostic like this: bundles.Add( new ScriptBundle("~/scripts/vendor") .Include("~/Scripts/jquery-{version}.js") .Include("~/Scripts/knockout-{version}.js") .Include("~/Scripts/sammy-{version}.js") // .Include("~/Scripts/jquery-1.9.0.min.js") // .Include("~/Scripts/knockout-2.2.1.js") // .Include("~/Scripts/sammy-0.7.4.min.js") .Include("~/Scripts/bootstrap.min.js") ); The recommendation is that you create a Durandal app in a folder off your project root named App. The App folder in the Starter Kit contains the following subfolders and files: · durandal – This folder contains the actual durandal JavaScript library. · viewmodels – This folder contains all of your application’s view models. · views – This folder contains all of your application’s views. · main.js — This file contains all of the JavaScript startup code for your app including the client-side routing configuration. · main-built.js – This file contains an optimized version of your application. You need to build this file by using the RequireJS optimizer (unfortunately, before you can run the optimizer, you must first install NodeJS). 
For the purpose of this blog entry, I wanted to start from scratch when building the Movies app, so I deleted all of these files and folders except for the durandal folder which contains the durandal library. Creating the ASP.NET MVC Controller and View A Durandal app is built using a single server-side ASP.NET MVC controller and ASP.NET MVC view. A Durandal app is a Single Page App. When you navigate between pages, you are not navigating to new pages on the server. Instead, you are loading new virtual pages into the one-and-only-one server-side view. For the Movies app, I created the following ASP.NET MVC Home controller: public class HomeController : Controller { public ActionResult Index() { return View(); } } There is nothing special about the Home controller – it is as basic as it gets. Next, I created the following server-side ASP.NET view. This is the one-and-only server-side view used by the Movies app: @{ Layout = null; } <!DOCTYPE html> <html> <head> <title>Index</title> </head> <body> <div id="applicationHost"> Loading app.... </div> @Scripts.Render("~/scripts/vendor") <script type="text/javascript" src="~/App/durandal/amd/require.js" data-main="/App/main"></script> </body> </html> Notice that I set the Layout property for the view to the value null. If you neglect to do this, then the default ASP.NET MVC layout will be applied to the view and you will get the <!DOCTYPE> and opening and closing <html> tags twice. Next, notice that the view contains a DIV element with the Id applicationHost. This marks the area where virtual pages are loaded. When you navigate from page to page in a Durandal app, HTML page fragments are retrieved from the server and stuck in the applicationHost DIV element. Inside the applicationHost element, you can place any content which you want to display when a Durandal app is starting up. For example, you can create a fancy splash screen. I opted for simply displaying the text “Loading app…”: Next, notice the view above includes a call to the Scripts.Render() helper. This helper renders out all of the JavaScript files required by the Durandal library such as jQuery and Knockout. Remember to fix the App_Start\DurandalBundleConfig.cs as described above or Durandal will attempt to load an old version of jQuery and throw a JavaScript exception and stop working. Your application JavaScript code is not included in the scripts rendered by the Scripts.Render helper. Your application code is loaded dynamically by RequireJS with the help of the following SCRIPT element located at the bottom of the view: <script type="text/javascript" src="~/App/durandal/amd/require.js" data-main="/App/main"></script> The data-main attribute on the SCRIPT element causes RequireJS to load your /app/main.js JavaScript file to kick-off your Durandal app. Creating the Durandal Main.js File The Durandal Main.js JavaScript file, located in your App folder, contains all of the code required to configure the behavior of Durandal. Here’s what the Main.js file looks like in the case of the Movies app: require.config({ paths: { 'text': 'durandal/amd/text' } }); define(function (require) { var app = require('durandal/app'), viewLocator = require('durandal/viewLocator'), system = require('durandal/system'), router = require('durandal/plugins/router'); //>>excludeStart("build", true); system.debug(true); //>>excludeEnd("build"); app.start().then(function () { //Replace 'viewmodels' in the moduleId with 'views' to locate the view. //Look for partial views in a 'views' folder in the root. 
viewLocator.useConvention(); //configure routing router.useConvention(); router.mapNav("movies/show"); router.mapNav("movies/add"); router.mapNav("movies/details/:id"); app.adaptToDevice(); //Show the app by setting the root view model for our application with a transition. app.setRoot('viewmodels/shell', 'entrance'); }); }); There are three important things to notice about the main.js file above. First, notice that it contains a section which enables debugging which looks like this: //>>excludeStart(“build”, true); system.debug(true); //>>excludeEnd(“build”); This code enables debugging for your Durandal app which is very useful when things go wrong. When you call system.debug(true), Durandal writes out debugging information to your browser JavaScript console. For example, you can use the debugging information to diagnose issues with your client-side routes: (The funny looking //> symbols around the system.debug() call are RequireJS optimizer pragmas). The main.js file is also the place where you configure your client-side routes. In the case of the Movies app, the main.js file is used to configure routes for three page: the movies show, add, and details pages. //configure routing router.useConvention(); router.mapNav("movies/show"); router.mapNav("movies/add"); router.mapNav("movies/details/:id");   The route for movie details includes a route parameter named id. Later, we will use the id parameter to lookup and display the details for the right movie. Finally, the main.js file above contains the following line of code: //Show the app by setting the root view model for our application with a transition. app.setRoot('viewmodels/shell', 'entrance'); This line of code causes Durandal to load up a JavaScript file named shell.js and an HTML fragment named shell.html. I’ll discuss the shell in the next section. Creating the Durandal Shell You can think of the Durandal shell as the layout or master page for a Durandal app. The shell is where you put all of the content which you want to remain constant as a user navigates from virtual page to virtual page. For example, the shell is a great place to put your website logo and navigation links. The Durandal shell is composed from two parts: a JavaScript file and an HTML file. Here’s what the HTML file looks like for the Movies app: <h1>Movies App</h1> <div class="container-fluid page-host"> <!--ko compose: { model: router.activeItem, //wiring the router afterCompose: router.afterCompose, //wiring the router transition:'entrance', //use the 'entrance' transition when switching views cacheViews:true //telling composition to keep views in the dom, and reuse them (only a good idea with singleton view models) }--><!--/ko--> </div> And here is what the JavaScript file looks like: define(function (require) { var router = require('durandal/plugins/router'); return { router: router, activate: function () { return router.activate('movies/show'); } }; }); The JavaScript file contains the view model for the shell. This view model returns the Durandal router so you can access the list of configured routes from your shell. Notice that the JavaScript file includes a function named activate(). This function loads the movies/show page as the first page in the Movies app. If you want to create a different default Durandal page, then pass the name of a different age to the router.activate() method. Creating the Movies Show Page Durandal pages are created out of a view model and a view. The view model contains all of the data and view logic required for the view. 
The view contains all of the HTML markup for rendering the view model. Let’s start with the movies show page. The movies show page displays a list of movies. The view model for the show page looks like this: define(function (require) { var moviesRepository = require("repositories/moviesRepository"); return { movies: ko.observable(), activate: function() { this.movies(moviesRepository.listMovies()); } }; }); You create a view model by defining a new RequireJS module (see http://requirejs.org). You create a RequireJS module by placing all of your JavaScript code into an anonymous function passed to the RequireJS define() method. A RequireJS module has two parts. You retrieve all of the modules which your module requires at the top of your module. The code above depends on another RequireJS module named repositories/moviesRepository. Next, you return the implementation of your module. The code above returns a JavaScript object which contains a property named movies and a method named activate. The activate() method is a magic method which Durandal calls whenever it activates your view model. Your view model is activated whenever you navigate to a page which uses it. In the code above, the activate() method is used to get the list of movies from the movies repository and assign the list to the view model movies property. The HTML for the movies show page looks like this: <table> <thead> <tr> <th>Title</th><th>Director</th> </tr> </thead> <tbody data-bind="foreach:movies"> <tr> <td data-bind="text:title"></td> <td data-bind="text:director"></td> <td><a data-bind="attr:{href:'#/movies/details/'+id}">Details</a></td> </tr> </tbody> </table> <a href="#/movies/add">Add Movie</a> Notice that this is an HTML fragment. This fragment will be stuffed into the page-host DIV element in the shell.html file which is stuffed, in turn, into the applicationHost DIV element in the server-side MVC view. The HTML markup above contains data-bind attributes used by Knockout to display the list of movies (To learn more about Knockout, visit http://knockoutjs.com). The list of movies from the view model is displayed in an HTML table. Notice that the page includes a link to a page for adding a new movie. The link uses the following URL which starts with a hash: #/movies/add. Because the link starts with a hash, clicking the link does not cause a request back to the server. Instead, you navigate to the movies/add page virtually. Creating the Movies Add Page The movies add page also consists of a view model and view. The add page enables you to add a new movie to the movie database. Here’s the view model for the add page: define(function (require) { var app = require('durandal/app'); var router = require('durandal/plugins/router'); var moviesRepository = require("repositories/moviesRepository"); return { movieToAdd: { title: ko.observable(), director: ko.observable() }, activate: function () { this.movieToAdd.title(""); this.movieToAdd.director(""); this._movieAdded = false; }, canDeactivate: function () { if (this._movieAdded == false) { return app.showMessage('Are you sure you want to leave this page?', 'Navigate', ['Yes', 'No']); } else { return true; } }, addMovie: function () { // Add movie to db moviesRepository.addMovie(ko.toJS(this.movieToAdd)); // flag new movie this._movieAdded = true; // return to list of movies router.navigateTo("#/movies/show"); } }; }); The view model contains one property named movieToAdd which is bound to the add movie form. The view model also has the following three methods: 1. 
activate() – This method is called by Durandal when you navigate to the add movie page. The activate() method resets the add movie form by clearing out the movie title and director properties. 2. canDeactivate() – This method is called by Durandal when you attempt to navigate away from the add movie page. If you return false then navigation is cancelled. 3. addMovie() – This method executes when the add movie form is submitted. This code adds the new movie to the movie repository. I really like the Durandal canDeactivate() method. In the code above, I use the canDeactivate() method to show a warning to a user if they navigate away from the add movie page – either by clicking the Cancel button or by hitting the browser back button – before submitting the add movie form: The view for the add movie page looks like this: <form data-bind="submit:addMovie"> <fieldset> <legend>Add Movie</legend> <div> <label> Title: <input data-bind="value:movieToAdd.title" required /> </label> </div> <div> <label> Director: <input data-bind="value:movieToAdd.director" required /> </label> </div> <div> <input type="submit" value="Add" /> <a href="#/movies/show">Cancel</a> </div> </fieldset> </form> I am using Knockout to bind the movieToAdd property from the view model to the INPUT elements of the HTML form. Notice that the FORM element includes a data-bind attribute which invokes the addMovie() method from the view model when the HTML form is submitted. Creating the Movies Details Page You navigate to the movies details Page by clicking the Details link which appears next to each movie in the movies show page: The Details links pass the movie ids to the details page: #/movies/details/0 #/movies/details/1 #/movies/details/2 Here’s what the view model for the movies details page looks like: define(function (require) { var router = require('durandal/plugins/router'); var moviesRepository = require("repositories/moviesRepository"); return { movieToShow: { title: ko.observable(), director: ko.observable() }, activate: function (context) { // Grab movie from repository var movie = moviesRepository.getMovie(context.id); // Add to view model this.movieToShow.title(movie.title); this.movieToShow.director(movie.director); } }; }); Notice that the view model activate() method accepts a parameter named context. You can take advantage of the context parameter to retrieve route parameters such as the movie Id. In the code above, the context.id property is used to retrieve the correct movie from the movie repository and the movie is assigned to a property named movieToShow exposed by the view model. The movie details view displays the movieToShow property by taking advantage of Knockout bindings: <div> <h2 data-bind="text:movieToShow.title"></h2> directed by <span data-bind="text:movieToShow.director"></span> </div> Summary The goal of this blog entry was to walkthrough building a simple Single Page App using Durandal and to get a feel for what it is like to use this library. I really like how Durandal stitches together Knockout, Sammy, and RequireJS and establishes patterns for using these libraries to build Single Page Apps. Having a standard pattern which developers on a team can use to build new pages is super valuable. Once you get the hang of it, using Durandal to create new virtual pages is dead simple. Just define a new route, view model, and view and you are done. 
I also appreciate the fact that Durandal did not attempt to re-invent the wheel and that Durandal leverages existing JavaScript libraries such as Knockout, RequireJS, and Sammy. These existing libraries are powerful libraries and I have already invested a considerable amount of time in learning how to use them. Durandal makes it easier to use these libraries together without losing any of their power. Durandal has some additional interesting features which I have not had a chance to play with yet. For example, you can use the RequireJS optimizer to combine and minify all of a Durandal app’s code. Also, Durandal supports a way to create custom widgets (client-side controls) by composing widgets from a controller and view. You can download the code for the Movies app by clicking the following link (this is a Visual Studio 2012 project): Durandal Movie App

    Read the article

  • IIS7 MVC deploy - 404 not found on actions that accept "id" parameter.

    - by majkinetor
    Hello. Once deployed, parts of my web application stop working. The Index action on each controller works, one form posting via Ajax works, and Login works too. Anything other than that yields a 404. I understand that nothing special should be needed in integrated mode. I don't know how to proceed with troubleshooting. Some info: the app uses the default app pool set to integrated mode. The web app is built on .NET Framework 3.5. I use the default routing model. Alongside the web.config in the root there is a web.config in the /Views folder referencing HttpNotFoundHandler. The OS is Windows Server 2008 and the server runs IIS 7; the admins ran aspnet_regiis.exe -i. Any help is appreciated. Thanks. EDIT: I determined that only actions that accept an id parameter don't work. By contrast, when I add a dummy id method to the Home controller of a default MVC app, it works. My Views/Web.config: <?xml version="1.0"?> <configuration> <system.web> <httpHandlers> <add path="*" verb="*" type="System.Web.HttpNotFoundHandler"/> </httpHandlers> <!-- Enabling request validation in view pages would cause validation to occur after the input has already been processed by the controller. By default MVC performs request validation before a controller processes the input. To change this behavior apply the ValidateInputAttribute to a controller or action. --> <pages validateRequest="false" pageParserFilterType="System.Web.Mvc.ViewTypeParserFilter, System.Web.Mvc, Version=2.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" pageBaseType="System.Web.Mvc.ViewPage, System.Web.Mvc, Version=2.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" userControlBaseType="System.Web.Mvc.ViewUserControl, System.Web.Mvc, Version=2.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"> <controls> <add assembly="System.Web.Mvc, Version=2.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" namespace="System.Web.Mvc" tagPrefix="mvc" /> </controls> </pages> </system.web> <system.webServer> <validation validateIntegratedModeConfiguration="false"/> <handlers> <remove name="BlockViewHandler"/> <add name="BlockViewHandler" path="*" verb="*" preCondition="integratedMode" type="System.Web.HttpNotFoundHandler"/> </handlers> </system.webServer> </configuration>
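
    Since only the actions that take an id return 404, one thing worth ruling out (a guess, as the Global.asax isn't shown) is the route registration itself: if the {id} segment is missing or has no default, URLs like /Orders/Details/5 never match a route and IIS answers with 404. For reference, the stock ASP.NET MVC 2 registration looks like the following; it is not the asker's actual code:

        using System.Web.Mvc;
        using System.Web.Routing;

        public class MvcApplication : System.Web.HttpApplication
        {
            public static void RegisterRoutes(RouteCollection routes)
            {
                routes.IgnoreRoute("{resource}.axd/{*pathInfo}");

                // The {id} segment must be present and optional for URLs such as
                // /Orders/Details/5 to reach an action; without a matching route the
                // request falls through and IIS returns 404.
                routes.MapRoute(
                    "Default",
                    "{controller}/{action}/{id}",
                    new { controller = "Home", action = "Index", id = UrlParameter.Optional }
                );
            }

            protected void Application_Start()
            {
                RegisterRoutes(RouteTable.Routes);
            }
        }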

    Read the article

  • ASP.Net MVC Exception Logging combined with Error Handling

    - by Saajid Ismail
Hi. I am looking for a simple solution to do Exception Logging combined with Error Handling in my ASP.Net MVC 1.0 application. I've read lots of articles, including Questions posted here on StackOverflow, which all provide varying solutions for different situations. I am still unable to come up with a solution that suits my needs. Here are my requirements:

1. To be able to use the [HandleError] attribute (or something equivalent) on my Controller, to handle all exceptions that could be thrown from any of the Actions or Views. This should handle all exceptions that were not handled specifically on any of the Actions (as described in point 2). I would like to be able to specify which View a user must be redirected to in error cases, for all actions in the Controller.

2. I want to be able to specify the [HandleError] attribute (or something equivalent) at the top of specific Actions to catch specific exceptions and redirect users to a View appropriate to the exception. All other exceptions must still be handled by the [HandleError] attribute on the Controller.

3. In both cases above, I want the exceptions to be logged using log4net (or any other logging library).

How do I go about achieving the above? I've read about making all my Controllers inherit from a base controller which overrides the OnException method, and wherein I do my logging. However this will mess around with redirecting users to the appropriate Views, or make it messy. I've read about writing my own Filter Action which implements IExceptionFilter to handle this, but this will conflict with the [HandleError] attribute. So far, my thoughts are that the best solution is to write my own attribute that inherits from HandleErrorAttribute. That way I get all the functionality of [HandleError], and can add my own log4net logging. The solution is as follows:

    public class HandleErrorsAttribute : HandleErrorAttribute
    {
        private log4net.ILog log = log4net.LogManager.GetLogger(System.Reflection.MethodBase.GetCurrentMethod().DeclaringType);

        public override void OnException(ExceptionContext filterContext)
        {
            if (filterContext.Exception != null)
            {
                log.Error("Error in Controller", filterContext.Exception);
            }
            base.OnException(filterContext);
        }
    }

Will the above code work for my requirements? If not, what solution does fulfill my requirements?
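Assuming the custom attribute above compiles, applying it at both levels would look something like the sketch below; the controller, action, view names and exception type are placeholders. Because HandleErrorsAttribute inherits from HandleErrorAttribute, the ExceptionType, View and Order properties shown here come along for free.

    using System;
    using System.Web.Mvc;

    // Controller-level fallback: logs and shows a general error view
    [HandleErrors(View = "GeneralError", Order = 1)]
    public class OrdersController : Controller
    {
        // Action-level override for one specific exception type
        [HandleErrors(ExceptionType = typeof(TimeoutException), View = "TimeoutError", Order = 2)]
        public ActionResult Checkout()
        {
            return View();
        }
    }

If memory serves, the base HandleErrorAttribute only swaps in the error View when custom errors are enabled in web.config; the log.Error call in the override above runs either way.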

    Read the article

  • Compile classfile issue in Spring 3

    - by Prajith R
I have used Spring Framework 3 for my application. Everything is OK while developing in NetBeans, but I need a custom build and have created one for the same. The build was created without any issue, but I got the following error. The error occurred while calling the following method:

    @RequestMapping(value = "/security/login", method = RequestMethod.POST)
    public ModelAndView login(@RequestParam String userName,
                              @RequestParam String password,
                              HttpServletRequest request) {
        ......................

There is no issue while creating the WAR with NetBeans (I am sure it is a compilation issue). Do you have any experience with this issue? Is there any additional javac argument to compile the same? (NetBeans uses its own custom task for the compilation.)

type: Exception report

message:

description: The server encountered an internal error () that prevented it from fulfilling this request.

exception:

org.springframework.web.util.NestedServletException: Request processing failed; nested exception is org.springframework.web.bind.annotation.support.HandlerMethodInvocationException: Failed to invoke handler method [public org.springframework.web.servlet.ModelAndView com.mypackage.security.controller.LoginController.login(java.lang.String,java.lang.String,javax.servlet.http.HttpServletRequest)]; nested exception is java.lang.IllegalStateException: No parameter name specified for argument of type [java.lang.String], and no parameter name information found in class file either.
    org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:659)
    org.springframework.web.servlet.FrameworkServlet.doPost(FrameworkServlet.java:563)
    javax.servlet.http.HttpServlet.service(HttpServlet.java:637)
    javax.servlet.http.HttpServlet.service(HttpServlet.java:717)
    com.mypackage.security.controller.AuthFilter.doFilter(Unknown Source)
    org.springframework.web.filter.DelegatingFilterProxy.invokeDelegate(DelegatingFilterProxy.java:237)
    org.springframework.web.filter.DelegatingFilterProxy.doFilter(DelegatingFilterProxy.java:167)

root cause:

org.springframework.web.bind.annotation.support.HandlerMethodInvocationException: Failed to invoke handler method [public org.springframework.web.servlet.ModelAndView com.mypackage.security.controller.LoginController.login(java.lang.String,java.lang.String,javax.servlet.http.HttpServletRequest)]; nested exception is java.lang.IllegalStateException: No parameter name specified for argument of type [java.lang.String], and no parameter name information found in class file either.
    org.springframework.web.bind.annotation.support.HandlerMethodInvoker.invokeHandlerMethod(HandlerMethodInvoker.java:171)
    org.springframework.web.servlet.mvc.annotation.AnnotationMethodHandlerAdapter.invokeHandlerMethod(AnnotationMethodHandlerAdapter.java:414)
    org.springframework.web.servlet.mvc.annotation.AnnotationMethodHandlerAdapter.handle(AnnotationMethodHandlerAdapter.java:402)
    org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:771)
    org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:716)
    org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:647)
    org.springframework.web.servlet.FrameworkServlet.doPost(FrameworkServlet.java:563)
    javax.servlet.http.HttpServlet.service(HttpServlet.java:637)
    javax.servlet.http.HttpServlet.service(HttpServlet.java:717)
    com.mypackage.security.controller.AuthFilter.doFilter(Unknown Source)
    org.springframework.web.filter.DelegatingFilterProxy.invokeDelegate(DelegatingFilterProxy.java:237)
    org.springframework.web.filter.DelegatingFilterProxy.doFilter(DelegatingFilterProxy.java:167)

root cause:

java.lang.IllegalStateException: No parameter name specified for argument of type [java.lang.String], and no parameter name information found in class file either.
    org.springframework.web.bind.annotation.support.HandlerMethodInvoker.getRequiredParameterName(HandlerMethodInvoker.java:618)
    org.springframework.web.bind.annotation.support.HandlerMethodInvoker.resolveRequestParam(HandlerMethodInvoker.java:417)
    org.springframework.web.bind.annotation.support.HandlerMethodInvoker.resolveHandlerArguments(HandlerMethodInvoker.java:277)
    org.springframework.web.bind.annotation.support.HandlerMethodInvoker.invokeHandlerMethod(HandlerMethodInvoker.java:163)
    org.springframework.web.servlet.mvc.annotation.AnnotationMethodHandlerAdapter.invokeHandlerMethod(AnnotationMethodHandlerAdapter.java:414)
    org.springframework.web.servlet.mvc.annotation.AnnotationMethodHandlerAdapter.handle(AnnotationMethodHandlerAdapter.java:402)
    org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:771)
    org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:716)
    org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:647)
    org.springframework.web.servlet.FrameworkServlet.doPost(FrameworkServlet.java:563)
    javax.servlet.http.HttpServlet.service(HttpServlet.java:637)
    javax.servlet.http.HttpServlet.service(HttpServlet.java:717)
    com.mypackage.security.controller.AuthFilter.doFilter(Unknown Source)
    org.springframework.web.filter.DelegatingFilterProxy.invokeDelegate(DelegatingFilterProxy.java:237)
    org.springframework.web.filter.DelegatingFilterProxy.doFilter(DelegatingFilterProxy.java:167)

note: The full stack trace of the root cause is available in the Apache Tomcat/6.0.18 logs.
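The "no parameter name information found in class file" message usually means the custom build compiled without debug symbols, so Spring cannot discover the argument names by reflection. Two common workarounds, sketched here under that assumption: pass -g to javac (debug="true" on the Ant <javac> task), or name the parameters explicitly in the annotations so the class-file metadata is not needed at all. The view name in the return statement is a placeholder.

    import javax.servlet.http.HttpServletRequest;

    import org.springframework.stereotype.Controller;
    import org.springframework.web.bind.annotation.RequestMapping;
    import org.springframework.web.bind.annotation.RequestMethod;
    import org.springframework.web.bind.annotation.RequestParam;
    import org.springframework.web.servlet.ModelAndView;

    @Controller
    public class LoginController {

        @RequestMapping(value = "/security/login", method = RequestMethod.POST)
        public ModelAndView login(@RequestParam("userName") String userName,   // explicit names: no debug info required
                                  @RequestParam("password") String password,
                                  HttpServletRequest request) {
            // ... authenticate and build the ModelAndView as before ...
            return new ModelAndView("redirect:/home");
        }
    }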

    Read the article

  • How do I create a Spring 3 + Tiles 2 webapp using REST-ful URLs?

    - by Ichiro Furusato
    I'm having a heck of a time resolving URLs with Spring 3.0 MVC. I'm just building a HelloWorld to try out how to build a RESTful webapp in Spring, nothing theoretically complicated. All of the examples I've been able to find are based on configurations that pay attention to file extensions ("*.htm" or "*.do"), include an artificial directory name prefix ("/foo") or even prefix paths with a dot (ugly), all approaches that use some artificial regex pattern as a signal to the resolver. For a REST approach I want to avoid all that muck and use only the natural URL patterns of my application. I would assume (perhaps incorrectly) that in web.xml I'd set a url-pattern of "/*" and pass everything to the DispatcherServlet for resolution, then just rely on URL patterns in my controller. I can't reliably get my resolver(s) to catch the URL patterns, and in all my trials this results in a resource not found error, a stack overflow (loop), or some kind of opaque Spring 3 ServletException stack trace — one of my ongoing frustrations with Spring generally is that the error messages are not often very helpful. I want to work with a Tiles 2 resolver. I've located my *.jsp files in WEB-INF/views/ and have a single line index.jsp file at the application root redirecting to the index file set by my layout.xml (the Tiles 2 Configurer). I do all the normal Spring 3 high-level configuration: <mvc:annotation-driven /> <mvc:view-controller path="/" view-name="index"/> <context:component-scan base-package="com.acme.web.controller" /> ...followed by all sorts of combinations and configurations of UrlBasedViewResolver, InternalResourceViewResolver, UrlFilenameViewController, etc. with all manner of variantions in my Tiles 2 configuration file. Then in my controller I've trying to pick up my URL patterns. Problem is, I can't reliably even get the resolver(s) to catch the patterns to send to my controller. This has now stretched to multiple days with no real progress on something I thought would be very simple to implement. I'm perhaps trying to do too much at once, though I would think this should be a simple (almost a default) configuration. I'm just trying to create a simple HelloWorld-type application, I wouldn't expect this is rocket science. Rather than me post my own configurations (which have ranged all over the map), does anyone know of an online example that: shows a simple Spring 3 MVC + Tiles 2 web application that uses REST-ful URLs (i.e., avoiding forced URL patterns such as file extensions, added directory names or dots) and relies solely on Spring 3 code/annotations (i.e., nothing outside of Spring MVC itself) to accomplish this? Is there an easy way to do this? Thanks very much for any help.
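Not a complete answer, but a sketch of the extension-less style in question, assuming the DispatcherServlet is mapped to "/" in web.xml and <mvc:default-servlet-handler/> is added so static resources are still served (if I remember right, that element needs Spring 3.0.4 or later). The controller and the Tiles definition names below are made up.

    package com.acme.web.controller;

    import org.springframework.stereotype.Controller;
    import org.springframework.web.bind.annotation.PathVariable;
    import org.springframework.web.bind.annotation.RequestMapping;

    @Controller
    public class MovieController {

        // Resolves http://host/app/movies with no extension
        @RequestMapping("/movies")
        public String list() {
            return "movies.list";        // Tiles 2 definition name
        }

        // Resolves http://host/app/movies/42
        @RequestMapping("/movies/{id}")
        public String show(@PathVariable("id") long id) {
            return "movies.show";
        }
    }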

    Read the article

  • Runtime.exec causes duplicate JVM to hang indefinitely until killed (Solaris 10)

    - by John
    All, We are running a J2EE application on WebLogic server 9.2 MP2 with a jrockit 64-bit JVM (27.3.1) on Solaris 10. We call use runtime.exec to call an executable called jfmerge to create PDF documents. We have found that in Solaris, when runtime.exec is called, a duplicate JVM is temporarily spawned to kick off the jfmerge process. While this is inefficient (our JVM is 5 GB, thus the duplicated shell JVM is also 5 GB), the major problem lies in the fact that when there is heavy load on this functionality (PDF generation) in our application, sometimes the duplicated JVM never exits. When the JVM hangs, the servers create large issues (extreme application slowness and terminated user sessions) as the entire duplicate JVM get's all of its 5 GB of process size written to disk swap. We have noted the following hung thread correlated with a hung JVM process until the process is manually killed: "[STUCK] ExecuteThread: '17' for queue: 'weblogic.kernel.Default (self-tuning)'" id=3463 idx=0x158 tid=3460 prio=1 alive, in native, daemon at jrockit/io/FileNativeIO.readBytesPinned(Ljava/io/FileDescriptor;[BII)I(Native Method) at jrockit/io/FileNativeIO.readBytes(FileNativeIO.java:30) at java/io/FileInputStream.readBytes([BII)I(FileInputStream.java) at java/io/FileInputStream.read(FileInputStream.java:194) at java/lang/UNIXProcess$DeferredCloseInputStream.read(UNIXProcess.java:227) at java/io/BufferedInputStream.fill(BufferedInputStream.java:218) at java/io/BufferedInputStream.read(BufferedInputStream.java:235) ^-- Holding lock: java/io/BufferedInputStream@0xfffffffec6510470[thin lock] at gov/v3/common/formgeneration/sessionbean/FormsBean.getProcessStatus(FormsBean.java:809) at gov/v3/common/formgeneration/sessionbean/FormsBean.createPDF(FormsBean.java:750) at gov/v3/common/formgeneration/sessionbean/FormsBean.getTemplateDetails(FormsBean.java:450) at gov/v3/common/formgeneration/sessionbean/FormsBean.generateSinglePDF(FormsBean.java:1371) at gov/v3/common/formgeneration/sessionbean/FormsBean.generatePDF(FormsBean.java:263) at gov/v3/common/formgeneration/sessionbean/FormsBean.endorseDocument(FormsBean.java:2377) at gov/v3/common/formgeneration/sessionbean/Forms_qaco28_EOImpl.endorseDocument(Forms_qaco28_EOImpl.java:214) at gov/v3/delegates/common/FormsAndNoticesDelegate.endorseDocument(FormsAndNoticesDelegate.java:128) at gov/v3/actions/common/EndorseDocumentAction.executeRequest(EndorseDocumentAction.java:68) at gov/v3/fwk/controller/struts/action/V3CommonDispatchAction.dispatchToExecuteMethod(V3CommonDispatchAction.java:532) at gov/v3/fwk/controller/struts/action/V3CommonDispatchAction.executeBaseAction(V3CommonDispatchAction.java:336) at gov/v3/fwk/controller/struts/action/V3BaseDispatchAction.execute(V3BaseDispatchAction.java:69) at org/apache/struts/action/RequestProcessor.processActionPerform(RequestProcessor.java:484) at gov/v3/fwk/controller/struts/requestprocessor/V3TilesRequestProcessor.processActionPerform(V3TilesRequestProcessor.java:384) at org/apache/struts/action/RequestProcessor.process(RequestProcessor.java:274) at org/apache/struts/action/ActionServlet.process(ActionServlet.java:1482) at org/apache/struts/action/ActionServlet.doGet(ActionServlet.java:507) at gov/v3/fwk/controller/struts/servlet/V3ControllerServlet.doGet(V3ControllerServlet.java:110) at javax/servlet/http/HttpServlet.service(HttpServlet.java:743) at javax/servlet/http/HttpServlet.service(HttpServlet.java:856) at weblogic/servlet/internal/StubSecurityHelper$ServletServiceAction.run(StubSecurityHelper.java:227) at 
weblogic/servlet/internal/StubSecurityHelper.invokeServlet(StubSecurityHelper.java:125) at weblogic/servlet/internal/ServletStubImpl.execute(ServletStubImpl.java:283) at weblogic/servlet/internal/ServletStubImpl.execute(ServletStubImpl.java:175) at weblogic/servlet/internal/WebAppServletContext$ServletInvocationAction.run(WebAppServletContext.java:3231) at weblogic/security/acl/internal/AuthenticatedSubject.doAs(AuthenticatedSubject.java:321) at weblogic/security/service/SecurityManager.runAs(SecurityManager.java:121) at weblogic/servlet/internal/WebAppServletContext.securedExecute(WebAppServletContext.java:2002) at weblogic/servlet/internal/WebAppServletContext.execute(WebAppServletContext.java:1908) at weblogic/servlet/internal/ServletRequestImpl.run(ServletRequestImpl.java:1362) at weblogic/work/ExecuteThread.execute(ExecuteThread.java:209) at weblogic/work/ExecuteThread.run(ExecuteThread.java:181) at jrockit/vm/RNI.c2java(JJJJJ)V(Native Method) -- end of trace We would like to do a couple of things: 1.) Prevent the spawning of a duplicate JVM, as we do not need any of it's functions when executing the simple jfmerge executable, and it creates massive overhead. 2.) In the short term at least prevent this duplicate JVM from handing indefinitely.
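The fork-side of the problem (the temporary 5 GB copy of the JVM's address space) is an OS-level issue; a small spawn-helper process or more swap is the usual answer there, and nothing below changes that. However, the [STUCK] thread in the trace is blocked reading the child's stdout on a WebLogic execute thread, so one common hardening step is to drain the child's output with error and output streams merged and put a watchdog on the child so a misbehaving jfmerge cannot pin the thread forever. A rough sketch; the jfmerge path, arguments, and the five-minute timeout are placeholders.

    import java.io.BufferedReader;
    import java.io.IOException;
    import java.io.InputStreamReader;
    import java.util.Timer;
    import java.util.TimerTask;

    public class JfMergeRunner {

        /** Runs jfmerge, drains its output, and kills it if it runs too long. */
        public static int run(String... command) throws IOException, InterruptedException {
            ProcessBuilder pb = new ProcessBuilder(command);
            pb.redirectErrorStream(true);                 // one stream to drain instead of two
            final Process process = pb.start();

            // Watchdog: destroy the child if it has not exited after 5 minutes
            Timer watchdog = new Timer(true);
            watchdog.schedule(new TimerTask() {
                public void run() { process.destroy(); }
            }, 5 * 60 * 1000L);

            // Drain stdout/stderr so the child can never block on a full pipe
            BufferedReader out = new BufferedReader(new InputStreamReader(process.getInputStream()));
            try {
                while (out.readLine() != null) { /* discard or log each line */ }
            } finally {
                out.close();
            }

            int exitCode = process.waitFor();
            watchdog.cancel();
            return exitCode;
        }

        public static void main(String[] args) throws Exception {
            // Placeholder invocation; the real jfmerge arguments will differ
            int code = run("/opt/jfmerge/jfmerge", "template.mdf", "data.xml");
            System.out.println("jfmerge exited with " + code);
        }
    }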

    Read the article

  • Symfony2 Forms: is it possible to bind a form in an "unconventional way"?

    - by DonCallisto
Imagine this scenario: in our company there is an employee who "plays" around with graphics, CSS, HTML and so on. Our new project will be born under Symfony2, so we're trying some silly – but "real" – stuff (like authentication from the DB, submitting data from a form and persisting it to the DB, and so on).

The problem

As far as I know, having learnt from the Symfony2 "book" that I found on the site (you can find it here), there is an "automated" way of creating and rendering forms:

1) Build the form up in a controller this way:

    $form = $this->createFormBuilder($task)
        ->add('task','text')
        ->add('dueDate','date')
        ->getForm();
    return $this->render('pathToBundle:Controller:templateTwig', array('form'=>$form->createView()));

2) In templateTwig, render the template:

    {{ form_widget(form) }} // or the single-rows method

3) In a controller (the same one that has the route where you submit data), take back the submitted information:

    if($request->getMethod()=='POST'){
        $form->bindRequest($request);
        /* and so on */
    }

Back to the scenario

Our graphics employee doesn't want to touch controllers, write PHP and other stuff like that. So he'll write a Twig template with an "unconventional" (from the Symfony2 point of view, but conventional from the HTML point of view) method:

    /* in the twig template */
    <form action="{{ path('SestanteUserBundle_homepage') }}" method="post" name="userForm">
        <div> USERNAME: <input type="text" name="user_name" value="{{ user.username}}"/> </div>
        <div> EMAIL: <input type="text" name="user_mail" value="{{ user.email }}"/> </div>
        <input type="hidden" name="user_id" value="{{ id }}" />
        <input type="submit" value="modifica i dati">
    </form>

Now, if in the controller that handles the submission of data we do something like this:

    public function indexAction(Request $request)
    {
        if($request->getMethod() == 'POST'){
            // I got here via a submit, so I have to modify the data before showing it on screen
            $defaultData = array('message'=>'I saw this in an example, but I do not understand whether I can do without it');
            $form = $this->createFormBuilder($defaultData)
                ->add('user_name','text')
                ->add('user_mail','email')
                ->add('user_id','integer')
                ->getForm();

            $form->bindRequest($request); // bind the form to the request
            $data = $form->getData();     // I expect a key=>value array
            /* .... */

We expected $data to contain an array with key/value pairs from the submitted form. We found that it isn't true. After googling for a while and trying other "bad" ideas, we're stuck there. So, if you have a "graphics office" that can't handle PHP code directly, how can we interface the form(s) with the controller(s)?

UPDATE: It seems that Symfony2 uses a different convention for form field names and their lookup once the form is submitted. In particular, if my form's name is addUser and a field is named userName, the field's name will be AddUser[username], so maybe it has a "dynamic" lookup method that extracts the form's name and the field's name, concatenates them and looks up the values. Is that possible?
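For what it's worth, when the template is hand-written like this you can skip the Form component entirely and read the raw POST fields straight off the Request; a minimal sketch follows. The field names are simply the ones used in the template above, while the controller namespace, class name, and rendered template name are assumptions, and the actual entity update/persist step is omitted.

    <?php

    namespace Sestante\UserBundle\Controller;

    use Symfony\Bundle\FrameworkBundle\Controller\Controller;
    use Symfony\Component\HttpFoundation\Request;

    class UserController extends Controller
    {
        public function indexAction(Request $request)
        {
            if ($request->getMethod() == 'POST') {
                // Raw POST values, exactly as named in the hand-written Twig form
                $userName = $request->request->get('user_name');
                $userMail = $request->request->get('user_mail');
                $userId   = (int) $request->request->get('user_id');

                // ... load the user by $userId, update it and persist it here ...
            }

            // ... render the template as before (template name assumed) ...
            return $this->render('SestanteUserBundle:User:index.html.twig');
        }
    }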

    Read the article

  • How to set default xrandr settings?

    - by echo-flow
    I'm trying to enable dual monitors in Ubuntu. This is working fine, but every time I do it, desktop effects is disabled. I think I've found the reason why, though: https://wiki.ubuntu.com/X/Config/Multihead/ As with the GNOME XRandR configuration method, setting Virtual to too large a value may result in a loss of hardware acceleration, and thus an inability to use Compiz and its desktop effects. When I use the GNOME monitor applet, or the Monitors configuration in the System menu, the default xrandr settings puts the second monitor to the right of the first, and, as I found with this bug, for most monitors this creates a virtual desktop larger than the maximum 2048 horizontal resolution needed for hardware acceleration on my netbook hardware. So, it seems like if I can modify xrandr's default settings so that it places the new desktop above or below (north or south of) the main LVDS display, then hardware acceleration, and therefore compiz will continue to work. Can anyone tell me, what is the easiest way to achieve this? UPDATE: I have confirmed that multihead support with desktop effects and hardware acceleration works when I move the external monitor display north of the main LVDS display. Right now this involves the following process: plugging in the external monitor, starting the Monitors configuration menu, desktop effects are disabled automatically (and all of the windows on my workspaces are moved to the first workspace), repositioning the external display so that it is north of LVDS display and clicking apply, and then navigating to the Appearance menu and telling it to reenable desktop effects. Is there a simpler way do this? UPDATE 2: OK, so I thought that perhaps the GNOME Monitors configuration screen was trying to be clever, and might be disbling desktop effects. So, I just tried using the xrandr command-line client instead, as follows: xrandr --output VGA1 --above LVDS1 When I do that, desktop effects are still disabled, and I need to manually reenable them. This, despite the fact that hardware acceleration works, and there is never a point where hardware acceleration stops working because the horizontal dimension of the virtual display is too large. So what program is trying to be clever, and is turning off desktop effects when it doesn't need to? And how do I make it stop? If there were a way to re-enable desktop effects from the command line, which I could then put into a script along with the proper xrandr invocation, I would accept that as a workaround. UPDATE 3: OK, here's my script to enable a second monitor with desktop effects. It might be evil, I'm not sure: second-monitor.sh xrandr --output VGA1 --above LVDS1 sleep 3 compiz --replace & The sleep statement might not be necessary. If there's a better way to do this, please let me know. UPDATE 4: This is a Dell Mini Inspiron 1012. 
Here are my system specifications: lspci -vv 00:02.0 VGA compatible controller: Intel Corporation N10 Family Integrated Graphics Controller Subsystem: Dell Device 041a Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+ Status: Cap+ 66MHz- UDF- FastB2B+ ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx- Latency: 0 Interrupt: pin A routed to IRQ 29 Region 0: Memory at f0b00000 (32-bit, non-prefetchable) [size=512K] Region 1: I/O ports at 18d0 [size=8] Region 2: Memory at d0000000 (32-bit, prefetchable) [size=256M] Region 3: Memory at f0900000 (32-bit, non-prefetchable) [size=1M] Capabilities: <access denied> Kernel driver in use: i915 Kernel modules: i915 00:02.1 Display controller: Intel Corporation N10 Family Integrated Graphics Controller Subsystem: Dell Device 041a Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx- Status: Cap+ 66MHz- UDF- FastB2B+ ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx- Latency: 0 Region 0: Memory at f0b80000 (32-bit, non-prefetchable) [size=512K] Capabilities: <access denied> lsmod | grep i915 i915 287458 2 drm_kms_helper 29329 1 i915 drm 162409 3 i915,drm_kms_helper intel_agp 24375 2 i915 i2c_algo_bit 5028 1 i915 video 17375 1 i915

    Read the article

  • Unable to Sign in to the Microsoft Online Services Signin application from Windows 7 client located behind ISA firewall

    - by Ravindra Pamidi
    A while ago i helped a customer troubleshoot authentication problem with Microsoft Online Services Signin application.  This customer was evaluating Microsoft BPOS (Business Productivity Online Services) and was having trouble using the single sign on application behind ISA 2004 firewall.The network structure is fairly simple with single Windows 2003 Active Directory domain and Windows 7 clients. On a successful logon to the Microsoft Online Services Signin application, this application provides single signon functionality to all of Microsoft online services in the BPOS package. Symptoms:When trying to signin it fails with error "The service is currently unavailable. Please try again later. If problems continue, contact your service administrator". If ISA 2004 firewall is removed from the picture the authentication succeeds.Troubleshooting: Enabled ISA Server firewall logging along with Microsoft Network Monitor tool on the Windows 7 Client while reproducing the issue. Analysis of the ISA Server Firewall logs and Microsoft Network capture revealed that the Microsoft Online Services Sign In application when sending request to ISA Server does not send the domain credentials and as a result ISA Server responds with an error code of HTTP 407 Proxy authentication required listing out the supported authentication mechanisms.  The application in question is expected to send the credentials of the domain user in response to this request. However in this case, it fails to send the logged on user's domain credentials. Bit of researching on the Internet revealed that The "Microsoft Online Services Sign In" application by default does not support Outbound Internet Proxy authentication. In order for it to send the logged on user's domain credentials we had to make  changes to its configuration file "SignIn.exe.config" located under "Program Files\Microsoft Online Services\Sign In" folder. Step by Step details to configure the configuration file are documented on Microsoft TechNet website given below.  Configure your outbound authenticating proxy serverhttp://www.microsoft.com/online/help/en-us/helphowto/cc54100d-d149-45a9-8e96-f248ecb1b596.htm After the above problem was addressed we were still not able to use the "Microsoft Online Services Sign In" application and it failed with the same error.  Analysis of another network capture revealed that the application in question is now sending the required credentials and the connection seems to terminate at a later stage. Enabled verbose logging for the "Microsoft Online Services Sign In" application and then reproduced the problem. Analysis of the logs revealed a time difference between the local client and Microsoft Online services server of around seven minutes which is above the acceptable time skew of five minutes. Excerpt from Microsoft Online Services Sign In application verbose log:  1/26/2012 1:57:51 PM Verbose SingleSignOn.GetSSOGenericInterface SSO Interface URL: https://signinservice.apac.microsoftonline.com/ssoservice/UID1/26/2012 1:57:52 PM Exception SSOSignIn.SignIn The security timestamp is invalid because its creation time ('2012-01-26T08:34:52.767Z') is in the future. Current time is '2012-01-26T08:27:52.987Z' and allowed clock skew is '00:05:00'.1/26/2012 1:57:52 PM Exception SSOSignIn.SignIn  Although the Windows 7 Clients successfully synchronized time to the domain controller for the domain, the domain controller was not configured to synchronize time with external NTP servers. 
This caused a gradual drift in time on the network thus resulting in the above issue. Reconfigured the domain controller holding the PDC FSMO role to synchronize time with external time source ( time.nist.gov ) and edited the system policy on the ISA server firewall to allow NTP traffic to time.nist.gov Configure the time source for the forest:Windows Time Servicehttp://technet.microsoft.com/en-us/library/cc794937(WS.10).aspx Forced synchronization of Windows time using the command w32tm /resync on the domain controller and later on the clients each of which had corrected the seven minutes difference. This resolved the problem with logon to Microsoft Online Services Sign In.
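For reference, the reconfiguration described above maps to a handful of commands on the domain controller holding the PDC emulator role, run from an elevated prompt; the time source shown is the one used in this case:

    w32tm /config /manualpeerlist:time.nist.gov /syncfromflags:manual /reliable:yes /update
    net stop w32time && net start w32time
    w32tm /resync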

    Read the article

  • How to make Ubuntu recognize an unknown external display (so I can adjust its resolution)?

    - by WagnerAA
    I have a Dell laptop with an external monitor attached (a Samsumg SyncMaster 931c). My laptop display was recognized, and I can adjust its optimum resolution. My external display is still unknown, thus I'm stuck at a lower resolution (1024x768): I tried the "Detect Displays" button, but it didn't work, nothing happens. I recently upgraded from Ubuntu 12.04 to 12.10. Things were working before. I don't know if I can actually change this configuration, or if this is a bug. I searched for an answer here and also in Launchpad's website, but found none. I even tried to install Nvidia drivers, and just messed things up. It seems I wasn't even using nvidia before, as I guessed by looking at my additional drivers configuration: My laptop has an Intel chipset, I guess: $ dpkg --get-selections | grep -i -e nvidia -e intel intel-gpu-tools install libdrm-intel1:amd64 install libdrm-intel1:i386 install nvidia-common install xserver-xorg-video-intel install I don't have an xorg.conf file (I think this is nvidia related, am I right?): $ cat /etc/X11/xorg.conf cat: /etc/X11/xorg.conf: No such file or directory $ ls -l /etc/X11/ total 76 drwxr-xr-x 2 root root 4096 Out 19 23:41 app-defaults drwxr-xr-x 2 root root 4096 Abr 25 2012 cursors -rw-r--r-- 1 root root 18 Abr 25 2012 default-display-manager drwxr-xr-x 4 root root 4096 Abr 25 2012 fonts -rw-r--r-- 1 root root 17394 Dez 3 2009 rgb.txt lrwxrwxrwx 1 root root 13 Mai 1 03:33 X -> /usr/bin/Xorg drwxr-xr-x 3 root root 4096 Out 19 23:41 xinit drwxr-xr-x 2 root root 4096 Jan 23 2012 xkb -rw-r--r-- 1 root root 0 Out 24 08:55 xorg.conf.nvidia-xconfig-original -rwxr-xr-x 1 root root 709 Abr 1 2010 Xreset drwxr-xr-x 2 root root 4096 Out 19 10:08 Xreset.d drwxr-xr-x 2 root root 4096 Out 19 10:08 Xresources -rwxr-xr-x 1 root root 3730 Jan 20 2012 Xsession drwxr-xr-x 2 root root 4096 Out 20 00:11 Xsession.d -rw-r--r-- 1 root root 265 Jul 1 2008 Xsession.options -rw-r--r-- 1 root root 13 Ago 15 06:43 XvMCConfig -rw-r--r-- 1 root root 601 Abr 25 2012 Xwrapper.config Here is some information I gathered by looking at other related posts: $ sudo lshw -C display; lsb_release -a; uname -a *-display:0 description: VGA compatible controller product: Mobile 4 Series Chipset Integrated Graphics Controller vendor: Intel Corporation physical id: 2 bus info: pci@0000:00:02.0 version: 07 width: 64 bits clock: 33MHz capabilities: msi pm vga_controller bus_master cap_list rom configuration: driver=i915 latency=0 resources: irq:48 memory:f6800000-f6bfffff memory:d0000000-dfffffff ioport:1800(size=8) *-display:1 UNCLAIMED description: Display controller product: Mobile 4 Series Chipset Integrated Graphics Controller vendor: Intel Corporation physical id: 2.1 bus info: pci@0000:00:02.1 version: 07 width: 64 bits clock: 33MHz capabilities: pm bus_master cap_list configuration: latency=0 resources: memory:f6100000-f61fffff LSB Version: 
core-2.0-amd64:core-2.0-noarch:core-3.0-amd64:core-3.0-noarch:core-3.1-amd64:core-3.1-noarch:core-3.2-amd64:core-3.2-noarch:core-4.0-amd64:core-4.0-noarch:cxx-3.0-amd64:cxx-3.0-noarch:cxx-3.1-amd64:cxx-3.1-noarch:cxx-3.2-amd64:cxx-3.2-noarch:cxx-4.0-amd64:cxx-4.0-noarch:desktop-3.1-amd64:desktop-3.1-noarch:desktop-3.2-amd64:desktop-3.2-noarch:desktop-4.0-amd64:desktop-4.0-noarch:graphics-2.0-amd64:graphics-2.0-noarch:graphics-3.0-amd64:graphics-3.0-noarch:graphics-3.1-amd64:graphics-3.1-noarch:graphics-3.2-amd64:graphics-3.2-noarch:graphics-4.0-amd64:graphics-4.0-noarch:printing-3.2-amd64:printing-3.2-noarch:printing-4.0-amd64:printing-4.0-noarch:qt4-3.1-amd64:qt4-3.1-noarch Distributor ID: Ubuntu Description: Ubuntu 12.10 Release: 12.10 Codename: quantal Linux Batcave 3.5.0-17-generic #28-Ubuntu SMP Tue Oct 9 19:31:23 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux $ xrandr -q Screen 0: minimum 320 x 200, current 2304 x 800, maximum 32767 x 32767 LVDS1 connected 1280x800+0+0 (normal left inverted right x axis y axis) 286mm x 1790mm 1280x800 59.9*+ 1024x768 60.0 800x600 60.3 56.2 640x480 59.9 VGA1 connected 1024x768+1280+32 (normal left inverted right x axis y axis) 0mm x 0mm 1024x768 60.0* 800x600 60.3 56.2 848x480 60.0 640x480 59.9 DP1 disconnected (normal left inverted right x axis y axis) If there's anything else I can do, any other information I can post here, to help me configure this external display, please let me know. If this is actually a bug, I apologize (I know bugs are not allowed here), but I really wasn't sure. And I will promptly file a bug report in Launchpad if that's the case. Thanks a lot in advance. ;)
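If the goal is just to get the external monitor to a usable resolution while the detection problem is investigated, a mode can usually be added by hand. A sketch follows, with the VGA1 output name taken from the xrandr output above; the 1280x1024 figure is only an assumption about the SyncMaster's native resolution, so check the panel's spec and let cvt generate the real mode line rather than trusting the numbers shown here.

    cvt 1280 1024 60        # prints a Modeline; copy its numbers into --newmode below
    xrandr --newmode "1280x1024_60.00" 109.00 1280 1368 1496 1712 1024 1027 1034 1063 -hsync +vsync
    xrandr --addmode VGA1 1280x1024_60.00
    xrandr --output VGA1 --mode 1280x1024_60.00 --right-of LVDS1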

    Read the article

  • If the model is validating the data, shouldn't it throw exceptions on bad input?

    - by Carlos Campderrós
    Reading this SO question it seems that throwing exceptions for validating user input is frowned upon. But who should validate this data? In my applications, all validations are done in the business layer, because only the class itself really knows which values are valid for each one of its properties. If I were to copy the rules for validating a property to the controller, it is possible that the validation rules change and now there are two places where the modification should be made. Is my premise that validation should be done on the business layer wrong? What I do So my code usually ends up like this: <?php class Person { private $name; private $age; public function setName($n) { $n = trim($n); if (mb_strlen($n) == 0) { throw new ValidationException("Name cannot be empty"); } $this->name = $n; } public function setAge($a) { if (!is_int($a)) { if (!ctype_digit(trim($a))) { throw new ValidationException("Age $a is not valid"); } $a = (int)$a; } if ($a < 0 || $a > 150) { throw new ValidationException("Age $a is out of bounds"); } $this->age = $a; } // other getters, setters and methods } In the controller, I just pass the input data to the model, and catch thrown exceptions to show the error(s) to the user: <?php $person = new Person(); $errors = array(); // global try for all exceptions other than ValidationException try { // validation and process (if everything ok) try { $person->setAge($_POST['age']); } catch (ValidationException $e) { $errors['age'] = $e->getMessage(); } try { $person->setName($_POST['name']); } catch (ValidationException $e) { $errors['name'] = $e->getMessage(); } ... } catch (Exception $e) { // log the error, send 500 internal server error to the client // and finish the request } if (count($errors) == 0) { // process } else { showErrorsToUser($errors); } Is this a bad methodology? Alternate method Should maybe I create methods for isValidAge($a) that return true/false and then call them from the controller? <?php class Person { private $name; private $age; public function setName($n) { $n = trim($n); if ($this->isValidName($n)) { $this->name = $n; } else { throw new Exception("Invalid name"); } } public function setAge($a) { if ($this->isValidAge($a)) { $this->age = $a; } else { throw new Exception("Invalid age"); } } public function isValidName($n) { $n = trim($n); if (mb_strlen($n) == 0) { return false; } return true; } public function isValidAge($a) { if (!is_int($a)) { if (!ctype_digit(trim($a))) { return false; } $a = (int)$a; } if ($a < 0 || $a > 150) { return false; } return true; } // other getters, setters and methods } And the controller will be basically the same, just instead of try/catch there are now if/else: <?php $person = new Person(); $errors = array(); if ($person->isValidAge($age)) { $person->setAge($age); } catch (Exception $e) { $errors['age'] = "Invalid age"; } if ($person->isValidName($name)) { $person->setName($name); } catch (Exception $e) { $errors['name'] = "Invalid name"; } ... if (count($errors) == 0) { // process } else { showErrorsToUser($errors); } So, what should I do? I'm pretty happy with my original method, and my colleagues to whom I have showed it in general have liked it. Despite this, should I change to the alternate method? Or am I doing this terribly wrong and I should look for another way?

    Read the article

  • MVC Html.ActionLink not passing querystring properly

    - by Dave Hanna
    This seems like it should be pretty straight forward, but I'm apparently confused. I have a List view that displays a paged list. At the bottom I have a set of actionlinks: <%= Html.ActionLink("First Page", "List", new { page = 1} ) %> &nbsp; <%= Html.ActionLink("Prev Page", "List", new { page = Model.PageNumber - 1 }) %> &nbsp; <%= Html.ActionLink("Next Page", "List", new { page = Model.PageNumber + 1 }) %> &nbsp; <%= Html.ActionLink("Last Page", "List", new { page = Model.LastPage } )%> I'm using the basic default routes setup, except with "List" substituted for "Index": routes.MapRoute( "Default", // Route name "{controller}/{action}/{id}", // URL with parameters new { controller = "Home", action = "List", id = UrlParameter.Optional } // Parameter defaults ); The problem is that the ActionLink helpers are generating links of the form: http://localhost:2083/Retrofit?page=2 rather than http://localhost:2083/Retrofit/?page=2 (with the trailing slash after the controller name & before the query string). When the first URL is routed, it completely loses the query string - if I look at Request.QueryString by the time it gets to the controller, it's null. If I enter the second URL (with the trailing slash), it comes in properly (i.e., QueryString of "page=2"). So how can I either get the ActionLink helper to generate the right URL, or get the Routing to properly parse what ActionLink is generating? Thanks.
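One way to sidestep the query-string handling entirely, offered here as a suggestion rather than a diagnosis, is to make the page number part of the route so the existing Html.ActionLink calls emit URLs like /Retrofit/List/Page/2 instead of appending ?page=2. A sketch of the route registration, assuming the paging route is added ahead of the default one and the List actions accept a page parameter:

    using System.Web.Mvc;
    using System.Web.Routing;

    public static void RegisterRoutes(RouteCollection routes)
    {
        routes.IgnoreRoute("{resource}.axd/{*pathInfo}");

        // More specific paging route first, so URL generation prefers it
        // whenever a page value is supplied
        routes.MapRoute(
            "PagedList",
            "{controller}/{action}/Page/{page}",
            new { },                        // no defaults: only used when page is supplied
            new { page = @"\d+" });         // and only for numeric page values

        // Existing default route, unchanged
        routes.MapRoute(
            "Default",
            "{controller}/{action}/{id}",
            new { controller = "Home", action = "List", id = UrlParameter.Optional });
    }

Worth testing against the other links on the page, since route ordering affects how every ActionLink on the site generates its URLs.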

    Read the article

  • iPhone how to enable or disable UITabBar

    - by Dean Wagstaff
    I have a simple app with a tab bar which based upon user input disables one or more of the bar items. I understand I need to use a UITabBarDelegate which I have tried to use. However when I call the delegate method I get an uncaught exception error [NSObject doesNotRecognizeSelector]. I am not sure I am doing this all right or that I haven't missed something. Any suggestions. What I have now is the following: WMViewController.h import define kHundreds 0 @interface WMViewController : UIViewController { } @end WMViewController.m import "WMViewController.h" import "MLDTabBarControllerAppDelegate.h" @implementation WMViewController (IBAction)finishWizard{ MLDTabBarControllerAppDelegate *appDelegate = (MLDTabBarControllerAppDelegate *)[[UIApplication sharedApplication] delegate]; [appDelegate setAvailabilityTabIndex:0 Enable:TRUE]; } MLDTabBarControllerAppDelegate.h import @interface MLDTabBarControllerAppDelegate : NSObject { } (void) setAvailabilityTabIndex: (NSInteger) index Enable: (BOOL) enable; @end MLDTabBarControllerAppDelegate.m import "MLDTabBarControllerApplicationDelegate.h" import "MyListDietAppDelegate.h" @implementation MLDTabBarControllerAppDelegate (void) setAvailabilityTabIndex: (NSInteger) index Enable: (BOOL) enable { UITabBarController *controller = (UITabBarController *)[[[MyOrganizerAppDelegate getTabBarController] viewControllers ] objectAtIndex:index]; [[controller tabBarItem] setEnabled:enable]; } @end I get what appear to be a good controller object but crash on the [[controller tabBarItem]setEnabled:enable]; What am I missing... Any suggestions Thanks,

    Read the article

  • UIScrollView with pages enabled and device rotation/orientation changes (MADNESS)

    - by jbrennan
    I'm having a hard time getting this right. I've got a UIScrollView, with paging enabled. It is managed by a view controller (MainViewController) and each page is managed by a PageViewController, its view added as a subview of the scrollView at the proper offset. Scrolling is left-right, for standard orientation iPhone app. Works well. Basically exactly like the sample provided by Apple and also like the Weather app provided with the iPhone. However, when I try to support other orientations, things don't work very well. I've supported every orientation in both MainViewController and PageViewController with this method: - (BOOL)shouldAutorotateToInterfaceOrientation:(UIInterfaceOrientation)interfaceOrientation { return YES; } However, when I rotate the device, my pages become quite skewed, and there are lots of drawing glitches, especially if only some of the pages have been loaded, then I rotate, then scroll more, etc... Very messy. I've told my views to support auto-resizing with theView.autoresizingMask = UIViewAutoresizingFlexibleWidth | UIViewAutoresizingFlexibleHeight; But to no avail. It seems to just stretch and distort my views. In my MainViewController, I added this line in an attempt to resize all my pages' views: - (void)didRotateFromInterfaceOrientation:(UIInterfaceOrientation)fromInterfaceOrientation { self.scrollView.contentSize = CGSizeMake(self.scrollView.frame.size.width * ([self.viewControllers count]), self.scrollView.frame.size.height); for (int i = 0; i < [self.viewControllers count]; i++) { PageViewController *controller = [self.viewControllers objectAtIndex:i]; if ((NSNull *)controller == [NSNull null]) continue; NSLog(@"Changing frame: %d", i); CGRect frame = self.scrollView.frame; frame.origin.x = frame.size.width * i; frame.origin.y = 0; controller.view.frame = frame; } } But it didn't help too much (because I lazily load the views, so not all of them are necessarily loaded when this executes). Is there any way to solve this problem?

    Read the article

< Previous Page | 103 104 105 106 107 108 109 110 111 112 113 114  | Next Page >