Search Results

Search found 43124 results on 1725 pages for 'android custom view'.


  • How can I "filter" postfix-generated bounce messages?

    - by Flimzy
    We are using postfix 2.7 and a custom SMTPD (based on qpsmtpd) in a highly customized configuration for spam filtering. We have a new requirement to filter postfix-generated bounces through our custom qpsmtpd process (not so much for content filtering, but to process these bounces accordingly). Our current configuration looks (in part) like this: master.cf (only customizations shown): 2526 inet n - - - 0 cleanup pickup fifo n - - 60 1 pickup -o content_filter=smtp:127.0.0.2 Our smtpd injects messages to postfix on port 2526, by speaking directly to the cleanup daemon. And the custom pickup command instructs postfix to hand off all locally-generated mail (from cron, nagios, or other custom scripts) to our custom smtpd. The problem is that this configuration does not affect postfix-generated bounce messages, since they do not go through the pickup daemon. I have tried adding the same content_filter option to the bounce daemon commands, but it does not seem to have any effect: bounce unix - - - - 0 bounce -o content_filter=smtp:127.0.0.2 defer unix - - - - 0 bounce -o content_filter=smtp:127.0.0.2 trace unix - - - - 0 bounce -o content_filter=smtp:127.0.0.2 For reference, here is my main.cf file, as well: biff = no # TLS parameters smtpd_tls_loglevel = 0 smtpd_tls_cert_file=/etc/ssl/certs/ssl-cert-snakeoil.pem smtpd_tls_key_file=/etc/ssl/private/ssl-cert-snakeoil.key smtpd_use_tls=yes smtpd_tls_session_cache_database = btree:${queue_directory}/smtpd_scache smtp_tls_session_cache_database = btree:${queue_directory}/smtp_scache smtp_tls_security_level = may mydestination = $myhostname alias_maps = proxy:pgsql:/etc/postfix/dc-aliases.cf transport_maps = proxy:pgsql:/etc/postfix/dc-transport.cf # This is enforced on incoming mail by QPSMTPD, so this is simply # the upper possible bound (also enforced in defaults.pl) message_size_limit = 262144000 mailbox_size_limit = 0 # We do our own message expiration, but if we set this to 0, then postfix # will try each mail delivery only once, so instead we set it to 100 days # (which is the max postfix seems to support) maximal_queue_lifetime = 100d hash_queue_depth = 1 hash_queue_names = deferred, defer, hold I also tried adding the internal_mail_filter_classes option to main.cf, but also to no effect: internal_mail_filter_classes = bounce,notify I am open to any suggestions, including handling our current content-filtering loop in a different way. If it's not clear what I'm asking, please let me know, and I can try to clarify.
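    For readability, here are the same customized service entries from the question laid out one per line, as they would appear in a Postfix master.cf file (the values are exactly as given above; nothing has been added):

        2526      inet  n  -  -  -   0  cleanup
        pickup    fifo  n  -  -  60  1  pickup
          -o content_filter=smtp:127.0.0.2
        bounce    unix  -  -  -  -   0  bounce
          -o content_filter=smtp:127.0.0.2
        defer     unix  -  -  -  -   0  bounce
          -o content_filter=smtp:127.0.0.2
        trace     unix  -  -  -  -   0  bounce
          -o content_filter=smtp:127.0.0.2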

    Read the article

  • Iterating over resources in puppet templates

    - by daveg
    So I've got a Puppet manifest with multiple resources: class foo { Custom::Resource {'resource1': attr1 => 'val1', attr2 => 'val2', } Custom::Resource {'resource2': attr1 => 'val3', attr2 => 'val4', } Custom::Resource {'resource3': attr1 => 'val5', attr2 => 'val6', } } If I wanted to loop over the Custom::Resource resource names defined in class foo from an .erb template, how do I access them? So if I wanted to write out a template that looked like this: ThisLine = resource1 ThisLine = resource2 ThisLine = resource3

    Read the article

  • Windows Desktop Switchboard / Toolbar

    - by codex73
    I'm researching a way to provide a somewhat custom toolbox, widget or switchboard which will reside on the user's desktop, visible at most times, for easy access to resources. Resources could be websites, local computer applications, custom APIs, etc. Compatibility should be Windows XP, 2008 Server, 7 and Vista. I'm unsure whether I should custom-build an application with Visual Studio, customize an open source package, or what else could be a good, simple implementation. Any advice or comments on this will be greatly appreciated.

    Read the article

  • Achieving A 1600 x 900 Resolution For (Guest OS) Ubuntu Under (Host OS) Windows 7

    - by panamack
    I am running Ubuntu 11.10 as a guest OS using VirtualBox 4.1.16 installed on Windows 7 Ultimate. On my laptop I'd like to be able to run Ubuntu in full screen mode at 1600 x 900. I only have options within the virtual machine to select 4:3 display settings such as 1600 x 1200, 1440 x 1050 etc. I have Guest Additions installed. At the Windows command prompt, I tried typing: VBoxManage setextradata "Virtual Ubuntu Coursera ESSAAS" "CustomVideoMode1" "1600x900x16" This didn't work; there is still no 1600 x 900 resolution available in Ubuntu. I tried this having read the following section of the VirtualBox help (this also says something about a 'video mode hint feature'; not sure what this means): 9.7. Advanced display configuration 9.7.1. Custom VESA resolutions Apart from the standard VESA resolutions, the VirtualBox VESA BIOS allows you to add up to 16 custom video modes which will be reported to the guest operating system. When using Windows guests with the VirtualBox Guest Additions, a custom graphics driver will be used instead of the fallback VESA solution so this information does not apply. Additional video modes can be configured for each VM using the extra data facility. The extra data key is called CustomVideoMode<x> with x being a number from 1 to 16. Please note that modes will be read from 1 until either the following number is not defined or 16 is reached. The following example adds a video mode that corresponds to the native display resolution of many notebook computers: VBoxManage setextradata "VM name" "CustomVideoMode1" "1400x1050x16" The VESA mode IDs for custom video modes start at 0x160. In order to use the above defined custom video mode, the following command line has to be supplied to Linux: vga = 0x200 | 0x160, i.e. vga = 864 For guest operating systems with VirtualBox Guest Additions, a custom video mode can be set using the video mode hint feature. UPDATE 02.06.12 I've just tried creating a new virtual machine using the same original disk image I had been given. This had Guest Additions v 4.1.6 installed and provided me with the 1600 x 900 full screen display I want. It's after I then install Guest Additions v 4.1.16 (the version included with my VirtualBox installation) that my only choices are 4:3 displays, e.g. 1600 x 1200. It seems this is the cause.
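    Restating the two pieces from the question and the quoted manual section as separate commands may make them easier to follow (the VM name is the asker's own; 864 is simply 0x200 | 0x160 written in decimal):

        REM On the Windows host, from the VirtualBox installation directory:
        VBoxManage setextradata "Virtual Ubuntu Coursera ESSAAS" "CustomVideoMode1" "1600x900x16"

        # Kernel parameter for the Linux guest (0x200 | 0x160 = 0x360 = 864):
        vga=864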

    Read the article

  • direct url to server ip address and port

    - by AM0
    We have a Windows 2012 dedicated server. There's a custom service running on port xxxxx which accepts connections from our custom-built hardware devices over a TCP/IP port. As of now we use servername.serverdomain.com:xxxxx to connect to the service and start communication. However, we would prefer to use a URL instead of the server's name or IP address. So we got a custom URL and set its name servers to point to the dedicated server. However, just setting DNS doesn't seem to be working. Could someone please advise on how to get this working? UPDATE In short, I want www.custom-url.com to be forwarded to servername.serverdomain.com:xxxxx. These requests come from the hardware, not a browser.

    Read the article

  • ASP.NET. MVC2. Entity Framework. Cannot pass primary key value back from view to [HttpPost]

    - by Paul Connolly
    I pass a ViewModel (which contains a "Person" object) from the "EditPerson" controller action into the view. When posted back from the view, the ActionResult receives all of the Person properties except the ID (which it says is zero instead of say its real integer) Can anyone tell me why? The controllers look like this: public ActionResult EditPerson(int personID) { var personToEdit = repository.GetPerson(personID); FormationViewModel vm = new FormationViewModel(); vm.Person = personToEdit; return View(vm); } [HttpPost] public ActionResult EditPerson(FormationViewModel model) <<Passes in all properties except ID { // Persistence code } The View looks like this: <%@ Page Title="" Language="C#" MasterPageFile="~/Views/Shared/Site.Master" Inherits="System.Web.Mvc.ViewPage<Afp.Models.Formation.FormationViewModel>" %> <% using (Html.BeginForm()) {% <%= Html.ValidationSummary(true) % <fieldset> <legend>Fields</legend> <div class="editor-label"> <%= Html.LabelFor(model => model.Person.Title) %> </div> <div class="editor-field"> <%= Html.TextBoxFor(model => model.Person.Title) %> <%= Html.ValidationMessageFor(model => model.Person.Title) %> </div> <div class="editor-label"> <%= Html.LabelFor(model => model.Person.Forename)%> </div> <div class="editor-field"> <%= Html.TextBoxFor(model => model.Person.Forename)%> <%= Html.ValidationMessageFor(model => model.Person.Forename)%> </div> <div class="editor-label"> <%= Html.LabelFor(model => model.Person.Surname)%> </div> <div class="editor-field"> <%= Html.TextBoxFor(model => model.Person.Surname)%> <%= Html.ValidationMessageFor(model => model.Person.Surname)%> </div> <div class="editor-label"> <%= Html.LabelFor(model => model.DOB) %> </div> <div class="editor-field"> <%= Html.TextBoxFor(model => model.DOB, String.Format("{0:g}", Model.DOB)) <%= Html.ValidationMessageFor(model => model.DOB) %> </div>--%> <div class="editor-label"> <%= Html.LabelFor(model => model.Person.Nationality)%> </div> <div class="editor-field"> <%= Html.TextBoxFor(model => model.Person.Nationality)%> <%= Html.ValidationMessageFor(model => model.Person.Nationality)%> </div> <div class="editor-label"> <%= Html.LabelFor(model => model.Person.Occupation)%> </div> <div class="editor-field"> <%= Html.TextBoxFor(model => model.Person.Occupation)%> <%= Html.ValidationMessageFor(model => model.Person.Occupation)%> </div> <div class="editor-label"> <%= Html.LabelFor(model => model.Person.CountryOfResidence)%> </div> <div class="editor-field"> <%= Html.TextBoxFor(model => model.Person.CountryOfResidence)%> <%= Html.ValidationMessageFor(model => model.Person.CountryOfResidence)%> </div> <div class="editor-label"> <%= Html.LabelFor(model => model.Person.PreviousNameForename)%> </div> <div class="editor-field"> <%= Html.TextBoxFor(model => model.Person.PreviousNameForename)%> <%= Html.ValidationMessageFor(model => model.Person.PreviousNameForename)%> </div> <div class="editor-label"> <%= Html.LabelFor(model => model.Person.PreviousSurname)%> </div> <div class="editor-field"> <%= Html.TextBoxFor(model => model.Person.PreviousSurname)%> <%= Html.ValidationMessageFor(model => model.Person.PreviousSurname)%> </div> <div class="editor-label"> <%= Html.LabelFor(model => model.Person.Email)%> </div> <div class="editor-field"> <%= Html.TextBoxFor(model => model.Person.Email)%> <%= Html.ValidationMessageFor(model => model.Person.Email)%> </div> <p> <input type="submit" value="Save" /> </p> </fieldset> <% } % And the Person class looks like: [MetadataType(typeof(Person_Validation))] public partial 
class Person { public Person() { } } [Bind(Exclude = "ID")] public class Person_Validation { public int ID { get; private set; } public string Title { get; set; } public string Forename { get; set; } public string Surname { get; set; } public System.DateTime DOB { get; set; } public string Nationality { get; set; } public string Occupation { get; set; } public string CountryOfResidence { get; set; } public string PreviousNameForename { get; set; } public string PreviousSurname { get; set; } public string Email { get; set; } } And ViewModel: public class FormationViewModel { public Company Company { get; set; } public Address RegisteredAddress { get; set; } public Person Person { get; set; } public PersonType PersonType { get; set; } public int CurrentStep { get; set; } } }
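    One detail worth illustrating here: the form above never renders an input for Person.ID, and the Person_Validation metadata both excludes ID from binding and gives it a private setter, so the default model binder has nothing to populate it from on post-back. A minimal sketch of the usual fix (this is not from the original question, and it assumes ID should be allowed to round-trip, i.e. the Bind exclusion and private setter are relaxed):

        <%-- Inside the Html.BeginForm() block: emit the key so it is posted back --%>
        <%= Html.HiddenFor(model => model.Person.ID) %>

    An alternative is to keep ID out of the form entirely and pass it via the route or a separate parameter on the POST action.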

    Read the article

  • Visual Studio 2013 Static Code Analysis in depth: What? When and How?

    - by Hosam Kamel
    In this post I'll illustrate the following points in detail: What is static code analysis? When to use? Supported platforms Supported Visual Studio versions How to use Run Code Analysis Manually Run Code Analysis Automatically Run Code Analysis while checking in source code to TFS version control (TFSVC) Run Code Analysis as part of Team Build Understand the Code Analysis results & learn how to fix them Create your custom rule set Q & A References What is static code analysis? The Static Code Analysis feature of Visual Studio performs static code analysis on code to help developers identify potential design, globalization, interoperability, performance, security, and many other categories of potential problems, according to Microsoft's rules that mainly target best practices in writing code. There is a large set of those rules included with Visual Studio, grouped into different categories targeting specific coding issues like security, design, interoperability, globalization and others. Static here means analyzing the source code without executing it, and this type of analysis can be performed through automated tools (like the Visual Studio 2013 Code Analysis Tool) or manually through Code Review, which is already supported in Visual Studio 2012 and 2013 (check the Using Code Review to Improve Quality video on Channel9). There is also dynamic analysis, which is performed on executing programs using software testing techniques such as Code Coverage, for example. When to use? Running the code analysis tool at regular intervals during your development process can enhance the quality of your software; examining your code for a set of common defects and violations is always a good programming practice. In addition, code analysis can find defects in your code that are difficult to discover through testing, allowing you to achieve a first-level quality gate for your application during the development phase, before you release it to the testing team. Supported platforms .NET Framework, native (C and C++) and database applications. Supported Visual Studio versions All versions of Visual Studio starting with Visual Studio 2013 (except Visual Studio Test Professional); check Feature comparisons. Creating and modifying a custom rule set requires Visual Studio Premium or Ultimate. How to use? Code Analysis can be run manually at any time from within the Visual Studio IDE, or even set up to run automatically as part of a Team Build or a check-in policy for Team Foundation Server. Run Code Analysis Manually To run code analysis manually on a project, on the Analyze menu, click Run Code Analysis on your project, or simply right-click the project name in Solution Explorer and choose Run Code Analysis from the context menu. Run Code Analysis Automatically To run code analysis each time that you build a project, select Enable Code Analysis on Build on the project's Property Page. Run Code Analysis while checking in source code to TFS version control (TFSVC) Team Foundation Version Control (TFVC) provides a way for organizations to enforce practices that lead to better code and more efficient group development through check-in policies, which are rules that are set at the team project level and enforced on developer computers before code is allowed to be checked in. 
(This is available only if you're using Team Foundation Server) Require permissions on Team Foundation Server: you must have the Edit project-level information permission set to Allow typically your account must be part of Project Administrators, Project Collection Administrators, for more information about Team Foundation permissions check http://msdn.microsoft.com/en-us/library/ms252587(v=vs.120).aspx In Team Explorer, right-click the team project name, point to Team Project Settings, and then click Source Control. In the Source Control dialog box, select the Check-in Policy tab. Click Add to create a new check-in policy. Double-click the existing Code Analysis item in the Policy Type list to change the policy. Check or Uncheck the policy option based on the configurations you need to perform as illustrated below: Enforce check-in to only contain files that are part of current solution: code analysis can run only on files specified in solution and project configuration files. This policy guarantees that all code that is part of a solution is analyzed. Enforce C/C++ Code Analysis (/analyze): Requires that all C or C++ projects be built with the /analyze compiler option to run code analysis before they can be checked in. Enforce Code Analysis for Managed Code: Requires that all managed projects run code analysis and build before they can be checked in. Check Code analysis rule set reference on MSDN What is Rule Set? Rule Set is a group of code analysis rules like the example below where Microsoft.Design is the rule set name where "Do not declare static members on generic types" is the code analysis rule Once you configured the Analysis rule the policy will be enabled for all the team member in this project whenever a team member check-in any source code to the TFSVC the policy section will highlight the Code Analysis policy as below TFS is a very extensible platform so you can simply implement your own custom Code Analysis Check-in policy, check this link for more details http://msdn.microsoft.com/en-us/library/dd492668.aspx but you have to be aware also about compatibility between different TFS versions check http://msdn.microsoft.com/en-us/library/bb907157.aspx Run Code Analysis as part of Team Build With Team Foundation Build (TFBuild), you can create and manage build processes that automatically compile and test your applications, and perform other important functions. Code Analysis can be enabled in the Build Definition file by selecting the correct value for the build process parameter "Perform Code Analysis" Once configure, Kick-off your build definition to queue a new build, Code Analysis will run as part of build workflow and you will be able to see code analysis warning as part of build report Understand the Code Analysis results & learn how to fix them Now after you went through Code Analysis configurations and the different ways of running it, we will go through the Code Analysis result how to understand them and how to resolve them. Code Analysis window in Visual Studio will show all the analysis results based on the rule sets you configured in the project file properties, let's dig deep into what each result item contains: 1 Check ID The unique identifier for the rule. CheckId and Category are used for in-source suppression of a warning.       
    2 Title The title of the warning message 3 Description A description of the problem or suggested fix 4 File Name The file name and line number of the code which violates the code analysis rule set 5 Category The code analysis category for this error 6 Warning/Error Depends on how you configure it in the rule set; the default is the Warning level 7 Action Copy: copy the warning information to the clipboard. Create Work Item: if you're connected to Team Foundation Server you can create a work item (most probably a Task or a Bug) and assign it to a developer to fix a certain code analysis warning. Suppress Message: there are times when you might decide not to fix a code analysis warning. You might decide that resolving the warning requires too much recoding in relation to the probability that the issue will arise in any real-world implementation of your code. Or you might believe that the analysis that is used in the warning is inappropriate for the particular context. You can suppress individual warnings so that they no longer appear in the Code Analysis window. Two options are available: In Source inserts a SuppressMessage attribute in the source file above the method that generated the warning. This makes the suppression more discoverable. In Suppression File adds a SuppressMessage attribute to the GlobalSuppressions.cs file of the project. This can make the management of suppressions easier. Note that the SuppressMessage attribute added to GlobalSuppressions.cs also targets the method that generated the warning. It does not suppress the warning globally. Visual Studio makes it very easy to fix a code analysis warning: if you are not sure how to fix it, all you have to do is click on the Check Id hyperlink and you'll be directed to MSDN (online or a local copy, based on the configuration you chose while installing Visual Studio), where you will find all the information about the warning, including how to fix it. Create a Custom Code Analysis Rule Set The Microsoft standard rule sets provide groups of rules that are organized by function and depth. For example, the Microsoft Basic Design Guidelines Rules and the Microsoft Extended Design Guidelines Rules contain rules that focus on usability and maintainability issues, with added emphasis on naming rules in the Extended rule set. You can create and modify a custom rule set to meet specific project needs associated with code analysis. To create a custom rule set, you open one or more standard rule sets in the rule set editor. Creating and modifying a custom rule set requires Visual Studio Premium or Ultimate. You can check How to: Create a Custom Rule Set on MSDN for more details http://msdn.microsoft.com/en-us/library/dd264974.aspx Q & A Visual Studio static code analysis vs. FxCop vs. StyleCop http://www.excella.com/blog/stylecop-vs-fxcop-difference-between-code-analysis-tools/ Code Analysis for SharePoint Apps and SPDisposeCheck? This post lists some of the rule sets you can run specifically for SharePoint applications and how to integrate SPDisposeCheck as well. Code Analysis for SQL Server Database Projects? This post illustrates how to run static code analysis on T-SQL through SSDT. ReSharper 8 vs. Visual Studio 2013? This document lists some of the features that are provided by ReSharper 8 but are missing or not as fully implemented in Visual Studio 2013. 
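    To make the "In Source" and "In Suppression File" options described above concrete, here is a small C# sketch (the rule, class name and justification text are invented examples for illustration, not taken from the post):

        using System.Diagnostics.CodeAnalysis;

        public class ReportBuilder
        {
            // "In Source": the attribute sits directly above the member that triggered
            // the warning, which keeps the suppression easy to discover during review.
            [SuppressMessage("Microsoft.Design", "CA1024:UsePropertiesWhereAppropriate",
                Justification = "Recomputed on every call; a property would be misleading.")]
            public int GetSnapshotSize()
            {
                return System.DateTime.Now.Second; // placeholder work
            }
        }

        // "In Suppression File": the same attribute placed in GlobalSuppressions.cs,
        // applied at assembly level but still targeting only the offending member.
        [assembly: SuppressMessage("Microsoft.Design", "CA1024:UsePropertiesWhereAppropriate",
            Scope = "member", Target = "ReportBuilder.#GetSnapshotSize()",
            Justification = "Recomputed on every call; a property would be misleading.")]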
References A Few Billion Lines of Code Later: Using Static Analysis to Find Bugs in the Real World http://cacm.acm.org/magazines/2010/2/69354-a-few-billion-lines-of-code-later/fulltext What is New in Code Analysis for Visual Studio 2013 http://blogs.msdn.com/b/visualstudioalm/archive/2013/07/03/what-is-new-in-code-analysis-for-visual-studio-2013.aspx Analyze the code quality of Windows Store apps using Visual Studio static code analysis http://msdn.microsoft.com/en-us/library/windows/apps/hh441471.aspx [Hands-on-lab] Using Code Analysis with Visual Studio 2012 to Improve Code Quality http://download.microsoft.com/download/A/9/2/A9253B14-5F23-4BC8-9C7E-F5199DB5F831/Using%20Code%20Analysis%20with%20Visual%20Studio%202012%20to%20Improve%20Code%20Quality.docx Originally posted at "Hosam Kamel| Developer & Platform Evangelist" http://blogs.msdn.com/hkamel

    Read the article

  • Formatting Dates, Times and Numbers in ASP.NET

    Formatting is the process of converting a variable from its native type into a string representation. Anytime you display a DateTime or numeric variable in an ASP.NET page, you are formatting that variable from its native type into some sort of string representation. How a DateTime or numeric variable is formatted depends on the culture settings and the format string. Because dates and numeric values are formatted differently across cultures, the .NET Framework bases its formatting on the specified culture settings. By default, the formatting routines use the culture settings defined on the web server, but you can indicate that a particular culture be used anytime you format. In addition to the culture settings, formatting is also affected by a format string, which spells out the formatting details to apply. The .NET Framework contains a bounty of format strings. There are standard format strings, which are typically a single letter that applies detailed formatting logic. For example, the "C" format specifier will format a numeric type as a currency value; the "Y" format specifier displays the month name and four-digit year of the specified DateTime value. There are also custom format strings, which apply a very specific formatting rule. These custom format strings can be put together to build more intricate formats. For instance, the format string "dddd, MMMM d" displays the full day of the week name followed by a comma followed by the full name of the month followed by the day of the month. For more involved formatting scenarios, where neither the standard nor the custom format strings cut the mustard, you can always create your own formatting extension methods. This article explores the standard format strings for dates, times and numbers and includes a number of custom formatting methods I've created and use in my own projects. There's also a demo application you can download that lets you specify a culture and then shows you the output for the standard format strings for the selected culture. Read on to learn more! Read More >
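    As a quick illustration of the standard and custom format strings described above (the culture names and sample values here are arbitrary):

        using System;
        using System.Globalization;

        class FormatDemo
        {
            static void Main()
            {
                decimal price = 1234.5m;
                DateTime date = new DateTime(2012, 9, 17);

                // Standard format strings: a single letter, with culture-aware output.
                Console.WriteLine(price.ToString("C", CultureInfo.GetCultureInfo("en-US"))); // $1,234.50
                Console.WriteLine(price.ToString("C", CultureInfo.GetCultureInfo("fr-FR"))); // 1 234,50 €
                Console.WriteLine(date.ToString("Y", CultureInfo.GetCultureInfo("en-US")));  // September 2012

                // Custom format string: individual specifiers combined into one pattern.
                Console.WriteLine(date.ToString("dddd, MMMM d", CultureInfo.GetCultureInfo("en-US"))); // Monday, September 17
            }
        }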

    Read the article

  • New TFS Template Available - "Agile Dev in a Waterfall Environment"–GovDev

    - by Hosam Kamel
      Microsoft Team Foundation Server (TFS) 2010 is the collaboration platform at the core of Microsoft’s application lifecycle management solution. In addition to core features like source control, build automation and work-item tracking, TFS enables teams to align projects with industry processes such as Agile, Scrum and CMMi via the use of customable XML Process Templates. Since 2005, TFS has been a welcomed addition to the Microsoft developer tool line-up by Government Agencies of all sizes and missions. However, many government development teams consistently struggle with leveraging an iterative development process all while providing the structure, visibility and status reporting that is required by many Government, waterfall-centric, project methodologies. GovDev is an open source, TFS Process Template that combines the formality of CMMi/Waterfall with the flexibility of Agile/Iterative: The GovDev for TFS Accelerator also implements two new custom reports to support the customized process and provide the real-time visibility across the lifecycle with full traceability and drill down to tasks, tests and code: The TFS Accelerator contains: A custom TFS process template that implements a requirements centric, yet iterative process with extreme traceability throughout the lifecycle. A custom “Requirements Traceability Report” that provides a single view of traceability for the project.   Within the Traceability Report, you can also view live status indicators and “click-through” to the individual assets (even changesets). A custom report that focuses on “Contributions by Team Member” tracking things like “number of check-ins” and “Net lines added”.  Fully integrated documentation on the entire process and features. For a 45min demo of GovDev, visit: https://msevents.microsoft.com/CUI/EventDetail.aspx?EventID=1032508359&culture=en-us Download it from Codeplex here.     Originally posted at "Hosam Kamel| Developer & Platform Evangelist" http://blogs.msdn.com/hkamel

    Read the article

  • MySQL - Installation

    - by Stuart Brierley
    In order to create a development environment for a project I am working on, I recently needed to install MySQL Server.  The first step was to download the MSI. Running this presents you with the installer splash screen, detailing the version of MySQL that you are about to install - in this case MySQL Server 5.1. Next you can choose whether to perform a Typical, Complete or Custom installation.  Although this is the first time I have installed MySQL and the Custom option states "Recommended for advanced users", I opted to carry out a Custom installation, specifically so that I could be sure of which features and components were installed. On the Custom Setup screen you can choose which components to install.  By default the Developer Components are not included, but I opted to include some of these elements. Next up is the ready-to-install screen and then the installation progress.  Following the completion of the installation you are shown a few screens with details of the MySQL Enterprise subscription option. Finally the installation is complete and you are offered the chance to configure and register MySQL. Next I will be looking at the configuration of MySQL Server 5.1.

    Read the article

  • Need for gksudo for "Install new software" in eclipse

    - by Captain Giraffe
    I have been developing for Android in Eclipse for a while now, and my experience with the Eclipse environment on Ubuntu 10.10 has not been a smooth one. With the repo install of Eclipse I have had to sudo eclipse to install the required components for Android development (a big red flag for me). I tried today to install updates for the Eclipse and Android platforms, and my Eclipse installation seems to have broken horribly. I can no longer find any of the URLs for new software if I gksudo it; if I run it in user mode it fails (as it always has) with permissions problems. I have chowned user:user all my Eclipse- and Android-related private/user files. This is a system running Ubuntu 10.10 with GNOME 2.x. On my Kubuntu 11.10 install it works a lot better. Is there an easy fix to this? Is the repo version of Eclipse broken? Should I do a clean install for just my user? (If so, can I retain my previously installed software? The installation process is very time consuming.) I saw there was a previous post here recommending this for new installations.

    Read the article

  • Announcing: Great Improvements to Windows Azure Web Sites

    - by ScottGu
    I’m excited to announce some great improvements to the Windows Azure Web Sites capability we first introduced earlier this summer.  Today’s improvements include: a new low-cost shared mode scaling option, support for custom domains with shared and reserved mode web-sites using both CNAME and A-Records (the later enabling naked domains), continuous deployment support using both CodePlex and GitHub, and FastCGI extensibility.  All of these improvements are now live in production and available to start using immediately. New “Shared” Scaling Tier Windows Azure allows you to deploy and host up to 10 web-sites in a free, shared/multi-tenant hosting environment. You can start out developing and testing web sites at no cost using this free shared mode, and it supports the ability to run web sites that serve up to 165MB/day of content (5GB/month).  All of the capabilities we introduced in June with this free tier remain the same with today’s update. Starting with today’s release, you can now elastically scale up your web-site beyond this capability using a new low-cost “shared” option (which we are introducing today) as well as using a “reserved instance” option (which we’ve supported since June).  Scaling to either of these modes is easy.  Simply click on the “scale” tab of your web-site within the Windows Azure Portal, choose the scaling option you want to use with it, and then click the “save” button.  Changes take only seconds to apply and do not require any code to be changed, nor the app to be redeployed: Below are some more details on the new “shared” option, as well as the existing “reserved” option: Shared Mode With today’s release we are introducing a new low-cost “shared” scaling mode for Windows Azure Web Sites.  A web-site running in shared mode is deployed in a shared/multi-tenant hosting environment.  Unlike the free tier, though, a web-site in shared mode has no quotas/upper-limit around the amount of bandwidth it can serve.  The first 5 GB/month of bandwidth you serve with a shared web-site is free, and then you pay the standard “pay as you go” Windows Azure outbound bandwidth rate for outbound bandwidth above 5 GB. A web-site running in shared mode also now supports the ability to map multiple custom DNS domain names, using both CNAMEs and A-records, to it.  The new A-record support we are introducing with today’s release provides the ability for you to support “naked domains” with your web-sites (e.g. http://microsoft.com in addition to http://www.microsoft.com).  We will also in the future enable SNI based SSL as a built-in feature with shared mode web-sites (this functionality isn’t supported with today’s release – but will be coming later this year to both the shared and reserved tiers). You pay for a shared mode web-site using the standard “pay as you go” model that we support with other features of Windows Azure (meaning no up-front costs, and you pay only for the hours that the feature is enabled).  A web-site running in shared mode costs only 1.3 cents/hr during the preview (so on average $9.36/month). Reserved Instance Mode In addition to running sites in shared mode, we also support scaling them to run within a reserved instance mode.  When running in reserved instance mode your sites are guaranteed to run isolated within your own Small, Medium or Large VM (meaning no other customers run within it).  You can run any number of web-sites within a VM, and there are no quotas on CPU or memory limits. 
You can run your sites using either a single reserved instance VM, or scale up to have multiple instances of them (e.g. 2 medium sized VMs, etc).  Scaling up or down is easy – just select the “reserved” instance VM within the “scale” tab of the Windows Azure Portal, choose the VM size you want, the number of instances of it you want to run, and then click save.  Changes take effect in seconds: Unlike shared mode, there is no per-site cost when running in reserved mode.  Instead you pay only for the reserved instance VMs you use – and you can run any number of web-sites you want within them at no extra cost (e.g. you could run a single site within a reserved instance VM or 100 web-sites within it for the same cost).  Reserved instance VMs start at 8 cents/hr for a small reserved VM.  Elastic Scale-up/down Windows Azure Web Sites allows you to scale-up or down your capacity within seconds.  This allows you to deploy a site using the shared mode option to begin with, and then dynamically scale up to the reserved mode option only when you need to – without you having to change any code or redeploy your application. If your site traffic starts to drop off, you can scale back down the number of reserved instances you are using, or scale down to the shared mode tier – all within seconds and without having to change code, redeploy, or adjust DNS mappings.  You can also use the “Dashboard” view within the Windows Azure Portal to easily monitor your site’s load in real-time (it shows not only requests/sec and bandwidth but also stats like CPU and memory usage). Because of Windows Azure’s “pay as you go” pricing model, you only pay for the compute capacity you use in a given hour.  So if your site is running most of the month in shared mode (at 1.3 cents/hr), but there is a weekend when it gets really popular and you decide to scale it up into reserved mode to have it run in your own dedicated VM (at 8 cents/hr), you only have to pay the additional pennies/hr for the hours it is running in the reserved mode.  There is no upfront cost you need to pay to enable this, and once you scale back down to shared mode you return to the 1.3 cents/hr rate.  This makes it super flexible and cost effective. Improved Custom Domain Support Web sites running in either “shared” or “reserved” mode support the ability to associate custom host names to them (e.g. www.mysitename.com).  You can associate multiple custom domains to each Windows Azure Web Site.  With today’s release we are introducing support for A-Records (a big ask by many users). With the A-Record support, you can now associate ‘naked’ domains to your Windows Azure Web Sites – meaning instead of having to use www.mysitename.com you can instead just have mysitename.com (with no sub-name prefix).  Because you can map multiple domains to a single site, you can optionally enable both a www and naked domain for a site (and then use a URL rewrite rule/redirect to avoid SEO problems). We’ve also enhanced the UI for managing custom domains within the Windows Azure Portal as part of today’s release.  Clicking the “Manage Domains” button in the tray at the bottom of the portal now brings up custom UI that makes it easy to manage/configure them: As part of this update we’ve also made it significantly smoother/easier to validate ownership of custom domains, and made it easier to switch existing sites/domains to Windows Azure Web Sites with no downtime. 
Continuous Deployment Support with Git and CodePlex or GitHub One of the more popular features we released earlier this summer was support for publishing web sites directly to Windows Azure using source control systems like TFS and Git.  This provides a really powerful way to manage your application deployments using source control.  It is really easy to enable this from a website’s dashboard page: The TFS option we shipped earlier this summer provides a very rich continuous deployment solution that enables you to automate builds and run unit tests every time you check in your web-site, and then if they are successful automatically publish to Azure. With today’s release we are expanding our Git support to also enable continuous deployment scenarios and integrate with projects hosted on CodePlex and GitHub.  This support is enabled with all web-sites (including those using the “free” scaling mode). Starting today, when you choose the “Set up Git publishing” link on a website’s “Dashboard” page you’ll see two additional options show up when Git based publishing is enabled for the web-site: You can click on either the “Deploy from my CodePlex project” link or “Deploy from my GitHub project” link to walkthrough a simple workflow to configure a connection between your website and a source repository you host on CodePlex or GitHub.  Once this connection is established, CodePlex or GitHub will automatically notify Windows Azure every time a checkin occurs.  This will then cause Windows Azure to pull the source and compile/deploy the new version of your app automatically.  The below two videos walkthrough how easy this is to enable this workflow and deploy both an initial app and then make a change to it: Enabling Continuous Deployment with Windows Azure Websites and CodePlex (2 minutes) Enabling Continuous Deployment with Windows Azure Websites and GitHub (2 minutes) This approach enables a really clean continuous deployment workflow, and makes it much easier to support a team development environment using Git: Note: today’s release supports establishing connections with public GitHub/CodePlex repositories.  Support for private repositories will be enabled in a few weeks. Support for multiple branches Previously, we only supported deploying from the git ‘master’ branch.  Often, though, developers want to deploy from alternate branches (e.g. a staging or future branch). This is now a supported scenario – both with standalone git based projects, as well as ones linked to CodePlex or GitHub.  This enables a variety of useful scenarios.  For example, you can now have two web-sites - a “live” and “staging” version – both linked to the same repository on CodePlex or GitHub.  You can configure one of the web-sites to always pull whatever is in the master branch, and the other to pull what is in the staging branch.  This enables a really clean way to enable final testing of your site before it goes live. This 1 minute video demonstrates how to configure which branch to use with a web-site. Summary The above features are all now live in production and available to use immediately.  If you don’t already have a Windows Azure account, you can sign-up for a free trial and start using them today.  Visit the Windows Azure Developer Center to learn more about how to build apps with it. 
We’ll have even more new features and enhancements coming in the weeks ahead – including support for the recent Windows Server 2012 and .NET 4.5 releases (we will enable new web and worker role images with Windows Server 2012 and .NET 4.5 next month).  Keep an eye out on my blog for details as these new features become available. Hope this helps, Scott P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • Sharing authentication methods across API and web app

    - by Snixtor
    I'm wanting to share an authentication implementation across a web application, and web API. The web application will be ASP.NET (mostly MVC 4), the API will be mostly ASP.NET WEB API, though I anticipate it will also have a few custom modules or handlers. I want to: Share as much authentication implementation between the app and API as possible. Have the web application behave like forms authentication (attractive log-in page, logout option, redirect to / from login page when a request requires authentication / authorisation). Have API callers use something closer to standard HTTP (401 - Unauthorized, not 302 - Redirect). Provide client and server side logout mechanisms that don't require a change of password (so HTTP basic is out, since clients typically cache their credentials). The way I'm thinking of implementing this is using plain old ASP.NET forms authentication for the web application, and pushing another module into the stack (much like MADAM - Mixed Authentication Disposition ASP.NET Module). This module will look for some HTTP header (implementation specific) which indicates "caller is API". If the header "caller is API" is set, then the service will respond differently than standard ASP.NET forms authentication, it will: 401 instead of 302 on a request lacking authentication. Look for username + pass in a custom "Login" HTTP header, and return a FormsAuthentication ticket in a custom "FormsAuth" header. Look for FormsAuthentication ticket in a custom "FormsAuth" header. My question(s) are: Is there a framework for ASP.NET that already covers this scenario? Are there any glaring holes in this proposed implementation? My primary fear is a security risk that I can't see, but I'm similarly concerned that there may be something about such an implementation that will make it overly restrictive or clumsy to work with.
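    The "caller is API" switch described above can be sketched as a small HTTP module. This is only an illustration of the idea; the header name (X-Caller) and the module name are invented here and are not part of the question:

        using System.Web;

        // Sketch: convert the forms-authentication 302 redirect into a 401 for API callers.
        public class ApiAuthDispositionModule : IHttpModule
        {
            public void Init(HttpApplication app)
            {
                app.EndRequest += (sender, e) =>
                {
                    HttpContext ctx = ((HttpApplication)sender).Context;
                    bool callerIsApi = ctx.Request.Headers["X-Caller"] == "api"; // hypothetical header

                    // Forms authentication answers unauthenticated requests with a 302 to the
                    // login page; API callers should instead receive a plain 401 Unauthorized.
                    if (callerIsApi && ctx.Response.StatusCode == 302 && !ctx.Request.IsAuthenticated)
                    {
                        ctx.Response.RedirectLocation = null;
                        ctx.Response.StatusCode = 401;
                    }
                };
            }

            public void Dispose() { }
        }

    Issuing and reading the FormsAuthentication ticket from the custom headers would sit alongside this, and the module's position relative to FormsAuthenticationModule matters, so treat it as a starting point rather than a finished design.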

    Read the article

  • Get Smarter Just By Listening

    - by mark.wilcox
    Occasionally my friends ask me what I listen to and read to keep informed. So I thought I would post an update. First - there is an entirely new network being launched by Jason Calacanis called "ThisWeekIn". They have weekly shows on a variety of topics including Startups, Android, Twitter, Cloud Computing, Venture Capital and now the iPad. If you want to keep ahead (and really get motivated) - I totally recommend listening to at least This Week in Startups. I also find Cloud Computing helpful. I also like listening to the Android show so that I can see how it's progressing. Because while I love my iPhone/iPad - it's important to keep the competition in the game to improve everything. I'm also not opposed to switching to Android if something becomes as nice an experience - but so far, my take on Android devices is this: 10 years ago, I would have jumped all over them because of their hackability. But now, I'm in a phase where I just want these devices to work, and most of my creation is in non-programming areas - so I find the i* experience better. Second - in terms of general entertaining tech news - I'm a big fan of This Week in Tech. Finally - for a non-geek but very informative show - The Kevin Pollack Show on the ThisWeekIn network gets my highest rating. It's basically two hours of in-depth interviews with a wide variety of well-known comedians and movie stars. -- Posted via email from Virtual Identity Dialogue

    Read the article

  • E.T. Phone "Home" - Hey I've discovered a leak..!

    - by Martin Deh
    Being a member of the WebCenter ATEAM, we are often asked to performance tune a WebCenter custom portal application or a WebCenter Spaces deployment.  Most of the time, the process is pretty much the same.  For example, we often use tools like httpWatch and FireBug to monitor the application, and then perform load tests using JMeter or Selenium.  In addition, there are the fine tuning of the different performance based tuning parameters that are outlined in the documentation and by blogs that have been written by my fellow ATEAMers (click on the "performance" tag in this ATEAM blog).  While performing the load test where the outcome produces a significant reduction in the systems resources (memory), one of the causes that plays a role in memory "leakage" is due to the implementation of the navigation menu UI.  OOTB in both JDeveloper and WebCenter Spaces, there are sample (page) templates that include a "default" navigation menu.  In WebCenter Spaces, this is through the SpacesNavigationModel taskflow region, and in a custom portal (i.e. pageTemplate_globe.jspx) the menu UI is contructed using standard ADF components.  These sample menu UI's basically enable the underlying navigation model to visualize itself to some extent.  However, due to certain limitations of these sample menu implementations (i.e. deeper sub-level of navigations items, look-n-feel, .etc), many customers have developed their own custom navigation menus using a combination of HTML, CSS and JQuery.  While this is supported somewhat by the framework, it is important to know what are some of the best practices in ensuring that the navigation menu does not leak.  In addition, in this blog I will point out a leak (BUG) that is in the sample templates.  OK, E.T. the suspence is killing me, what is this leak? Note: for those who don't know, info on E.T. can be found here In both of the included templates, the example given for handling the navigation back to the "Home" page, will essentially provide a nice little memory leak every time the link is clicked. Let's take a look a simple example, which uses the default template in Spaces. The outlined section below is the "link", which is used to enable a user to navigation back quickly to the Group Space Home page. When you (mouse) hover over the link, the browser displays the target URL. From looking initially at the proposed URL, this is the intended destination.  Note: "home" in this case is the navigation model reference (id), that enables the display of the "pretty URL". Next, notice the current URL, which is displayed in the browser.  Remember, that PortalSiteHome = home.  The other highlighted item adf.ctrl-state, is very important to the framework.  This item is basically a persistent query parameter, which is used by the (ADF) framework to managing the current session and page instance.  Without this parameter present, among other things, the browser back-button navigation will fail.  In this example, the value for this parameter is currently 95K25i7dd_4.  Next, through the navigation menu item, I will click on the Page2 link. Inspecting the URL again, I can see that it reports that indeed the navigation is successful and the adf.ctrl-state is also in the URL.  For those that are wondering why the URL displays Page3.jspx, instead of Page2.jspx. Basically the (file) naming convention for pages created ar runtime in Spaces start at Page1, and then increment as you create additional pages.  The name of the actual link (i.e. Page2) is the page "title" attribute.  
So the moral of the story is, unlike design time created pages, run time created pages the name of the file will 99% never match the name that appears in the link. Next, is to click on the quick link for navigating back to the Home page. Quick investigation yields that the navigation was indeed successful.  In the browser's URL there is a home (pretty URL) reference, and there is also a reference to the adf.ctrl-state parameter.  So what's the issue?  Can you remember what the value was for the adf.ctrl-state?  The current value is 3D95k25i7dd_149.  However, the previous value was 95k25i7dd_4.  Here is what happened.  Remember when (mouse) hovering over the link produced the following target URL: http://localhost:8888/webcenter/spaces/NavigationTest/home This is great for the browser as this URL will navigate to the intended targer.  However, what is missing is the adf.ctrl-state parameter.  Since this parameter was not present upon navigation "within" the framework, the ADF framework produced another adf.ctrl-state (object).  The previous adf.ctrl-state basically is orphaned while continuing to be alive in memory.  Note: the auto-creation of the adf.ctrl state does happen initially when you invoke the Spaces application  for the first time.  The following is the line of code which produced the issue: <af:goLink destination="#{boilerBean.globalLogoURIInSpace} ... Here the boilerBean is responsible for returning the "string" url, which in this case is /spaces/NavigationTest/home. Unfortunately, again what is missing is adf.ctrl-state. Note: there are more than one instance of the goLinks in the sample templates. So E.T. how can I correct this? There are 2 simple fixes.  For the goLink's destination, use the navigation model to return the actually "node" value, then use the goLinkPrettyUrl method to add the current adf.ctrl-state: <af:goLink destination="#{navigationContext.defaultNavigationModel.node['home'].goLinkPrettyUrl}"} ... />  Note: the node value is the [navigation model id]  Using a goLink does solve the main issue.  However, since the link basically does a redirect, some browsers like IE will produce a somewhat significant "flash".  In a Spaces application, this may be an annoyance to the users.  Another way to solve the leakage problem, and also remove the flash between navigations is to use a af:commandLink.  For example, here is the code example for this scenario: <af:commandLink id="pt_cl2asf" actionListener="#{navigationContext.processAction}" action="pprnav">    <f:attribute name="node" value="#{navigationContext.defaultNavigationModel.node['home']}"/> </af:commandLink> Here, the navigation node to where home is located is delivered by way of the attribute to the commandLink.  The actual navigation is performed by the processAction, which is needing the "node" value. E.T. OK, you solved the OOTB sample BUG, what about my custom navigation code? I have seen many implementations of creating a navigation menu through custom code.  In addition, there are some blog sites that also give detailed examples.  The majority of these implementations are very similar.  The code usually involves using standard HTML tags (i.e. DIVS, UL, LI, .,etc) and either CSS or JavaScript (JQuery) to produce the flyout/drop-down effect.  The navigation links in these cases are standard <a href... > tags.  Although, this type of approach is not fully accepted by the ADF community, it does work.  
    The important thing to note here is that the <a> tag value must use the goLinkPrettyUrl method of constructing the target URL.  For example: <a href="${contextRoot}${menu.goLinkPrettyUrl}"> The main reason this type of approach is popular is that with links created this way (also when using af:goLinks), the pages become crawlable by search engines.  CommandLinks are currently not search friendly.  However, in the case of a Spaces instance this may be acceptable.  So in this use case, af:commandLinks would replace the <a> (or goLink) tags. The example code given for the af:commandLink above is still valid. One last important item.  If you choose to use af:commandLinks, special attention must be given to the scenario in which JavaScript has been used to produce the flyout effect in the custom menu UI.  In many cases that I have seen, the commandLink can only be invoked once, since there is a conflict between the custom JavaScript and the ADF framework's own scripting to control the view.  The recommendation here would be to use a pure CSS approach to achieve the dropdown effects. One very important thing to note.  Due to another BUG, the WebCenter environment must be patched to BP3 (patch p14076906).  Otherwise the leak is still present when using the goLinkPrettyUrl method.  Thanks E.T.!  Now I can phone home and not worry about my application running out of resources due to my custom navigation! 

    Read the article

  • Opportunity Nokia's

    - by Andrew Clarke
    Nokia’s alliance with Microsoft is likely to be good news for anyone using Microsoft technologies, and particularly for .NET developers. Before the announcement, the future wasn’t looking so bright for the ‘mobile’ version of Windows, Windows Phone. Microsoft currently has only 3.1% of the Smartphone market, even though it has been involved in it for longer than its main rivals. Windows Phone has now got the basics right, but that is hardly sufficient by itself to change its predicament significantly. With Nokia's help, it is possible. Despite the promise of multi-tasking for third party apps, integration with Microsoft platforms such as Xbox and Office, direct integration of Twitter support, and the introduction of IE 9 “later this year”, there have been frustratingly few signs of urgency on Microsoft’s part in improving the Windows Phone  product. Until this happens, there seems little prospect of reward for third-party developers brave enough to support the platform with applications. This is puzzling when one sees how well SQL Server and Microsoft’s other server technologies have thrived in recent years, under good leadership from a management that understands the technology. The same just hasn’t been true for some of the consumer products. In consequence, iPads and Android tablets have already exposed diehard Windows users, for the first time, to an alternative GUI for consumer Tablet PCs, and the comparisons aren’t always in Windows’ favour. Nokia’s problem is obvious: Android’s meteoric rise. Android now has 33% of the worldwide market for smartphones, while the market share of Nokia’s Symbian has dropped from 44% to 31%. As details of the agreement emerge, it would seem that Nokia will bring a great deal of expertise, such as imaging and Nokia Maps, to Windows Phone that should make it more competitive. It is wrong to assume that Nokia’s decline will continue: the shock of Android’s sudden rise could be enough to sting them back to their previous form, and they have Microsoft’s huge resources and marketing clout to help them. For the sake of the whole Windows stack, I really hope the alliance succeeds.

    Read the article

  • XNA CustomModelAnimationSample problem

    - by Mentoliptus
    I downloaded the official tutorial from: CustomModelAnimationSample It works fine, but when I try to replicate it in my project, it fails to load the Tag property in my model. I found that the problem is in the line: skinnedModel = Content.Load<Model>("DudeWalk"); This line loads the model from the DudeWalk.fbx file with the custom SkinnedModelProcessor. It loads the animation data into the model. After this line the Tag property is populated. I stepped into the method and it went to the custom ModelData class. I copied everything from the projects CustomModelAnimationWindows and CustomModelAnimationPipeline to my solution and set all the references. I tried the same line of code and couldn't step into the method. It called the default method or model constructor, and after the line the model's Tag property was null. I have to load the model through my custom SkinnedModelProcessor class, but how do I tell the game to use this class? In the tutorial CustomModelClass the line is changed to: model = Content.Load<CustomModel>("tank"); So I assumed that I have to set the generic type to a custom model class, but the first example works without it. If anyone has some useful advice or some other helpful link, I'll be happy to try it.

    Read the article

  • Announcing Oracle Database Mobile Server 11gR2

    - by Eric Jensen
    I'm pleased to announce that Oracle Database Mobile Server 11gR2 has been released. It's available now for download by existing customers, or anyone who wants to try it out. New features include: Support for J2ME platforms, specifically CDC platforms including OJEC(this is in addition to our existing support for Java SE and SE Embedded) Per-application integration with Berkeley DB on Android Server-side support for Apache TomEE platform Adding support for Oracle Java Micro Edition Embedded Client (OJEC for short) is an important milestone for us; it enables Database Mobile Server to work with any of the incredibly wide array of devices that run J2ME. In particular, it enables management of  networks of embedded devices, AKA machine to machine (M2M) networks. As these types of networks become more common in areas like healthcare, automotive, and manufacturing, we're seeing demand for Database Mobile Server from new and different areas. This is in addition to our existing array of mobile device use cases. The Android integration feature with Berkeley DB represents the completion of phase I of our Android support plan, we now offer a full set of sync, device and app management features for that platform. Going forward, we plan to continue the dual-focus approach, supporting mobile platforms such as Android, and iOS (hint) on the one hand, and networks of embedded M2M devices on the other. In either case, Database Mobile Server continues to be the best way to connect data-driven applications to an Oracle backend.

    Read the article

  • Google Top Geek E07

    Google Top Geek E07 In Spanish! News: 1. The Knowledge Graph is now available in Spanish and several other languages, fully localized. 2. A new version of Snapseed for iOS and Android, Gmail for Android and version 2.0 for iOS, and a new look for YouTube. 3. 500 million users on Google+ and a new feature: communities. Plus the week's top searches and the most-watched videos on YouTube. We recommend Picket, an Android app that works in Mexico and shows you what's playing in cinemas. News for developers: 1. Better maps for Android apps with the new API. 2. A picture is worth a thousand words: Place Photos and Radar Search. Links and more information on the blog: programa-con-google.blogspot.com From: GoogleDevelopers Views: 80 11 ratings Time: 18:09 More in Science & Technology

    Read the article

  • How to embed woff fonts for iframe source pages?

    - by Mon
    I am trying to make and embed a slideshow (of text, photos, audio, video, etc.) in my HTML5 site by loading webpages consecutively inside an iframe embedded in my first page. Most or all of the frame source pages, i.e. the pages loaded inside the first iframe, are mine, but they are located in many different places. All of these pages are in an Indic language. I can use the UTF-8 charset and a lang="" declaration, and that's probably enough functionally, but I also want to embed my preferred Indic Unicode font in WOFF format via the CSS3 @font-face rule, so the size and look of the text are uniform - and the way I want them - throughout the slideshow. The problem is that there are many, many pages in the slideshow, all located in various places, with many more linked pages, and it is next to impossible, or at least extremely tedious, to embed my custom WOFF font in every single page (which would also require separate CSS and uploading the fonts in every single instance). Besides, this may make the slideshow very heavy, sluggish, and cumbersome for the user, since the custom Indic font would have to load again and again every time a new page is loaded in the iframe. I am not sure about this, though - is that how it works? I ask because I noticed that when I embedded my custom WOFF font in the 'first' page, it had no effect on the pages loaded inside the iframe. If I embed the font in some of the pages in the iframe, the next pages still don't get my font. Is there a way to embed my custom WOFF font only once, preferably in the first page where the first iframe is, and pass its effect on to all the pages embedded/loaded through the iframe so their text shows up in my initially embedded WOFF font - without embedding the font in every single one of them? Please help!

    Read the article

  • How can I do vertical paging with UITableView?

    - by vodkhang
    Let me describe what I am trying to do: I have a list of items. When I go into ItemDetailView, which is a UITableView, I want to do paging, so that when I scroll down past the end of the table view the next item is displayed - like in Good Reader. Currently I am trying two approaches, but neither really works for me. First: I put my UITableView on top of my scroll view. I get a nice animation and it works quite well, except that sometimes the UITableView receives the touch events instead of my scroll view. And because a UITableView is already a UIScrollView, this approach does not seem like a good idea. Here is some code:

        - (void)applicationDidFinishLaunching:(UIApplication *)application {
            // a page is the width of the scroll view
            scrollView.pagingEnabled = YES;
            scrollView.contentSize = CGSizeMake(scrollView.frame.size.width,
                                                scrollView.frame.size.height * kNumberOfPages);
            scrollView.showsHorizontalScrollIndicator = NO;
            scrollView.showsVerticalScrollIndicator = YES;
            scrollView.scrollsToTop = NO;
            scrollView.delegate = self;
            [self loadScrollViewWithPage:0];
        }

        - (void)loadScrollViewWithPage:(int)page {
            if (page < 0) return;
            if (page >= kNumberOfPages) return;
            [self.currentViewController.view removeFromSuperview];
            self.currentViewController = [[[MyNewTableView alloc] initWithPageNumber:page] autorelease];
            if (nil == currentViewController.view.superview) {
                CGRect frame = scrollView.frame;
                [scrollView addSubview:currentViewController.view];
            }
        }

        - (void)scrollViewDidScroll:(UIScrollView *)sender {
            if (pageControlUsed) {
                return;
            }
            CGFloat pageHeight = scrollView.frame.size.height;
            int page = floor((scrollView.contentOffset.y - pageHeight / 2) / pageHeight) + 1;
            [self loadScrollViewWithPage:page];
        }

        - (IBAction)changePage:(id)sender {
            int page = pageControl.currentPage;
            [self loadScrollViewWithPage:page];
            // update the scroll view to the appropriate page
            CGRect frame = scrollView.frame;
            [scrollView scrollRectToVisible:frame animated:YES];
            pageControlUsed = YES;
        }

    My second approach is to use push and pop on the navigationController to pop the current ItemDetail and push a new one on top of it. But this approach does not give me the nice scrolling-down animation of the first approach. An answer to how I can get this done with either approach would be appreciated.

    Read the article

  • Combining MVVM Light Toolkit and Unity 2.0

    - by Alan Cordner
    This is more of a commentary than a question, though feedback would be nice. I have been tasked with creating the user interface for a new project we are doing. We want to use WPF, and I wanted to learn all of the modern UI design techniques available. Since I am fairly new to WPF I have been researching what is available. I have pretty much settled on using MVVM Light Toolkit (mainly because of its "Blendability" and the EventToCommand behavior!), but I wanted to incorporate IoC as well. So, here is what I have come up with: I have modified the default ViewModelLocator class in an MVVM Light project to use a UnityContainer to handle dependency injection. Considering I didn't know what 90% of these terms meant 3 months ago, I think I'm on the right track.

        // Example of MVVM Light Toolkit ViewModelLocator class that uses the Microsoft
        // Unity 2.0 Inversion of Control container to resolve ViewModel dependencies.
        using Microsoft.Practices.Unity;

        namespace MVVMLightUnityExample
        {
            public class ViewModelLocator
            {
                public static UnityContainer Container { get; set; }

                #region Constructors

                static ViewModelLocator()
                {
                    if (Container == null)
                    {
                        Container = new UnityContainer();

                        // register all dependencies required by view models
                        Container
                            .RegisterType<IDialogService, ModalDialogService>(new ContainerControlledLifetimeManager())
                            .RegisterType<ILoggerService, LogFileService>(new ContainerControlledLifetimeManager());
                    }
                }

                /// <summary>
                /// Initializes a new instance of the ViewModelLocator class.
                /// </summary>
                public ViewModelLocator()
                {
                    ////if (ViewModelBase.IsInDesignModeStatic)
                    ////{
                    ////    // Create design time view models
                    ////}
                    ////else
                    ////{
                    ////    // Create run time view models
                    ////}
                    CreateMain();
                }

                #endregion

                #region MainViewModel

                private static MainViewModel _main;

                /// <summary>
                /// Gets the Main property.
                /// </summary>
                public static MainViewModel MainStatic
                {
                    get
                    {
                        if (_main == null)
                        {
                            CreateMain();
                        }
                        return _main;
                    }
                }

                /// <summary>
                /// Gets the Main property.
                /// </summary>
                [System.Diagnostics.CodeAnalysis.SuppressMessage("Microsoft.Performance", "CA1822:MarkMembersAsStatic",
                    Justification = "This non-static member is needed for data binding purposes.")]
                public MainViewModel Main
                {
                    get { return MainStatic; }
                }

                /// <summary>
                /// Provides a deterministic way to delete the Main property.
                /// </summary>
                public static void ClearMain()
                {
                    _main.Cleanup();
                    _main = null;
                }

                /// <summary>
                /// Provides a deterministic way to create the Main property.
                /// </summary>
                public static void CreateMain()
                {
                    if (_main == null)
                    {
                        // allow Unity to resolve the view model and hold onto the reference
                        _main = Container.Resolve<MainViewModel>();
                    }
                }

                #endregion

                #region OrderViewModel

                // property to hold the order number (injected into the OrderViewModel constructor when resolved)
                public static string OrderToView { get; set; }

                /// <summary>
                /// Gets the OrderViewModel property.
                /// </summary>
                public static OrderViewModel OrderViewModelStatic
                {
                    get
                    {
                        // allow Unity to resolve the view model;
                        // do not keep a local reference to the resolved instance because we need a new
                        // instance each time - the corresponding View is a UserControl that can be used
                        // multiple times within a single window/view.
                        // pass the current value of the OrderToView property to the constructor!
                        return Container.Resolve<OrderViewModel>(new ParameterOverride("orderNumber", OrderToView));
                    }
                }

                /// <summary>
                /// Gets the OrderViewModel property.
                /// </summary>
                [System.Diagnostics.CodeAnalysis.SuppressMessage("Microsoft.Performance", "CA1822:MarkMembersAsStatic",
                    Justification = "This non-static member is needed for data binding purposes.")]
                public OrderViewModel Order
                {
                    get { return OrderViewModelStatic; }
                }

                #endregion

                /// <summary>
                /// Cleans up all the resources.
                /// </summary>
                public static void Cleanup()
                {
                    ClearMain();
                    Container = null;
                }
            }
        }

    And the MainViewModel class, showing dependency injection in use:

        using GalaSoft.MvvmLight;
        using Microsoft.Practices.Unity;

        namespace MVVMLightUnityExample
        {
            public class MainViewModel : ViewModelBase
            {
                private IDialogService _dialogs;
                private ILoggerService _logger;

                /// <summary>
                /// Initializes a new instance of the MainViewModel class. This default constructor calls the
                /// non-default constructor, resolving the interfaces used by this view model.
                /// </summary>
                public MainViewModel()
                    : this(ViewModelLocator.Container.Resolve<IDialogService>(),
                           ViewModelLocator.Container.Resolve<ILoggerService>())
                {
                    if (IsInDesignMode)
                    {
                        // Code runs in Blend --> create design time data.
                    }
                    else
                    {
                        // Code runs "for real"
                    }
                }

                /// <summary>
                /// Initializes a new instance of the MainViewModel class.
                /// Interfaces are automatically resolved by the IoC container.
                /// </summary>
                /// <param name="dialogs">Interface to dialog service</param>
                /// <param name="logger">Interface to logger service</param>
                public MainViewModel(IDialogService dialogs, ILoggerService logger)
                {
                    _dialogs = dialogs;
                    _logger = logger;

                    if (IsInDesignMode)
                    {
                        // Code runs in Blend --> create design time data.
                        _dialogs.ShowMessage("Running in design-time mode!", "Injection Constructor", DialogButton.OK, DialogImage.Information);
                        _logger.WriteLine("Running in design-time mode!");
                    }
                    else
                    {
                        // Code runs "for real"
                        _dialogs.ShowMessage("Running in run-time mode!", "Injection Constructor", DialogButton.OK, DialogImage.Information);
                        _logger.WriteLine("Running in run-time mode!");
                    }
                }

                public override void Cleanup()
                {
                    // Clean up if needed
                    _dialogs = null;
                    _logger = null;
                    base.Cleanup();
                }
            }
        }

    And the OrderViewModel class:

        using GalaSoft.MvvmLight;
        using Microsoft.Practices.Unity;

        namespace MVVMLightUnityExample
        {
            /// <summary>
            /// This class contains properties that a View can data bind to.
            /// <para>
            /// Use the <strong>mvvminpc</strong> snippet to add bindable properties to this ViewModel.
            /// </para>
            /// <para>
            /// You can also use Blend to data bind with the tool's support.
            /// </para>
            /// <para>
            /// See http://www.galasoft.ch/mvvm/getstarted
            /// </para>
            /// </summary>
            public class OrderViewModel : ViewModelBase
            {
                private const string testOrderNumber = "123456";
                private Order _order;

                /// <summary>
                /// Initializes a new instance of the OrderViewModel class.
                /// </summary>
                public OrderViewModel()
                    : this(testOrderNumber)
                {
                }

                /// <summary>
                /// Initializes a new instance of the OrderViewModel class.
                /// </summary>
                public OrderViewModel(string orderNumber)
                {
                    if (IsInDesignMode)
                    {
                        // Code runs in Blend --> create design time data.
                        _order = new Order(orderNumber, "My Company", "Our Address");
                    }
                    else
                    {
                        _order = GetOrder(orderNumber);
                    }
                }

                public override void Cleanup()
                {
                    // Clean own resources if needed
                    _order = null;
                    base.Cleanup();
                }
            }
        }

    And the code that could be used to display an order view for a specific order:

        public void ShowOrder(string orderNumber)
        {
            // pass the order number to the ViewModelLocator so it can be injected
            // into the constructor of the OrderViewModel instance
            ViewModelLocator.OrderToView = orderNumber;
            View.OrderView orderView = new View.OrderView();
        }

    These examples have been stripped down to show only the IoC ideas. It took a lot of trial and error, searching the internet for examples, and discovering that the Unity 2.0 documentation is lacking (at best), to come up with this solution. Let me know if you think it could be improved.
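    One side benefit of resolving everything through the locator's public UnityContainer is that the registrations can be swapped out, for example in a test harness or a design-time bootstrapper. A rough sketch under that assumption (FakeDialogService and InMemoryLoggerService are hypothetical stand-ins, not part of the code above):

        // Replace the real services before any view model is resolved.
        ViewModelLocator.Container = new UnityContainer();
        ViewModelLocator.Container
            .RegisterType<IDialogService, FakeDialogService>(new ContainerControlledLifetimeManager())
            .RegisterType<ILoggerService, InMemoryLoggerService>(new ContainerControlledLifetimeManager());

        // MainViewModel now receives the fakes through its injection constructor.
        var viewModel = ViewModelLocator.Container.Resolve<MainViewModel>();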

    Read the article

  • Customizing UIPickerView in Xcode

    - by Srinivas G
    Hi, I have developed an application that uses a UIPickerView. It displays at the default height. I want to display the UIPickerView at 320x480 (the iPhone screen size) so that the picker view fills the whole screen, but without using the transform property, because that stretches the whole view - it doesn't look good, and even the selection indicator gets stretched. I don't want a stretched picker view; I just need the picker view to fill the whole screen. We can control the width of a UIPickerView, but how can we control its height? Thanks & Regards.

    Read the article

  • How can you add a UIGestureRecognizer to a UIBarButtonItem as in the common undo/redo UIPopoverController?

    - by SG
    Problem: In my iPad app, I cannot attach a popover to a bar button item only after press-and-hold events. But this seems to be standard for undo/redo. How do other apps do this?

    Background: I have an undo button (UIBarButtonSystemItemUndo) in the toolbar of my UIKit (iPad) app. When I press the undo button, it fires its action, which is undo:, and that executes correctly. However, the "standard UE convention" for undo/redo on iPad is that pressing undo executes an undo, while pressing and holding the button reveals a popover controller where the user selects either "undo" or "redo" until the controller is dismissed. The normal way to attach a popover controller is with presentPopoverFromBarButtonItem:, and I can configure this easily enough. To get it to show only after press-and-hold, we have to set a view to respond to "long press" gesture events, as in this snippet:

        UILongPressGestureRecognizer *longPressOnUndoGesture = [[UILongPressGestureRecognizer alloc]
            initWithTarget:self action:@selector(handleLongPressOnUndoGesture:)];
        //Broken because there is no customView in a UIBarButtonSystemItemUndo item
        [self.undoButtonItem.customView addGestureRecognizer:longPressOnUndoGesture];
        [longPressOnUndoGesture release];

    With this, after a press-and-hold on the view the method handleLongPressOnUndoGesture: gets called, and within that method I will configure and display the popover for undo/redo. So far, so good. The problem is that there is no view to attach to: self.undoButtonItem is a UIBarButtonItem, not a view.

    Possible solutions: 1) [The ideal] Attach the gesture recognizer to the bar button item. It is possible to attach a gesture recognizer to a view, but UIBarButtonItem is not a view. It does have a .customView property, but that property is nil when the bar button item is a standard system type (as it is in this case). 2) Use another view. I could use the UIToolbar, but that would require some weird hit-testing and be an all-around hack, if it is even possible in the first place. There is no other alternative view I can think of using. 3) Use the customView property. Standard types like UIBarButtonSystemItemUndo have no customView (it is nil), and setting the customView would erase the standard contents, which it needs to keep. This would amount to re-implementing all the look and function of UIBarButtonSystemItemUndo, again if it is even possible to do.

    Question: How can I attach a gesture recognizer to this "button"? More specifically, how can I implement the standard press-and-hold-to-show-redo-popover in an iPad app? Ideas? Thank you very much, especially if someone actually has this working in their app (I'm thinking of you, omni) and wants to share...

    Read the article

  • ASP.NET MVC: routing help

    - by pcampbell
    Consider two methods on the controller CustomerController.cs:

        //URL to be http://mysite/Customer/
        public ActionResult Index()
        {
            return View("ListCustomers");
        }

        //URL to be http://mysite/Customer/8
        public ActionResult View(int id)
        {
            return View("ViewCustomer");
        }

    How would you set up your routes to accommodate this requirement? How would you use Html.ActionLink when creating a link to the View page?
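    For what it's worth, one common way to get those URLs (a sketch only, assuming the standard Global.asax RegisterRoutes setup; the route name "CustomerView" is made up here) is to register a constrained route ahead of the default one:

        // Map /Customer/8 to CustomerController.View(8) when the segment is numeric.
        routes.MapRoute(
            "CustomerView",                                    // hypothetical route name
            "Customer/{id}",
            new { controller = "Customer", action = "View" },
            new { id = @"\d+" }                                // constraint: digits only
        );

        // /Customer/ then falls through to the default route and hits Index().
        routes.MapRoute(
            "Default",
            "{controller}/{action}/{id}",
            new { controller = "Home", action = "Index", id = UrlParameter.Optional }
        );

    A link to the View page could then look like Html.ActionLink("Customer 8", "View", "Customer", new { id = 8 }, null).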

    Read the article

< Previous Page | 690 691 692 693 694 695 696 697 698 699 700 701  | Next Page >