Search Results

Search found 1047 results on 42 pages for 'restrict'.


  • QValidator for hex input

    - by Evan Teran
    I have a Qt widget which should only accept a hex string as input. It is very simple to restrict the input characters to [0-9A-Fa-f], but I would like to have it display with a delimiter between "bytes"; so, for example, if the delimiter is a space and the user types 0011223344, I would like the line edit to display 00 11 22 33 44. Now if the user presses the backspace key 3 times, then I want it to display 00 11 22 3. I almost have what I want; so far there is only one subtle bug involving using the delete key to remove a delimiter. Does anyone have a better way to implement this validator? Here's my code so far: class HexStringValidator : public QValidator { public: HexStringValidator(QObject * parent) : QValidator(parent) {} public: virtual void fixup(QString &input) const { QString temp; int index = 0; // every 2 digits insert a space if they didn't explicitly type one Q_FOREACH(QChar ch, input) { if(std::isxdigit(ch.toAscii())) { if(index != 0 && (index & 1) == 0) { temp += ' '; } temp += ch.toUpper(); ++index; } } input = temp; } virtual State validate(QString &input, int &pos) const { if(!input.isEmpty()) { // TODO: can we detect if the char which was JUST deleted // (if any was deleted) was a space? and special case this? // as to not have the bug in this case? const int char_pos = pos - input.left(pos).count(' '); int chars = 0; fixup(input); pos = 0; while(chars != char_pos) { if(input[pos] != ' ') { ++chars; } ++pos; } // favor the right side of a space if(input[pos] == ' ') { ++pos; } } return QValidator::Acceptable; } }; For now this code is functional enough, but I'd love to have it work 100% as expected. Obviously the ideal would be to just separate the display of the hex string from the actual characters stored in the QLineEdit's internal buffer, but I have no idea where to start with that and I imagine it is a non-trivial undertaking. In essence, I would like to have a Validator which conforms to this regex: "[0-9A-Fa-f]( [0-9A-Fa-f])*", but I don't want the user to ever have to type a space as delimiter. Likewise, when editing what they typed, the spaces should be managed implicitly.
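
    A possible alternative, not from the original post: if the maximum number of bytes is known up front, QLineEdit's input mask can insert the delimiters for you, with no custom QValidator at all. A minimal sketch, assuming a fixed 5-byte field:

        #include <QApplication>
        #include <QLineEdit>

        int main(int argc, char *argv[])
        {
            QApplication app(argc, argv);

            QLineEdit edit;
            // 'h' accepts an optional hex digit; the literal spaces are fixed
            // delimiters that the user never types and cannot delete.
            edit.setInputMask("hh hh hh hh hh");
            edit.show();

            return app.exec();
        }

    The trade-off is the fixed maximum length, so a custom validator along the lines of the one above remains the right tool for unbounded input.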

    Read the article

  • User Control as container at design time

    - by Luca
    I'm designing a simple expander control. I've derived from UserControl, drawn inner controls, built, run; all OK. Since an inner Control is a Panel, I'd like to use it as a container at design time. Indeed I've used the attributes: [Designer(typeof(ExpanderControlDesigner))] [Designer("System.Windows.Forms.Design.ParentControlDesigner, System.Design", typeof(IDesigner))] Great, I say. But it isn't... The result is that I can use it as a container at design time, but: the added controls go behind the inner controls already embedded in the user control; even if I push a control added at design time to the top, at runtime it ends up behind the controls embedded in the user control again; and I cannot restrict the container area at design time to the Panel area. What am I missing? Here is the code for completeness... why is this snippet of code not working? [Designer(typeof(ExpanderControlDesigner))] [Designer("System.Windows.Forms.Design.ParentControlDesigner, System.Design", typeof(IDesigner))] public partial class ExpanderControl : UserControl { public ExpanderControl() { InitializeComponent(); .... [System.Security.Permissions.PermissionSet(System.Security.Permissions.SecurityAction.Demand, Name = "FullTrust")] internal class ExpanderControlDesigner : ControlDesigner { private ExpanderControl MyControl; public override void Initialize(IComponent component) { base.Initialize(component); MyControl = (ExpanderControl)component; // Hook up events ISelectionService s = (ISelectionService)GetService(typeof(ISelectionService)); IComponentChangeService c = (IComponentChangeService)GetService(typeof(IComponentChangeService)); s.SelectionChanged += new EventHandler(OnSelectionChanged); c.ComponentRemoving += new ComponentEventHandler(OnComponentRemoving); } private void OnSelectionChanged(object sender, System.EventArgs e) { } private void OnComponentRemoving(object sender, ComponentEventArgs e) { } protected override void Dispose(bool disposing) { ISelectionService s = (ISelectionService)GetService(typeof(ISelectionService)); IComponentChangeService c = (IComponentChangeService)GetService(typeof(IComponentChangeService)); // Unhook events s.SelectionChanged -= new EventHandler(OnSelectionChanged); c.ComponentRemoving -= new ComponentEventHandler(OnComponentRemoving); base.Dispose(disposing); } public override System.ComponentModel.Design.DesignerVerbCollection Verbs { get { DesignerVerbCollection v = new DesignerVerbCollection(); v.Add(new DesignerVerb("&asd", new EventHandler(null))); return v; } } } I've found many resources (Interaction, designed, limited area), but nothing was useful in practice... Actually there is a trick, since System.Windows.Forms classes can be designed (as usual) and have a correct behavior at runtime (TabControl, for example).
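
    A sketch of one common fix, not from the original post and with the inner panel name assumed: derive the designer from ParentControlDesigner and call EnableDesignMode on the inner panel, so that the panel, rather than the UserControl surface, becomes the design-time container:

        using System.ComponentModel;
        using System.Windows.Forms;
        using System.Windows.Forms.Design;

        [Designer(typeof(ExpanderControlDesigner))]
        public partial class ExpanderControl : UserControl
        {
            public ExpanderControl() { InitializeComponent(); }

            // Content visibility makes the designer serialize controls dropped inside the panel.
            [DesignerSerializationVisibility(DesignerSerializationVisibility.Content)]
            public Panel ContentPanel
            {
                get { return contentPanel; }   // hypothetical inner Panel from the designer file
            }
        }

        internal class ExpanderControlDesigner : ParentControlDesigner
        {
            public override void Initialize(IComponent component)
            {
                base.Initialize(component);

                // Route design-time drag and drop into the inner panel, which keeps
                // dropped controls inside its bounds and in front of the chrome.
                var expander = (ExpanderControl)component;
                EnableDesignMode(expander.ContentPanel, "ContentPanel");
            }
        }

    Note that two [Designer] attributes with the same base designer type compete rather than combine, so with a designer like this the second attribute in the original snippet is unnecessary.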

    Read the article

  • Auto populate input based on file name with AngularJS

    - by LouieV
    I am playing around with AngularJS and have not been able to solve this problem. I have a view that has a form to upload a file to a node server. So far I have managed to do this using some directives and a service. I allow the user to send a custom name to the POST data if they desire. What I want to accomplish is that when the user selects a file, the filename model auto-populates. My view looks like: <div> <input file-model="phpFile" type="file"> <input name="filename" type="text" ng-model="filename"> <button ng-click="send()">send</button> </div> file-model is my directive that allows the file to be assigned to a scope. myApp.directive('fileModel', ['$parse', function($parse) { return { restrict: 'A', link: function(scope, element, attrs) { var model = $parse(attrs.fileModel); var modelSetter = model.assign; element.bind('change', function() { scope.$apply(function() { modelSetter(scope, element[0].files[0]); }); }); } }; }]); The service: myApp.service('fileUpload', ['$http', function($http){ this.uploadFileToUrl = function(file, uploadUrl, optionals) { var fd = new FormData(); fd.append('file', file); for (var key in file) { fd.append(key, file[key]); } for(var i = 0; i < optionals.length; i++){ fd.append(optionals[i].name, optionals[i].data); } }; }]); Here as you can see I pass the file, append its properties, and append any optional properties. The controller is where I am having the trouble. I have tried $watch and using the file-model but I get the same error either way. myApp.controller('AddCtrl', function($scope, $location, PEberry, fileUpload){ //$scope.$watch(function() { // return $scope.phpFile; //},function(newValue, oldValue) { // $scope.filename = $scope.phpFile.name; //}, true); // if ($scope.phpFiles) { // $scope.filename = $scope.phpFiles.name; // } $scope.send = function() { var uploadUrl = "/files"; var file = $scope.phpFile; //var opts = [{ name: "uname", data: file.name }] fileUpload.uploadFileToUrl(file, uploadUrl); }; }); Thank you for your help!
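
    A sketch of one way to wire this up (AngularJS 1.x assumed; names follow the question): have the directive publish the selected file's name onto a second scope property inside the same $apply, so the text input bound to ng-model="filename" updates without a $watch in the controller:

        myApp.directive('fileModel', ['$parse', function($parse) {
            return {
                restrict: 'A',
                link: function(scope, element, attrs) {
                    var model = $parse(attrs.fileModel);

                    element.bind('change', function() {
                        var file = element[0].files[0];
                        scope.$apply(function() {
                            model.assign(scope, file);                 // same as before
                            scope.filename = file ? file.name : '';    // auto-populate the text input
                        });
                    });
                }
            };
        }]);

    If the text input ends up in a child scope (inside ng-if, for example), binding to an object property such as form.filename instead of a bare primitive avoids the usual prototypal-inheritance surprise.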

    Read the article

  • Coroutines in Java

    - by JUST MY correct OPINION
    I would like to do some stuff in Java that would be clearer if written using concurrent routines, but for which full-on threads are serious overkill. The answer, of course, is the use of coroutines, but there doesn't appear to be any coroutine support in the standard Java libraries and a quick Google on it brings up tantalising hints here or there, but nothing substantial. Here's what I've found so far: JSIM has a coroutine class, but it looks pretty heavyweight and conflates, seemingly, with threads at points. The point of this is to reduce the complexity of full-on threading, not to add to it. Further I'm not sure that the class can be extracted from the library and used independently. Xalan has a coroutine set class that does coroutine-like stuff, but again it's dubious if this can be meaningfully extracted from the overall library. It also looks like it's implemented as a tightly-controlled form of thread pool, not as actual coroutines. There's a Google Code project which looks like what I'm after, but if anything it looks more heavyweight than using threads would be. I'm basically nervous of something that requires software to dynamically change the JVM bytecode at runtime to do its work. This looks like overkill and like something that will cause more problems than coroutines would solve. Further it looks like it doesn't implement the whole coroutine concept. By my glance-over it gives a yield feature that just returns to the invoker. Proper coroutines allow yields to transfer control to any known coroutine directly. Basically this library, heavyweight and scary as it is, only gives you support for iterators, not fully-general coroutines. The promisingly-named Coroutine for Java fails because it's a platform-specific (obviously using JNI) solution. And that's about all I've found. I know about the native JVM support for coroutines in the Da Vinci Machine and I also know about the JNI continuations trick for doing this. These are not really good solutions for me, however, as I would not necessarily have control over which VM or platform my code would run on. (Indeed any bytecode manipulation system would suffer similar problems -- it would be best were this pure Java if possible. Runtime bytecode manipulation would restrict me from using this on Android, for example.) So does anybody have any pointers? Is this even possible? If not, will it be possible in Java 7?

    Read the article

  • Ruby on Rails: jQuery datepicker - dates between validation

    - by Jazz
    I have an app that allows a user to create new projects, and then search for them later. One of the options they have when creating a project is giving it start and end dates. At the moment all the code works properly for creating and searching on the dates, but I now want to restrict what dates the user can enter. I need an error to flag up when the user tries to enter an end date that is before the start date. It's really more for when the user is creating the project. Here is my code so far. Application.js: //= require jquery //= require jquery_ujs //= require jquery-ui //= require jquery.ui.all //= require_tree . $(function() { $("#project_start_date").datepicker({dateFormat: 'dd-mm-yy'}); }); $(function() { $("#project_end_date").datepicker({dateFormat: 'dd-mm-yy'}); }); jQuery(function(){ jQuery('#start_date_A').datepicker({dateFormat: "dd-mm-yy"}); }); jQuery(function(){ jQuery('#start_date_B').datepicker({dateFormat: "dd-mm-yy"}); }); New View: <div class="start_date" STYLE="text-align: left;"> <b>Start Date:</b> <%= f.text_field :start_date, :class => 'datepicker', :style => 'width: 80px;' %> </div> <div class="end_date" STYLE="text-align: left;"> <b>End Date:</b> <%= f.text_field :end_date, :class => 'datepicker', :style => 'width: 80px;' %> </div> Search View: Start dates between <%= text_field_tag :start_date_A, params[:start_date_A], :style => 'width: 80px;' %> - <%= text_field_tag :start_date_B, params[:start_date_B], :style => 'width: 80px;' %></br> I tried following examples online to get this to work by doing this in the application.js file: $(function() { $("#project_start_date,#project_end_date").datepicker({dateFormat: 'dd-mm-yy'}); }); jQuery(function(){ jQuery('#start_date_A,#start_date_B').datepicker({dateFormat: "dd-mm-yy"}); }); But then the script doesn't run. I am new to rails and javascript so any help at all is appreciated. Thanks in advance. UPDATE: Don't know why my question has been voted to be closed. It's quite simple: I need an error to flag up when the user tries to enter an end date that is before the start date. How can I do that?
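
    One common approach, sketched here rather than taken from the question (selectors follow the question's markup): constrain the two pickers against each other with minDate/maxDate so an end date before the start date cannot be picked at all:

        $(function() {
            $("#project_start_date").datepicker({
                dateFormat: 'dd-mm-yy',
                onClose: function(selectedDate) {
                    // End date may not be earlier than the chosen start date.
                    $("#project_end_date").datepicker("option", "minDate", selectedDate);
                }
            });

            $("#project_end_date").datepicker({
                dateFormat: 'dd-mm-yy',
                onClose: function(selectedDate) {
                    $("#project_start_date").datepicker("option", "maxDate", selectedDate);
                }
            });
        });

    Because the user can still type into the fields, a server-side check in the Project model (a custom validation that adds an error when end_date is before start_date) is worth keeping as the real guard.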

    Read the article

  • DB Design Pattern - Many to many classification / categorised tagging.

    - by Robin Day
    I have an existing database design that stores Job Vacancies. The "Vacancy" table has a number of fixed fields across all clients, such as "Title", "Description", "Salary range". There is an EAV design for "Custom" fields that the Clients can setup themselves, such as, "Manager Name", "Working Hours". The field names are stored in a "ClientText" table and the data stored in a "VacancyClientText" table with VacancyId, ClientTextId and Value. Lastly there is a many to many EAV design for custom tagging / categorising the vacancies with things such as Locations/Offices the vacancy is in, a list of skills required. This is stored as a "ClientCategory" table listing the types of tag, "Locations, Skills", a "ClientCategoryItem" table listing the valid values for each Category, e.g., "London,Paris,New York,Rome", "C#,VB,PHP,Python". Finally there is a "VacancyClientCategoryItem" table with VacancyId and ClientCategoryItemId for each of the selected items for the vacancy. There are no limits to the number of custom fields or custom categories that the client can add. I am now designing a new system that is very similar to the existing system, however, I have the ability to restrict the number of custom fields a Client can have and it's being built from scratch so I have no legacy issues to deal with. For the Custom Fields my solution is simple, I have 5 additional columns on the Vacancy Table called CustomField1-5. This removes one of the EAV designs. It is with the tagging / categorising design that I am struggling. If I limit a client to having 5 categories / types of tag. Should I create 5 tables listing the possible values "CustomCategoryItems1-5" and then an additional 5 many to many tables "VacancyCustomCategoryItem1-5" This would result in 10 tables performing the same storage as the three tables in the existing system. Also, should (heaven forbid) the requirements change in that I need 6 custom categories rather than 5 then this will result in a lot of code change. Therefore, can anyone suggest any DB Design Patterns that would be more suitable to storing such data. I'm happy to stick with the EAV approach, however, the existing system has come across all the usual performance issues and complex queries associated with such a design. Any advice / suggestions are much appreciated. The DBMS system used is SQL Server 2005, however, 2008 is an option if required for any particular pattern.
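
    One option worth sketching (illustrative table and column names, not the existing schema): keep a single generic set of tagging tables and make the "maximum of 5 categories per client" a data rule instead of a schema rule, so raising the limit later is a one-line constraint change rather than new tables:

        -- One category definition table, capped at 5 slots per client.
        CREATE TABLE ClientCategory (
            ClientCategoryId INT IDENTITY PRIMARY KEY,
            ClientId         INT NOT NULL,
            CategoryName     NVARCHAR(100) NOT NULL,
            SlotNumber       TINYINT NOT NULL,
            CONSTRAINT CK_ClientCategory_Slot CHECK (SlotNumber BETWEEN 1 AND 5),
            CONSTRAINT UQ_ClientCategory_Slot UNIQUE (ClientId, SlotNumber)
        );

        -- Valid values for each category (London, Paris, C#, VB, ...).
        CREATE TABLE ClientCategoryItem (
            ClientCategoryItemId INT IDENTITY PRIMARY KEY,
            ClientCategoryId     INT NOT NULL REFERENCES ClientCategory(ClientCategoryId),
            ItemValue            NVARCHAR(200) NOT NULL
        );

        -- Many-to-many link from vacancies to the selected items.
        CREATE TABLE VacancyClientCategoryItem (
            VacancyId            INT NOT NULL,   -- REFERENCES Vacancy(VacancyId)
            ClientCategoryItemId INT NOT NULL REFERENCES ClientCategoryItem(ClientCategoryItemId),
            PRIMARY KEY (VacancyId, ClientCategoryItemId)
        );

    This keeps the three-table shape of the existing system but avoids the CustomCategoryItems1-5 explosion; the slot-number constraint is where the business limit lives, and the queries stay identical whatever the limit becomes.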

    Read the article

  • MySQL Query to find consecutive available times of variable length

    - by Armaconn
    I have an events table that has user_id, date ('2013-10-01'), time ('04:15:00'), and status_id; What I am looking to find is a solution similar to http://stackoverflow.com/questions/2665574/find-consecutive-rows-calculate-duration but I need two additional components: 1) Take date into consideration, so 10/1/2013 at 11:00 PM - 10/2/2013 at 3:00AM. Feel free to just put in a fake date range (like '2013-10-01' to '2013-10-31') 2) Limit output to only include when there are 4+ consecutive times (each event is 15 minutes and I want it to display minimum blocks of an hour, but would also like to be able to switch this restriction to 1.5 hours or some other duration if possible). SUMMARY - Looking for a query that provides the start and end times for a set of events that have the same user_id, status_id, and are in a continuous series based on date and time. For which I can restrict results based on date range and minimum series duration. So the output should have: user_id, date_start, time_start, date_end, time_end, status_id, duration CREATE TABLE `events` ( `event_id` int(11) NOT NULL auto_increment COMMENT 'ID', `user_id` int(11) NOT NULL, `date` date NOT NULL, `time` time NOT NULL, `status_id` int(11) default NULL, PRIMARY KEY (`event_id`) ) ENGINE=MyISAM DEFAULT CHARSET=utf8 AUTO_INCREMENT=1568 ; INSERT INTO `events` VALUES(1, 101, '2013-08-14', '23:00:00', 2); INSERT INTO `events` VALUES(2, 101, '2013-08-14', '23:15:00', 2); INSERT INTO `events` VALUES(3, 101, '2013-08-14', '23:30:00', 2); INSERT INTO `events` VALUES(4, 101, '2013-08-14', '23:45:00', 2); INSERT INTO `events` VALUES(5, 101, '2013-08-15', '00:00:00', 2); INSERT INTO `events` VALUES(6, 101, '2013-08-15', '00:15:00', 1); INSERT INTO `events` VALUES(7, 500, '2013-08-14', '23:45:00', 1); INSERT INTO `events` VALUES(8, 500, '2013-08-15', '00:00:00', 1); INSERT INTO `events` VALUES(9, 500, '2013-08-15', '00:15:00', 2); INSERT INTO `events` VALUES(10, 500, '2013-08-15', '00:30:00', 2); INSERT INTO `events` VALUES(11, 500, '2013-08-15', '00:45:00', 1); Desired output row |user_id | date_start | time_start | date_end | time_end | status_id | duration 1 |101 |'2013-08-14'| '23:00:00' |'2013-08-15'|'00:15:00'| 2 | 5 2 |101 |'2013-08-15'| '00:00:15' |'2013-08-15'|'00:30:00'| 1 | 1 3 |500 |'2013-08-14'| '00:23:45' |'2013-08-15'|'00:15:00'| 1 | 2 4 |500 |'2013-08-15'| '00:00:15' |'2013-08-15'|'00:45:00'| 2 | 2 5 |500 |'2013-08-15'| '00:00:45' |'2013-08-15'|'01:00:00'| 2 | 1 *except that rows 2 and 5 wouldn't appear if duration had to be greater than 30 minutes Thanks for any help that you can provide! And please let me know if there is anything I can further clarify!!
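
    A sketch of one "gaps and islands" approach (written for MySQL 5.x user variables; on MySQL 8+ ROW_NUMBER() over a window is the cleaner tool): number the rows per user and status, then group on the timestamp minus 15 minutes per row number, which stays constant exactly while the slots are consecutive:

        SELECT user_id,
               status_id,
               MIN(ts)                      AS start_ts,
               MAX(ts) + INTERVAL 15 MINUTE AS end_ts,
               COUNT(*)                     AS duration_slots
        FROM (
            SELECT e.user_id,
                   e.status_id,
                   TIMESTAMP(e.`date`, e.`time`) AS ts,
                   @rn := IF(@u = e.user_id AND @s = e.status_id, @rn + 1, 1) AS rn,
                   @u  := e.user_id,
                   @s  := e.status_id
            FROM events e
            CROSS JOIN (SELECT @rn := 0, @u := NULL, @s := NULL) vars
            WHERE e.`date` BETWEEN '2013-08-01' AND '2013-08-31'   -- the "fake" date range
            ORDER BY e.user_id, e.status_id, e.`date`, e.`time`
        ) numbered
        GROUP BY user_id, status_id, ts - INTERVAL (rn * 15) MINUTE   -- the island anchor
        HAVING COUNT(*) >= 4;                                         -- 4 slots = 1 hour minimum

    Change the HAVING threshold to 6 for a 1.5-hour minimum, and split start_ts/end_ts with DATE() and TIME() if separate columns are needed. Be aware that the evaluation order of user variables inside a SELECT is not formally guaranteed, which is why the window-function version is preferable where available.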

    Read the article

  • IE8 ignores absolute positioning and margin:auto

    - by tuff
    I have a lightbox-style div with scrolling content that I am trying to restrict to a reasonable size within the viewport. I also want this div to be horizontally centered. This is all easy in Fx/Chrome/IE9. My problem is that IE8 ignores the absolute positioning which I use to size the content, and the rule margin: 0 auto which I use to horizontally center the lightbox. 1) Why? 2) What are my options for workarounds? EDIT: The centering issue is fixed by setting text-align:center on the parent element, but I have no idea why that works since the element I want to center is not inline. Still stuck on the absolute positioning stuff. HTML: <div class="bg"> <div class="a"> <div class="aa">titlebar</div> <div class="b"> <!-- many lines of content here --> </div> </div> </div> CSS: body { overflow: hidden; height: 100%; margin: 0; padding: 0; } /* IE8 needs ruleset above */ .bg { background: #333; position: fixed; top: 0; right: 0; bottom: 0; left: 0; height: 100%; /* needed in IE8 or the bg will only be as tall as the lightbox */ } .a { background: #eee; border: 3px solid #000; height: 80%; max-height: 800px; min-height: 200px; margin: 0 auto; position: relative; width: 80%; min-width: 200px; max-width: 800px; } .aa { background: lightblue; height: 28px; line-height: 28px; text-align: center; } .b { background: coral; overflow: auto; padding: 20px; position: absolute; top: 30px; right: 0; bottom: 0; left: 0; } Here's a demo of the problem: http://jsbin.com/urikoj/1/edit
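
    One thing worth ruling out first (a guess from the symptoms, not a confirmed diagnosis): the fact that text-align:center on the parent "fixes" the centering is the classic signature of the page rendering in quirks or Compatibility View mode, where auto margins and four-edge absolute positioning both misbehave in IE. Forcing IE8 into standards mode looks like this:

        <!DOCTYPE html>
        <html>
        <head>
            <meta http-equiv="Content-Type" content="text/html; charset=utf-8">
            <!-- Ask IE to use its newest engine instead of Compatibility View -->
            <meta http-equiv="X-UA-Compatible" content="IE=edge">
            <title>Lightbox demo</title>
            <!-- CSS from the question goes here -->
        </head>
        <body>
            <!-- lightbox markup from the question goes here -->
        </body>
        </html>

    If the F12 Developer Tools already report "IE8 Standards" as the document mode, the cause lies elsewhere and this snippet will not help.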

    Read the article

  • How to properly mix generics and inheritance to get the desired result?

    - by yamsha
    My question is not easy to explain using words, fortunately it's not too difficult to demonstrate. So, bear with me: public interface Command<R> { public R execute();//parameter R is the type of object that will be returned as the result of the execution of this command } public abstract class BasicCommand<R> { } public interface CommandProcessor<C extends Command<?>> { public <R> R process(C<R> command);//this is my question... it's illegal to do, but you understand the idea behind it, right? } //constrain BasicCommandProcessor to commands that subclass BasicCommand public class BasicCommandProcessor implements CommandProcessor<C extends BasicCommand<?>> { //here, only subclasses of BasicCommand should be allowed as arguments but these //BasicCommand object should be parameterized by R, like so: BasicCommand<R> //so the method signature should really be // public <R> R process(BasicCommand<R> command) //which would break the inheritance if the interface's method signature was instead: // public <R> R process(Command<R> command); //I really hope this fully illustrates my conundrum public <R> R process(C<R> command) { return command.execute(); } } public class CommandContext { public static void main(String... args) { BasicCommandProcessor bcp = new BasicCommandProcessor(); String textResult = bcp.execute(new BasicCommand<String>() { public String execute() { return "result"; } }); Long numericResult = bcp.execute(new BasicCommand<Long>() { public Long execute() { return 123L; } }); } } Basically, I want the generic "process" method to dictate the type of generic parameter of the Command object. The goal is to be able to restrict different implementations of CommandProcessor to certain classes that implement Command interface and at the same time to able to call the process method of any class that implements the CommandProcessor interface and have it return the object of type specified by the parametarized Command object. I'm not sure if my explanation is clear enough, so please let me know if further explanation is needed. I guess, the question is "Would this be possible to do, at all?" If the answer is "No" what would be the best work-around (I thought of a couple on my own, but I'd like some fresh ideas)
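
    A sketch of one workable compromise, not a definitive answer: Java generics cannot abstract over a type constructor (there is no way to write C<R> where C itself is a type parameter), so the usual trade is to keep the result type generic at the method level and enforce the BasicCommand restriction at runtime:

        // Self-contained, compilable sketch; the classes are nested only for brevity.
        public class CommandContext {

            interface Command<R> {
                R execute();
            }

            static abstract class BasicCommand<R> implements Command<R> { }

            interface CommandProcessor {
                <R> R process(Command<R> command);
            }

            static class BasicCommandProcessor implements CommandProcessor {
                @Override
                public <R> R process(Command<R> command) {
                    if (!(command instanceof BasicCommand)) {
                        throw new IllegalArgumentException("only BasicCommand is supported");
                    }
                    return command.execute();
                }
            }

            public static void main(String... args) {
                BasicCommandProcessor bcp = new BasicCommandProcessor();

                String textResult = bcp.process(new BasicCommand<String>() {
                    public String execute() { return "result"; }
                });

                Long numericResult = bcp.process(new BasicCommand<Long>() {
                    public Long execute() { return 123L; }
                });

                System.out.println(textResult + " / " + numericResult);
            }
        }

    The other direction is to keep the compile-time restriction but fix the result type per processor instance (interface CommandProcessor<R, C extends Command<R>>), which gives up the ability of a single processor to return differently typed results per call, exactly the tension the question describes.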

    Read the article

  • Should the argument be passed by reference in this .net example?

    - by Hamish Grubijan
    I have used Java, C++, .Net. (in that order). When asked about by-value vs. by-ref on interviews, I have always done well on that question ... perhaps because nobody went in-depth on it. Now I know that I do not see the whole picture. I was looking at this section of code written by someone else: XmlDocument doc = new XmlDocument(); AppendX(doc); // Real name of the function is different AppendY(doc); // ditto When I saw this code, I thought: wait a minute, shouldn't I use a ref in front of the doc variable (and modify AppendX/Y accordingly)? It works as written, but it made me question whether I actually understand the ref keyword in C#. As I thought about this more, I recalled early Java days (college intro language). A friend of mine looked at some code I had written and he had a mental block - he kept asking me which things are passed in by reference and when by value. My ignorant response was something like: Dude, there is only one kind of arg passing in Java and I forgot which one it is :). Chill, do not over-think and just code. Java still does not have a ref, does it? Yet, Java hackers seem to be productive. Anyhow, coding in C++ exposed me to this whole by-reference business, and now I am confused. Should ref be used in the example above? I am guessing that when ref is applied to value types: primitives, enums, structures (is there anything else in this list?) it makes a big difference. And ... when applied to objects it does not, because it is all by reference. If things were so simple, then why wouldn't the compiler restrict the usage of the ref keyword to a subset of types? When it comes to objects, does ref serve as a comment sort of? Well, I do remember that there can be problems with null and ref is also useful for initializing multiple elements within a method (since you cannot return multiple things with the same ease as you would in Python). Thanks.
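
    A small sketch that answers this by demonstration (my own example, not code from the question): passing the XmlDocument reference by value already lets the callee mutate the document, so the snippet above needs no ref; ref only matters when the callee must make the caller's variable point at a different object:

        using System;
        using System.Xml;

        class RefDemo
        {
            // The reference is copied, but both copies point at the same XmlDocument,
            // so mutations made here are visible to the caller without "ref".
            static void AppendX(XmlDocument doc)
            {
                doc.AppendChild(doc.CreateElement("X"));
            }

            // "ref" is needed only to reassign the caller's variable itself.
            static void Replace(ref XmlDocument doc)
            {
                doc = new XmlDocument();
                doc.AppendChild(doc.CreateElement("Fresh"));
            }

            static void Main()
            {
                var doc = new XmlDocument();
                AppendX(doc);
                Console.WriteLine(doc.OuterXml);   // <X /> : mutation visible, no ref involved

                Replace(ref doc);
                Console.WriteLine(doc.OuterXml);   // <Fresh /> : the variable now points elsewhere
            }
        }

    The same split exists in Java (references are passed by value, so there is no equivalent of ref), and it explains why the compiler does not restrict ref to value types: reassigning the caller's variable is a legitimate, if rarer, need for reference types too.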

    Read the article

  • Integrate SharePoint 2010 with Team Foundation Server 2010

    - by Martin Hinshelwood
    Our client is using a brand new shiny installation of SharePoint 2010, so we need to integrate our upgraded Team Foundation Server 2010 instance into it. In order to do that you need to run the Team Foundation Server 2010 install on the SharePoint 2010 server and choose to install only the “Extensions for SharePoint Products and Technologies”. We want our upgraded Team Project Collection to create any new portal in this SharePoint 2010 server farm. There are a number of goodies above and beyond a solution file that require the install, with the main one being the TFS 2010 client API. These goodies allow proper integration with the creation and viewing of Work Items from SharePoint, a new feature with TFS 2010. This works in both SharePoint 2007 and SharePoint 2010, with the level of integration dependent on the version of SharePoint that you are running. There are three levels of integration, with “SharePoint Services 3.0” or “SharePoint Foundation 2010” being the lowest. This level only offers framed Reporting Services integration for reporting, along with Work Item integration and document management. The highest is Microsoft Office SharePoint Server (MOSS) Enterprise, with Excel Services integration providing some lovely dashboards. Figure: Dashboards take the guessing out of Project Planning and estimation. Plus writing these reports would be boring! The Extensions that you need are on the same installation media as the main TFS install and the only difference is the options you pick during the install. Figure: Installing the TFS 2010 Extensions for SharePoint Products and Technologies onto SharePoint 2010 Annoyingly you may need to reboot a couple of times, but on this server the process was MUCH smoother than on our internal server. I think this was mostly to do with this being a clean install. Once it is installed you need to run the configuration. This will add all of the Solutions and Templates that are needed for SharePoint to work properly with TFS. Figure: This is where all the TFS 2010 goodies are added to your SharePoint 2010 server and the TFS 2010 object model is installed. Figure: All done, you have everything installed, but you still need to configure it Now that we have the TFS 2010 SharePoint Extensions installed on our SharePoint 2010 server we need to configure them both so that they will talk happily to each other. Configuring the SharePoint 2010 Managed path for Team Foundation Server 2010 In order for TFS to automatically create your project portals you need a wildcard managed path set up. This is where TFS will create the portal during the creation of a new Team project. To find the managed paths page for any application you need to first select the “Manage web applications” link from the SharePoint 2010 Central Administration screen. Figure: Find the “Manage web applications” link under the “Application Management” section. Once you are there you will see that the “Managed Paths” are there; they are just greyed out, and selecting one of the applications will enable them to be clicked. Figure: You need to select an application for the SharePoint 2010 ribbon to activate. Figure: You need to select an application before you can get to the Managed Paths for that application. Now we need to add a managed path for TFS 2010 to create its portals under. I have gone for the obvious option of just calling the managed path “TFS02” as the TFS 2010 server is the second TFS server that the client has installed, TFS 2008 being the first.
This links the location to the server name, and as you can’t have two projects of the same name in two separate project collections, there are unlikely to be any conflicts. Figure: Add a “tfs02” wildcard inclusion path to your SharePoint site. Configure the Team Foundation Server 2010 connection to SharePoint 2010 In order to have your new TFS 2010 Server talk to and create sites in SharePoint 2010 you need to tell the TFS server where to put them. As this TFS 2010 server was installed in out-of-the-box mode it has a SharePoint Services 3.0 (the free one) server running on the same box. But we want to change that so we can use the external SharePoint 2010 instance. Just open the “Team Foundation Server Administration Console” and navigate to the “SharePoint Web Applications” section. Here you click “Add” and enter the details for the Managed path we just created. Figure: If you have special permissions on your SharePoint you may need to add accounts to the “Service Accounts” section. Before we can set this new SharePoint 2010 instance to be the default for our upgraded Team Project Collection we need to configure SharePoint to take instructions from our TFS server. Configure SharePoint 2010 to connect to Team Foundation Server 2010 On your SharePoint 2010 server open the Team Foundation Server Administration Console and select the “Extensions for SharePoint Products and Technologies” node. Here we need to “grant access” for our TFS 2010 server to create sites. Click the “Grant access” link and fill out the full URL to the TFS server, for example http://servername.domain.com:8080/tfs, and if need be restrict the path that TFS sites can be created on. Remember that when the users create a new team project they can change the default and point it anywhere they like as long as it is an authorised SharePoint location. Figure: Grant access for your TFS 2010 server to create sites in SharePoint 2010 Now that we have an authorised location for our team project portals to be created we need to tell our Team Project Collection that this is where it should stick sites by default for any new Team Projects created. Configure the Team Foundation Server 2010 Team Project Collection to create new sites in SharePoint 2010 Back on our TFS 2010 server we need to set up the defaults for our upgraded Team Project Collection to the new SharePoint 2010 integration we have just set up. On the TFS 2010 server open up the “Team Foundation Server Administration Console” again and navigate to the “Team Project Collections” node. Once you are there you will see a list of all of your TPCs, and in our case we have a DefaultCollection as well as our named and upgraded collection for TFS 2008. If you select the “SharePoint Site” tab you can see that it is not currently configured. Figure: Our new Upgrade TFS2008 Team Project Collection does not have SharePoint configured Select “Edit Default Site Location” and select the new integration point that we just set up for SharePoint 2010. Once you have selected the “SharePoint Web Application” (the thing we just configured) then it will give you an example based on that configuration point and the name of the Team Project Collection that we are configuring. Figure: Set the default location for new Team Project Portals to be created for this Team Project Collection This is where the reason for configuring the Extensions on the SharePoint 2010 server before doing this last bit becomes apparent.
TFS 2010 is going to create a site at our http://sharepointserver/tfs02/ location called http://sharepointserver/tfs02/[TeamProjectCollection], or whatever we had specified, and it would have had difficulty doing this if we had not given it permission first. Figure: If there is no Team Project Collection site at this location the TFS 2010 server is going to create one This will create a nice Team Project Collection parent site to contain the Portals for any new Team Projects that are created. It is worth noting that it will not create portals for existing Team Projects as this process is run during the Team Project Creation wizard. Figure: Just a basic parent site to host all of your new Team Project Portals as sub sites You will need to add all of the users that will be creating Team Projects as Administrators of this site so that they will not get an error during the Project Creation Wizard. You may also want to customise this as a proper portal to your projects if you are going to be having lots of them, but it is really just a default placeholder so you have a top level site that you can back up and point at. You have now integrated SharePoint 2010 and Team Foundation Server 2010! You can now go forth and multiply your Team Projects for this Team Project Collection or you can continue to add portals to your other Collections.
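
    For reference, the same wildcard managed path can be created from the SharePoint 2010 Management Shell; a one-line sketch, with the web application URL being an assumption:

        # Omitting -Explicit creates a wildcard inclusion, which is what TFS needs here.
        New-SPManagedPath -RelativeURL "tfs02" -WebApplication "http://sharepointserver"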

    Read the article

  • Week in Geek: US Govt E-card Scam Siphons Confidential Data Edition

    - by Asian Angel
    This week we learned how to “back up photos to Flickr, automate repetitive tasks, & normalize MP3 volume”, enable “stereo mix” in Windows 7 to record audio, create custom papercraft toys, read up on three alternatives to Apple’s flaky iOS alarm clock, decorated our desktops & app docks with Google icon packs, and more. Photo by alexschlegel. Random Geek Links It has been a busy week on the security & malware fronts and we have a roundup of the latest news to help keep you updated. Photo by TopTechWriter.US. US govt e-card scam hits confidential data A fake U.S. government Christmas e-card has managed to siphon off gigabytes of sensitive data from a number of law enforcement and military staff who work on cybersecurity matters, many of whom are involved in computer crime investigations. Security tool uncovers multiple bugs in every browser Michal Zalewski reports that he discovered the vulnerability in Internet Explorer a while ago using his cross_fuzz fuzzing tool and reported it to Microsoft in July 2010. Zalewski also used cross_fuzz to discover bugs in other browsers, which he also reported to the relevant organisations. Microsoft to fix Windows holes, but not ones in IE Microsoft said that it will release two security bulletins next week fixing three holes in Windows, but it is still investigating or working on fixing holes in Internet Explorer that have been reportedly exploited in attacks. Microsoft warns of Windows flaw affecting image rendering Microsoft has warned of a Windows vulnerability that could allow an attacker to take control of a computer if the user is logged on with administrative rights. Windows 7 Not Affected by Critical 0-Day in the Windows Graphics Rendering Engine While confirming that details on a Critical zero-day vulnerability have made their way into the wild, Microsoft noted that customers running the latest iteration of Windows client and server platforms are not exposed to any risks. Microsoft warns of Office-related malware Microsoft’s Malware Protection Center issued a warning this week that it has spotted malicious code on the Internet that can take advantage of a flaw in Word and infect computers after a user does nothing more than read an e-mail. *Refers to a flaw that was addressed in the November security patch releases. Make sure you have all of the latest security updates installed. Unpatched hole in ImgBurn disk burning application According to security specialist Secunia, a highly critical vulnerability in ImgBurn, a lightweight disk burning application, can be used to remotely compromise a user’s system. Hole in VLC Media Player Virtual Security Research (VSR) has identified a vulnerability in VLC Media Player. In versions up to and including 1.1.5 of the VLC Media Player. Flash Player sandbox can be bypassed Flash applications run locally can read local files and send them to an online server – something which the sandbox is supposed to prevent. Chinese auction site touts hacked iTunes accounts Tens of thousands of reportedly hacked iTunes accounts have been found on Chinese auction site Taobao, but the company claims it is unable to take action unless there are direct complaints. What happened in the recent Hotmail outage Mike Schackwitz explains the cause of the recent Hotmail outage. DOJ sends order to Twitter for Wikileaks-related account info The U.S. 
Justice Department has obtained a court order directing Twitter to turn over information about the accounts of activists with ties to Wikileaks, including an Icelandic politician, a legendary Dutch hacker, and a U.S. computer programmer. Google gets court to block Microsoft Interior Department e-mail win The U.S. Federal Claims Court has temporarily blocked Microsoft from proceeding with the $49.3 million, five-year DOI contract that it won this past November. Google Apps customers get email lockdown Companies and organisations using Google Apps are now able to restrict the email access of selected users. LibreOffice Is the Default Office Suite for Ubuntu 11.04 Matthias Klose has announced some details regarding the replacement of the old OpenOffice.org 3.2.1 packages with the new LibreOffice 3.3 ones, starting with the upcoming Ubuntu 11.04 (Natty Narwhal) Alpha 2 release. Sysadmin Geek Tips Photo by Filomena Scalise. How to Setup Software RAID for a Simple File Server on Ubuntu Do you need a file server that is cheap and easy to setup, “rock solid” reliable, and has Email Alerting? This tutorial shows you how to use Ubuntu, software RAID, and SaMBa to accomplish just that. How to Control the Order of Startup Programs in Windows While you can specify the applications you want to launch when Windows starts, the ability to control the order in which they start is not available. However, there are a couple of ways you can easily overcome this limitation and control the startup order of applications. Random TinyHacker Links Using Opera Unite to Send Large Files A tutorial on using Opera Unite to easily send huge files from your computer. WorkFlowy is a Useful To-do List Tool A cool to-do list tool that lets you integrate multiple tasks in one single list easily. Playing Flash Videos on iOS Devices Yes, you can play flash videos on jailbroken iPhones. Here’s a tutorial. Clear Safari History and Cookies On iPhone A tutorial on clearing your browser history on iPhone and other iOS devices. Monitor Your Internet Usage Here’s a cool, cross-platform tool to monitor your internet bandwidth. Super User Questions See what the community had to say on these popular questions from Super User this week. Why is my upload speed much less than my download speed? Where should I find drivers for my laptop if it didn’t come with a driver disk? OEM Office 2010 without media – how to reinstall? Is there a point to using theft tracking software like Prey on my laptop, if you have login security? Moving an “all-in-one” PC when turned on/off How-To Geek Weekly Article Recap Get caught up on your HTG reading with our hottest articles from this past week. How to Combine Rescue Disks to Create the Ultimate Windows Repair Disk How To Boot 10 Different Live CDs From 1 USB Flash Drive What is Camera Raw, and Why Would a Professional Prefer it to JPG? Did You Know Facebook Has Built-In Shortcut Keys? The How-To Geek Guide to Audio Editing: The Basics One Year Ago on How-To Geek Enjoy looking through our latest gathering of retro article goodness. Learning Windows 7: Create a Homegroup & Join a New Computer To It How To Disconnect a Machine from a Homegroup Use Remote Desktop To Access Other Computers On a Small Office or Home Network How To Share Files and Printers Between Windows 7 and Vista Allow Users To Run Only Specified Programs in Windows 7 The Geek Note That is all we have for you this week and we hope your first week back at work or school has gone very well now that the holidays are over. Know a great tip? 
Send it in to us at [email protected]. Photo by Pamela Machado.

    Read the article

  • Enabling Kerberos Authentication for Reporting Services

    - by robcarrol
    Recently, I’ve helped several customers with Kerberos authentication problems with Reporting Services and Analysis Services, so I’ve decided to write this blog post and pull together some useful resources in one place (there are 2 whitepapers in particular that I found invaluable when configuring Kerberos authentication, and these can be found in the references section at the bottom of this post). In most of these cases, the problem has manifested itself with the Login failed for User ‘NT Authority\Anonymous’ (“double-hop”) error. By default, Reporting Services uses Windows Integrated Authentication, which includes the Kerberos and NTLM protocols for network authentication. Additionally, Windows Integrated Authentication includes the negotiate security header, which prompts the client to select Kerberos or NTLM for authentication. The client can access reports which have the appropriate permissions by using Kerberos for authentication. Servers that use Kerberos authentication can impersonate those clients and use their security context to access network resources. You can configure Reporting Services to use both Kerberos and NTLM authentication; however, this may lead to a failure to authenticate. With negotiate, if Kerberos cannot be used, the authentication method will default to NTLM. When negotiate is enabled, the Kerberos protocol is always used except when: Clients/servers that are involved in the authentication process cannot use Kerberos. The client does not provide the information necessary to use Kerberos. An in-depth discussion of Kerberos authentication is beyond the scope of this post; however, when users execute reports that are configured to use Windows Integrated Authentication, their logon credentials are passed from the report server to the server hosting the data source. Delegation needs to be set on the report server and Service Principal Names (SPNs) set for the relevant services. When a user processes a report, the request must go through a Web server on its way to a database server for processing. Kerberos authentication enables the Web server to request a service ticket from the domain controller; impersonate the client when passing the request to the database server; and then restrict the request based on the user’s permissions. Each time a server is required to pass the request to another server, the same process must be used. Kerberos authentication is supported in both native and SharePoint integrated mode, but I’ll focus on native mode for the purpose of this post (I’ll explain configuring SharePoint integrated mode and Kerberos authentication in a future post). Configuring Kerberos avoids the authentication failures due to double-hop issues. These double-hop errors occur when a user’s Windows domain credentials can’t be passed to another server to complete the user’s request. In the case of my customers, users were executing Reporting Services reports that were configured to query Analysis Services cubes on a separate machine using Windows Integrated security. The double-hop issue occurs as NTLM credentials are valid for only one network hop; subsequent hops result in anonymous authentication. The client attempts to connect to the report server by making a request from a browser (or some other application), and the connection process begins with authentication. With NTLM authentication, client credentials are presented to Computer 2. However, Computer 2 can’t use the same credentials to access Computer 3 (so we get the Anonymous login error).
To access Computer 3 it is necessary to configure the connection string with stored credentials, which is what a number of customers I have worked with have done to work around the double-hop authentication error. However, to get the benefits of Windows Integrated security, a better solution is to enable Kerberos authentication. Again, the connection process begins with authentication. With Kerberos authentication, the client and the server must demonstrate to one another that they are genuine, at which point authentication is successful and a secure client/server session is established. In the illustration above, the tiers represent the following: Client tier (computer 1): The client computer from which an application makes a request. Middle tier (computer 2): The Web server or farm where the client’s request is directed. Both the SharePoint and Reporting Services server(s) comprise the middle tier (but we’re only concentrating on native deployments just now). Back end tier (computer 3): The Database/Analysis Services server/Cluster where the requested data is stored. In order to enable Kerberos authentication for Reporting Services it’s necessary to configure the relevant SPNs, configure trust for delegation for server accounts, configure Kerberos with full delegation and configure the authentication types for Reporting Services. Service Principal Names (SPNs) are unique identifiers for services and identify the account’s type of service. If an SPN is not configured for a service, a client account will be unable to authenticate to the servers using Kerberos. You need to be a domain administrator to add an SPN, which can be added using the SetSPN utility. For Reporting Services in native mode, the following SPNs need to be registered: --SQL Server Service SETSPN -S mssqlsvc/servername:1433 Domain\SQL For named instances, or if the default instance is running under a different port, then the specific port number should be used. --Reporting Services Service SETSPN -S http/servername Domain\SSRS SETSPN -S http/servername.domain.com Domain\SSRS The SPN should be set for the NETBIOS name of the server and the FQDN. If you access the reports using a host header or DNS alias, then that should also be registered: SETSPN -S http/www.reports.com Domain\SSRS --Analysis Services Service SETSPN -S msolapsvc.3/servername Domain\SSAS Next, you need to configure trust for delegation, which refers to enabling a computer to impersonate an authenticated user to services on another computer. On the client: 1. The requesting application must support the Kerberos authentication protocol. 2. The user account making the request must be configured on the domain controller. Confirm that the following option is not selected: Account is sensitive and cannot be delegated. On the servers: 1. The service accounts must be trusted for delegation on the domain controller. 2. The service accounts must have SPNs registered on the domain controller. If the service account is a domain user account, the domain administrator must register the SPNs.
In Active Directory Users and Computers, verify that the domain user accounts used to access reports have been configured for delegation (the ‘Account is sensitive and cannot be delegated’ option should not be selected): We then need to configure the Reporting Services service account and computer to use Kerberos with full delegation:   We also need to do the same for the SQL Server or Analysis Services service accounts and computers (depending on what type of data source you are connecting to in your reports). Finally, and this is the part that sometimes gets over-looked, we need to configure the authentication type correctly for reporting services to use Kerberos authentication. This is configured in the Authentication section of the RSReportServer.config file on the report server. <Authentication> <AuthenticationTypes>           <RSWindowsNegotiate/> </AuthenticationTypes> <EnableAuthPersistence>true</EnableAuthPersistence> </Authentication> This will enable Kerberos authentication for Internet Explorer. For other browsers, see the link below. The report server instance must be restarted for these changes to take effect. Once these changes have been made, all that’s left to do is test to make sure Kerberos authentication is working properly by running a report from report manager that is configured to use Windows Integrated authentication (either connecting to Analysis Services or SQL Server back-end). Resources: Manage Kerberos Authentication Issues in a Reporting Services Environment http://download.microsoft.com/download/B/E/1/BE1AABB3-6ED8-4C3C-AF91-448AB733B1AF/SSRSKerberos.docx Configuring Kerberos Authentication for Microsoft SharePoint 2010 Products http://www.microsoft.com/download/en/details.aspx?displaylang=en&id=23176 How to: Configure Windows Authentication in Reporting Services http://msdn.microsoft.com/en-us/library/cc281253.aspx RSReportServer Configuration File http://msdn.microsoft.com/en-us/library/ms157273.aspx#Authentication Planning for Browser Support http://msdn.microsoft.com/en-us/library/ms156511.aspx
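
    When authentication still fails after all of this, the quickest sanity checks from a command prompt are whether the SPNs landed on the right accounts and whether duplicates exist (a sketch using the Windows Server 2008 setspn switches; account and host names are placeholders):

        REM List every SPN registered against the Reporting Services service account
        setspn -L DOMAIN\SSRS

        REM Check which account, if any, currently holds the HTTP SPN for the report server
        setspn -Q http/servername.domain.com

        REM Search for duplicate SPNs, a common cause of silent Kerberos failures
        setspn -X

    Duplicate or missing SPNs account for a large share of the ‘NT Authority\Anonymous’ errors described at the top of this post.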

    Read the article

  • Parallelism in .NET – Part 15, Making Tasks Run: The TaskScheduler

    - by Reed
    In my introduction to the Task class, I specifically made mention that the Task class does not directly provide its own execution. In addition, I made a strong point that the Task class itself is not directly related to threads or multithreading. Rather, the Task class is used to implement our decomposition of tasks. Once we’ve implemented our tasks, we need to execute them. In the Task Parallel Library, the execution of Tasks is handled via an instance of the TaskScheduler class. The TaskScheduler class is an abstract class which provides a single function: it schedules the tasks and executes them within an appropriate context. This class is the class which actually runs individual Task instances. The .NET Framework provides two (internal) implementations of the TaskScheduler class. Since a Task, based on our decomposition, should be a self-contained piece of code, parallel execution makes sense when executing tasks. The default implementation of the TaskScheduler class, and the one most often used, is based on the ThreadPool. This can be retrieved via the TaskScheduler.Default property, and is, by default, what is used when we just start a Task instance with Task.Start(). Normally, when a Task is started by the default TaskScheduler, the task will be treated as a single work item, and run on a ThreadPool thread. This pools tasks, and provides Task instances with all of the advantages of the ThreadPool, including thread pooling for reduced resource usage, and an upper cap on the number of work items. In addition, .NET 4 brings us a much improved thread pool, providing work stealing and reduced locking within the thread pool queues. By using the default TaskScheduler, our Tasks are run asynchronously on the ThreadPool. There is one notable exception to my above statements when using the default TaskScheduler. If a Task is created with the TaskCreationOptions set to TaskCreationOptions.LongRunning, the default TaskScheduler will generate a new thread for that Task, at least in the current implementation. This is useful for Tasks which will persist for most of the lifetime of your application, since it prevents your Task from starving the ThreadPool of one of its work threads. The Task Parallel Library provides one other implementation of the TaskScheduler class. In addition to providing a way to schedule tasks on the ThreadPool, the framework allows you to create a TaskScheduler which works within a specified SynchronizationContext. This scheduler can be retrieved within a thread that provides a valid SynchronizationContext by calling the TaskScheduler.FromCurrentSynchronizationContext() method. This implementation of TaskScheduler is intended for use with user interface development. Windows Forms and Windows Presentation Foundation both require any access to user interface controls to occur on the same thread that created the control. For example, if you want to set the text within a Windows Forms TextBox, and you’re working on a background thread, that UI call must be marshaled back onto the UI thread. The most common way this is handled depends on the framework being used. In Windows Forms, Control.Invoke or Control.BeginInvoke is most often used. In WPF, the equivalent calls are Dispatcher.Invoke or Dispatcher.BeginInvoke. As an example, say we’re working on a background thread, and we want to update a TextBlock in our user interface with a status label. The code would typically look something like: // Within background thread work...
string status = GetUpdatedStatus(); Dispatcher.BeginInvoke(DispatcherPriority.Normal, new Action( () => { statusLabel.Text = status; })); // Continue on in background method This works fine, but forces your method to take a dependency on WPF or Windows Forms. There is an alternative option, however. Both Windows Forms and WPF, when initialized, set up a SynchronizationContext in their thread, which is available on the UI thread via the SynchronizationContext.Current property. This context is used by classes such as BackgroundWorker to marshal calls back onto the UI thread in a framework-agnostic manner. The Task Parallel Library provides the same functionality via the TaskScheduler.FromCurrentSynchronizationContext() method. When setting up our Tasks, as long as we’re working on the UI thread, we can construct a TaskScheduler via: TaskScheduler uiScheduler = TaskScheduler.FromCurrentSynchronizationContext(); We then can use this scheduler on any thread to marshal data back onto the UI thread. For example, our code above can then be rewritten as: string status = GetUpdatedStatus(); (new Task(() => { statusLabel.Text = status; })) .Start(uiScheduler); // Continue on in background method This is nice since it allows us to write code that isn’t tied to Windows Forms or WPF, but is still fully functional with those technologies. I’ll discuss even more uses for the SynchronizationContext based TaskScheduler when I demonstrate task continuations, but even without continuations, this is a very useful construct. In addition to the two implementations provided by the Task Parallel Library, it is possible to implement your own TaskScheduler. The ParallelExtensionsExtras project within the Samples for Parallel Programming provides nine sample TaskScheduler implementations. These include schedulers which restrict the maximum number of concurrent tasks, run tasks on a single threaded apartment thread, use a new thread per task, and more.
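
    Pulling the pieces together, a minimal self-contained Windows Forms sketch (my own example, not from the article) of the two schedulers cooperating: the work runs on TaskScheduler.Default, and the label update is marshalled back through the scheduler captured on the UI thread.

        using System;
        using System.Threading;
        using System.Threading.Tasks;
        using System.Windows.Forms;

        class SchedulerDemo : Form
        {
            private readonly Label statusLabel = new Label { Dock = DockStyle.Fill };

            public SchedulerDemo()
            {
                Controls.Add(statusLabel);

                // Must be captured on the UI thread, after the SynchronizationContext exists.
                TaskScheduler uiScheduler = TaskScheduler.FromCurrentSynchronizationContext();

                Task.Factory.StartNew(() =>
                {
                    Thread.Sleep(1000);   // stand-in for real background work
                    string status = "Updated at " + DateTime.Now.ToLongTimeString();

                    // Hop back onto the UI thread without touching Control.Invoke or Dispatcher.
                    new Task(() => statusLabel.Text = status).Start(uiScheduler);
                });
            }

            [STAThread]
            static void Main()
            {
                Application.EnableVisualStyles();
                Application.Run(new SchedulerDemo());
            }
        }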

    Read the article

  • Security Trimmed Cross Site Collection Navigation

    - by Sahil Malik
    This article will serve as documentation for a fully functional codeplex project that I just created. The project gives you a WebPart that provides security-trimmed navigation across site collections. The first question is, why create such a project? In every single SharePoint project you will do, one question you will always be faced with is, what should the boundaries of sites be, and what should the boundaries of site collections be? There is no good or bad answer to this, because it really depends on your needs. There are some factors in play here. Site collections allow you to scale, as a site collection is the smallest entity you can put inside a content database. Site collections allow you to offer different levels of SLAs, because you can put a site collection on a separate content database, and put that database on a separate server. Site collections are a security boundary – and they can be moved around at will without affecting other site collections. Site collections are also a branding boundary. They are also a feature deployment boundary, so you can have two site collections on the same web application with a completely different nature of services. But site collections break navigation, i.e. a site collection at “/” and a site collection at “/sites/mySiteCollection” are completely independent of each other. If you have access to both, the navigation of / won’t show you a link to /sites/mySiteCollection. Some people refer to this as a huge issue in SharePoint. Luckily, some workarounds exist. A long time ago, I had blogged about “Implementing Consistent Navigation across Site Collections”. That approach was a no-code solution, and it worked – it gave you a consistent navigation across site collections. But it didn’t work in a security-trimmed fashion! i.e., if I don’t have access to Site Collection ‘X’, it would still show me a link to ‘X’. Well, this project gets around that issue. Simply deploy this project, and it’ll give you a WebPart. You can use that WebPart as either a webpart or as a server control dropped via SharePoint Designer, and it will give you Security Trimmed Cross Site Collection Navigation. The code has been written for SP2010, but it will work in SP2007 with the help of http://spwcfsupport.codeplex.com . What do I need to do to make it work? I’m glad you asked! Simple! Deploy the .wsp (which you can download here). This will give you a site collection feature called “Winsmarts Cross Site Collection Navigation” as shown below. Go ahead and activate it, and this will give you a WebPart called “Winsmarts Navigation Web Part” as shown below: Just drop this WebPart on your page, and it will show you all site collections that the currently logged in user has access to. Really, it’s that easy! This is shown below - In the above example, I have two site collections that I created at /sites/SiteCollection1 and /sites/SiteCollection2. The navigation shows the titles. You see some extraneous crap as well; you might want to clean that – I’ll talk about that in a minute. What? You’re running into problems? If the problem you’re running into is that you are prompted to log in three times, and then it shows a blank webpart that says “Loading your applications ..” and then craps out, then most probably you’re using a different authentication scheme. Behind the scenes I use a custom WCF service to perform this job.
OOTB, I’ve set it to work with NTLM, but if you need to make it work with alternate authentication schemes such as forms-based auth or client-side certs, you will need to edit the %14%\ISAPI\Winsmarts.CrossSCNav\web.config file, specifically this section -
<bindings>
  <webHttpBinding>
    <binding name="customWebHttpBinding">
      <security mode="TransportCredentialOnly">
        <transport clientCredentialType="Ntlm"/>
      </security>
    </binding>
  </webHttpBinding>
</bindings>
For Kerberos, change the “clientCredentialType” to “Windows”. For Forms auth, remove that transport line. For client certs – well, that’s a bit more involved, but it’s just web.config changes – hit a good book on WCF or hire me for a billion trillion $. But fair warning, I might be too busy to help immediately. If you’re running into a different problem, please leave a comment below, but the code is pretty rock solid, so .. hmm .. check what you’re doing! BTW, I don’t make any guarantee/warranty on this – if this code makes you sterile, unpopular, gives you a bad hairstyle, or anything else, that is your problem! But, there are some known issues - I wrote this as a concept – you can easily extend it to be more flexible. For example, hierarchical nav, horizontal nav, or jazzy effects with jQuery or Silverlight – all of those are possible very easily. This webpart is not smart enough to co-exist with another instance of itself on the same page. I can easily extend it to do so, which I will do in my spare(!?) time! Okay good! But that’s not all! As you can see, just dropping the WebPart may show you many extraneous site collections, or maybe you want to restrict which site collections are shown, or exclude a certain site collection from the navigation. To support that, I created a property on the WebPart called “UrlMatchPattern”, which is a regex expression you specify to trim the results :). So, just edit the WebPart, and specify a string property of “http://sp2010/sites/” as shown below. Note that you can put in whatever regex expression you want! So go crazy, I don’t care! And this gives you a cleaner look.   w00t! Enjoy!
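For readers curious what that UrlMatchPattern trimming boils down to, here is a rough, hypothetical sketch of the filtering logic in C#. The method and type names are assumptions made for illustration only – they are not lifted from the actual codeplex source.

using System.Collections.Generic;
using System.Linq;
using System.Text.RegularExpressions;

// Hypothetical illustration: trim the list of site collection URLs the current
// user can access down to those matching a UrlMatchPattern-style regex.
public static class NavigationTrimmer
{
    public static IEnumerable<string> TrimByPattern(IEnumerable<string> accessibleSiteCollectionUrls,
                                                    string urlMatchPattern)
    {
        if (string.IsNullOrEmpty(urlMatchPattern))
            return accessibleSiteCollectionUrls;            // no pattern: show everything the user can access

        var pattern = new Regex(urlMatchPattern, RegexOptions.IgnoreCase);
        return accessibleSiteCollectionUrls.Where(url => pattern.IsMatch(url));
    }
}

So a pattern like http://sp2010/sites/ keeps only the site collections under /sites/, which is exactly the cleaner look shown above.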

    Read the article

  • iptables not allowing mysql connections to aliased ips?

    - by Curtis
    I have a fairly simple iptables firewall on a server that provides MySQL services, but iptables seems to be giving me very inconsistent results. The default policy on the script is as follows: iptables -P INPUT DROP I can then make MySQL public with the following rule: iptables -A INPUT -p tcp --dport 3306 -j ACCEPT With this rule in place, I can connect to MySQL from any source IP to any destination IP on the server without a problem. However, when I try to restrict access to just three IPs by replacing the above line with the following, I run into trouble (xxx = masked octet): iptables -A INPUT -p tcp --dport 3306 -m state --state NEW -s 208.XXX.XXX.184 -j ACCEPT iptables -A INPUT -p tcp --dport 3306 -m state --state NEW -s 208.XXX.XXX.196 -j ACCEPT iptables -A INPUT -p tcp --dport 3306 -m state --state NEW -s 208.XXX.XXX.251 -j ACCEPT Once the above rules are in place, the following happens: I can connect to the MySQL server from the .184, .196 and .251 hosts just fine as long as I am connecting to the MySQL server using its default IP address or an IP alias in the same subnet as the default IP address. I am unable to connect to MySQL using IP aliases that are assigned to the server from a different subnet than the server's default IP when I'm coming from the .184 or .196 hosts, but .251 works just fine. From the .184 or .196 hosts, a telnet attempt just hangs... # telnet 209.xxx.xxx.22 3306 Trying 209.xxx.xxx.22... If I remove the .251 line (making .196 the last rule added), the .196 host still cannot connect to MySQL using IP aliases (so it's not the order of the rules that is causing the inconsistent behavior). I know, this particular test was silly as it shouldn't matter what order these three rules are added in, but I figured someone might ask. If I switch back to the "public" rule, all hosts can connect to the MySQL server using either the default or aliased IPs (in either subnet): iptables -A INPUT -p tcp --dport 3306 -j ACCEPT The server is running in a CentOS 5.4 OpenVZ/Proxmox container (2.6.32-4-pve). And, just in case you prefer to see the problem rules in the context of the iptables script, here it is (xxx = masked octet): # Flush old rules, old custom tables /sbin/iptables --flush /sbin/iptables --delete-chain # Set default policies for all three default chains /sbin/iptables -P INPUT DROP /sbin/iptables -P FORWARD DROP /sbin/iptables -P OUTPUT ACCEPT # Enable free use of loopback interfaces /sbin/iptables -A INPUT -i lo -j ACCEPT /sbin/iptables -A OUTPUT -o lo -j ACCEPT # All TCP sessions should begin with SYN /sbin/iptables -A INPUT -p tcp ! 
--syn -m state --state NEW -j DROP # Accept inbound TCP packets (Do this *before* adding the 'blocked' chain) /sbin/iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT # Allow the server's own IP to connect to itself /sbin/iptables -A INPUT -i eth0 -s 208.xxx.xxx.178 -j ACCEPT # Add the 'blocked' chain *after* we've accepted established/related connections # so we remain efficient and only evaluate new/inbound connections /sbin/iptables -N BLOCKED /sbin/iptables -A INPUT -j BLOCKED # Accept inbound ICMP messages /sbin/iptables -A INPUT -p ICMP --icmp-type 8 -j ACCEPT /sbin/iptables -A INPUT -p ICMP --icmp-type 11 -j ACCEPT # ssh (private) /sbin/iptables -A INPUT -p tcp --dport 22 -m state --state NEW -s xxx.xxx.xxx.xxx -j ACCEPT # ftp (private) /sbin/iptables -A INPUT -p tcp --dport 21 -m state --state NEW -s xxx.xxx.xxx.xxx -j ACCEPT # www (public) /sbin/iptables -A INPUT -p tcp --dport 80 -j ACCEPT /sbin/iptables -A INPUT -p tcp --dport 443 -j ACCEPT # smtp (public) /sbin/iptables -A INPUT -p tcp --dport 25 -j ACCEPT /sbin/iptables -A INPUT -p tcp --dport 2525 -j ACCEPT # pop (public) /sbin/iptables -A INPUT -p tcp --dport 110 -j ACCEPT # mysql (private) /sbin/iptables -A INPUT -p tcp --dport 3306 -m state --state NEW -s 208.xxx.xxx.184 -j ACCEPT /sbin/iptables -A INPUT -p tcp --dport 3306 -m state --state NEW -s 208.xxx.xxx.196 -j ACCEPT /sbin/iptables -A INPUT -p tcp --dport 3306 -m state --state NEW -s 208.xxx.xxx.251 -j ACCEPT Any ideas? Thanks in advance. :-)

    Read the article

  • Complex type support in process flow &ndash; XMLTYPE

    - by shawn
        Before the OWB 11.2 release, only five simple data types were supported in process flow: DATE, BOOLEAN, INTEGER, FLOAT and STRING. A new complex data type – XMLTYPE – was added in 11.2, in order to support complex data being passed between the process flow activities. In this article we will give a simple example to illustrate the usage of the new type and some related editors.     Suppose there is a bookstore that uses XML-format orders as shown below (we use the simplest form for illustration purposes). We can then create a process flow to handle the order: take the order as the input, extract the necessary information, and generate a confirmation email to the customer automatically. <order id='0001'>     <customer>         <name>Tom</name>         <email>[email protected]</email>     </customer>     <book id='Java_001'>         <quantity>3</quantity>     </book> </order>     Consider a simple use case here: we use an input parameter/variable with XMLTYPE to hold the XML content of the order; then we can use an Assign activity to retrieve the email info from the order; after that, we can create an email activity to send the email (other activities might be added in a practical case, but will not be described here). 1) Set the XML content value     For testing purposes, we will create a variable to hold the sample order, and this will then be used among the process flow activities. When the variable is of XMLTYPE and the “Literal” value is set to true, the advanced editor is enabled.     Click the “Advanced Editor” button shown above, and a simple XML editor will pop up. The editor has basic features like syntax highlighting and checking, as shown below:     We can also do basic validation, or validation against a schema, with the editor by selecting the normalized schema. With this, it is easier to provide the value for XMLTYPE variables. 2) Extract information from the XML content     After setting the value, we need to extract the email information with the Assign activity. In process flow, an enhanced expression builder is used to help users construct the XPath for extracting values from XML content. When the variable's literal value is set to false, the advanced editor is enabled.     Click the button, and the advanced editor will pop up, as shown below:     The editor is based on the expression builder (which is often used in mappings, etc.); an XPath library panel is appended which provides some help information on how to write the XPath. The expression used here is “XMLTYPE.EXTRACT(XML_ORDER,'/order/customer/email/text()').getStringVal()”, which uses '/order/customer/email/text()' as the XPath to extract the email info from the XML document.     A variable called “EMAIL_ADDR” is created with the String data type to hold the extracted value.     Then we bind the “VARIABLE” parameter of the Assign activity to the “EMAIL_ADDR” variable, which means the value of the “EMAIL_ADDR” variable will be set to the result of the “VALUE” parameter of the Assign activity. 3) Use the extracted information in the Email activity     We bind the “TO_ADDRESS” parameter of the email activity to the “EMAIL_ADDR” variable created in the above step.     
We can also extract other information from the XML order directly through expressions. For example, we can set “MESSAGE_BODY” to the value “'Dear '||XMLTYPE.EXTRACT(XML_ORDER,'/order/customer/name/text()').getStringVal()||chr(13)||chr(10)||'   You have ordered '||XMLTYPE.EXTRACT(XML_ORDER,'/order/book/quantity/text()').getStringVal()||' '||XMLTYPE.EXTRACT(XML_ORDER,'/order/book/@id').getStringVal()”. This expression extracts the customer name, the quantity and the book id from the order to compose the message body.     To make the email activity work, we need to provide some other necessary information, such as “SMTP_SERVER” (the SMTP server used to send the emails, like “mail.bookstore.com”; the default PORT number is set to 25, and you need to change the value accordingly), “FROM_ADDRESS” and “SUBJECT”. Then the process flow is ready to go.     After deploying the process flow package, we can simply run the process flow to check whether the result is as expected (an email will be sent to the specified email address with the proper subject and message body).     Note: In Oracle 11g, there is an enhanced security feature – the ACL (Access Control List) – which restricts network access within the database, so if you are using Oracle 11g we need to edit the list to allow UTL_SMTP to work. Refer to the chapters “Access Control Lists for UTL_TCP/HTTP/SMTP” and “Managing Fine-Grained Access to External Network Services” for more details.       In previous releases, XMLTYPE already existed in other OWB objects, like mappings/transformations etc. When a mapping/transformation is dragged into a process flow, its XMLTYPE parameters are mapped to STRING. Now, with XMLTYPE support in process flow, XMLTYPE maps to XMLTYPE in a more natural way, and we can leverage the new data type in the design.
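As a side note, if you want to sanity-check these XPath expressions outside of OWB before wiring them into the Assign activity, you can run them against the sample order with any XPath processor. Below is a minimal, purely illustrative sketch in C# (the email address is a hypothetical stand-in for the redacted one above); within OWB itself the expressions are of course evaluated by XMLTYPE.EXTRACT.

using System;
using System.Xml;

// Illustration only: verify the article's XPath expressions against the sample order.
class XPathSanityCheck
{
    static void Main()
    {
        const string order =
            "<order id='0001'>" +
            "  <customer><name>Tom</name><email>tom@example.com</email></customer>" + // hypothetical address
            "  <book id='Java_001'><quantity>3</quantity></book>" +
            "</order>";

        var doc = new XmlDocument();
        doc.LoadXml(order);

        // The same XPath expressions used in the Assign and Email activities.
        string email    = doc.SelectSingleNode("/order/customer/email/text()").Value;
        string name     = doc.SelectSingleNode("/order/customer/name/text()").Value;
        string quantity = doc.SelectSingleNode("/order/book/quantity/text()").Value;
        string bookId   = doc.SelectSingleNode("/order/book/@id").Value;

        Console.WriteLine("To: {0}", email);
        Console.WriteLine("Dear {0}, you have ordered {1} {2}", name, quantity, bookId);
    }
}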

    Read the article

  • Securing smtp with login

    - by Paul Peelen
    I have an ISPConfig server, and it seems that someone is using it to send spam. I got about 130 "Mail Delivery System" emails about rejected messages. This spammer uses my email address as the From address, so all of these bounces end up in my mailbox. I am using Postfix and Courier. I installed my server according to this guide: http://www.howtoforge.com/perfect-server-debian-lenny-ispconfig3-p3 I did this a few months ago. My question: Can I secure my server to require login to be able to send email, and if so... how? Thanks! EDIT Some data from mail.log; these kinds of errors show up constantly: Jun 15 17:58:16 bolt postfix/qmgr[10712]: CC7DA1242AE: from=<paul@*****.se>, size=3782, nrcpt=1 (queue active) Jun 15 17:58:16 bolt postfix/smtp[11337]: CC7DA1242AE: to=<[email protected]>, relay=none, delay=4641, delays=4640/0.01/0.32/0, dsn=4.4.3, status=deferred (Host or domain name not found. Name service error for name=cmlisboa.pt type=MX: Host not found, try again) Jun 15 17:58:19 bolt postfix/smtpd[10836]: connect from static-200-105-220-154.acelerate.net[200.105.220.154] Jun 15 17:58:20 bolt postfix/smtpd[10836]: NOQUEUE: reject: RCPT from static-200-105-220-154.acelerate.net[200.105.220.154]: 550 5.1.1 <advertising@*****.com>: Recipient address rejected: User unknown in virtual mailbox table; from=<[email protected]> to=<advertising@*****.com> proto=ESMTP helo=<static-200-105-220-154.acelerate.net> Jun 15 17:58:20 bolt postfix/smtpd[10836]: lost connection after DATA (0 bytes) from static-200-105-220-154.acelerate.net[200.105.220.154] Jun 15 17:58:20 bolt postfix/smtpd[10836]: disconnect from static-200-105-220-154.acelerate.net[200.105.220.154] Jun 15 17:58:29 bolt postfix/smtpd[10834]: connect from unknown[62.176.172.226] Jun 15 17:58:32 bolt postfix/smtpd[10834]: 386791241F9: client=unknown[62.176.172.226] Jun 15 17:58:34 bolt postfix/cleanup[10975]: 386791241F9: message-id=<[email protected]> Jun 15 17:58:34 bolt postfix/qmgr[10712]: 386791241F9: from=<[email protected]>, size=867, nrcpt=1 (queue active) Jun 15 17:58:35 bolt postfix/smtpd[10834]: disconnect from unknown[62.176.172.226] Jun 15 17:58:35 bolt amavis[11084]: (11084-17) Blocked SPAM, [62.176.172.226] [62.176.172.226] <[email protected]> -> <*****@*****>, Message-ID: <[email protected]>, mail_id: XczovKoMBYNr, Hits: 18.471, size: 867, 833 ms Jun 15 17:58:35 bolt postfix/smtp[10732]: 386791241F9: to=<*****@*****>, relay=127.0.0.1[127.0.0.1]:10024, delay=3.5, delays=2.7/0/0/0.83, dsn=2.7.0, status=sent (250 2.7.0 Ok, discarded, id=11084-17 - SPAM) Jun 15 17:58:35 bolt postfix/qmgr[10712]: 386791241F9: removed Jun 15 17:58:43 bolt postfix/smtpd[10836]: warning: 178.121.154.194: address not listed for hostname mm-194-154-121-178.dynamic.pppoe.mgts.by Jun 15 17:58:43 bolt postfix/smtpd[10836]: connect from unknown[178.121.154.194] Jun 15 17:58:45 bolt postfix/smtpd[10727]: connect from unknown[180.134.223.86] EDIT #2 Got some more info from the logs; this is a send request: mail.info.1:Jun 15 16:41:57 bolt amavis[5399]: (05399-06) Passed CLEAN, [110.139.48.64] [110.139.48.64] <paul@*****.se> -> <[email protected]>, Message-ID: <CHILKAT-MID-7c54ebcf-5501-de9b-f0b1-4f0234290d8d@HP-IRISH>, mail_id: 35l56Ramx6Nc, Hits: -2.941, size: 3329, queued_as: 2485770086, 136 ms mail.info.1:Jun 15 16:41:57 bolt postfix/smtp[4743]: 375C570082: to=<[email protected]>, relay=127.0.0.1[127.0.0.1]:10024, delay=4.8, delays=4.7/0/0/0.14, dsn=2.0.0, status=sent (250 2.0.0 Ok, id=05399-06, from MTA([127.0.0.1]:10025): 250 2.0.0 Ok: queued as 
2485770086) Which apparently got through. Any ideas how to restrict this?

    Read the article

  • Integration Patterns with Azure Service Bus Relay, Part 1: Exposing the on-premise service

    - by Elton Stoneman
    We're in the process of delivering an enabling project to expose on-premise WCF services securely to Internet consumers. The Azure Service Bus Relay does the clever stuff: we register our on-premise service with Azure, consumers call into our .servicebus.windows.net namespace, and their requests are relayed and serviced on-premise. In theory it's all wonderfully simple; by using the relay we get lots of protocol options, free HTTPS and load balancing, and by integrating to ACS we get plenty of security options. Part of our delivery is a suite of sample consumers for the service - .NET, jQuery, PHP - and this set of posts will cover setting up the service and the consumers. Part 1: Exposing the on-premise service In theory, this is ultra-straightforward. In practice, on a dev laptop, it is - but in a corporate network with firewalls and proxies, it isn't, so we'll walk through some of the pitfalls. Note that I'm using the "old" Azure portal which will soon be out of date, but the new shiny portal should have the same steps available and be easier to use. We start with a simple WCF service which takes a string as input, reverses the string and returns it. The Part 1 version of the code is on GitHub here: IPASBR Part 1. Configuring Azure Service Bus Start by logging into the Azure portal and registering a Service Bus namespace which will be our endpoint in the cloud. Give it a globally unique name, set it up somewhere near you (if you're in Europe, remember Europe (North) is Ireland, and Europe (West) is the Netherlands), and enable ACS integration by ticking "Access Control" as a service: Authenticating and authorizing to ACS When we try to register our on-premise service as a listener for the Service Bus endpoint, we need to supply credentials, which means only trusted service providers can act as listeners. We can use the default "owner" credentials, but that has admin permissions so a dedicated service account is better (Neil Mackenzie has a good post On Not Using owner with the Azure AppFabric Service Bus with lots of permission details). Click on "Access Control Service" for the namespace, navigate to Service Identities and add a new one. Give the new account a sensible name and description: Let ACS generate a symmetric key for you (this will be the shared secret we use in the on-premise service to authenticate as a listener), but be sure to set the expiration date to something usable. The portal defaults to expiring new identities after 1 year - but when your year is up *your identity will expire without warning* and everything will stop working. In production, you'll need governance to manage identity expiration and a process to make sure you renew identities and roll new keys regularly. The new service identity needs to be authorized to listen on the service bus endpoint. This is done through claim mapping in ACS - we'll set up a rule that says if the nameidentifier in the input claims has the value serviceProvider, in the output we'll have an action claim with the value Listen. In the ACS portal you'll see that there is already a Relying Party Application set up for ServiceBus, which has a Default rule group. 
Edit the rule group and click Add to add this new rule: The values to use are: Issuer: Access Control Service Input claim type: http://schemas.xmlsoap.org/ws/2005/05/identity/claims/nameidentifier Input claim value: serviceProvider Output claim type: net.windows.servicebus.action Output claim value: Listen When your service namespace and identity are set up, open the Part 1 solution and put your own namespace, service identity name and secret key into the file AzureConnectionDetails.xml in Solution Items, e.g.: <azure namespace="sixeyed-ipasbr">    <!-- ACS credentials for the listening service (Part1):-->   <service identityName="serviceProvider"            symmetricKey="nuR2tHhlrTCqf4YwjT2RA2BZ/+xa23euaRJNLh1a/V4="/>  </azure> Build the solution, and the T4 template will generate the Web.config for the service project with your Azure details in the transportClientEndpointBehavior:           <behavior name="SharedSecret">             <transportClientEndpointBehavior credentialType="SharedSecret">               <clientCredentials>                 <sharedSecret issuerName="serviceProvider"                               issuerSecret="nuR2tHhlrTCqf4YwjT2RA2BZ/+xa23euaRJNLh1a/V4="/>               </clientCredentials>             </transportClientEndpointBehavior>           </behavior> , and your service namespace in the Azure endpoint:         <!-- Azure Service Bus endpoints -->          <endpoint address="sb://sixeyed-ipasbr.servicebus.windows.net/net"                   binding="netTcpRelayBinding"                   contract="Sixeyed.Ipasbr.Services.IFormatService"                   behaviorConfiguration="SharedSecret">         </endpoint> The sample project is hosted in IIS, but it won't register with Azure until the service is activated. Typically you'd install AppFabric 1.1 for Windows Server and set the service to auto-start in IIS, but for dev just navigate to the local REST URL, which will activate the service and register it with Azure. Testing the service locally As well as an Azure endpoint, the service has a WebHttpBinding for local REST access:         <!-- local REST endpoint for internal use -->         <endpoint address="rest"                   binding="webHttpBinding"                   behaviorConfiguration="RESTBehavior"                   contract="Sixeyed.Ipasbr.Services.IFormatService" /> Build the service, then navigate to: http://localhost/Sixeyed.Ipasbr.Services/FormatService.svc/rest/reverse?string=abc123 - and you should see the reversed string response: If your network allows it, you'll get the expected response as before, but in the background your service will also be listening in the cloud. Good stuff! Who needs network security? Onto the next post for consuming the service with the netTcpRelayBinding.  Setting up network access to Azure But, if you get an error, it's because your network is secured and it's doing something to stop the relay working. The Service Bus relay bindings try to use direct TCP connections to Azure, so if ports 9350-9354 are available *outbound*, then the relay will run through them. If not, the binding steps down to standard HTTP, and issues a CONNECT across port 443 or 80 to set up a tunnel for the relay. 
If your network security guys are doing their job, the first option will be blocked by the firewall, and the second option will be blocked by the proxy, so you'll get this error: System.ServiceModel.CommunicationException: Unable to reach sixeyed-ipasbr.servicebus.windows.net via TCP (9351, 9352) or HTTP (80, 443) - and that will probably be the start of lots of discussions. Network guys don't really like giving servers special permissions for the web proxy, and they really don't like opening ports, so they'll need to be convinced about this. The resolution in our case was to put up a dedicated box in a DMZ, tinker with the firewall and the proxy until we got a relay connection working, then run some traffic which the network guys monitored to do a security assessment afterwards. Along the way we hit a few more issues, diagnosed mainly with Fiddler and Wireshark: System.Net.ProtocolViolationException: Chunked encoding upload is not supported on the HTTP/1.0 protocol - this means the TCP ports are not available, so Azure tries to relay messaging traffic across HTTP. The service can access the endpoint, but the proxy is downgrading traffic to HTTP 1.0, which does not support tunneling, so Azure can’t make its connection. We were using the Squid proxy, version 2.6. The Squid project is incrementally adding HTTP 1.1 support, but there's no definitive list of what's supported in what version (here are some hints). System.ServiceModel.Security.SecurityNegotiationException: The X.509 certificate CN=servicebus.windows.net chain building failed. The certificate that was used has a trust chain that cannot be verified. Replace the certificate or change the certificateValidationMode. The revocation function was unable to check revocation because the revocation server was offline. - by this point we'd given up on the HTTP proxy and opened the TCP ports. We got this error when the relay binding does its authentication hop to ACS. The messaging traffic is TCP, but the control traffic still goes over HTTP, and as part of the ACS authentication the process checks with a revocation server to see if Microsoft’s ACS cert is still valid, so the proxy still needs some clearance. The service account (the IIS app pool identity) needs access to: www.public-trust.com mscrl.microsoft.com We still got this error periodically with different accounts running the app pool. We fixed that by ensuring the machine-wide proxy settings are set up, so every account uses the correct proxy: netsh winhttp set proxy proxy-server="http://proxy.x.y.z" - and you might need to run this to clear out your credential cache: certutil -urlcache * delete If your network guys end up grudgingly opening ports, they can restrict connections to the IP address range for your chosen Azure datacentre, which might make them happier - see Windows Azure Datacenter IP Ranges. After all that you've hopefully got an on-premise service listening in the cloud, which you can consume from pretty much any technology.
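As an aside, if you prefer to wire up the relay endpoint in code rather than via the T4-generated Web.config, the sketch below shows roughly what that looks like. Treat it as an illustration under assumptions: it presumes the Microsoft.ServiceBus assembly from a 1.5-or-later Azure SDK (where TokenProvider.CreateSharedSecretTokenProvider is available; older SDKs use the CredentialType/SharedSecret properties matching the config above), and it assumes FormatService is the implementation class behind IFormatService in the sample solution.

using System;
using System.ServiceModel;
using Microsoft.ServiceBus;
using Sixeyed.Ipasbr.Services;

// Sketch: self-host the service and register it as a relay listener in code.
class RelayHost
{
    static void Main()
    {
        // sb://sixeyed-ipasbr.servicebus.windows.net/net
        Uri address = ServiceBusEnvironment.CreateServiceUri("sb", "sixeyed-ipasbr", "net");

        var host = new ServiceHost(typeof(FormatService));
        var endpoint = host.AddServiceEndpoint(typeof(IFormatService), new NetTcpRelayBinding(), address);

        // Authenticate to ACS as the dedicated listener identity (same shared secret as the config).
        endpoint.Behaviors.Add(new TransportClientEndpointBehavior
        {
            TokenProvider = TokenProvider.CreateSharedSecretTokenProvider(
                "serviceProvider",
                "nuR2tHhlrTCqf4YwjT2RA2BZ/+xa23euaRJNLh1a/V4=")
        });

        host.Open();
        Console.WriteLine("Listening on {0} - press Enter to stop", address);
        Console.ReadLine();
        host.Close();
    }
}

The firewall, proxy and netsh caveats above apply in exactly the same way whether the endpoint comes from config or from code.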

    Read the article

  • Auth-Type :- Reject in RADIUS users file matches inner tunnel request but sends Access-Accept

    - by mgorven
    I have WPA2 802.11x EAP authentication setup using FreeRADIUS 2.1.8 on Ubuntu 10.04.4 talking to OpenLDAP, and can successfully authenticate using PEAP/MSCHAPv2, TTLS/MSCHAPv2 and TTLS/PAP (both via the AP and using eapol_test). I am now trying to restrict access to specific SSIDs based on the LDAP groups which the user belongs to. I have configured group membership checking in /etc/freeradius/modules/ldap like so: groupname_attribute = cn groupmembership_filter = "(|(&(objectClass=posixGroup)(memberUid=%{User-Name}))(&(objectClass=posixGroup)(uniquemember=%{User-Name})))" and I have configured extraction of the SSID from Called-Station-Id into Called-Station-SSID based on the Mac Auth wiki page. In /etc/freeradius/eap.conf I have enabled copying attributes from the outer tunnel into the inner tunnel, and usage of the inner tunnel response in the outer tunnel (for both PEAP and TTLS). I had the same behaviour before changing these options however. copy_request_to_tunnel = yes use_tunneled_reply = yes I'm running eapol_test like this to test the setup: eapol_test -c peap-mschapv2.conf -a 172.16.0.16 -s testing123 -N 30:s:01-23-45-67-89-01:Example-EAP with the following peap-mschapv2.conf file: network={ ssid="Example-EAP" key_mgmt=WPA-EAP eap=PEAP identity="mgorven" anonymous_identity="anonymous" password="foobar" phase2="autheap=MSCHAPV2" } With the following in /etc/freeradius/users: DEFAULT Ldap-Group == "employees" and running freeradius-Xx, I can see that the LDAP group retrieval works, and that the SSID is extracted. Debug: [ldap] performing search in dc=example,dc=com, with filter (&(cn=employees)(|(&(objectClass=posixGroup)(memberUid=mgorven))(&(objectClass=posixGroup)(uniquemember=mgorven)))) Debug: rlm_ldap::ldap_groupcmp: User found in group employees ... Info: expand: %{7} -> Example-EAP Next I try to only allow access to users in the employees group (regardless of SSID), so I put the following in /etc/freeradius/users: DEFAULT Ldap-Group == "employees" DEFAULT Auth-Type := Reject But this immediately rejects the Access-Request in the outer tunnel because the anonymous user is not in the employees group. So I modify it to only match inner tunnel requests like so: DEFAULT Ldap-Group == "employees" DEFAULT FreeRADIUS-Proxied-To == "127.0.0.1" Auth-Type := Reject, Reply-Message = "User does not belong to any groups which may access this SSID." Now users which are in the employees group are authenticated, but so are users which are not in the employees group. I see the reject entry being matched, and the Reply-Message is set, but the client receives an Access-Accept. Debug: rlm_ldap::ldap_groupcmp: Group employees not found or user is not a member. Info: [files] users: Matched entry DEFAULT at line 209 Info: ++[files] returns ok ... Auth: Login OK: [mgorven] (from client test port 0 cli 02-00-00-00-00-01 via TLS tunnel) Info: WARNING: Empty section. Using default return values. ... Info: [peap] Got tunneled reply code 2 Auth-Type := Reject Reply-Message = "User does not belong to any groups which may access this SSID." ... Info: [peap] Got tunneled reply RADIUS code 2 Auth-Type := Reject Reply-Message = "User does not belong to any groups which may access this SSID." ... Info: [peap] Tunneled authentication was successful. Info: [peap] SUCCESS Info: [peap] Saving tunneled attributes for later ... Sending Access-Accept of id 11 to 172.16.2.44 port 60746 Reply-Message = "User does not belong to any groups which may access this SSID." 
User-Name = "mgorven" and eapol_test reports: RADIUS message: code=2 (Access-Accept) identifier=11 length=233 Attribute 18 (Reply-Message) length=64 Value: 'User does not belong to any groups which may access this SSID.' Attribute 1 (User-Name) length=9 Value: 'mgorven' ... SUCCESS Why isn't the request being rejected, and is this the right way to implement this?

    Read the article

  • Thoughts on the Nomination Committee Campaign 2014

    - by Testas
    Congratulations to Erin, Andy and Allen on making the Nomination Committee for 2014. As Mark Broadbent (@retracement) stated in his tweet, there’s a great set of individuals for the Nom Com, and I could not agree more. I know Erin and Allen, and I know how much value they will bring to the process. I don’t know Andy as well, but I am sure he will do a great job and I hope I can meet him at PASS soon. The final candidate appointed by the PASS board is Rick Bolesta, who brings a wealth of experience to the process. I also want to take the opportunity to thank all who have voted. Not just for me, but for all the candidates during the election. Your contribution is greatly appreciated. Would I apply for the Nom Com again?  Yes I would. My first election experience has been a learning experience in itself. So I accept the result and look forward to applying next year. Moving on from this, I do want to express my opinion about the lack of international representation in the election process. One of the tweets that I saw after the result was from Adam Machanic (@AdamMachanic) who commented on the lack of international members on the Nom Com. Truth be told, I was disappointed – when the candidate list was released – that for the second time in recent elections there was a lack of international candidates on the candidate list. It feels as though only Brits and Americans partake in such elections. This is a real shame, and I can’t help wondering why this is the case. Hugo Kornelis (@Hugo_Kornelis) wrote a blog here to express his thoughts. He did raise some valid points. I don’t know why there is an absence of international candidates. I know that the team at PASS are looking to improve the situation, so I do not want to give the impression that PASS are doing nothing. For reference please see Bill Graziano’s article here to see how PASS are addressing the situation. There is a clear direction to change the rules within PASS to give greater inclusion of international members. In addition to this, I wanted to explore a couple of potential approaches to address the situation. I am not saying that they are the right answer, but when I see challenges, I like to bring potential solutions to the table. 1.       Use the PASS mission statement to define a tactical objective that engages community leaders in the election process. If you are not familiar with the PASS mission statement, let me provide it here as laid out on the PASS website. “Empower data professionals who leverage Microsoft technologies to connect, share, and learn through networking, knowledge sharing, and peer-based learning” PASS fulfil this mission statement regularly, whether you attend SQL Saturday, SQLRally, or SQLPASS and the BA conference itself. The biggest value of PASS is the ability to bring our profession together. And the 24 hour hop allows you to learn from the comfort of your own office/home. This mission should be extended to define a tactical objective that brings greater networking and knowledge sharing between PASS Chapter leaders/Regional Mentors and PASS HQ. It should help educate the leaders about the opportunities of elections and how leaders can become involved. I know PASS engage with Chapter leaders on a regular basis to discuss community matters for the benefit of PASS members. How could this be achieved? Perhaps PASS could hold a quarterly virtual meeting that specifically looks at helping leaders become more involved with the election process. 2.       
Evolve the Global Growth Strategy into a Global Engagement Strategy. One of the remits of the PASS board over the last couple of years is the Global Growth strategy. This has been very successful, as we have seen massive growth of events across the world. For that, I congratulate the board. Perhaps the time is now right to look at solidifying this success through a Global Engagement Strategy that starts with the collaboration of Chapter Leaders, Regional Mentors and Evangelists in their respective countries or regions. The engagement strategy should look at increasing collaboration between community leaders for the benefit of their respective communities. It should also provide a channel for encouraging leaders to put themselves forward for the elections. How could this be achieved? In the UK, there has been big growth in PASS Chapters and SQL Server events, which was approaching saturation point. The introduction of the Community Engagement Day -- channelled through the SQLBits conference -- has enabled Chapter Leaders to collaborate, connect and share with PASS, sponsors and Microsoft. It also provides the ability for Chapter Leaders to speak directly to the PASS representatives from PASSHQ. This brings with it the ability for PASS community evangelists to communicate PASS objectives. It has also been the event where we have identified, and encouraged, Chapter Leaders to put themselves forward for elections. People like encouragement and validation when going for something like an election, and being able to discuss this with peers at a dedicated event provides a useful platform. PASS has the people in place already to facilitate such an event. Regional Mentors could potentially help organise such events on an annual basis, with PASSHQ providing support in the form of a room/Lync access for the event to take place. It would be really good if a PASSHQ representative could attend in person as well.   3.       Restrict candidates to serve only a limited number of terms. A frequent comment I saw on social networking was that the elections can be seen by some as a popularity contest. Perhaps by limiting the number of terms that an individual can serve on either the Nom Com or the BoD, other candidates may be encouraged to be more actively involved within the PASS election process. I don’t think that the current byelaws deal with this particular suggestion. I also saw a couple of tweets that stated that more active community members did not apply for the Nom Com. I struggled to understand how the individuals behind those tweets measured “more active”, and it further underlined the subjective nature of elections. In the absence of a clear process for how candidates are put forward for the elections, a restriction of terms enables the opportunity to be extended to others. How could this be achieved? Set a resolution that is put to a community vote as to the viability of such a solution. For example, the questions for the vote could be: Should individuals in the Nom Com and BoD be limited to a certain number of terms?  Yes/No. What is the maximum number of terms a candidate could serve?   It would be simple to execute such a vote, and the community would have an opportunity to have a say in an important aspect of the PASS organisation. And if the change is successful, then add it as a byelaw.   So there are some of my thoughts. I am not saying they are right or wrong. 
But I do hope that there is a concerted effort to encourage more candidates from other reaches of the Globe to become involved with future elections.   It would be good to hear your thoughts   Thanks   Chris

    Read the article

  • Fraud Detection with the SQL Server Suite Part 2

    - by Dejan Sarka
    This is the second part of the fraud detection whitepaper. You can find the first part in my previous blog post about this topic. My Approach to Data Mining Projects It is impossible to evaluate the time and money needed for a complete fraud detection infrastructure in advance. Personally, I do not know the customer’s data in advance. I don’t know whether there is already an existing infrastructure, like a data warehouse, in place, or whether we would need to build one from scratch. Therefore, I always suggest to start with a proof-of-concept (POC) project. A POC takes something between 5 and 10 working days, and involves personnel from the customer’s site – either employees or outsourced consultants. The team should include a subject matter expert (SME) and at least one information technology (IT) expert. The SME must be familiar with both the domain in question as well as the meaning of data at hand, while the IT expert should be familiar with the structure of data, how to access it, and have some programming (preferably Transact-SQL) knowledge. With more than one IT expert the most time consuming work, namely data preparation and overview, can be completed sooner. I assume that the relevant data is already extracted and available at the very beginning of the POC project. If a customer wants to have their people involved in the project directly and requests the transfer of knowledge, the project begins with training. I strongly advise this approach as it offers the establishment of a common background for all people involved, the understanding of how the algorithms work and the understanding of how the results should be interpreted, a way of becoming familiar with the SQL Server suite, and more. Once the data has been extracted, the customer’s SME (i.e. the analyst), and the IT expert assigned to the project will learn how to prepare the data in an efficient manner. Together with me, knowledge and expertise allow us to focus immediately on the most interesting attributes and identify any additional, calculated, ones soon after. By employing our programming knowledge, we can, for example, prepare tens of derived variables, detect outliers, identify the relationships between pairs of input variables, and more, in only two or three days, depending on the quantity and the quality of input data. I favor the customer’s decision of assigning additional personnel to the project. For example, I actually prefer to work with two teams simultaneously. I demonstrate and explain the subject matter by applying techniques directly on the data managed by each team, and then both teams continue to work on the data overview and data preparation under our supervision. I explain to the teams what kind of results we expect, the reasons why they are needed, and how to achieve them. Afterwards we review and explain the results, and continue with new instructions, until we resolve all known problems. Simultaneously with the data preparation the data overview is performed. The logic behind this task is the same – again I show to the teams involved the expected results, how to achieve them and what they mean. This is also done in multiple cycles as is the case with data preparation, because, quite frankly, both tasks are completely interleaved. A specific objective of the data overview is of principal importance – it is represented by a simple star schema and a simple OLAP cube that will first of all simplify data discovery and interpretation of the results, and will also prove useful in the following tasks. 
The presence of the customer’s SME is the key to resolving possible issues with the actual meaning of the data. We can always replace the IT part of the team with another database developer; however, we cannot conduct this kind of a project without the customer’s SME. After the data preparation and when the data overview is available, we begin the scientific part of the project. I assist the team in developing a variety of models, and in interpreting the results. The results are presented graphically, in an intuitive way. While it is possible to interpret the results on the fly, a much more appropriate alternative is possible if the initial training was also performed, because it allows the customer’s personnel to interpret the results by themselves, with only some guidance from me. The models are evaluated immediately by using several different techniques. One of the techniques includes evaluation over time, where we use an OLAP cube. After evaluating the models, we select the most appropriate model to be deployed for a production test; this allows the team to understand the deployment process. There are many possibilities of deploying data mining models into production; at the POC stage, we select the one that can be completed quickly. Typically, this means that we add the mining model as an additional dimension to an existing DW or OLAP cube, or to the OLAP cube developed during the data overview phase. Finally, we spend some time presenting the results of the POC project to the stakeholders and managers. Even from a POC, the customer will receive lots of benefits, all at the sole risk of spending money and time for a single 5 to 10 day project: The customer learns the basic patterns of frauds and fraud detection The customer learns how to do the entire cycle with their own people, only relying on me for the most complex problems The customer’s analysts learn how to perform much more in-depth analyses than they ever thought possible The customer’s IT experts learn how to perform data extraction and preparation much more efficiently than they did before All of the attendees of this training learn how to use their own creativity to implement further improvements of the process and procedures, even after the solution has been deployed to production The POC output for a smaller company or for a subsidiary of a larger company can actually be considered a finished, production-ready solution It is possible to utilize the results of the POC project at subsidiary level, as a finished POC project for the entire enterprise Typically, the project results in several important “side effects” Improved data quality Improved employee job satisfaction, as they are able to proactively contribute to the central knowledge about fraud patterns in the organization Because eventually more minds get to be involved in the enterprise, the company should expect more and better fraud detection patterns After the POC project is completed as described above, the actual project would not need months of engagement from my side. This is possible due to our preference to transfer the knowledge onto the customer’s employees: typically, the customer will use the results of the POC project for some time, and only engage me again to complete the project, or to ask for additional expertise if the complexity of the problem increases significantly. 
I usually expect to perform the following tasks: Establish the final infrastructure to measure the efficiency of the deployed models Deploy the models in additional scenarios Through reports By including Data Mining Extensions (DMX) queries in OLTP applications to support real-time early warnings Include data mining models as dimensions in OLAP cubes, if this was not done already during the POC project Create smart ETL applications that divert suspicious data for immediate or later inspection I would also offer to investigate how the outcome could be transferred automatically to the central system; for instance, if the POC project was performed in a subsidiary whereas a central system is available as well Of course, for the actual project, I would repeat the data and model preparation as needed It is virtually impossible to tell in advance how much time the deployment would take, before we decide together with customer what exactly the deployment process should cover. Without considering the deployment part, and with the POC project conducted as suggested above (including the transfer of knowledge), the actual project should still only take additional 5 to 10 days. The approximate timeline for the POC project is, as follows: 1-2 days of training 2-3 days for data preparation and data overview 2 days for creating and evaluating the models 1 day for initial preparation of the continuous learning infrastructure 1 day for presentation of the results and discussion of further actions Quite frequently I receive the following question: are we going to find the best possible model during the POC project, or during the actual project? My answer is always quite simple: I do not know. Maybe, if we would spend just one hour more for data preparation, or create just one more model, we could get better patterns and predictions. However, we simply must stop somewhere, and the best possible way to do this, according to my experience, is to restrict the time spent on the project in advance, after an agreement with the customer. You must also never forget that, because we build the complete learning infrastructure and transfer the knowledge, the customer will be capable of doing further investigations independently and improve the models and predictions over time without the need for a constant engagement with me.
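To make the DMX deployment option above a bit more concrete, here is a hedged sketch of what calling a deployed mining model from an OLTP application can look like via ADOMD.NET. The server, model and column names are hypothetical placeholders chosen for illustration, not taken from any actual engagement.

using System;
using Microsoft.AnalysisServices.AdomdClient;

// Hedged sketch: score a single incoming transaction against a deployed mining
// model with a DMX singleton prediction query (names below are hypothetical).
class RealTimeFraudCheck
{
    static void Main()
    {
        using (var conn = new AdomdConnection("Data Source=localhost;Catalog=FraudDetection"))
        {
            conn.Open();

            const string dmx = @"
                SELECT
                  Predict([Fraud Model].[Is Fraud]),
                  PredictProbability([Fraud Model].[Is Fraud])
                FROM [Fraud Model]
                NATURAL PREDICTION JOIN
                (SELECT 1250.00 AS [Amount],
                        'Web'   AS [Channel],
                        3       AS [Transactions Last Hour]) AS t";

            using (var cmd = new AdomdCommand(dmx, conn))
            using (AdomdDataReader reader = cmd.ExecuteReader())
            {
                if (reader.Read())
                {
                    // Column 0: predicted flag, column 1: probability of that prediction.
                    Console.WriteLine("Predicted: {0}, probability: {1:P1}",
                        reader.GetValue(0), reader.GetValue(1));
                }
            }
        }
    }
}

A smart ETL variant of the same idea would run an equivalent prediction join inside the data flow and divert high-probability rows for immediate or later inspection, as described above.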

    Read the article

  • fail2ban custom action to permanent ban IPs from China

    - by John Magnolia
    When an IP address gets banned, how can I check whether the banned IP address is from China? If it is, I then want to add it to the permanent ban list. I have found this nice guide which writes the banned IPs to a file. Reason: I am getting a lot of brute force attacks from China daily; thankfully fail2ban is helping restrict this, although the attacks appear to be getting worse and the attackers just keep changing their IP addresses. Even better would be a maintained database of known hacker IP addresses. Example 1 Hi, The IP 60.169.78.77 has just been banned by Fail2Ban after 4 attempts against vsftpd. Here are more information about 60.169.78.77: % [whois.apnic.net node-7] % Whois data copyright terms http://www.apnic.net/db/dbcopyright.html inetnum: 60.166.0.0 - 60.175.255.255 netname: CHINANET-AH descr: CHINANET anhui province network descr: China Telecom descr: A12,Xin-Jie-Kou-Wai Street descr: Beijing 100088 country: CN admin-c: CH93-AP tech-c: JW89-AP mnt-by: APNIC-HM mnt-routes: MAINT-CHINANET-AH mnt-lower: MAINT-CHINANET-AH status: ALLOCATED PORTABLE changed: [email protected] 20040721 source: APNIC person: Chinanet Hostmaster nic-hdl: CH93-AP e-mail: [email protected] address: No.31 ,jingrong street,beijing address: 100032 phone: +86-10-58501724 fax-no: +86-10-58501724 country: CN changed: [email protected] 20070416 mnt-by: MAINT-CHINANET source: APNIC person: Jinneng Wang address: 17/F, Postal Building No.120 Changjiang address: Middle Road, Hefei, Anhui, China country: CN phone: +86-551-2659073 fax-no: +86-551-2659287 e-mail: [email protected] nic-hdl: JW89-AP mnt-by: MAINT-NEW changed: [email protected] 19990818 source: APNIC Regards, Fail2Ban Example 2 Hi, The IP 60.169.78.81 has just been banned by Fail2Ban after 4 attempts against vsftpd. Here are more information about 60.169.78.81: % [whois.apnic.net node-6] % Whois data copyright terms http://www.apnic.net/db/dbcopyright.html inetnum: 60.166.0.0 - 60.175.255.255 netname: CHINANET-AH descr: CHINANET anhui province network descr: China Telecom descr: A12,Xin-Jie-Kou-Wai Street descr: Beijing 100088 country: CN admin-c: CH93-AP tech-c: JW89-AP mnt-by: APNIC-HM mnt-routes: MAINT-CHINANET-AH mnt-lower: MAINT-CHINANET-AH status: ALLOCATED PORTABLE changed: [email protected] 20040721 source: APNIC person: Chinanet Hostmaster nic-hdl: CH93-AP e-mail: [email protected] address: No.31 ,jingrong street,beijing address: 100032 phone: +86-10-58501724 fax-no: +86-10-58501724 country: CN changed: [email protected] 20070416 mnt-by: MAINT-CHINANET source: APNIC person: Jinneng Wang address: 17/F, Postal Building No.120 Changjiang address: Middle Road, Hefei, Anhui, China country: CN phone: +86-551-2659073 fax-no: +86-551-2659287 e-mail: [email protected] nic-hdl: JW89-AP mnt-by: MAINT-NEW changed: [email protected] 19990818 source: APNIC Regards, Fail2Ban Example 3 Hi, The IP 222.133.244.99 has just been banned by Fail2Ban after 4 attempts against vsftpd. 
Here are more information about 222.133.244.99: % [whois.apnic.net node-6] % Whois data copyright terms http://www.apnic.net/db/dbcopyright.html inetnum: 222.133.244.96 - 222.133.244.127 netname: LCZFFHQ country: CN descr: liaochenggovermentfanghuoqiang admin-c: DS95-AP tech-c: DS95-AP status: ASSIGNED NON-PORTABLE changed: [email protected] 20060122 mnt-by: MAINT-CNCGROUP-SD source: APNIC route: 222.132.0.0/14 descr: CNC Group CHINA169 Shandong Province Network country: CN origin: AS4837 mnt-by: MAINT-CNCGROUP-RR changed: [email protected] 20060118 source: APNIC person: Data Communication Bureau Shandong nic-hdl: DS95-AP e-mail: [email protected] address: No.77 Jingsan Road,Jinan,Shandong,P.R.China phone: +86-531-6052611 fax-no: +86-531-6052414 country: CN changed: [email protected] 20050330 mnt-by: MAINT-CNCGROUP-SD source: APNIC Regards, Fail2Ban

    Read the article

  • Developing Mobile Applications: Web, Native, or Hybrid?

    - by Michelle Kimihira
    Authors: Joe Huang, Senior Principal Product Manager, Oracle Mobile Application Development Framework, and Carlos Chang, Senior Principal Product Director The proliferation of mobile devices and platforms represents a game-changing technology shift on a number of levels. Companies must decide not only the best strategic use of mobile platforms, but also how to most efficiently implement them. Inevitably, this conversation devolves to the developers, who face the task of developing and supporting mobile applications—not a simple task in light of the number of devices and platforms. Essentially, developers can choose from the following three different application approaches, each with its own set of pros and cons. Native Applications: This refers to apps built for and installed on a specific platform, such as iOS or Android, using a platform-specific software development kit (SDK).  For example, apps for Apple’s iPhone and iPad are designed to run specifically on iOS and are written in Xcode/Objective-C. Android has its own variation of Java, Windows uses C#, and so on.  Native apps written for one platform cannot be deployed on another. Native apps offer fast performance and access to native-device services but require additional resources to develop and maintain each platform, which can be expensive and time-consuming. Mobile Web Applications: Unlike native apps, mobile web apps are not installed on the device; rather, they are accessed via a Web browser.  These are server-side applications that render HTML, typically adjusting the design depending on the type of device making the request.  There are no program coding constraints for writing server-side apps—they can be written in Java, C, PHP, etc., it doesn’t matter.  Instead, the server detects what type of mobile browser is pinging the server and adjusts accordingly. For example, it can deliver fully JavaScript and CSS-enabled content to smartphone browsers, while downgrading gracefully to basic HTML for feature phone browsers. Mobile web apps work across platforms, but are limited to what you can do through a browser and require Internet connectivity. For certain types of applications, these constraints may not be an issue. Oracle supports mobile web applications via ADF Faces (for tablets) and ADF Mobile browser (Trinidad) for smartphones and feature phones. Hybrid Applications: As the name implies, hybrid apps combine technologies from native and mobile Web apps to gain the benefits of each. For example, these apps are installed on a device, like their pure native app counterparts, while the user interface (UI) is based on HTML5.  This UI runs locally within the native container, which usually leverages the device’s browser engine.  The advantage of using HTML5 is a consistent, cross-platform UI that works well on most devices.  Combining this with the native container, which is installed on-device, provides mobile users with access to local device services, such as camera, GPS, and local device storage.  Native apps may offer greater flexibility in integrating with native device services.  However, since hybrid applications already provide the device integrations that typical enterprise applications need, this is usually less of an issue.  The new Oracle ADF Mobile release is an HTML5 and Java hybrid framework that targets mobile app development to iOS and Android from one code base. So, Which is the Best Approach? The short answer is – the best choice depends on the type of application you are developing.  
For instance, animation-intensive apps such as games would favor native apps, while hybrid applications may be better suited for enterprise mobile apps because they provide multi-platform support. Just for starters, the following issues must be considered when choosing a development path. Application Complexity: How complex is the application? A quick app that accesses a database or Web service for some data to display?  You can keep it simple, and a mobile Web app may suffice. However, for a mobile/field worker type of applications that supports mission critical functionality, hybrid or native applications are typically needed. Richness of User Interactivity: What type of user experience is required for the application?  Mobile browser-based app that’s optimized for mobile UI may suffice for quick lookup or productivity type of applications.  However, hybrid/native application would typically be required to deliver highly interactive user experiences needed for field-worker type of applications.  For example, interactive BI charts/graphs, maps, voice/email integration, etc.  In the most extreme case like gaming applications, native applications may be necessary to deliver the highly animated and graphically intensive user experience. Performance: What type of performance is required by the application functionality?  For instance, for real-time look up of data over the network, mobile app performance depends on network latency and server infrastructure capabilities.  If consistent performance is required, data would typically need to be cached, which is supported on hybrid or native applications only. Connectivity and Availability: What sort of connectivity will your application require? Does the app require Web access all the time in order to always retrieve the latest data from the server? Or do the requirements dictate offline support? While native and hybrid apps can be built to operate offline, Web mobile apps require Web connectivity. Multi-platform Requirements: The terms “consumerization of IT” and BYOD (bring your own device) effectively mean that the line between the consumer and the enterprise devices have become blurred. Employees are bringing their personal mobile devices to work and are often expecting that they work in the corporate network and access back-office applications.  Even if companies restrict access to the big dogs: (iPad, iPhone, Android phones and tablets, possibly Windows Phone and tablets), trying to support each platform natively will require increasing resources and domain expertise with each new language/platform. And let’s not forget the maintenance costs, involved in upgrading new versions of each platform.   Where multi-platform support is needed, Web mobile or hybrid apps probably have the advantage. Going native, and trying to support multiple operating systems may be cost prohibitive with existing resources and developer skills. Device-Services Access:  If your app needs to access local device services, such as the camera, contacts app, accelerometer, etc., then your choices are limited to native or hybrid applications.   Fragmentation: Apple controls Apple iOS and the only concern is what version iOS is running on any given device.   Not so Android, which is open source. There are many, many versions and variants of Android running on different devices, which can be a nightmare for app developers trying to support different devices running different flavors of Android.  (Is it an Amazon Kindle Fire? a Samsung Galaxy?  A Barnes & Noble Nook?) 
This is a nightmare scenario for native apps—on the other hand, a mobile Web or hybrid app, when properly designed, can shield you from these complexities because they are based on common frameworks.  Resources: How many developers can you dedicate to building and supporting mobile application development?  What are their existing skill sets?  If you’re considering native application development due to the complexity of the application under development, factor in the costs of becoming proficient in each platform’s OS and programming language. Add another platform, and that’s another language, another SDK. On the other side of the equation, Web mobile or hybrid applications are simpler to make, and readily support more platforms, but there may be performance trade-offs. Conclusion This only scratches the surface. However, I hope to have suggested some food for thought in choosing your mobile development strategy.  Do your due diligence, search the Web, read up on mobile, talk to peers, attend events. The development team at Oracle is working hard on mobile technologies to help customers extend enterprise applications to mobile faster and more effectively.  To learn more on what Oracle has to offer, check out the Oracle ADF Mobile (hybrid) and ADF Faces/ADF Mobile browser (Web Mobile) solutions from Oracle.   Additional Information Blog: ADF Blog Product Information on OTN: ADF Mobile Product Information on Oracle.com: Oracle Fusion Middleware Follow us on Twitter and Facebook Subscribe to our regular Fusion Middleware Newsletter

    Read the article

< Previous Page | 36 37 38 39 40 41 42  | Next Page >