Search Results

Search found 1021 results on 41 pages for 'belongs to'.


  • What are the Rails best practices for javascript templates in restful/resourceful controllers?

    - by numbers1311407
    First, two common (basic) approaches: # returning from some FoosController method respond_to do |format| # 1. render the javascript directly format.js { render :json => @foo.to_json } # 2. render the default template, say update.js.erb format.js { render } end # in update.js.erb $('#foo').html("<%= escape_javascript(render(@foo)) %>") These are obviously simple cases, but I wanted to illustrate what I'm talking about. I believe that these are also the cases expected by the default responder in Rails 3 (either the action-named default template or calling to_#{format} on the resource). The Issues With 1, you have total flexibility on the view side with no worries about the template, but you have to manipulate the DOM directly via JavaScript. You lose access to helpers, partials, etc. With 2, you have partials and helpers at your disposal, but you're tied to the one template (by default at least). All your views that make JS calls to FoosController use the same template, which isn't exactly flexible. Three Other Approaches (none really satisfactory) 1.) Escaping the partials/helpers I need into JavaScript beforehand, then inserting them into the page afterward, using string replacement to tailor them to the results returned (subbing in name, id, etc.). 2.) Putting view logic in the templates. For example, looking for a particular DOM element and doing one thing if it exists, another if it does not. 3.) Putting logic in the controller to render different templates. For example, in a polymorphic belongs_to where update might be called for either comments/foo or posts/foo, rendering comments/foos/update.js.erb versus posts/foos/update.js.erb. I've used all of these (and probably others I'm not thinking of), often in the same app, which leads to confusing code. Are there best practices for this sort of thing? It seems like a common enough use case that you'd want to call controller actions via Ajax from different views and expect different things to happen (without having to do tedious things like escaping and string-replacing partials and helpers client side). Any thoughts?
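    A minimal sketch of approach 3 (the polymorphic case), assuming Rails 3-era syntax; the FoosController, the parentable association and the template paths are hypothetical and only illustrate the idea of picking the JS template from the parent type:

        # Sketch only: render a JS template named after the polymorphic parent.
        # comments/foos/update.js.erb and posts/foos/update.js.erb would then
        # hold the per-context behaviour.
        class FoosController < ApplicationController
          def update
            @foo = Foo.find(params[:id])
            @parent = @foo.parentable   # hypothetical belongs_to :parentable, :polymorphic => true

            respond_to do |format|
              format.html { redirect_to @foo }
              format.js do
                render :template => "#{@parent.class.name.tableize}/foos/update"
              end
            end
          end
        end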

    Read the article

  • Consume WCF Service InProcess using Agatha and WCF

    - by REA_ANDREW
    I have been looking into this lately for a specific reason. For some integration tests I want to write, I want to control the types of instances which are used inside the service layer, but I want that control from the test class instance. One of the problems with just referencing the service is that a lot of the time this will by default be done inside a different process. I am using StructureMap as my DI of choice, and one of the tools which I am using alongside RhinoMocks is StructureMap.AutoMocking. With StructureMap the main entry point is the ObjectFactory. This will be process specific, so if I decide that I want a certain instance of a type to be used inside the service layer, I cannot configure the ObjectFactory from my test class, as that will only apply to the process it belongs to. This is where I started thinking about two things: running a WCF service in process, and being able to share mocked instances across processes. A colleague at work pointed me to a project which is for the latter, but I thought that it would be a better solution if I could run the WCF service in process. One of the projects which I use when I think about WCF services is AGATHA, and it is the one I have used to try to get my head around doing this. Another asset I have is a book called Programming WCF Services by Juval Lowy; if you have not heard of it or read it, I would definitely recommend it. One of the many topics inside this book is the type of configuration you need to communicate with a service in the same process, and it turns out to be quite simple from a config point of view. <system.serviceModel> <services> <service name="Agatha.ServiceLayer.WCF.WcfRequestProcessor"> <endpoint address ="net.pipe://localhost/MyPipe" binding="netNamedPipeBinding" contract="Agatha.Common.WCF.IWcfRequestProcessor"/> </service> </services> <client> <endpoint name="MyEndpoint" address="net.pipe://localhost/MyPipe" binding="netNamedPipeBinding" contract="Agatha.Common.WCF.IWcfRequestProcessor"/> </client> </system.serviceModel>   You can see here that I am referencing the Agatha service type and contract, but also that the binding and address use something called named pipes. This is sort of the "magic" which makes it happen in the same process. Next I need to open the service prior to calling the methods on a proxy, which I also need. My initial attempt at the proxy did not use any Agatha-specific coding, and one of the pains I found was that you obviously need to give your proxy the known types which the serializer can be aware of. So we need to add the known types to the proxy programmatically. I came across the following blog post which showed me how easy it was: http://bloggingabout.net/blogs/vagif/archive/2009/05/18/how-to-programmatically-define-known-types-in-wcf.aspx. First Pass So with this in mind, and inside a console app, this was my first pass at consuming a service in process. First, here is the proxy which I made, making use of the Agatha IWcfRequestProcessor contract.
public class InProcProxy : ClientBase<Agatha.Common.WCF.IWcfRequestProcessor>, Agatha.Common.WCF.IWcfRequestProcessor { public InProcProxy() { } public InProcProxy(string configurationName) : base(configurationName) { } public Agatha.Common.Response[] Process(params Agatha.Common.Request[] requests) { return Channel.Process(requests); } public void ProcessOneWayRequests(params Agatha.Common.OneWayRequest[] requests) { Channel.ProcessOneWayRequests(requests); } } So with the proxy in place I could then use this after opening the service so here is the code which I use inside the console app make the request. static void Main(string[] args) { ComponentRegistration.Register(); ServiceHost serviceHost = new ServiceHost(typeof(Agatha.ServiceLayer.WCF.WcfRequestProcessor)); serviceHost.Open(); Console.WriteLine("Service is running...."); using (var proxy = new InProcProxy()) { foreach (var operation in proxy.Endpoint.Contract.Operations) { foreach (var t in KnownTypeProvider.GetKnownTypes(null)) { operation.KnownTypes.Add(t); } } var request = new GetProductsRequest(); var responses = proxy.Process(new[] { request }); var response = (GetProductsResponse)responses[0]; Console.WriteLine("{0} Products have been retrieved", response.Products.Count); } serviceHost.Close(); Console.WriteLine("Finished"); Console.ReadLine(); } So what I used here is the KnownTypeProvider of Agatha to easily get all the types I need for the service/proxy and add them to the proxy.  My Request handler for this was just a test one which always returned 2 products. public class GetProductsHandler : RequestHandler<GetProductsRequest,GetProductsResponse> { public override Agatha.Common.Response Handle(GetProductsRequest request) { return new GetProductsResponse { Products = new List<ProductDto> { new ProductDto{}, new ProductDto{} } }; } } Second Pass Now after I did this I started reading up some more on some resources including more by Davy Brion and others on Agatha.  Now it turns out that the work I did above to create a derived class of the ClientBase implementing Agatha.Common.WCF.IWcfRequestProcessor was not necessary due to a nice class which is present inside the Agatha code base, RequestProcessorProxy which takes care of this for you! :-) So disregarding that class I made for the proxy and changing my code to use it I am now left with the following: static void Main(string[] args) { ComponentRegistration.Register(); ServiceHost serviceHost = new ServiceHost(typeof(Agatha.ServiceLayer.WCF.WcfRequestProcessor)); serviceHost.Open(); Console.WriteLine("Service is running...."); using (var proxy = new RequestProcessorProxy()) { var request = new GetProductsRequest(); var responses = proxy.Process(new[] { request }); var response = (GetProductsResponse)responses[0]; Console.WriteLine("{0} Products have been retrieved", response.Products.Count); } serviceHost.Close(); Console.WriteLine("Finished"); Console.ReadLine(); }   Cheers for now, Andy References Agatha WCF InProcess Without WCF StructureMap.AutoMocking Cross Process Mocking Agatha Programming WCF Services by Juval Lowy
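    As a postscript to the configuration shown at the top of this post, the same named-pipe endpoint can also be set up purely in code, which is handy inside a test fixture that should not depend on an app.config entry. This is only a sketch using the standard WCF API plus the Agatha types already referenced above; the known-type registration shown earlier still applies:

        // Sketch: host the Agatha request processor over a named pipe without config.
        var host = new ServiceHost(typeof(Agatha.ServiceLayer.WCF.WcfRequestProcessor));
        host.AddServiceEndpoint(
            typeof(Agatha.Common.WCF.IWcfRequestProcessor),
            new NetNamedPipeBinding(),
            "net.pipe://localhost/MyPipe");
        host.Open();

        // Client side, again without a <client> section:
        var factory = new ChannelFactory<Agatha.Common.WCF.IWcfRequestProcessor>(
            new NetNamedPipeBinding(),
            new EndpointAddress("net.pipe://localhost/MyPipe"));
        var processor = factory.CreateChannel();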

    Read the article

  • GeoIP and Nginx

    - by JavierMartinez
    I have an nginx setup with GeoIP, but it is not working correctly. The issue is this: nginx is getting geodata from $_SERVER['REMOTE_ADDR'] instead of $_SERVER['HTTP_X_HAPROXY_IP'], which has the real client IP. So the reported geodata belongs to my server IP instead of the client IP. Does anybody know where the error could be and how to fix it? Nginx version and compiled modules: nginx -V nginx version: nginx/1.2.3 TLS SNI support enabled configure arguments: --prefix=/etc/nginx --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-client-body-temp-path=/var/lib/nginx/body --http-fastcgi-temp-path=/var/lib/nginx/fastcgi --http-log-path=/var/log/nginx/access.log --http-proxy-temp-path=/var/lib/nginx/proxy --http-scgi-temp-path=/var/lib/nginx/scgi --http-uwsgi-temp-path=/var/lib/nginx/uwsgi --lock-path=/var/lock/nginx.lock --pid-path=/var/run/nginx.pid --with-pcre-jit --with-debug --with-file-aio --with-http_addition_module --with-http_dav_module --with-http_geoip_module --with-http_gzip_static_module --with-http_image_filter_module --with-http_realip_module --with-http_secure_link_module --with-http_stub_status_module --with-http_ssl_module --with-http_sub_module --with-http_xslt_module --with-ipv6 --with-sha1=/usr/include/openssl --with-md5=/usr/include/openssl --with-mail --with-mail_ssl_module --add-module=/usr/src/nginx/source/nginx-1.2.3/debian/modules/nginx-auth-pam --add-module=/usr/src/nginx/source/nginx-1.2.3/debian/modules/nginx-echo --add-module=/usr/src/nginx/source/nginx-1.2.3/debian/modules/nginx-upstream-fair --add-module=/usr/src/nginx/source/nginx-1.2.3/debian/modules/nginx-dav-ext-module --add-module=/usr/src/nginx/source/nginx-1.2.3/debian/modules/nginx-syslog --add-module=/usr/src/nginx/source/nginx-1.2.3/debian/modules/nginx-cache-purge nginx site conf (frontend machine) server { root /var/www/storage; server_name ~^.*(\.)?mydomain.com$; if ($host ~ ^(.*)\.mydomain\.com$) { set $new_host $1.mydomain.com; } if ($host !~ ^(.*)\.mydomain\.com$) { set $new_host www.mydomain.com; } add_header Staging true; real_ip_header X-HAProxy-IP; set_real_ip_from 10.5.0.10/32; location /files { expires 30d; if ($uri !~ ^/files/([a-fA-F0-9]+)_(220|45)\.jpg$) { return 403; } rewrite ^/files/([a-fA-F0-9][a-fA-F0-9])([a-fA-F0-9][a-fA-F0-9])([a-fA-F0-9][a-fA-F0-9])([a-fA-F0-9][a-fA-F0-9])([a-fA-F0-9]+)_(220|45)\.jpg$ /files/$1/$2/$3/$4/$1$2$3$4$5_$6.jpg break; try_files $uri @to_backend; } location /assets { if ($uri ~ ^/assets/r([a-zA-Z0-9]+[^/])(/(css|js|fonts)/.*)) { rewrite ^/assets/r([a-zA-Z0-9]+[^/])/(css|js|fonts)/(.*)$ /assets/$2/$3 break; } try_files $uri @to_backend; } location / { proxy_set_header Host $new_host; proxy_set_header X-HAProxy-IP $remote_addr; proxy_pass http://10.5.0.10:8080; } location @to_backend { proxy_set_header Host $new_host; proxy_pass http://10.5.0.10:8080; } } nginx.conf (backend machine) http{ ... ## # GeoIP Config ## geoip_country /etc/nginx/geoip/GeoIP.dat; # the country IP database geoip_city /etc/nginx/geoip/GeoLiteCity.dat; # the city IP database ...
} fastcgi_params (backend machine) ### SET GEOIP Variables ### fastcgi_param GEOIP_COUNTRY_CODE $geoip_country_code; fastcgi_param GEOIP_COUNTRY_CODE3 $geoip_country_code3; fastcgi_param GEOIP_COUNTRY_NAME $geoip_country_name; fastcgi_param GEOIP_CITY_COUNTRY_CODE $geoip_city_country_code; fastcgi_param GEOIP_CITY_COUNTRY_CODE3 $geoip_city_country_code3; fastcgi_param GEOIP_CITY_COUNTRY_NAME $geoip_city_country_name; fastcgi_param GEOIP_REGION $geoip_region; fastcgi_param GEOIP_CITY $geoip_city; fastcgi_param GEOIP_POSTAL_CODE $geoip_postal_code; fastcgi_param GEOIP_CITY_CONTINENT_CODE $geoip_city_continent_code; fastcgi_param GEOIP_LATITUDE $geoip_latitude; fastcgi_param GEOIP_LONGITUDE $geoip_longitude; haproxy.conf (frontend machine) defaults log global option forwardfor option httpclose mode http retries 3 option redispatch maxconn 4096 contimeout 100000 clitimeout 100000 srvtimeout 100000 listen cluster_webs *:8080 mode http option tcpka option httpchk option httpclose option forwardfor balance roundrobin server backend-stage 10.5.0.11:80 weight 1 $_SERVER dump: http://paste.laravel.com/7dy Where 10.5.0.10 is frontend private ip and 10.5.0.11 backend private ip
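    One thing worth checking (a suggestion, not a verified fix): the geoip module reads the connection address, i.e. $remote_addr, and in this setup the realip directives only appear in the frontend vhost, while GeoIP runs on the backend machine behind HAProxy. Assuming the backend build also includes --with-http_realip_module, telling the backend nginx to rewrite $remote_addr from the forwarded header before the GeoIP variables are evaluated might look like this (addresses taken from the question):

        # backend machine, nginx.conf (http block) -- sketch
        http {
            ...
            set_real_ip_from 10.5.0.10/32;     # trust the HAProxy/frontend hop
            real_ip_header   X-HAProxy-IP;     # or X-Forwarded-For, which HAProxy already adds

            geoip_country /etc/nginx/geoip/GeoIP.dat;
            geoip_city    /etc/nginx/geoip/GeoLiteCity.dat;
            ...
        }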

    Read the article

  • Compiling examples for consuming the REST Endpoints for WCF Service using Agatha

    - by REA_ANDREW
    I recently made two contributions to the Agatha Project by Davy Brion over on Google Code, and one of the things I wanted to follow up with was a post showing examples and some, seemingly required tid bits.  The contributions which I made where: To support StructureMap To include REST (JSON and XML) support for the service contract The examples which I have made, I want to format them so they fit in with the current format of examples over on Agatha and hopefully create and submit a third patch which will include these examples to help others who wish to use these additions. Whilst building these examples for both XML and JSON I have learnt a couple of things which I feel are not really well documented, but are extremely good practice and once known make perfect sense.  I have chosen a real basic e-commerce context for my example Requests and Responses, and have also made use of the excellent tool AutoMapper, again on Google Code. Setting the scene I have followed the Pipes and Filters Pattern with the IQueryable interface on my Repository and exposed the following methods to query Products: IQueryable<Product> GetProducts(); IQueryable<Product> ByCategoryName(this IQueryable<Product> products, string categoryName) Product ByProductCode(this IQueryable<Product> products, String productCode) I have an interface for the IProductRepository but for the concrete implementation I have simply created a protected getter which populates a private List<Product> with 100 test products with random data.  Another good reason for following an interface based approach is that it will demonstrate usage of my first contribution which is the StructureMap support.  Finally the two Domain Objects I have made are Product and Category as shown below: public class Product { public String ProductCode { get; set; } public String Name { get; set; } public Decimal Price { get; set; } public Decimal Rrp { get; set; } public Category Category { get; set; } }   public class Category { public String Name { get; set; } }   Requirements for the REST Support One of the things which you will notice with Agatha is that you do not have to decorate your Request and Response objects with the WCF Service Model Attributes like DataContract, DataMember etc… Unfortunately from what I have seen, these are required if you want the same types to work with your REST endpoint.  I have not tried but I assume the same result can be achieved by simply decorating the same classes with the Serializable Attribute.  Without this the operation will fail. Another surprising thing I have found is that it did not work until I used the following Attribute parameters: Name Namespace e.g. [DataContract(Name = "GetProductsRequest", Namespace = "AgathaRestExample.Service.Requests")] public class GetProductsRequest : Request { }   Although I was surprised by this, things kind of explained themselves when I got round to figuring out the exact construct required for both the XML and the REST.  One of the things which you already know and are then reminded of is that each of your Requests and Responses ultimately inherit from an abstract base class respectively. This information needs to be represented in a way native to the format being used.  I have seen this in XML but I have not seen the format which is required for the JSON. JSON Consumer Example I have used JQuery to create the example and I simply want to make two requests to the server which as you will know with Agatha are transmitted inside an array to reduce the service calls.  
I have also used a tool called json2 which is again over at Google Code simply to convert my JSON expression into its string format for transmission.  You will notice that I specify the type of Request I am using and the relevant Namespace it belongs to.  Also notice that the second request has a parameter so each of these two object are representing an abstract Request and the parameters of the object describe it. <script type="text/javascript"> var bodyContent = $.ajax({ url: "http://localhost:50348/service.svc/json/processjsonrequests", global: false, contentType: "application/json; charset=utf-8", type: "POST", processData: true, data: JSON.stringify([ { __type: "GetProductsRequest:AgathaRestExample.Service.Requests" }, { __type: "GetProductsByCategoryRequest:AgathaRestExample.Service.Requests", CategoryName: "Category1" } ]), dataType: "json", success: function(msg) { alert(msg); } }).responseText; </script>   XML Consumer Example For the XML Consumer example I have chosen to use a simple Console Application and make a WebRequest to the service using the XML as a request.  I have made a crude static method which simply reads from an XML File, replaces some value with a parameter and returns the formatted XML.  I say crude but it simply shows how XML Templates for each type of Request could be made and then have a wrapper utility in whatever language you use to combine the requests which are required.  The following XML is the same Request array as shown above but simply in the XML Format. <?xml version="1.0" encoding="utf-8" ?> <ArrayOfRequest xmlns="http://schemas.datacontract.org/2004/07/Agatha.Common" xmlns:i="http://www.w3.org/2001/XMLSchema-instance"> <Request i:type="a:GetProductsRequest" xmlns:a="AgathaRestExample.Service.Requests"/> <Request i:type="a:GetProductsByCategoryRequest" xmlns:a="AgathaRestExample.Service.Requests"> <a:CategoryName>{CategoryName}</a:CategoryName> </Request> </ArrayOfRequest>   It is funny because I remember submitting a question to StackOverflow asking whether there was a REST Client Generation tool similar to what Microsoft used for their RestStarterKit but which could be applied to existing services which have REST endpoints attached.  I could not find any but this is now definitely something which I am going to build, as I think it is extremely useful to have but also it should not be too difficult based on the information I now know about the above.  Finally I thought that the Strategy Pattern would lend itself really well to this type of thing so it can accommodate for different languages. I think that is about it, I have included the code for the example Console app which I made below incase anyone wants to have a mooch at the code.  
As I said above I want to reformat these to fit in with the current examples over on the Agatha project, but also now thinking about it, make a Documentation Web method…{brain ticking} :-) Cheers for now and here is the final bit of code: static void Main(string[] args) { var request = WebRequest.Create("http://localhost:50348/service.svc/xml/processxmlrequests"); request.Method = "POST"; request.ContentType = "text/xml"; using(var writer = new StreamWriter(request.GetRequestStream())) { writer.WriteLine(GetExampleRequestsString("Category1")); } var response = request.GetResponse(); using(var reader = new StreamReader(response.GetResponseStream())) { Console.WriteLine(reader.ReadToEnd()); } Console.ReadLine(); } static string GetExampleRequestsString(string categoryName) { var data = File.ReadAllText(Path.Combine(Path.GetDirectoryName(Assembly.GetExecutingAssembly().Location), "ExampleRequests.xml")); data = data.Replace("{CategoryName}", categoryName); return data; } }
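    For reference, a sketch of what the decorated request/response pair used in the examples above might look like, following the Name/Namespace convention described earlier. The CategoryName property matches the JSON and XML consumers; the response-side namespace is an assumption:

        [DataContract(Name = "GetProductsByCategoryRequest",
                      Namespace = "AgathaRestExample.Service.Requests")]
        public class GetProductsByCategoryRequest : Request
        {
            [DataMember]
            public string CategoryName { get; set; }
        }

        [DataContract(Name = "GetProductsByCategoryResponse",
                      Namespace = "AgathaRestExample.Service.Responses")]  // assumed namespace
        public class GetProductsByCategoryResponse : Response
        {
            [DataMember]
            public List<ProductDto> Products { get; set; }
        }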

    Read the article

  • Reporting Services - It's a Wrap!

    - by smisner
    If you have any experience at all with Reporting Services, you have probably developed a report using the matrix data region. It's handy when you want to generate columns dynamically based on data. If users view a matrix report online, they can scroll horizontally to view all columns and all is well. But if they want to print the report, the experience is completely different and you'll have to decide how you want to handle dynamic columns. By default, when a user prints a matrix report for which the number of columns exceeds the width of the page, Reporting Services determines how many columns can fit on the page and renders one or more separate pages for the additional columns. In this post, I'll explain two techniques for managing dynamic columns. First, I'll show how to use the RepeatRowHeaders property to make it easier to read a report when columns span multiple pages, and then I'll show you how to "wrap" columns so that you can avoid the horizontal page break. Included with this post are the sample RDLs for download. First, let's look at the default behavior of a matrix. A matrix that has too many columns for one printed page (or output to page-based renderer like PDF or Word) will be rendered such that the first page with the row group headers and the inital set of columns, as shown in Figure 1. The second page continues by rendering the next set of columns that can fit on the page, as shown in Figure 2.This pattern continues until all columns are rendered. The problem with the default behavior is that you've lost the context of employee and sales order - the row headers - on the second page. That makes it hard for users to read this report because the layout requires them to flip back and forth between the current page and the first page of the report. You can fix this behavior by finding the RepeatRowHeaders of the tablix report item and changing its value to True. The second (and subsequent pages) of the matrix now look like the image shown in Figure 3. The problem with this approach is that the number of printed pages to flip through is unpredictable when you have a large number of potential columns. What if you want to include all columns on the same page? You can take advantage of the repeating behavior of a tablix and get repeating columns by embedding one tablix inside of another. For this example, I'm using SQL Server 2008 R2 Reporting Services. You can get similar results with SQL Server 2008. (In fact, you could probably do something similar in SQL Server 2005, but I haven't tested it. The steps would be slightly different because you would be working with the old-style matrix as compared to the new-style tablix discussed in this post.) I created a dataset that queries AdventureWorksDW2008 tables: SELECT TOP (100) e.LastName + ', ' + e.FirstName AS EmployeeName, d.FullDateAlternateKey, f.SalesOrderNumber, p.EnglishProductName, sum(SalesAmount) as SalesAmount FROM FactResellerSales AS f INNER JOIN DimProduct AS p ON p.ProductKey = f.ProductKey INNER JOIN DimDate AS d ON d.DateKey = f.OrderDateKey INNER JOIN DimEmployee AS e ON e.EmployeeKey = f.EmployeeKey GROUP BY p.EnglishProductName, d.FullDateAlternateKey, e.LastName + ', ' + e.FirstName, f.SalesOrderNumber ORDER BY EmployeeName, f.SalesOrderNumber, p.EnglishProductName To start the report: Add a matrix to the report body and drag Employee Name to the row header, which also creates a group. 
Next drag SalesOrderNumber below Employee Name in the Row Groups panel, which creates a second group and a second column in the row header section of the matrix, as shown in Figure 4. Now for some trickiness. Add another column to the row headers. This new column will be associated with the existing EmployeeName group rather than causing BIDS to create a new group. To do this, right-click on the EmployeeName textbox in the bottom row, point to Insert Column, and then click Inside Group-Right. Then add the SalesOrderNumber field to this new column. By doing this, you're creating a report that repeats a set of columns for each EmployeeName/SalesOrderNumber combination that appears in the data. Next, modify the first row group's expression to group on both EmployeeName and SalesOrderNumber. In the Row Groups section, right-click EmployeeName, click Group Properties, click the Add button, and select [SalesOrderNumber]. Now you need to configure the columns to repeat. Rather than use the Columns group of the matrix like you might expect, you're going to use the textbox that belongs to the second group of the tablix as a location for embedding other report items. First, clear out the text that's currently in the third column - SalesOrderNumber - because it's already added as a separate textbox in this report design. Then drag and drop a matrix into that textbox, as shown in Figure 5. Again, you need to do some tricks here to get the appearance and behavior right. We don't really want repeating rows in the embedded matrix, so follow these steps: Click on the Rows label which then displays RowGroup in the Row Groups pane below the report body. Right-click on RowGroup,click Delete Group, and select the option to delete associated rows and columns. As a result, you get a modified matrix which has only a ColumnGroup in it, with a row above a double-dashed line for the column group and a row below the line for the aggregated data. Let's continue: Drag EnglishProductName to the data textbox (below the line). Add a second data row by right-clicking EnglishProductName, pointing to Insert Row, and clicking Below. Add the SalesAmount field to the new data textbox. Now eliminate the column group row without eliminating the group. To do this, right-click the row above the double-dashed line, click Delete Rows, and then select Delete Rows Only in the message box. Now you're ready for the fit and finish phase: Resize the column containing the embedded matrix so that it fits completely. Also, the final column in the matrix is for the column group. You can't delete this column, but you can make it as small as possible. Just click on the matrix to display the row and column handles, and then drag the right edge of the rightmost column to the left to make the column virtually disappear. Next, configure the groups so that the columns of the embedded matrix will wrap. In the Column Groups pane, right-click ColumnGroup1 and click on the expression button (labeled fx) to the right of Group On [EnglishProductName]. Replace the expression with the following: =RowNumber("SalesOrderNumber" ). We use SalesOrderNumber here because that is the name of the group that "contains" the embedded matrix. The next step is to configure the number of columns to display before wrapping. Click any cell in the matrix that is not inside the embedded matrix, and then double-click the second group in the Row Groups pane - SalesOrderNumber. 
    Change the group expression to the following: =Ceiling(RowNumber("EmployeeName")/3) The last step is to apply formatting. In my example, I set the SalesAmount textbox's Format property to C2 and also right-aligned the text in both the EnglishProductName and the SalesAmount textboxes. And voilà - Figure 6 shows a matrix report with wrapping columns.
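    The hard-coded 3 in that group expression is what controls how many column sets appear across the page before wrapping. If you want to make that adjustable, one option (a sketch, using a hypothetical Integer report parameter named ColumnsAcross that you would add yourself) is:

        =Ceiling(RowNumber("EmployeeName") / Parameters!ColumnsAcross.Value)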

    Read the article

  • Securing an ASP.NET MVC 2 Application

    - by rajbk
    This post attempts to look at some of the methods that can be used to secure an ASP.NET MVC 2 Application called Northwind Traders Human Resources.  The sample code for the project is attached at the bottom of this post. We are going to use a slightly modified Northwind database. The screen capture from SQL server management studio shows the change. I added a new column called Salary, inserted some random salaries for the employees and then turned off AllowNulls.   The reporting relationship for Northwind Employees is shown below.   The requirements for our application are as follows: Employees can see their LastName, FirstName, Title, Address and Salary Employees are allowed to edit only their Address information Employees can see the LastName, FirstName, Title, Address and Salary of their immediate reports Employees cannot see records of non immediate reports.  Employees are allowed to edit only the Salary and Title information of their immediate reports. Employees are not allowed to edit the Address of an immediate report Employees should be authenticated into the system. Employees by default get the “Employee” role. If a user has direct reports, they will also get assigned a “Manager” role. We use a very basic empId/pwd scheme of EmployeeID (1-9) and password test$1. You should never do this in an actual application. The application should protect from Cross Site Request Forgery (CSRF). For example, Michael could trick Steven, who is already logged on to the HR website, to load a page which contains a malicious request. where without Steven’s knowledge, a form on the site posts information back to the Northwind HR website using Steven’s credentials. Michael could use this technique to give himself a raise :-) UI Notes The layout of our app looks like so: When Nancy (EmpID 1) signs on, she sees the default page with her details and is allowed to edit her address. If Nancy attempts to view the record of employee Andrew who has an employeeID of 2 (Employees/Edit/2), she will get a “Not Authorized” error page. When Andrew (EmpID 2) signs on, he can edit the address field of his record and change the title and salary of employees that directly report to him. Implementation Notes All controllers inherit from a BaseController. The BaseController currently only has error handling code. When a user signs on, we check to see if they are in a Manager role. We then create a FormsAuthenticationTicket, encrypt it (including the roles that the employee belongs to) and add it to a cookie. private void SetAuthenticationCookie(int employeeID, List<string> roles) { HttpCookiesSection cookieSection = (HttpCookiesSection) ConfigurationManager.GetSection("system.web/httpCookies"); AuthenticationSection authenticationSection = (AuthenticationSection) ConfigurationManager.GetSection("system.web/authentication"); FormsAuthenticationTicket authTicket = new FormsAuthenticationTicket( 1, employeeID.ToString(), DateTime.Now, DateTime.Now.AddMinutes(authenticationSection.Forms.Timeout.TotalMinutes), false, string.Join("|", roles.ToArray())); String encryptedTicket = FormsAuthentication.Encrypt(authTicket); HttpCookie authCookie = new HttpCookie(FormsAuthentication.FormsCookieName, encryptedTicket); if (cookieSection.RequireSSL || authenticationSection.Forms.RequireSSL) { authCookie.Secure = true; } HttpContext.Current.Response.Cookies.Add(authCookie); } We read this cookie back in Global.asax and set the Context.User to be a new GenericPrincipal with the roles we assigned earlier. 
protected void Application_AuthenticateRequest(Object sender, EventArgs e){ if (Context.User != null) { string cookieName = FormsAuthentication.FormsCookieName; HttpCookie authCookie = Context.Request.Cookies[cookieName]; if (authCookie == null) return; FormsAuthenticationTicket authTicket = FormsAuthentication.Decrypt(authCookie.Value); string[] roles = authTicket.UserData.Split(new char[] { '|' }); FormsIdentity fi = (FormsIdentity)(Context.User.Identity); Context.User = new System.Security.Principal.GenericPrincipal(fi, roles); }} We ensure that a user has permissions to view a record by creating a custom attribute AuthorizeToViewID that inherits from ActionFilterAttribute. public class AuthorizeToViewIDAttribute : ActionFilterAttribute{ IEmployeeRepository employeeRepository = new EmployeeRepository(); public override void OnActionExecuting(ActionExecutingContext filterContext) { if (filterContext.ActionParameters.ContainsKey("id") && filterContext.ActionParameters["id"] != null) { if (employeeRepository.IsAuthorizedToView((int)filterContext.ActionParameters["id"])) { return; } } throw new UnauthorizedAccessException("The record does not exist or you do not have permission to access it"); }} We add the AuthorizeToView attribute to any Action method that requires authorization. [HttpPost][Authorize(Order = 1)]//To prevent CSRF[ValidateAntiForgeryToken(Salt = Globals.EditSalt, Order = 2)]//See AuthorizeToViewIDAttribute class[AuthorizeToViewID(Order = 3)] [ActionName("Edit")]public ActionResult Update(int id){ var employeeToEdit = employeeRepository.GetEmployee(id); if (employeeToEdit != null) { //Employees can edit only their address //A manager can edit the title and salary of their subordinate string[] whiteList = (employeeToEdit.IsSubordinate) ? new string[] { "Title", "Salary" } : new string[] { "Address" }; if (TryUpdateModel(employeeToEdit, whiteList)) { employeeRepository.Save(employeeToEdit); return RedirectToAction("Details", new { id = id }); } else { ModelState.AddModelError("", "Please correct the following errors."); } } return View(employeeToEdit);} The Authorize attribute is added to ensure that only authorized users can execute that Action. We use the TryUpdateModel with a white list to ensure that (a) an employee is able to edit only their Address and (b) that a manager is able to edit only the Title and Salary of a subordinate. This works in conjunction with the AuthorizeToViewIDAttribute. The ValidateAntiForgeryToken attribute is added (with a salt) to avoid CSRF. The Order on the attributes specify the order in which the attributes are executed. The Edit View uses the AntiForgeryToken helper to render the hidden token: ......<% using (Html.BeginForm()) {%><%=Html.AntiForgeryToken(NorthwindHR.Models.Globals.EditSalt)%><%= Html.ValidationSummary(true, "Please correct the errors and try again.") %><div class="editor-label"> <%= Html.LabelFor(model => model.LastName) %></div><div class="editor-field">...... The application uses View specific models for ease of model binding. 
public class EmployeeViewModel{ public int EmployeeID; [Required] [DisplayName("Last Name")] public string LastName { get; set; } [Required] [DisplayName("First Name")] public string FirstName { get; set; } [Required] [DisplayName("Title")] public string Title { get; set; } [Required] [DisplayName("Address")] public string Address { get; set; } [Required] [DisplayName("Salary")] [Range(500, double.MaxValue)] public decimal Salary { get; set; } public bool IsSubordinate { get; set; }} To help with displaying readonly/editable fields, we use a helper method. //Simple extension method to display a TextboxFor or DisplayFor based on the isEditable variablepublic static MvcHtmlString TextBoxOrLabelFor<TModel, TProperty>(this HtmlHelper<TModel> htmlHelper, Expression<Func<TModel, TProperty>> expression, bool isEditable){ if (isEditable) { return htmlHelper.TextBoxFor(expression); } else { return htmlHelper.DisplayFor(expression); }} The helper method is used in the view like so: <%=Html.TextBoxOrLabelFor(model => model.Title, Model.IsSubordinate)%> As mentioned in this post, there is a much easier way to update properties on an object. Download Demo Project VS 2008, ASP.NET MVC 2 RTM Remember to change the connectionString to point to your Northwind DB NorthwindHR.zip Feedback and bugs are always welcome :-)
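    One piece the post references but does not show is the repository's IsAuthorizedToView method used by AuthorizeToViewIDAttribute. Here is a sketch of what it might look like, given the stated rules (you may view your own record or a direct report's), assuming the current EmployeeID is carried as the forms ticket name and that Employee exposes Northwind's ReportsTo column:

        // Sketch only -- not taken from the original download.
        public bool IsAuthorizedToView(int id)
        {
            int currentEmployeeID = int.Parse(HttpContext.Current.User.Identity.Name);

            if (id == currentEmployeeID)
                return true; // own record

            var employee = GetEmployee(id);
            // direct reports only: the record must report to the current user
            return employee != null && employee.ReportsTo == currentEmployeeID;
        }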

    Read the article

  • Multi-tenant ASP.NET MVC - Views

    - by zowens
    Part I – Introduction Part II – Foundation Part III – Controllers   So far we have covered the basic premise of tenants and how they will be delegated. Now comes a big issue with multi-tenancy, the views. In some applications, you will not have to override views for each tenant. However, one of my requirements is to add extra views (and controller actions) along with overriding views from the core structure. This presents a bit of a problem in locating views for each tenant request. I have chosen quite an opinionated approach at the present but will coming back to the “views” issue in a later post. What’s the deal? The path I’ve chosen is to use precompiled Spark views. I really love Spark View Engine and was planning on using it in my project anyways. However, I ran across a really neat aspect of the source when I was having a look under the hood. There’s an easy way to hook in embedded views from your project. There are solutions that provide this, but they implement a special Virtual Path Provider. While I think this is a great solution, I would rather just have Spark take care of the view resolution. The magic actually happens during the compilation of the views into a bin-deployable DLL. After the views are compiled, the are simply pulled out of the views DLL. Each tenant has its own views DLL that just has “.Views” appended after the assembly name as a convention. The list of reasons for this approach are quite long. The primary motivation is performance. I’ve had quite a few performance issues in the past and I would like to increase my application’s performance in any way that I can. My customized build of Spark removes insignificant whitespace from the HTML output so I can some some bandwidth and load time without having to deal with whitespace removal at runtime.   How to setup Tenants for the Host In the source, I’ve provided a single tenant as a sample (Sample1). This will serve as a template for subsequent tenants in your application. The first step is to add a “PostBuildStep” installer into the project. I’ve defined one in the source that will eventually change as we focus more on the construction of dependency containers. The next step is to tell the project to run the installer and copy the DLL output to a folder in the host that will pick up as a tenant. Here’s the code that will achieve it (this belongs in Post-build event command line field in the Build Events tab of settings) %systemroot%\Microsoft.NET\Framework\v4.0.30319\installutil "$(TargetPath)" copy /Y "$(TargetDir)$(TargetName)*.dll" "$(SolutionDir)Web\Tenants\" copy /Y "$(TargetDir)$(TargetName)*.pdb" "$(SolutionDir)Web\Tenants\" The DLLs with a name starting with the target assembly name will be copied to the “Tenants” folder in the web project. This means something like MultiTenancy.Tenants.Sample1.dll and MultiTenancy.Tenants.Sample1.Views.dll will both be copied along with the debug symbols. This is probably the simplest way to go about this, but it is a tad inflexible. For example, what if you have dependencies? The preferred method would probably be to use IL Merge to merge your dependencies with your target DLL. This would have to be added in the build events. Another way to achieve that would be to simply bypass Visual Studio events and use MSBuild.   I also got a question about how I was setting up the controller factory. 
Here’s the basics on how I’m setting up tenants inside the host (Global.asax) protected void Application_Start() { RegisterRoutes(RouteTable.Routes); // create a container just to pull in tenants var topContainer = new Container(); topContainer.Configure(config => { config.Scan(scanner => { scanner.AssembliesFromPath(Path.Combine(Server.MapPath("~/"), "Tenants")); scanner.AddAllTypesOf<IApplicationTenant>(); }); }); // create selectors var tenantSelector = new DefaultTenantSelector(topContainer.GetAllInstances<IApplicationTenant>()); var containerSelector = new TenantContainerResolver(tenantSelector); // clear view engines, we don't want anything other than spark ViewEngines.Engines.Clear(); // set view engine ViewEngines.Engines.Add(new TenantViewEngine(tenantSelector)); // set controller factory ControllerBuilder.Current.SetControllerFactory(new ContainerControllerFactory(containerSelector)); } The code to setup the tenants isn’t actually that hard. I’m utilizing assembly scanners in StructureMap as a simple way to pull in DLLs that are not in the AppDomain. Remember that there is a dependency on the host in the tenants and a tenant cannot simply be referenced by a host because of circular dependencies.   Tenant View Engine TenantViewEngine is a simple delegator to the tenant’s specified view engine. You might have noticed that a tenant has to define a view engine. public interface IApplicationTenant { .... IViewEngine ViewEngine { get; } } The trick comes in specifying the view engine on the tenant side. Here’s some of the code that will pull views from the DLL. protected virtual IViewEngine DetermineViewEngine() { var factory = new SparkViewFactory(); var file = GetType().Assembly.CodeBase.Without("file:///").Replace(".dll", ".Views.dll").Replace('/', '\\'); var assembly = Assembly.LoadFile(file); factory.Engine.LoadBatchCompilation(assembly); return factory; } This code resides in an abstract Tenant where the fields are setup in the constructor. This method (inside the abstract class) will load the Views assembly and load the compilation into Spark’s “Descriptors” that will be used to determine views. There is some trickery on determining the file location… but it works just fine.   Up Next There’s just a few big things left such as StructureMap configuring controllers with a convention instead of specifying types directly with container construction and content resolution. I will also try to find a way to use the Web Forms View Engine in a multi-tenant way we achieved with the Spark View Engine without using a virtual path provider. I will probably not use the Web Forms View Engine personally, but I’m sure some people would prefer using WebForms because of the maturity of the engine. As always, I love to take questions by email or on twitter. Suggestions are always welcome as well! (Oh, and here’s another link to the source code).

    Read the article

  • Sharing the same `ssh-agent` among multiple login sessions

    - by intuited
    Is there a convenient way to ensure that all logins from a given user (ie me) use the same ssh-agent? I hacked out a script to make this work most of the time, but I suspected all along that there was some way to do it that I had just missed. Additionally, since that time there have been amazing advances in computing technology, like for example this website. So the goal here is that whenever I log in to the box, regardless of whether it's via SSH, or in a graphical session started from gdm/kdm/etc, or at a console: if my username does not currently have an ssh-agent running, one is started, the environment variables exported, and ssh-add called. otherwise, the existing agent's coordinates are exported in the login session's environment variables. This facility is especially valuable when the box in question is used as a relay point when sshing into a third box. In this case it avoids having to type in the private key's passphrase every time you ssh in and then want to, for example, do git push or something. The script given below does this mostly reliably, although it botched recently when X crashed and I then started another graphical session. There might have been other screwiness going on in that instance. Here's my bad-is-good script. I source this from my .bashrc. # ssh-agent-procure.bash # v0.6.4 # ensures that all shells sourcing this file in profile/rc scripts use the same ssh-agent. # copyright me, now; licensed under the DWTFYWT license. mkdir -p "$HOME/etc/ssh"; function ssh-procure-launch-agent { eval `ssh-agent -s -a ~/etc/ssh/ssh-agent-socket`; ssh-add; } if [ ! $SSH_AGENT_PID ]; then if [ -e ~/etc/ssh/ssh-agent-socket ] ; then SSH_AGENT_PID=`ps -fC ssh-agent |grep 'etc/ssh/ssh-agent-socket' |sed -r 's/^\S+\s+(\S+).*$/\1/'`; if [[ $SSH_AGENT_PID =~ [0-9]+ ]]; then # in this case the agent has already been launched and we are just attaching to it. ##++ It should check that this pid is actually active & belongs to an ssh instance export SSH_AGENT_PID; SSH_AUTH_SOCK=~/etc/ssh/ssh-agent-socket; export SSH_AUTH_SOCK; else # in this case there is no agent running, so the socket file is left over from a graceless agent termination. rm ~/etc/ssh/ssh-agent-socket; ssh-procure-launch-agent; fi; else ssh-procure-launch-agent; fi; fi; Please tell me there's a better way to do this. Also please don't nitpick the inconsistencies/gaffes ( eg putting var stuff in etc ); I wrote this a while ago and have since learned many things.
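    For comparison, a shorter take on the same idea (a sketch, not a drop-in replacement): pin SSH_AUTH_SOCK to a fixed path and let the exit status of ssh-add -l decide what to do, instead of grepping ps output. ssh-add -l exits 2 when it cannot talk to an agent and 1 when the agent is running but has no keys loaded:

        export SSH_AUTH_SOCK="$HOME/etc/ssh/ssh-agent-socket"

        ssh-add -l >/dev/null 2>&1
        status=$?
        if [ "$status" -eq 2 ]; then
            # stale or missing socket: start a fresh agent on that path
            rm -f "$SSH_AUTH_SOCK"
            eval "$(ssh-agent -s -a "$SSH_AUTH_SOCK")" >/dev/null
            ssh-add
        elif [ "$status" -eq 1 ]; then
            ssh-add   # agent is up but has no identities yet
        fi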

    Read the article

  • Segfault when iterating over a map<string, string> and drawing its contents using SDL_TTF

    - by Michael Stahre
    I'm not entirely sure this question belongs on gamedev.stackexchange, but I'm technically working on a game and working with SDL, so it might not be entirely off-topic. I've written a class called DebugText. The point of the class is to have a nice way of printing values of variables to the game screen. The idea is to call SetDebugText() with the variables in question every time they change or, as is currently the case, every time the game's Update() is called. The issue is that when iterating over the map that contains my variables and their latest updated values, I get segfaults. See the comments in DrawDebugText() below; they specify where the error happens. I've tried splitting the calls to it->first and it->second into separate lines and found that the problem doesn't always happen when calling it->first. It alternates between it->first and it->second. I can't find a pattern. It doesn't fail on every call to DrawDebugText() either. It might fail on the third time DrawDebugText() is called, or it might fail on the fourth. Class header: #ifndef CLIENT_DEBUGTEXT_H #define CLIENT_DEBUGTEXT_H #include <Map> #include <Math.h> #include <sstream> #include <SDL.h> #include <SDL_ttf.h> #include "vector2.h" using std::string; using std::stringstream; using std::map; using std::pair; using game::Vector2; namespace game { class DebugText { private: TTF_Font* debug_text_font; map<string, string>* debug_text_list; public: void SetDebugText(string var, bool value); void SetDebugText(string var, float value); void SetDebugText(string var, int value); void SetDebugText(string var, Vector2 value); void SetDebugText(string var, string value); int DrawDebugText(SDL_Surface*, SDL_Rect*); void InitDebugText(); void Clear(); }; } #endif Class source file: #include "debugtext.h" namespace game { // Copypasta function for handling the toString conversion template <class T> inline string to_string (const T& t) { stringstream ss (stringstream::in | stringstream::out); ss << t; return ss.str(); } // Initializes SDL_TTF and sets its font void DebugText::InitDebugText() { if(TTF_WasInit()) TTF_Quit(); TTF_Init(); debug_text_font = TTF_OpenFont("LiberationSans-Regular.ttf", 16); TTF_SetFontStyle(debug_text_font, TTF_STYLE_NORMAL); } // Iterates over the current debug_text_list and draws every element on the screen. // After drawing with SDL you need to get a rect specifying the area on the screen that was changed and tell SDL that this part of the screen needs to be updated. this is done in the game's Draw() function // This function sets rects_to_update to the new list of rects provided by all of the surfaces and returns the number of rects in the list.
These two parameters are used in Draw() when calling on SDL_UpdateRects(), which takes an SDL_Rect* and a list length int DebugText::DrawDebugText(SDL_Surface* screen, SDL_Rect* rects_to_update) { if(debug_text_list == NULL) return 0; if(!TTF_WasInit()) InitDebugText(); rects_to_update = NULL; // Specifying the font color SDL_Color font_color = {0xff, 0x00, 0x00, 0x00}; // r, g, b, unused int row_count = 0; string line; // The iterator variable map<string, string>::iterator it; // Gets the iterator and iterates over it for(it = debug_text_list->begin(); it != debug_text_list->end(); it++) { // Takes the first value (the name of the variable) and the second value (the value of the parameter in string form) //---------THIS LINE GIVES ME SEGFAULTS----- line = it->first + ": " + it->second; //------------------------------------------ // Creates a surface with the text on it that in turn can be rendered to the screen itself later SDL_Surface* debug_surface = TTF_RenderText_Solid(debug_text_font, line.c_str(), font_color); if(debug_surface == NULL) { // A standard check for errors fprintf(stderr, "Error: %s", TTF_GetError()); return NULL; } else { // If SDL_TTF did its job right, then we now set a destination rect row_count++; SDL_Rect dstrect = {5, 5, 0, 0}; // x, y, w, h dstrect.x = 20; dstrect.y = 20*row_count; // Draws the surface with the text on it to the screen int res = SDL_BlitSurface(debug_surface,NULL,screen,&dstrect); if(res != 0) { //Just an error check fprintf(stderr, "Error: %s", SDL_GetError()); return NULL; } // Creates a new rect to specify the area that needs to be updated with SDL_Rect* new_rect_to_update = (SDL_Rect*) malloc(sizeof(SDL_Rect)); new_rect_to_update->h = debug_surface->h; new_rect_to_update->w = debug_surface->w; new_rect_to_update->x = dstrect.x; new_rect_to_update->y = dstrect.y; // Just freeing the surface since it isn't necessary anymore SDL_FreeSurface(debug_surface); // Creates a new list of rects with room for the new rect SDL_Rect* newtemp = (SDL_Rect*) malloc(row_count*sizeof(SDL_Rect)); // Copies the data from the old list of rects to the new one memcpy(newtemp, rects_to_update, (row_count-1)*sizeof(SDL_Rect)); // Adds the new rect to the new list newtemp[row_count-1] = *new_rect_to_update; // Frees the memory used by the old list free(rects_to_update); // And finally redirects the pointer to the old list to the new list rects_to_update = newtemp; newtemp = NULL; } } // When the entire map has been iterated over, return the number of lines that were drawn, ie. the number of rects in the returned rect list return row_count; } // The SetDebugText used by all the SetDebugText overloads // Takes two strings, inserts them into the map as a pair void DebugText::SetDebugText(string var, string value) { if (debug_text_list == NULL) { debug_text_list = new map<string, string>(); } debug_text_list->erase(var); debug_text_list->insert(pair<string, string>(var, value)); } // Writes the bool to a string and calls SetDebugText(string, string) void DebugText::SetDebugText(string var, bool value) { string result; if (value) result = "True"; else result = "False"; SetDebugText(var, result); } // Does the same thing, but uses to_string() to convert the float void DebugText::SetDebugText(string var, float value) { SetDebugText(var, to_string(value)); } // Same as above, but int void DebugText::SetDebugText(string var, int value) { SetDebugText(var, to_string(value)); } // Vector2 is a struct of my own making. 
It contains the two float vars x and y void DebugText::SetDebugText(string var, Vector2 value) { SetDebugText(var + ".x", to_string(value.x)); SetDebugText(var + ".y", to_string(value.y)); } // Empties the list. I don't actually use this in my code. Shame on me for writing something I don't use. void DebugText::Clear() { if(debug_text_list != NULL) debug_text_list->clear(); } }
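    One thing that stands out in the posted code (offered as a likely culprit, not a verified fix): DebugText has no constructor, so debug_text_list is never initialized and the == NULL checks in SetDebugText() and DrawDebugText() compare against an indeterminate pointer. A sketch of the usual first step:

        // Sketch: initialize the members so the NULL checks are meaningful.
        namespace game {
            class DebugText {
            private:
                TTF_Font* debug_text_font;
                map<string, string>* debug_text_list;
            public:
                DebugText() : debug_text_font(NULL), debug_text_list(NULL) {}
                ~DebugText() { delete debug_text_list; }
                // ... rest of the interface as in the question ...
            };
        }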

    Read the article

  • Auth-Type :- Reject in RADIUS users file matches inner tunnel request but sends Access-Accept

    - by mgorven
    I have WPA2 802.11x EAP authentication setup using FreeRADIUS 2.1.8 on Ubuntu 10.04.4 talking to OpenLDAP, and can successfully authenticate using PEAP/MSCHAPv2, TTLS/MSCHAPv2 and TTLS/PAP (both via the AP and using eapol_test). I am now trying to restrict access to specific SSIDs based on the LDAP groups which the user belongs to. I have configured group membership checking in /etc/freeradius/modules/ldap like so: groupname_attribute = cn groupmembership_filter = "(|(&(objectClass=posixGroup)(memberUid=%{User-Name}))(&(objectClass=posixGroup)(uniquemember=%{User-Name})))" and I have configured extraction of the SSID from Called-Station-Id into Called-Station-SSID based on the Mac Auth wiki page. In /etc/freeradius/eap.conf I have enabled copying attributes from the outer tunnel into the inner tunnel, and usage of the inner tunnel response in the outer tunnel (for both PEAP and TTLS). I had the same behaviour before changing these options however. copy_request_to_tunnel = yes use_tunneled_reply = yes I'm running eapol_test like this to test the setup: eapol_test -c peap-mschapv2.conf -a 172.16.0.16 -s testing123 -N 30:s:01-23-45-67-89-01:Example-EAP with the following peap-mschapv2.conf file: network={ ssid="Example-EAP" key_mgmt=WPA-EAP eap=PEAP identity="mgorven" anonymous_identity="anonymous" password="foobar" phase2="autheap=MSCHAPV2" } With the following in /etc/freeradius/users: DEFAULT Ldap-Group == "employees" and running freeradius-Xx, I can see that the LDAP group retrieval works, and that the SSID is extracted. Debug: [ldap] performing search in dc=example,dc=com, with filter (&(cn=employees)(|(&(objectClass=posixGroup)(memberUid=mgorven))(&(objectClass=posixGroup)(uniquemember=mgorven)))) Debug: rlm_ldap::ldap_groupcmp: User found in group employees ... Info: expand: %{7} -> Example-EAP Next I try to only allow access to users in the employees group (regardless of SSID), so I put the following in /etc/freeradius/users: DEFAULT Ldap-Group == "employees" DEFAULT Auth-Type := Reject But this immediately rejects the Access-Request in the outer tunnel because the anonymous user is not in the employees group. So I modify it to only match inner tunnel requests like so: DEFAULT Ldap-Group == "employees" DEFAULT FreeRADIUS-Proxied-To == "127.0.0.1" Auth-Type := Reject, Reply-Message = "User does not belong to any groups which may access this SSID." Now users which are in the employees group are authenticated, but so are users which are not in the employees group. I see the reject entry being matched, and the Reply-Message is set, but the client receives an Access-Accept. Debug: rlm_ldap::ldap_groupcmp: Group employees not found or user is not a member. Info: [files] users: Matched entry DEFAULT at line 209 Info: ++[files] returns ok ... Auth: Login OK: [mgorven] (from client test port 0 cli 02-00-00-00-00-01 via TLS tunnel) Info: WARNING: Empty section. Using default return values. ... Info: [peap] Got tunneled reply code 2 Auth-Type := Reject Reply-Message = "User does not belong to any groups which may access this SSID." ... Info: [peap] Got tunneled reply RADIUS code 2 Auth-Type := Reject Reply-Message = "User does not belong to any groups which may access this SSID." ... Info: [peap] Tunneled authentication was successful. Info: [peap] SUCCESS Info: [peap] Saving tunneled attributes for later ... Sending Access-Accept of id 11 to 172.16.2.44 port 60746 Reply-Message = "User does not belong to any groups which may access this SSID." 
User-Name = "mgorven" and eapol_test reports: RADIUS message: code=2 (Access-Accept) identifier=11 length=233 Attribute 18 (Reply-Message) length=64 Value: 'User does not belong to any groups which may access this SSID.' Attribute 1 (User-Name) length=9 Value: 'mgorven' ... SUCCESS Why isn't the request being rejected, and is this the right way to implement this?

    Read the article

  • How accurate is "Business logic should be in a service, not in a model"?

    - by Jeroen Vannevel
    Situation Earlier this evening I gave an answer to a question on StackOverflow. The question: Editing of an existing object should be done in repository layer or in service? For example if I have a User that has debt. I want to change his debt. Should I do it in UserRepository or in service for example BuyingService by getting an object, editing it and saving it ? My answer: You should leave the responsibility of mutating an object to that same object and use the repository to retrieve this object. Example situation: class User { private int debt; // debt in cents private string name; // getters public void makePayment(int cents){ debt -= cents; } } class UserRepository { public User GetUserByName(string name){ // Get appropriate user from database } } A comment I received: Business logic should really be in a service. Not in a model. What does the internet say? So, this got me searching since I've never really (consciously) used a service layer. I started reading up on the Service Layer pattern and the Unit Of Work pattern but so far I can't say I'm convinced a service layer has to be used. Take for example this article by Martin Fowler on the anti-pattern of an Anemic Domain Model: There are objects, many named after the nouns in the domain space, and these objects are connected with the rich relationships and structure that true domain models have. The catch comes when you look at the behavior, and you realize that there is hardly any behavior on these objects, making them little more than bags of getters and setters. Indeed often these models come with design rules that say that you are not to put any domain logic in the the domain objects. Instead there are a set of service objects which capture all the domain logic. These services live on top of the domain model and use the domain model for data. (...) The logic that should be in a domain object is domain logic - validations, calculations, business rules - whatever you like to call it. To me, this seemed exactly what the situation was about: I advocated the manipulation of an object's data by introducing methods inside that class that do just that. However I realize that this should be a given either way, and it probably has more to do with how these methods are invoked (using a repository). I also had the feeling that in that article (see below), a Service Layer is more considered as a façade that delegates work to the underlying model, than an actual work-intensive layer. Application Layer [his name for Service Layer]: Defines the jobs the software is supposed to do and directs the expressive domain objects to work out problems. The tasks this layer is responsible for are meaningful to the business or necessary for interaction with the application layers of other systems. This layer is kept thin. It does not contain business rules or knowledge, but only coordinates tasks and delegates work to collaborations of domain objects in the next layer down. It does not have state reflecting the business situation, but it can have state that reflects the progress of a task for the user or the program. Which is reinforced here: Service interfaces. Services expose a service interface to which all inbound messages are sent. You can think of a service interface as a façade that exposes the business logic implemented in the application (typically, logic in the business layer) to potential consumers. And here: The service layer should be devoid of any application or business logic and should focus primarily on a few concerns. 
It should wrap Business Layer calls, translate your Domain in a common language that your clients can understand, and handle the communication medium between server and requesting client. This is a serious contrast to other resources that talk about the Service Layer: The service layer should consist of classes with methods that are units of work with actions that belong in the same transaction. Or the second answer to a question I've already linked: At some point, your application will want some business logic. Also, you might want to validate the input to make sure that there isn't something evil or nonperforming being requested. This logic belongs in your service layer. "Solution"? Following the guidelines in this answer, I came up with the following approach that uses a Service Layer: class UserController : Controller { private UserService _userService; public UserController(UserService userService){ _userService = userService; } public ActionResult MakeHimPay(string username, int amount) { _userService.MakeHimPay(username, amount); return RedirectToAction("ShowUserOverview"); } public ActionResult ShowUserOverview() { return View(); } } class UserService { private IUserRepository _userRepository; public UserService(IUserRepository userRepository) { _userRepository = userRepository; } public void MakeHimPay(username, amount) { _userRepository.GetUserByName(username).makePayment(amount); } } class UserRepository { public User GetUserByName(string name){ // Get appropriate user from database } } class User { private int debt; // debt in cents private string name; // getters public void makePayment(int cents){ debt -= cents; } } Conclusion All together not much has changed here: code from the controller has moved to the service layer (which is a good thing, so there is an upside to this approach). However this doesn't look like it had anything to do with my original answer. I realize design patterns are guidelines, not rules set in stone to be implemented whenever possible. Yet I have not found a definitive explanation of the service layer and how it should be regarded. Is it a means to simply extract logic from the controller and put it inside a service instead? Is it supposed to form a contract between the controller and the domain? Should there be a layer between the domain and the service layer? And, last but not least: following the original comment Business logic should really be in a service. Not in a model. Is this correct? How would I introduce my business logic in a service instead of the model?
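    To make the contrast concrete, here is a small illustrative sketch (my own, not from the question or its answers) of the two placements of the debt rule. It reuses the names from the snippets above; the Save method on the repository is hypothetical, and the second variant assumes a public Debt setter, which is exactly the anemic shape Fowler warns about.

        // Variant 1: the rule stays on the domain object; the service only coordinates.
        class UserService
        {
            private readonly IUserRepository _userRepository;

            public UserService(IUserRepository userRepository)
            {
                _userRepository = userRepository;
            }

            public void MakeHimPay(string username, int amountInCents)
            {
                User user = _userRepository.GetUserByName(username);
                user.makePayment(amountInCents);  // "debt -= cents" lives inside User
                _userRepository.Save(user);       // hypothetical persistence call
            }
        }

        // Variant 2: the rule is pulled up into the service, leaving User as a bag of getters/setters.
        class AnemicUserService
        {
            private readonly IUserRepository _userRepository;

            public AnemicUserService(IUserRepository userRepository)
            {
                _userRepository = userRepository;
            }

            public void MakeHimPay(string username, int amountInCents)
            {
                User user = _userRepository.GetUserByName(username);
                user.Debt -= amountInCents;       // domain knowledge now leaks into the service
                _userRepository.Save(user);
            }
        }

    In the first variant the service stays thin: it fetches the aggregate, asks it to do the work, and persists it, which matches the "coordinates tasks and delegates work" description quoted above.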

    Read the article

  • ASP.NET WebAPI Security 5: JavaScript Clients

    - by Your DisplayName here!
    All samples I showed in my last post were in C#. Christian contributed another client sample in some strange language that is supposed to work well in browsers ;) JavaScript client scenarios There are two fundamental scenarios when it comes to JavaScript clients. The most common is probably that the JS code is originating from the same web application that also contains the web APIs. Think a web page that does some AJAX style callbacks to an API that belongs to that web app – Validation, data access etc. come to mind. Single page apps often fall in that category. The good news here is that this scenario just works. The typical course of events is that the user first logs on to the web application – which will result in an authentication cookie of some sort. That cookie will get round-tripped with your AJAX calls and ASP.NET does its magic to establish a client identity context. Since WebAPI inherits the security context from its (web) host, the client identity is also available here. The other fundamental scenario is JavaScript code *not* running in the context of the WebAPI hosting application. This is more or less just like a normal desktop client – either running in the browser, or if you think of Windows 8 Metro style apps as “real” desktop apps. In that scenario we do exactly the same as the samples did in my last post – obtain a token, then use it to call the service. Obtaining a token from IdentityServer’s resource owner credential OAuth2 endpoint could look like this: thinktectureIdentityModel.BrokeredAuthentication = function (stsEndpointAddress, scope) {     this.stsEndpointAddress = stsEndpointAddress;     this.scope = scope; }; thinktectureIdentityModel.BrokeredAuthentication.prototype = function () {     getIdpToken = function (un, pw, callback) {         $.ajax({             type: 'POST',             cache: false,             url: this.stsEndpointAddress,             data: { grant_type: "password", username: un, password: pw, scope: this.scope },             success: function (result) {                 callback(result.access_token);             },             error: function (error) {                 if (error.status == 401) {                     alert('Unauthorized');                 }                 else {                     alert('Error calling STS: ' + error.responseText);                 }             }         });     };     createAuthenticationHeader = function (token) {         var tok = 'IdSrv ' + token;         return tok;     };     return {         getIdpToken: getIdpToken,         createAuthenticationHeader: createAuthenticationHeader     }; } (); Calling the service with the requested token could look like this: function getIdentityClaimsFromService() {     authHeader = authN.createAuthenticationHeader(token);     $.ajax({         type: 'GET',         cache: false,         url: serviceEndpoint,         beforeSend: function (req) {             req.setRequestHeader('Authorization', authHeader);         },         success: function (result) {              $.each(result.Claims, function (key, val) {                 $('#claims').append($('<li>' + val.Value + '</li>'))             });         },         error: function (error) {             alert('Error: ' + error.responseText);         }     }); I updated the github repository, you can can play around with the code yourself.
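    For readers who want the same flow outside the browser, here is a rough C# desktop-client equivalent of the JavaScript above. It is a sketch only, with made-up endpoint addresses and a deliberately naive token extraction; only the grant_type/username/password/scope form fields and the "IdSrv" authorization scheme are taken from the sample.

        using System;
        using System.Collections.Generic;
        using System.Net.Http;
        using System.Net.Http.Headers;
        using System.Threading.Tasks;

        class BrokeredAuthenticationClient
        {
            static async Task Main()
            {
                var stsEndpoint = "https://idsrv.example.com/issue/oauth2/token"; // assumption: your STS address
                var serviceEndpoint = "https://api.example.com/identity";         // assumption: your WebAPI address

                using (var client = new HttpClient())
                {
                    // Step 1: request a token from the resource owner credential endpoint.
                    var form = new FormUrlEncodedContent(new Dictionary<string, string>
                    {
                        { "grant_type", "password" },
                        { "username", "bob" },
                        { "password", "secret" },
                        { "scope", serviceEndpoint }
                    });
                    var tokenResponse = await client.PostAsync(stsEndpoint, form);
                    tokenResponse.EnsureSuccessStatusCode();
                    var json = await tokenResponse.Content.ReadAsStringAsync();
                    var accessToken = ExtractAccessToken(json); // a real client would use a JSON parser

                    // Step 2: call the service with the token, mirroring createAuthenticationHeader above.
                    client.DefaultRequestHeaders.Authorization =
                        new AuthenticationHeaderValue("IdSrv", accessToken);
                    Console.WriteLine(await client.GetStringAsync(serviceEndpoint));
                }
            }

            // Naive stand-in for JSON parsing; assumes the response contains "access_token":"...".
            static string ExtractAccessToken(string json)
            {
                const string marker = "\"access_token\":\"";
                int start = json.IndexOf(marker, StringComparison.Ordinal) + marker.Length;
                return json.Substring(start, json.IndexOf('"', start) - start);
            }
        }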

    Read the article

  • How to configure multiple iSCSI Portal Groups on a EqualLogic PS6100?

    - by kce
    I am working on a migration from a VMware vSphere environment to a Hyper-V Cluster utilizing Windows Server 2012 R2. The setup is pretty small, an EqualLogic PS6100e and two Dell PowerConnect 5424 switches and handful of R710s and R620s. The SAN was configured as a non-RFC1918 network that is not assigned to our organization and since I am working on building a new virtualization environment I figured that this would be an appropriate time to do a subnet migration. I configured a separate VLAN and subnet on the switches and the two previously unused NICs on the PS6100's controllers. At this time I only have a single Hyper-V host cabled in but I can successfully ping the PS6100 from the host. From the PS6100 I can ping each of the four NICs that currently on the storage network. I cannot connect the Microsoft iSCSI Initiator to the Target. I have successfully added the Target Portals (the IP addresses of PS6100 NICs) and the Targets are discovered but listed as inactive. If I try to Connect to them I get the following error, "Log onto Target - Connection Failed" and ISCSIPrt 1 and 70 events are recorded in the Event Log. I have verified that access control to the volume is not the problem by temporarily disabling it. I suspect the problem is with the Portal Group IP address which is still listed as Group Address of old subnet (I know, I know I might be committing the sin of the X/Y problem but everything else looks good): RFC3720 has this to say about Network Portal and Portal Groups: Network Portal: The Network Portal is a component of a Network Entity that has a TCP/IP network address and that may be used by an iSCSI Node within that Network Entity for the connection(s) within one of its iSCSI sessions. A Network Portal in an initiator is identified by its IP address. A Network Portal in a target is identified by its IP address and its listening TCP port. Portal Groups: iSCSI supports multiple connections within the same session; some implementations will have the ability to combine connections in a session across multiple Network Portals. A Portal Group defines a set of Network Portals within an iSCSI Network Entity that collectively supports the capability of coordinating a session with connections spanning these portals. Not all Network Portals within a Portal Group need participate in every session connected through that Portal Group. One or more Portal Groups may provide access to an iSCSI Node. Each Network Portal, as utilized by a given iSCSI Node, belongs to exactly one portal group within that node. The EqualLogic Group Manager documentation has this to say about the Group IP Address: You use the group IP address as the iSCSI discovery address when connecting initiators to iSCSI targets in the group. If you modify the group IP address, you might need to change your initiator configuration to use the new discovery address Changing the group IP address disconnects any iSCSI connections to the group and any administrators logged in to the group through the group IP address. Which sounds equivalent to me (I am following up with support to confirm). I think a reasonable explanation at this point is that the Initiator can't complete the connection to the Target because the Group IP Address / Network Portal is on a different subnet. I really want to avoid a cutover and would prefer to run both subnets side-by-side until I can install and configure each Hyper-V host. Question/s: Is my assessment at all reasonable? 
Is it possible to configure multiple Group IP Addresses on the EqualLogic PS6100? I don't want to just change it as it will disconnect the remaining ESXi hosts. Am I just Doing It Wrong(TM)?

    Read the article

  • SQL SERVER – Weekly Series – Memory Lane – #031

    - by Pinal Dave
    Here is the list of selected articles of SQLAuthority.com across all these years. Instead of just listing all the articles I have selected a few of my most favorite articles and have listed them here with additional notes below. Let me know which one of the following is your favorite article from memory lane. 2007 Find Table without Clustered Index – Find Table with no Primary Key A clustered index is a very important concept for any table. It impacts performance very heavily. Here is a quick script to find tables without a clustered index. Replace TEXT with VARCHAR(MAX) – Stop using TEXT, NTEXT, IMAGE Data Types Question: "Is VARCHAR (MAX) big enough to store the TEXT field?" Answer: "Yes, VARCHAR(MAX) is big enough to accommodate the TEXT field. The TEXT, NTEXT and IMAGE data types of SQL Server 2000 will be deprecated in a future version of SQL Server; SQL Server 2005 provides backward compatibility to these data types, but it is recommended to use the new data types, which are VARCHAR (MAX), NVARCHAR (MAX) and VARBINARY (MAX)." Limiting Result Sets by Using TABLESAMPLE – Examples Introduced in SQL Server 2005, TABLESAMPLE allows you to extract a sampling of rows from a table in the FROM clause. The rows retrieved are random and they are not in any order. This sampling can be based on a percentage or a number of rows. You can use TABLESAMPLE when only a sampling of rows is necessary for the application instead of a full result set. User Defined Functions (UDF) Limitations UDFs have their own advantages and usage, but in this article we will see the limitations of UDFs: the things a UDF cannot do and why stored procedures are considered more flexible than UDFs. Stored procedures offer more flexibility than User Defined Functions (UDFs); this blog post is a good read to know what the limitations of UDFs are. Change Database Compatible Level – Backward Compatibility For a long time SQL Server stayed on the compatibility level of 80, which is that of SQL Server 2000. However, as soon as SQL Server 2005 was introduced, compatibility became quite a major issue. Since that time MS has been releasing new versions every 2-3 years, so changing compatibility is an ever popular topic. In this blog post, we learn how we can do the same using T-SQL. We can also do the same using SSMS and here is the blog post for the same: Change Database Compatible Level – Backward Compatibility – Part 2 – Management Studio. Constraint on VARCHAR(MAX) Field To Limit It Certain Length How can I limit the VARCHAR(MAX) field to a maximum length of 12500 characters only? The question was valid, as our application allowed only 12500 characters. First of all – this requirement is a bit strange, but if someone wants to do the same, they can do it as described in this blog post. 2008 UNPIVOT Table Example Understanding UNPIVOT can be very complicated at times. In this blog post, I have attempted to explain the same concept in very simple words. Create Default Constraint Over Table Column A simple, straight-to-script blog post – I still use this blog quite often for my own reference. UDF – Get the Day of the Week Function It took me 4 iterations to find this very simple function, which can immediately get the day of the week in a single line. 2009 Find Hostname and Current Logged In User Name There are two tricks listed in this blog post where users can find out the hostname and currently logged-in user name immediately and very easily. 
    Interesting Observation of Logon Trigger On All Servers When I was doing a project, I made an interesting observation of a logon trigger executing multiple times. It was absolutely unexpected for me! As I was logging in only once, naturally, I was expecting the entry only once. However, it fired multiple times on different threads – indeed an eccentric phenomenon at first sight! Difference Between Candidate Keys and Primary Key One needs to be very careful in selecting the Primary Key, as an incorrect selection can adversely impact the database architecture and future normalization. For a Candidate Key to qualify as a Primary Key, it should be Non-NULL and unique in any domain. I have observed quite often that Primary Keys are seldom changed. I would like to have your feedback on not changing a Primary Key. Create Multiple Filegroup For Single Database Why should one create multiple filegroups for any database, and what are the advantages of doing so? In this blog post, I explain the same in detail. List All Objects Created on All Filegroups in Database In this blog post we discuss the essential question – "How can I find which object belongs to which filegroup? Is there any way to know this?" 2010 DATE and TIME in SQL Server 2008 When DATE is converted to DATETIME it adds the time of midnight. When TIME is converted to DATETIME it adds the date of 1900, and that is something to consider if you are going to run scripts from SQL Server 2008 against an earlier version with CONVERT. Disabled Index and Update Statistics If you do not need a nonclustered index, I suggest you drop it, as keeping it disabled is an overhead on your system. This is because every time the statistics are updated for the system, all the statistics for disabled indexes are also updated. Precision of SMALLDATETIME – A 1 Minute Precision The precision of the datatype SMALLDATETIME is 1 minute. It discards the seconds by rounding up or rounding down any seconds greater than zero. 2011 Getting Columns Headers without Result Data – SET FMTONLY ON SET FMTONLY ON returns only metadata to the client. It can be used to test the format of the response without actually running the query. When this setting is ON the resultset only has the headers of the results but no data. Copy Database from Instance to Another Instance – Copy Paste in SQL Server SQL Server has a feature which copies a database from one instance to another, and it can be automated as well using SSIS. Make sure you have SQL Server Agent turned on, as this feature will create a job. Puzzle – SELECT * vs SELECT COUNT(*) If you have ever wondered why SELECT * gives an error when executed alone but SELECT COUNT(*) does not, you should read this blog post. Creating All New Database with Full Recovery Model This blog post is based on a very interesting story where the user wants to do something by default for every single new database created. The model database is a secret weapon which should be used very carefully and with proper evaluation. If used carefully, it can be very beneficial when we need a newly created database to behave in a certain fashion. 2012 In the year 2012 I had two interesting series running on the blog. If there is no fun in learning, the learning becomes a burden. For the same reason, I decided to build a three part quiz around SEQUENCE. The quiz was to identify the next value of the sequence. I encourage all of you to take part in this fun quiz. 
Guess the Next Value – Puzzle 1 Guess the Next Value – Puzzle 2 Guess the Next Value – Puzzle 3 Can anyone remember their final day of schooling?  This is probably a silly question because – of course you can!  Many people mark this as the most exciting, happiest day of their life.  It marks the end of testing, the end of following rules set by teachers, and the beginning of finally being able to earn money and work in your chosen field. Read five part series on developer training subject Developer Training - Importance and Significance - Part 1 Developer Training – Employee Morals and Ethics – Part 2 Developer Training – Difficult Questions and Alternative Perspective - Part 3 Developer Training – Various Options for Developer Training – Part 4 Developer Training – A Conclusive Summary- Part 5 Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Memory Lane, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Computer Networks UNISA - Chap 10 &ndash; In Depth TCP/IP Networking

    - by MarkPearl
    After reading this section you should be able to Understand methods of network design unique to TCP/IP networks, including subnetting, CIDR, and address translation Explain the differences between public and private TCP/IP networks Describe protocols used between mail clients and mail servers, including SMTP, POP3, and IMAP4 Employ multiple TCP/IP utilities for network discovery and troubleshooting Designing TCP/IP-Based Networks The following sections explain how network and host information in an IPv4 address can be manipulated to subdivide networks into smaller segments. Subnetting Subnetting separates a network into multiple logically defined segments, or subnets. Networks are commonly subnetted according to geographic locations, departmental boundaries, or technology types. A network administrator might separate traffic to accomplish the following… Enhance security Improve performance Simplify troubleshooting The challenges of Classful Addressing in IPv4 (No subnetting) The simplest type of IPv4 is known as classful addressing (which was the Class A, Class B & Class C network addresses). Classful addressing has the following limitations. Restriction in the number of usable IPv4 addresses (class C would be limited to 254 addresses) Difficult to separate traffic from various parts of a network Because of the above reasons, subnetting was introduced. IPv4 Subnet Masks Subnetting depends on the use of subnet masks to identify how a network is subdivided. A subnet mask indicates where network information is located in an IPv4 address. The 1 in a subnet mask indicates that corresponding bits in the IPv4 address contain network information (likewise 0 indicates the opposite) Each network class is associated with a default subnet mask… Class A = 255.0.0.0 Class B = 255.255.0.0 Class C = 255.255.255.0 An example of calculating  the network ID for a particular device with a subnet mask is shown below.. IP Address = 199.34.89.127 Subnet Mask = 255.255.255.0 Resultant Network ID = 199.34.89.0 IPv4 Subnetting Techniques Subnetting breaks the rules of classful IPv4 addressing. Read page 490 for a detailed explanation Calculating IPv4 Subnets Read page 491 – 494 for an explanation Important… Subnetting only applies to the devices internal to your network. Everything external looks at the class of the IP address instead of the subnet network ID. This way, traffic directed to your network externally still knows where to go, and once it has entered your internal network it can then be prioritized and segmented. CIDR (classless Interdomain Routing) CIDR is also known as classless routing or supernetting. In CIDR conventional network class distinctions do not exist, a subnet boundary can move to the left, therefore generating more usable IP addresses on your network. A subnet created by moving the subnet boundary to the left is known as a supernet. With CIDR also came new shorthand for denoting the position of subnet boundaries known as CIDR notation or slash notation. CIDR notation takes the form of the network ID followed by a forward slash (/) followed by the number of bits that are used for the extended network prefix. To take advantage of classless routing, your networks routers must be able to interpret IP addresses that don;t adhere to conventional network class parameters. Routers that rely on older routing protocols (i.e. RIP) are not capable of interpreting classless IP addresses. 
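    If it helps to see the mask arithmetic above as code, here is a small illustrative C# sketch (my addition, not part of the study notes) that ANDs each octet of the address with the corresponding octet of the mask, reproducing the 199.34.89.127 example:

        using System;
        using System.Net;

        class NetworkIdExample
        {
            static void Main()
            {
                // Prints 199.34.89.0, matching the worked example above.
                Console.WriteLine(GetNetworkId("199.34.89.127", "255.255.255.0"));
            }

            // The network ID is the bitwise AND of the address and the subnet mask.
            static IPAddress GetNetworkId(string address, string mask)
            {
                byte[] addressBytes = IPAddress.Parse(address).GetAddressBytes();
                byte[] maskBytes = IPAddress.Parse(mask).GetAddressBytes();

                var networkBytes = new byte[addressBytes.Length];
                for (int i = 0; i < addressBytes.Length; i++)
                {
                    networkBytes[i] = (byte)(addressBytes[i] & maskBytes[i]);
                }
                return new IPAddress(networkBytes);
            }
        }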
    Internet Gateways Gateways are a combination of software and hardware that enable two different network segments to exchange data. A gateway facilitates communication between different networks or subnets. Because one device cannot send data directly to a device on another subnet, a gateway must intercede and hand off the information. Every device on a TCP/IP based network has a default gateway (a gateway that first interprets its outbound requests to other subnets, and then interprets its inbound requests from other subnets). The internet contains a vast number of routers and gateways. If each gateway had to track addressing information for every other gateway on the Internet, it would be overtaxed. Instead, each handles only a relatively small amount of addressing information, which it uses to forward data to another gateway that knows more about the data's destination. The gateways that make up the internet backbone are called core gateways. Address Translation An organization's default gateway can also be used to "hide" the organization's internal IP addresses and keep them from being recognized on a public network. A public network is one that any user may access with little or no restriction. On private networks, hiding IP addresses allows network managers more flexibility in assigning addresses. Clients behind a gateway may use any IP addressing scheme, regardless of whether it is recognized as legitimate by the Internet authorities, but as soon as those devices need to go on the internet, they must have legitimate IP addresses to exchange data. When a client's transmission reaches the default gateway, the gateway opens the IP datagram and replaces the client's private IP address with an Internet-recognized IP address. This process is known as NAT (Network Address Translation). TCP/IP Mail Services All Internet mail services rely on the same principles of mail delivery, storage, and pickup, though they may use different types of software to accomplish these functions. Email servers and clients communicate through special TCP/IP application layer protocols. These protocols, all of which operate on a variety of operating systems, are discussed below… SMTP (Simple Mail Transfer Protocol) The protocol responsible for moving messages from one mail server to another over TCP/IP based networks. SMTP belongs to the application layer of the OSI model and relies on TCP as its transport protocol. It operates from port 25 on the SMTP server. It is a simple sub-protocol, incapable of doing anything more than transporting mail or holding it in a queue. MIME (Multipurpose Internet Mail Extensions) The standard message format specified by SMTP allows for lines that contain no more than 1000 ASCII characters, meaning that if you relied solely on SMTP you would have very short messages and nothing like pictures included in an email. MIME is a standard for encoding and interpreting binary files, images, video, and non-ASCII character sets within an email message. MIME identifies each element of a mail message according to content type. MIME does not replace SMTP but works in conjunction with it. 
    Most modern email clients and servers support MIME. POP (Post Office Protocol) POP is an application layer protocol used to retrieve messages from a mail server. POP3 relies on TCP and operates over port 110. With POP3, mail is delivered and stored on a mail server until it is downloaded by a user. A disadvantage of POP3 is that it typically does not allow users to save their messages on the server; because of this, IMAP is sometimes used. IMAP (Internet Message Access Protocol) IMAP is a retrieval protocol that was developed as a more sophisticated alternative to POP3. The single biggest advantage IMAP4 has over POP3 is that users can store messages on the mail server, rather than having to continually download them. Users can retrieve all or only a portion of any mail message. Users can review their messages and delete them while the messages remain on the server. Users can create sophisticated methods of organizing messages on the server. Users can share a mailbox in a central location. Disadvantages of IMAP are typically related to the fact that it requires more storage space on the server. Additional TCP/IP Utilities Nearly all TCP/IP utilities can be accessed from the command prompt on any type of server or client running TCP/IP. The syntax may differ depending on the OS of the client. Below is a list of additional TCP/IP utilities – research their use on your own! Ipconfig (Windows) & Ifconfig (Linux), Netstat, Nbtstat, Hostname, Host & Nslookup, Dig (Linux), Whois (Linux), Traceroute (Tracert), Mtr (my traceroute), Route
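    As a small illustration of the SMTP/MIME relationship described above (my own aside, not part of the chapter), a mail client simply hands a MIME-encoded message to an SMTP server on port 25. In C# with System.Net.Mail, using made-up host names and addresses, that looks roughly like this:

        using System.Net.Mail;

        class SmtpExample
        {
            static void Main()
            {
                // SMTP only transports the message; the attachment is what MIME encoding makes possible.
                var message = new MailMessage("alice@example.com", "bob@example.com",
                                              "Chapter 10 notes", "Subnetting summary attached.");
                message.Attachments.Add(new Attachment("notes.pdf")); // binary content, MIME-encoded on the wire

                using (var client = new SmtpClient("mail.example.com", 25)) // assumption: your outgoing SMTP server
                {
                    client.Send(message);
                }
            }
        }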

    Read the article

  • VNIC - New feature of AK8 - Working with VNICs

    - by Steve Tunstall
    One of the important new features of the AK8 code is the ability to use multiple IP addresses on the same physical network port. This feature is called VNICs, or Virtual NICs. This allows us to no longer "burn" a whole port in a cluster when one cluster peer owns a network port. Traditionally, we have had to leave Net0 empty on controller 2, because it was used for managing controller 1. Vise-versa for Net1 on Controller 1. Then, if you have data going over 10GigE ports, you probably only had half of your ports running at any given time, and the partner 10GigE port on the other controller just sat there, doing nothing, unless the first controller went down. What a waste. Those days are over.  I want to thank and give a big shout-out to our good partner, OnX Enterprise Solutions, for allowing me to come into their lab and play around with their 7320 to do this demo. They let me make a big mess of their lab for the day as I played around with VNICs. If you're looking for a partner who knows Oracle well and can also piece together a solution from multiple vendors to get you what you need, OnX is a good choice. If you would like to talk to your local OnX rep, you can contact Scott Gill at [email protected] and he can point you in the right direction for your area.  Here we go: Here is what your Datalinks window looks like BEFORE you upgrade to AK8. Here's what the same screen looks like after you upgrade. See the new box? So here is my current network setup. I have my 4 physical interfaces setup each with an IP address. If I ping them, no problems.  So I can ping 180, 181, 251, and 252. However, if I try to ping 240, it does not work, as the 240 address is not being used by any of these interfaces, right?Let's change that. Here, I'm going to make a new Datalink by clicking the Datalink "Plus sign" button. I will check the VNIC box and tell it to use igb2, even though another interface is already using it. Now, I will create a new Interface, and choose "v_dl2" for it's datalink. My new network screen looks like this. A few things to take note of here. First, when I click the "igb2" device, it only highlights dl2 and int2. It does not highlight v_dl2 or v_int2.I think it should, but OK, it looks like VNICs don't highlight when you click the device. Second, note how the underscore character in v_dl2 and v_int2 do not seem to show on this screen. You can see it plainly if you go in and edit them, but from here it looks like a space instead of an underscore. Just a cosmetic bug, but something to be aware of. Now, if I click the VNIC datalink "v_dl2", on the other hand, it DOES highlight the device it belongs to, as it should. Seen here: Note that it did not, however, highlight int2 with it, even though int2 is connected to igb2. That's because we clicked v_dl2, which int2 has nothing to do with. So I'm OK with that. So let's try pinging 240 now. Of course, it works great.  So I now make another VNIC, and call it v_dl3 using igb3, and v_int3 with an address of 241. I then setup three shares, using ports 251, 240, and 241.Remember that IP 251 and 240 both are using the same physical port of igb2, and IP 241 is using port igb3. Next, I copy a folder full of stuff over to all three shares at the same time. I have analytics going so I can see the traffic. My top chart is showing the logical interfaces, and the bottom chart is showing the physical ports.Sure enough, look at the igb2 and vnic1 interfaces. They equal the traffic going over the igb2 physical port on the second chart. 
    VNIC2, on the other hand, gets igb3 all to itself. This would work the same way with 10Gig or Infiniband ports. You can now have multiple IP addresses and even completely different subnets sharing the same physical ports. You may need to make route table entries for that. This allows us to use all of the ports you paid for with no more waste.  Very, very cool.  One small "bug" I found when doing this. It's really not a bug, it was designed to do this when VNICs were not around. But now that we have VNIC capability, they should probably change this. I've alerted the engineering team about this and they're looking into it, so perhaps it will be fixed in a later code release. Here it is. Remember when we made the new VNIC datalink, I specifically said to click on the "Plus Sign" button to create it? I don't always do that. I really like to use the drag-and-drop method to create my datalinks in the network screen. HOWEVER, if you were to do that for building a VNIC, it will mess you up a little. Watch this. Here, I'm dragging igb3 over to make a new datalink. igb3 is already being used by dl3, but I'm going to make this a VNIC, so who cares, right? Well, the ZFSSA does not KNOW you are going to make it a VNIC, now does it? So... it works as designed and REMOVES the igb3 device from the current dl3 datalink in the background. See how it's now missing? At the same time, the dl3 datalink choice is missing from my list of possible VNICs for me to choose from!!!! Hey!!! I wanted to pick dl3. Why isn't it on the list??? Well, it can't be on this list because dl3 no longer has a device associated with it. Bummer for you. When you click cancel, the device is still missing from dl3. The fix is easy. Just edit dl3 by clicking the pencil button, do absolutely nothing, and click "Apply". The device will magically come back. Now, make the VNIC datalink by clicking the "Plus Sign" button. Sure enough, once you check the VNIC box, dl3 is a valid choice. No problem.  That's it for now. Have fun with VNICs.

    Read the article

  • "dig +trace fqdn" and "dig fqdn" do not give the same result on a LAN with windows DNS server, why?

    - by Sulliwane
    in my company LAN I have a Ubuntu 14.04 server running in Virtualbox (as guest) on a Windows 7 (the host) with network interface bridged (so the Ubuntu server belongs to the LAN, with its ip: 192.168.1.85). I have a website on this server: mywebsite.com The gateway for the LAN to the internet is 192.168.1.1 (Cisco 1841)--188.188.188.254 as public IP. There is a Windows 2008 server that acts as DNS server and DHCP server on the LAN. I added a Forward zone "mywebsite.com" with A record - 192.168.1.85. Outside the LAN, mywebsite.com has public Dns records that point on the Cisco 1841 public IP (188.188.188.254) Now when I ping mywebsite.com from the lan, I quickly get 192.168.1.85. But when I'm connecting through the browser on the clients, it's not always fast. So I'm wondering: Are my requests really/directly resolved and forwarded to 192.168.1.85, OR are they sent out of the LAN, and then forwarded back to the CISCO public 188.188.188.254:80 and NAT to the Ubuntu server before being served ??? To try to answer this question, I looked for tracking the DNS request from my linux client on the LAN: v@v-ss9:~$ dig mywebsite.com ; <<>> DiG 9.9.5-3-Ubuntu <<>> mywebsite.com ;; global options: +cmd ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 24850 ;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1 ;; OPT PSEUDOSECTION: ; EDNS: version: 0, flags:; udp: 4000 ;; QUESTION SECTION: ;mywebsite.com. IN A ;; ANSWER SECTION: mywebsite.com. 3600 IN A 192.168.1.85 ;; Query time: 1 msec ;; SERVER: 127.0.1.1#53(127.0.1.1) ;; WHEN: Fri Aug 22 09:50:16 CST 2014 ;; MSG SIZE rcvd: 66 This answer looks right: 192.168.1.85. But then look at this: v@v-ss9:~$ dig +trace mywebsite.com ; <<>> DiG 9.9.5-3-Ubuntu <<>> +trace mywebsite.com ;; global options: +cmd . 12955 IN NS h.gtld-servers.net. . 12955 IN NS g.gtld-servers.net. . 12955 IN NS m.gtld-servers.net. . 12955 IN NS i.gtld-servers.net. . 12955 IN NS l.gtld-servers.net. . 12955 IN NS k.gtld-servers.net. . 12955 IN NS j.gtld-servers.net. . 12955 IN NS d.gtld-servers.net. . 12955 IN NS b.gtld-servers.net. . 12955 IN NS c.gtld-servers.net. . 12955 IN NS a.gtld-servers.net. . 12955 IN NS e.gtld-servers.net. . 12955 IN NS f.gtld-servers.net. ;; Received 516 bytes from 127.0.1.1#53(127.0.1.1) in 18 ms mywebsite.com. 172800 IN NS ns3.rmi.fr. mywebsite.com. 172800 IN NS ns4.rmi.fr. CK0POJMG874LJREF7EFN8430QVIT8BSM.com. 86400 IN NSEC3 1 1 0 - CK0QFMDQRCSRU0651QLVA1JQB21IF7UR NS SOA RRSIG DNSKEY NSEC3PARAM CK0POJMG874LJREF7EFN8430QVIT8BSM.com. 86400 IN RRSIG NSEC3 8 2 86400 20140825045016 20140818034016 6122 com. Imq8K9xlvFXlB4IjUkdxOc5YHoTEhqSQUlRSJ9QCIhd9wzGpWJ54AfVf WJ0SUKThalpzqS0cXdLGtNmuYgqLfwUMjpUlT4c+zJyx7I4QMPLImQZh Ov0xy3mUr7dLlymAJYGs9dLI2IaheLvpKTBwaV1gAvo8QEkU8VRiJ7gW 9dk= U0PIA23FHMVPTKSDHC9PJ1BEA9SIB65R.com. 86400 IN NSEC3 1 1 0 - U0PL33R61V6TCCPBS1171PROP57ASRD9 NS DS RRSIG U0PIA23FHMVPTKSDHC9PJ1BEA9SIB65R.com. 86400 IN RRSIG NSEC3 8 2 86400 20140825043502 20140818032502 6122 com. qsC5sJbwklao+OedCHpcYo56aQaY0N+7peKmPu8szvjAQoJFRWyuDfAh Nw/gvHXEMzG7tYLriQGVfsiK8GZdPXyG4Ghe1MNN4jOZnSahkT5LjlqL 5QyGC0QiClRMPDAYjUOFGQDkjOJcJYvTNkEyXC2BEpfLI5SwCbYqwqg3 RkE= ;; Received 585 bytes from 192.41.162.30#53(l.gtld-servers.net) in 297 ms mywebsite.com. 86400 IN A 188.188.188.254 mywebsite.com. 86400 IN NS ns3.rmi.fr. mywebsite.com. 86400 IN NS ns4.rmi.fr. ;; Received 204 bytes from 212.51.161.18#53(ns3.rmi.fr) in 310 ms Here I get my CISCO public IP 188.188.188.254!!! Is it normal? 
How to know if my browser (from the LAN) is really directly communicating with 192.168.1.85 when using mywebsite.com? Thank you for your help.
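    One quick sanity check (my suggestion, not from the question) is to ask the operating system's resolver from a machine on the LAN; whatever address comes back is what a browser on that machine will connect to, regardless of what dig +trace sees when it walks down from the root servers. A minimal .NET sketch, assuming it runs on a LAN client that uses the Windows DNS server:

        using System;
        using System.Net;

        class ResolveCheck
        {
            static void Main()
            {
                // If this prints 192.168.1.85, the browser on this client talks to the
                // Ubuntu server directly; if it prints the public IP, traffic goes out and back in.
                foreach (IPAddress address in Dns.GetHostAddresses("mywebsite.com"))
                {
                    Console.WriteLine(address);
                }
            }
        }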

    Read the article

  • Mac OS X Server Open Directory does not push Software Update settings to clients

    - by joxl
    I have an Xserve G5 running Mac OS X Server 10.5.8 configured as an Open Directory master. I have also enabled and configured Software Update service on the machine. The SUS is configured to serve Tiger, Leopard and Snow Leopard clients (see http://discussions.apple.com/message.jspa?messageID=10297359#10297359) The clients bound to the OD are a variety of Mac's running OS X 10.4, 10.5 or 10.6. In Workgroup Manager, I have created 3 machine groups for each client OS. Each group is configured with a custom SUS URL, and the managed client computers are members accordingly (see http://discussions.apple.com/thread.jspa?messageID=10493154#10493154) My problem is that the server pushes the SUS settings to some of the client machines, but not all. When I first configured all this stuff on the server (a few weeks ago) I was closely monitoring a few of the client machines to confirm that they received the custom settings. I noticed that some of the clients (10.4/5/6 alike) seemed to get the settings immediately, others didn't show the new settings until after a reboot. As I said, results are mixed across OS's, but some clients will not "sync" at all. My immediate thought was to unbind/rebind the problematic machines. I did this on several client computers with no success. For example, today I was working on one of the Tiger clients. I noticed it was not pointed at my local SUS, so I checked the OD binding; it was fine. Just to be sure I unbound the machine. Next, I checked WM and confirmed the computer record was gone. I noticed the machine group still had a residual (broken?) member from the unbound client; I manually removed this. Finally, I re-bound the client to OD and re-added the machine to it's correct group in WM. Unfortunately, the client still pings apple's SUS for updates. Just to play it safe I rebooted the client, but to no avail, it will not see my local SUS. To confirm that there is nothing wrong with the server, or the client's connection to it, forcefully pointed the machine at my SUS: sudo defaults write /Library/Preferences/com.apple.SoftwareUpdate CatalogURL "$LOCAL_SUS_URL" and the machine successfully updated off my local server. Great, successful updates, but problem not solved. I've done exhaustive reading on discussions.apple.com (not saying I read everything, I'm just saying I have read a lot) without a good answer. The discouraging thing is that a lot of OD problems I've read about only result in the sysadmin completely reinstalling the server, or OD, or some other similarly heavy-handed operation. At this point, I am not willing to go that route. I still have hope that I can find the reason for this flaky behavior. If anyone can point me in a helpful direction it would be much appreciated. EDIT: Indeed, some files are being pushed to the client: # from client machine: $ sudo find /Library -type f -name com.apple.SoftwareUpdate.plist /Library/Managed Preferences/com.apple.SoftwareUpdate.plist /Library/Managed Preferences/username/com.apple.SoftwareUpdate.plist /Library/Preferences/com.apple.SoftwareUpdate.plist A few weeks ago, prior to my (previously mentioned) modifications, the SUS was still running "stock". Which meant it could not serve SL (10.6) machines. At that time, the Software Update settings were setup in WM under User Groups. This didn't make any sense because some users work on multiple machines with different OS's. Before creating Machine Groups in WM, I deleted all the SU settings from the User Group Preferences. 
This just makes the whole thing more confusing, because when I see a file here: /Library/Managed Preferences/username/com.apple.SoftwareUpdate.plist I assume it's still remaining from the "old" settings, because I wouldn't think a Machine Setting belongs there. Despite all the com.apple.SoftwareUpdate.plist hanging around under the Managed Preferences, why does the client machine still call home to Apple and not my SUS? # on client machine: $ date Tue Jan 25 17:01:46 EST 2011 $ softwareupdate --list Software Update Tool Copyright 2002-2005 Apple No new software available. switch terminals... # on server: $ tail -n1 /var/log/swupd/swupd_access_log 10.x.x.x - - [25/Jan/2011:15:54:29 -0500] XXXX POST "/cgi-bin/SoftwareUpdateServerStats" 200 13 ... Notice the date of the client softwareupdate and the latest access to the SUS server; the server never heard a peep from that client.

    Read the article

  • Using LINQ Distinct: With an Example on ASP.NET MVC SelectListItem

    - by Joe Mayo
    One of the things that might be surprising in the LINQ Distinct standard query operator is that it doesn’t automatically work properly on custom classes. There are reasons for this, which I’ll explain shortly. The example I’ll use in this post focuses on pulling a unique list of names to load into a drop-down list. I’ll explain the sample application, show you typical first shot at Distinct, explain why it won’t work as you expect, and then demonstrate a solution to make Distinct work with any custom class. The technologies I’m using are  LINQ to Twitter, LINQ to Objects, Telerik Extensions for ASP.NET MVC, ASP.NET MVC 2, and Visual Studio 2010. The function of the example program is to show a list of people that I follow.  In Twitter API vernacular, these people are called “Friends”; though I’ve never met most of them in real life. This is part of the ubiquitous language of social networking, and Twitter in particular, so you’ll see my objects named accordingly. Where Distinct comes into play is because I want to have a drop-down list with the names of the friends appearing in the list. Some friends are quite verbose, which means I can’t just extract names from each tweet and populate the drop-down; otherwise, I would end up with many duplicate names. Therefore, Distinct is the appropriate operator to eliminate the extra entries from my friends who tend to be enthusiastic tweeters. The sample doesn’t do anything with the drop-down list and I leave that up to imagination for what it’s practical purpose could be; perhaps a filter for the list if I only want to see a certain person’s tweets or maybe a quick list that I plan to combine with a TextBox and Button to reply to a friend. When the program runs, you’ll need to authenticate with Twitter, because I’m using OAuth (DotNetOpenAuth), for authentication, and then you’ll see the drop-down list of names above the grid with the most recent tweets from friends. Here’s what the application looks like when it runs: As you can see, there is a drop-down list above the grid. The drop-down list is where most of the focus of this article will be. There is some description of the code before we talk about the Distinct operator, but we’ll get there soon. This is an ASP.NET MVC2 application, written with VS 2010. 
Here’s the View that produces this screen: <%@ Page Language="C#" MasterPageFile="~/Views/Shared/Site.Master" Inherits="System.Web.Mvc.ViewPage<TwitterFriendsViewModel>" %> <%@ Import Namespace="DistinctSelectList.Models" %> <asp:Content ID="Content1" ContentPlaceHolderID="TitleContent" runat="server">     Home Page </asp:Content><asp:Content ID="Content2" ContentPlaceHolderID="MainContent" runat="server">     <fieldset>         <legend>Twitter Friends</legend>         <div>             <%= Html.DropDownListFor(                     twendVM => twendVM.FriendNames,                     Model.FriendNames,                     "<All Friends>") %>         </div>         <div>             <% Html.Telerik().Grid<TweetViewModel>(Model.Tweets)                    .Name("TwitterFriendsGrid")                    .Columns(cols =>                     {                         cols.Template(col =>                             { %>                                 <img src="<%= col.ImageUrl %>"                                      alt="<%= col.ScreenName %>" />                         <% });                         cols.Bound(col => col.ScreenName);                         cols.Bound(col => col.Tweet);                     })                    .Render(); %>         </div>     </fieldset> </asp:Content> As shown above, the Grid is from Telerik’s Extensions for ASP.NET MVC. The first column is a template that renders the user’s Avatar from a URL provided by the Twitter query. Both the Grid and DropDownListFor display properties that are collections from a TwitterFriendsViewModel class, shown below: using System.Collections.Generic; using System.Web.Mvc; namespace DistinctSelectList.Models { /// /// For finding friend info on screen /// public class TwitterFriendsViewModel { /// /// Display names of friends in drop-down list /// public List FriendNames { get; set; } /// /// Display tweets in grid /// public List Tweets { get; set; } } } I created the TwitterFreindsViewModel. The two Lists are what the View consumes to populate the DropDownListFor and Grid. Notice that FriendNames is a List of SelectListItem, which is an MVC class. Another custom class I created is the TweetViewModel (the type of the Tweets List), shown below: namespace DistinctSelectList.Models { /// /// Info on friend tweets /// public class TweetViewModel { /// /// User's avatar /// public string ImageUrl { get; set; } /// /// User's Twitter name /// public string ScreenName { get; set; } /// /// Text containing user's tweet /// public string Tweet { get; set; } } } The initial Twitter query returns much more information than we need for our purposes and this a special class for displaying info in the View.  Now you know about the View and how it’s constructed. Let’s look at the controller next. The controller for this demo performs authentication, data retrieval, data manipulation, and view selection. I’ll skip the description of the authentication because it’s a normal part of using OAuth with LINQ to Twitter. Instead, we’ll drill down and focus on the Distinct operator. 
However, I’ll show you the entire controller, below,  so that you can see how it all fits together: using System.Linq; using System.Web.Mvc; using DistinctSelectList.Models; using LinqToTwitter; namespace DistinctSelectList.Controllers { [HandleError] public class HomeController : Controller { private MvcOAuthAuthorization auth; private TwitterContext twitterCtx; /// /// Display a list of friends current tweets /// /// public ActionResult Index() { auth = new MvcOAuthAuthorization(InMemoryTokenManager.Instance, InMemoryTokenManager.AccessToken); string accessToken = auth.CompleteAuthorize(); if (accessToken != null) { InMemoryTokenManager.AccessToken = accessToken; } if (auth.CachedCredentialsAvailable) { auth.SignOn(); } else { return auth.BeginAuthorize(); } twitterCtx = new TwitterContext(auth); var friendTweets = (from tweet in twitterCtx.Status where tweet.Type == StatusType.Friends select new TweetViewModel { ImageUrl = tweet.User.ProfileImageUrl, ScreenName = tweet.User.Identifier.ScreenName, Tweet = tweet.Text }) .ToList(); var friendNames = (from tweet in friendTweets select new SelectListItem { Text = tweet.ScreenName, Value = tweet.ScreenName }) .Distinct() .ToList(); var twendsVM = new TwitterFriendsViewModel { Tweets = friendTweets, FriendNames = friendNames }; return View(twendsVM); } public ActionResult About() { return View(); } } } The important part of the listing above are the LINQ to Twitter queries for friendTweets and friendNames. Both of these results are used in the subsequent population of the twendsVM instance that is passed to the view. Let’s dissect these two statements for clarification and focus on what is happening with Distinct. The query for friendTweets gets a list of the 20 most recent tweets (as specified by the Twitter API for friend queries) and performs a projection into the custom TweetViewModel class, repeated below for your convenience: var friendTweets = (from tweet in twitterCtx.Status where tweet.Type == StatusType.Friends select new TweetViewModel { ImageUrl = tweet.User.ProfileImageUrl, ScreenName = tweet.User.Identifier.ScreenName, Tweet = tweet.Text }) .ToList(); The LINQ to Twitter query above simplifies what we need to work with in the View and the reduces the amount of information we have to look at in subsequent queries. Given the friendTweets above, the next query performs another projection into an MVC SelectListItem, which is required for binding to the DropDownList.  This brings us to the focus of this blog post, writing a correct query that uses the Distinct operator. The query below uses LINQ to Objects, querying the friendTweets collection to get friendNames: var friendNames = (from tweet in friendTweets select new SelectListItem { Text = tweet.ScreenName, Value = tweet.ScreenName }) .Distinct() .ToList(); The above implementation of Distinct seems normal, but it is deceptively incorrect. After running the query above, by executing the application, you’ll notice that the drop-down list contains many duplicates.  This will send you back to the code scratching your head, but there’s a reason why this happens. To understand the problem, we must examine how Distinct works in LINQ to Objects. Distinct has two overloads: one without parameters, as shown above, and another that takes a parameter of type IEqualityComparer<T>.  In the case above, no parameters, Distinct will call EqualityComparer<T>.Default behind the scenes to make comparisons as it iterates through the list. 
    You don't have problems with the built-in types, such as string, int, DateTime, etc., because they all implement IEquatable<T>. However, many .NET Framework classes, such as SelectListItem, don't implement IEquatable<T>. So, what happens is that EqualityComparer<T>.Default results in a call to Object.Equals, which performs reference equality on reference type objects. You don't have this problem with value types because the default implementation of Object.Equals is bitwise equality. However, most of your projections that use Distinct are on classes, just like the SelectListItem used in this demo application. So, the reason why Distinct didn't produce the results we wanted is that we used a type that doesn't define its own equality, and Distinct used the default reference equality. This resulted in all objects being included in the results because they are all separate instances in memory with unique references. As you might have guessed, the solution to the problem is to use the second overload of Distinct that accepts an IEqualityComparer<T> instance. If you were projecting into your own custom type, you could make that type implement IEqualityComparer<T>, but SelectListItem belongs to the .NET Framework Class Library.  Therefore, the solution is to create a custom type that implements IEqualityComparer<T>, as in the SelectListItemComparer class, shown below: using System.Collections.Generic; using System.Web.Mvc; namespace DistinctSelectList.Models { public class SelectListItemComparer : EqualityComparer<SelectListItem> { public override bool Equals(SelectListItem x, SelectListItem y) { return x.Value.Equals(y.Value); } public override int GetHashCode(SelectListItem obj) { return obj.Value.GetHashCode(); } } } The SelectListItemComparer class above doesn't implement IEqualityComparer<SelectListItem>, but rather derives from EqualityComparer<SelectListItem>. Microsoft recommends this approach for consistency with the behavior of generic collection classes. However, if your custom type already derives from a base class, go ahead and implement IEqualityComparer<T>, which will still work. EqualityComparer<T> is an abstract class that implements IEqualityComparer<T> with Equals and GetHashCode abstract methods. For the purposes of this application, the SelectListItem.Value property is sufficient to determine if two items are equal. Since SelectListItem.Value is type string, the code delegates equality to the string class. The code also delegates the GetHashCode operation to the string class. You might have other criteria in your own object and would need to define what it means for your object to be equal. Now that we have an IEqualityComparer<SelectListItem>, let's fix the problem. The code below modifies the query where we want distinct values: var friendNames = (from tweet in friendTweets select new SelectListItem { Text = tweet.ScreenName, Value = tweet.ScreenName }) .Distinct(new SelectListItemComparer()) .ToList(); Notice how the code above passes a new instance of SelectListItemComparer as the parameter to the Distinct operator. Now, when you run the application, the drop-down list will behave as you expect, showing only a unique set of names. In addition to Distinct, other LINQ Standard Query Operators have overloads that accept an IEqualityComparer<T>. You can use the same techniques as shown here, with SelectListItemComparer, with those other operators as well. 
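    For example (my illustration, not from the original post), the same comparer instance works anywhere LINQ to Objects asks for an IEqualityComparer<T>; here followerNames and someItem are hypothetical, while friendNames is the list built above:

        var comparer = new SelectListItemComparer();

        // Union, Except and Contains all have overloads that take an IEqualityComparer<T>.
        var allNames = friendNames.Union(followerNames, comparer).ToList();   // de-duplicated merge
        var newNames = followerNames.Except(friendNames, comparer).ToList();  // followers who are not already friends
        bool alreadyListed = friendNames.Contains(someItem, comparer);        // value-based membership test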
Now you know how to resolve problems with getting Distinct to work properly and also have a way to fix problems with other operators that require equality comparisons. @JoeMayo

    Read the article

  • Quartz.Net Writing your first Hello World Job

    - by Tarun Arora
    In this blog post I’ll be covering, 01: A few things to consider before you should schedule a Job using Quartz.Net 02: Setting up your solution to use Quartz.Net API 03: Quartz.Net configuration 04: Writing & scheduling a hello world job with Quartz.Net If you are new to Quartz.Net I would recommend going through, A brief introduction to Quartz.net Walkthrough of Installing & Testing Quartz.Net as a Windows Service A few things to consider before you should schedule a Job using Quartz.Net - An instance of the scheduler service - A trigger - And last but not the least a job For example, if I wanted to schedule a script to run on the server, I should be jotting down answers to the below questions, a. Considering there are multiple machines set up with Quartz.Net windows service, how can I choose the instance of Quartz.Net where I want my script to be run b. What will trigger the execution of the job c. How often do I want the job to run d. Do I want the job to run right away or start after a delay or may be have the job start at a specific time e. What will happen to my job if Quartz.Net windows service is reset f. Do I want multiple instances of this job to run concurrently g. Can I pass parameters to the job being executed by Quartz.Net windows service Setting up your solution to use Quartz.Net API 1. Create a new C# Console Application project and call it “HelloWorldQuartzDotNet” and add a reference to Quartz.Net.dll. I use the NuGet Package Manager to add the reference. This can be done by right clicking references and choosing Manage NuGet packages, from the Nuget Package Manager choose Online from the left panel and in the search box on the right search for Quartz.Net. Click Install on the package “Quartz” (Screen shot below). 2. Right click the project and choose Add New Item. Add a new Interface and call it ‘IScheduledJob.cs’. Mark the Interface public and add the signature for Run. Your interface should look like below. namespace HelloWorldQuartzDotNet { public interface IScheduledJob { void Run(); } }   3. Right click the project and choose Add new Item. Add a class and call it ‘Scheduled Job’. Use this class to implement the interface ‘IscheduledJob.cs’. Look at the pseudo code in the implementation of the Run method. using System; namespace HelloWorldQuartzDotNet { class ScheduledJob : IScheduledJob { public void Run() { // Get an instance of the Quartz.Net scheduler // Define the Job to be scheduled // Associate a trigger with the Job // Assign the Job to the scheduler throw new NotImplementedException(); } } }   I’ll get into the implementation in more detail, but let’s look at the minimal configuration a sample configuration file for Quartz.Net service to work. 
Quartz.Net configuration In the App.Config file copy the below configuration <?xml version="1.0" encoding="utf-8" ?> <configuration> <configSections> <section name="quartz" type="System.Configuration.NameValueSectionHandler, System, Version=1.0.5000.0,Culture=neutral, PublicKeyToken=b77a5c561934e089" /> </configSections> <quartz> <add key="quartz.scheduler.instanceName" value="ServerScheduler" /> <add key="quartz.threadPool.type" value="Quartz.Simpl.SimpleThreadPool, Quartz" /> <add key="quartz.threadPool.threadCount" value="10" /> <add key="quartz.threadPool.threadPriority" value="2" /> <add key="quartz.jobStore.misfireThreshold" value="60000" /> <add key="quartz.jobStore.type" value="Quartz.Simpl.RAMJobStore, Quartz" /> </quartz> </configuration>   As you can see in the configuration above, I have included the instance name of the quartz scheduler, the thread pool type, count and priority, the job store type has been defined as RAM. You have the option of configuring that to ADO.NET JOB store. More details here. Writing & scheduling a hello world job with Quartz.Net Once fully implemented the ScheduleJob.cs class should look like below. I’ll walk you through the details of the implementation… - GetScheduler() uses the name of the quartz.net and listens on localhost port 555 to try and connect to the quartz.net windows service. - Run() an attempt is made to start the scheduler in case it is in standby mode - I have defined a job “WriteHelloToConsole” (that’s the name of the job), this job belongs to the group “IT”. Think of group as a logical grouping feature. It helps you bucket jobs into groups. Quartz.Net gives you the ability to pause or delete all jobs in a group (We’ll look at that in some of the future posts). I have requested for recovery of this job in case the quartz.net service fails over to the other node in the cluster. The jobType is “HelloWorldJob”. This is the class that would be called to execute the job. More details on this below… - I have defined a trigger for my job. I have called the trigger “WriteHelloToConsole”. The Trigger works on the cron schedule “0 0/1 * 1/1 * ? *” which means fire the job once every minute. I would recommend that you look at www.cronmaker.com a free and great website to build and parse cron expressions. The trigger has a priority 1. So, if two jobs are run at the same time, this trigger will have high priority and will be run first. - Use the Job and Trigger to schedule the job. This method returns a datetime offeset. It is possible to see the next fire time for the job from this variable. using System.Collections.Specialized; using System.Configuration; using Quartz; using System; using Quartz.Impl; namespace HelloWorldQuartzDotNet { class ScheduledJob : IScheduledJob { public void Run() { // Get an instance of the Quartz.Net scheduler var schd = GetScheduler(); // Start the scheduler if its in standby if (!schd.IsStarted) schd.Start(); // Define the Job to be scheduled var job = JobBuilder.Create<HelloWorldJob>() .WithIdentity("WriteHelloToConsole", "IT") .RequestRecovery() .Build(); // Associate a trigger with the Job var trigger = (ICronTrigger)TriggerBuilder.Create() .WithIdentity("WriteHelloToConsole", "IT") .WithCronSchedule("0 0/1 * 1/1 * ? 
*") // visit http://www.cronmaker.com/ Queues the job every minute .WithPriority(1) .Build(); // Assign the Job to the scheduler var schedule = schd.ScheduleJob(job, trigger); Console.WriteLine("Job '{0}' scheduled for '{1}'", "", schedule.ToString("r")); } // Get an instance of the Quartz.Net scheduler private static IScheduler GetScheduler() { try { var properties = new NameValueCollection(); properties["quartz.scheduler.instanceName"] = "ServerScheduler"; // set remoting expoter properties["quartz.scheduler.proxy"] = "true"; properties["quartz.scheduler.proxy.address"] = string.Format("tcp://{0}:{1}/{2}", "localhost", "555", "QuartzScheduler"); // Get a reference to the scheduler var sf = new StdSchedulerFactory(properties); return sf.GetScheduler(); } catch (Exception ex) { Console.WriteLine("Scheduler not available: '{0}'", ex.Message); throw; } } } }   The above highlighted values have been taken from the Quartz.config file, this file is available in the Quartz.net server installation directory. Implementation of my HelloWorldJob Class below. The HelloWorldJob class gets called to execute the job “WriteHelloToConsole” using the once every minute trigger set up for this job. The HelloWorldJob is a class that implements the interface IJob. I’ll walk you through the details of the implementation… - context is passed to the method execute by the quartz.net scheduler service. This has everything you need to pull out the job, trigger specific information. - for example. I have pulled out the value of the jobKey name, the fire time and next fire time. using Quartz; using System; namespace HelloWorldQuartzDotNet { class HelloWorldJob : IJob { public void Execute(IJobExecutionContext context) { try { Console.WriteLine("Job {0} fired @ {1} next scheduled for {2}", context.JobDetail.Key, context.FireTimeUtc.Value.ToString("r"), context.NextFireTimeUtc.Value.ToString("r")); Console.WriteLine("Hello World!"); } catch (Exception ex) { Console.WriteLine("Failed: {0}", ex.Message); } } } }   I’ll add a call to call the scheduler in the Main method in Program.cs using System; using System.Threading; namespace HelloWorldQuartzDotNet { class Program { static void Main(string[] args) { try { var sj = new ScheduledJob(); sj.Run(); Thread.Sleep(10000 * 10000); } catch (Exception ex) { Console.WriteLine("Failed: {0}", ex.Message); } } } }   This was third in the series of posts on enterprise scheduling using Quartz.net, in the next post I’ll be covering how to pass parameters to the scheduled task scheduled on Quartz.net windows service. Thank you for taking the time out and reading this blog post. If you enjoyed the post, remember to subscribe to http://feeds.feedburner.com/TarunArora. Stay tuned!

    Read the article

  • Help with SVN+SSH permissions with CentOS/WHM setup

    - by Furiam
    Hi Folks, I'll try my best to explain how I'm trying to set up this system. Imagine a production server running WHM with various sites. We'll call these sites site1, site2, site3. Now, with the WHM setup, each site has a user/group defined for it; we'll keep these users/groups called site1, site2 for simplicity reasons. Updating these sites is accomplished using SVN, through the use of a post-commit script to auto update these sites (with .svn blocked through the Apache configuration). There are two regular maintainers of these sites, we'll call them Joe and Bob. Joe and Bob both have command-line access to the server through their respective limited accounts. So I've done the easy bit and managed to get SVN working with these "maintainers", so that when an SVN commit occurs, the changes are checked out and go live perfectly. Here's the caveat, and ultimately my problem: user permissions. Through my testing of this setup, I've only managed to get it working by giving whatever is being updated permissions of 777, so that Joe and Bob both have read and write access to the webfront directories for each of the sites. So, an example of how it's set up now: Joe and Bob both belong to a group called "Dev". I have the master /svn folders set up with both read and write access for this group, and it works great. The post-commit trigger updates the site, and then sets 777 on each file within the webfront. I then changed this to try and factor in group permission updates, instead of straight 777. Each file in /home/site1/public_html initially gets a chmod of 664, and each folder 775, which looks a little something like this:

drwxrwxr-x .
drwxrwxr-x ..
drwxrwxr-x site1 site1 my_test_folder
-rw-rw-r-- site1 site1 my_test_file

So site1 is the owner and group owner of those files and folders. I then added site1 to Joe and Bob's secondary groups so that the SVN update will correctly allow access to these files. Herein lies the problem. When I wish to add a file or folder to /home/site1, say Bob's file, it then looks like this:

drwxrwxr-x .
drwxrwxr-x ..
drwxr-xr-x Bob dev bobs_folder
drwxrwxr-x site1 site1 my_test_folder
-rw-rw-r-- Bob dev bobs_file
-rw-rw-r-- site1 site1 my_test_file

How can I get it so that, with the set of user permissions Bob has available, the owner and group owner of that file are changed to reflect "site1" "site1"? As Bob belongs to Dev I can set the permissions correctly with chmod, but it appears chgrp is throwing back operation errors. Now, this was long winded enough to give an overview of exactly what I'm trying to accomplish, just in case I'm going about this arse-over-tit and there's a far easier solution. Here are my goals:
- Two people able to update multiple user accounts, given the structure of WHM.
- Maintain the master user/group ownership of files and folders as the original site account, and not the account of the person doing the update.
- I like the security of SVN+SSH over just SVN, and I don't want to run all this over root.
I hope this made sense, and thanks in advance :)

    Read the article

  • Software Engineering Practices &ndash; Different Projects should have different maturity levels

    - by Dylan Smith
    I’ve had a lot of discussions at the office lately about the drastically different sets of software engineering practices used on our various projects, if what we are doing is appropriate, and what factors should you be considering when determining what practices are most appropriate in a given context. I wanted to write up my thoughts in a little more detail on this subject, so here we go: If you compare any two software projects (specifically comparing their codebases) you’ll often see very different levels of maturity in the software engineering practices employed. By software engineering practices, I’m specifically referring to the quality of the code and the amount of technical debt present in the project. Things such as Test Driven Development, Domain Driven Design, Behavior Driven Development, proper adherence to the SOLID principles, etc. are all practices that you would expect at the mature end of the spectrum. At the other end of the spectrum would be the quick-and-dirty solutions that are done using something like an Access Database, Excel Spreadsheet, or maybe some quick “drag-and-drop coding”. For this blog post I’m going to refer to this as the Software Engineering Maturity Spectrum (SEMS). I believe there is a time and a place for projects at every part of that SEMS. The risks and costs associated with under-engineering solutions have been written about a million times over so I won’t bother going into them again here, but there are also (unnecessary) costs with over-engineering a solution. Sometimes putting multiple layers, and IoC containers, and abstracting out the persistence, etc is complete overkill if a one-time use Access database could solve the problem perfectly well. A lot of software developers I talk to seem to automatically jump to the very right-hand side of this SEMS in everything they do. A common rationalization I hear is that it may seem like a small trivial application today, but these things always grow and stick around for many years, then you’re stuck maintaining a big ball of mud. I think this is a cop-out. Sure you can’t always anticipate how an application will be used or grow over its lifetime (can you ever??), but that doesn’t mean you can’t manage it and evolve the underlying software architecture as necessary (even if that means having to toss the code out and re-write it at some point…maybe even multiple times). My thoughts are that we should be making a conscious decision around the start of each project approximately where on the SEMS we want the project to exist. I believe this decision should be based on 3 factors: 1. Importance - How important to the business is this application? What is the impact if the application were to suddenly stop working? 2. Complexity - How complex is the application functionality? 3. Life-Expectancy - How long is this application expected to be in use? Is this a one-time use application, does it fill a short-term need, or is it more strategic and is expected to be in-use for many years to come? Of course this isn’t an exact science. You can’t say that Project X should be at the 73% mark on the SEMS and expect that to be helpful. My point is not that you need to precisely figure out what point on the SEMS the project should be at then translate that into some prescriptive set of practices and techniques you should be using. 
Rather my point is that we need to be aware that there is a spectrum, and that not everything is going to be (or should be) at the edges of that spectrum; indeed a large number of projects should probably fall somewhere within the middle, and different projects should adopt a different level of software engineering practices and maturity levels based on the needs of that project.

To give an example of this way of thinking from my day job: Every couple of years my company plans and hosts a large event where ~400 of our customers all fly in to one location for a multi-day event with various activities. We have some staff whose job it is to organize the logistics of this event, which includes tracking which flights everybody is booked on, arranging for transportation to/from airports, arranging for hotel rooms, name tags, etc. The last time we arranged this event all these various pieces of data were tracked in separate spreadsheets, and reconciliation and cross-referencing of all the data was literally done by hand using printed copies of the spreadsheets and several people sitting around a table going down each list row by row. Obviously there is some room for improvement in how we are using software to manage the event’s logistics.

The next time this event occurs we plan to provide the event planning staff with a more intelligent tool (either an Excel spreadsheet or probably an Access database) that can track all the information in one location and make sure that the various pieces of data are properly linked together (so for example if a person cancels you only need to delete them from one place, and not a dozen separate lists). This solution would fall at or near the very left end of the SEMS, meaning that we will just quickly create something with very little attention paid to using mature software engineering practices. If we examine this project against the 3 criteria I listed above for determining its place within the SEMS we can see why:

Importance – If this application were to stop working the business doesn’t grind to a halt, revenue doesn’t stop, and in fact our customers wouldn’t even notice since it isn’t a customer facing application. The impact would simply be more work for our event planning staff as they revert back to the previous way of doing things (assuming we don’t have any data loss).

Complexity – The use cases for this project are pretty straightforward. It simply needs to manage several lists of data, and link them together appropriately. Precisely the task that Access (and/or Excel) can do with minimal custom development required.

Life-Expectancy – For this specific project we’re only planning to create something to be used for the one event (we only hold these events every 2 years). If it works well this may change (see below).

Let’s assume we hack something out quickly and it works great when we plan the next event. We may decide that we want to make some tweaks to the tool and adopt it for planning all future events of this nature. In that case we should examine where the current application is on the SEMS, and make a conscious decision whether something needs to be done to move it further to the right based on the new objectives and goals for this application. This may mean scrapping the Access database and re-writing it as an actual web or windows application. In this case, the life-expectancy changed, but let’s assume the importance and complexity didn’t change all that much. 
We can still probably get away with not adopting a lot of the so-called “best practices”. For example, we can probably still use some of the RAD tooling available and might have an Autonomous View style design that connects directly to the database and binds to typed datasets (we might even choose to simply leave it as an access database and continue using it; this is a decision that needs to be made on a case-by-case basis). At Anvil Digital we have aspirations to become a primarily product-based company. So let’s say we use this tool to plan a handful of events internally, and everybody loves it. Maybe a couple years down the road we decide we want to package the tool up and sell it as a product to some of our customers. In this case the project objectives/goals change quite drastically. Now the tool becomes a source of revenue, and the impact of it suddenly stopping working is significantly less acceptable. Also as we hold focus groups, and gather feedback from customers and potential customers there’s a pretty good chance the feature-set and complexity will have to grow considerably from when we were using it only internally for planning a small handful of events for one company. In this fictional scenario I would expect the target on the SEMS to jump to the far right. Depending on how we implemented the previous release we may be able to refactor and evolve the existing codebase to introduce a more layered architecture, a robust set of automated tests, introduce a proper ORM and IoC container, etc. More likely in this example the jump along the SEMS would be so large we’d probably end up scrapping the current code and re-writing. Although, if it was a slow phased roll-out to only a handful of customers, where we collected feedback, made some tweaks, and then rolled out to a couple more customers, we may be able to slowly refactor and evolve the code over time rather than tossing it out and starting from scratch. The key point I’m trying to get across is not that you should be throwing out your code and starting from scratch all the time. But rather that you should be aware of when and how the context and objectives around a project changes and periodically re-assess where the project currently falls on the SEMS and whether that needs to be adjusted based on changing needs. Note: There is also the idea of “spectrum decay”. Since our industry is rapidly evolving, what we currently accept as mature software engineering practices (the right end of the SEMS) probably won’t be the same 3 years from now. If you have a project that you were to assess at somewhere around the 80% mark on the SEMS today, but don’t touch the code for 3 years and come back and re-assess its position, it will almost certainly have changed since the right end of the SEMS will have moved farther out (maybe the project is now only around 60% due to decay). Developer Skills Another important aspect to this whole discussion is around the skill sets of your architects and lead developers. When talking about the progression of a developers skills from junior->intermediate->senior->… they generally start by only being able to write code that belongs on the left side of the SEMS and as they gain more knowledge and skill they become capable of working at a higher and higher level along the SEMS. 
We all realize that the learning never stops, but eventually you’ll get to the point where you can comfortably develop at the right end of the SEMS (the exact practices and techniques that translates to are constantly changing, but that’s not the point here). A critical skill that I’d love to see more evidence of in our industry is the most senior guys not only being able to work at the right end of the SEMS, but more importantly being able to consciously work at any point along the SEMS as project needs dictate. An even more valuable skill would be making the conscious decision to move a project's code further right on the SEMS (based on changing needs) and doing so in an incremental manner without having to start from scratch. An exercise that I’m planning to go through with all of our projects here at Anvil in the near future is to map out where I believe each project currently falls within this SEMS, where I believe the project *should* be on the SEMS based on the business needs, and for those that don’t match up (i.e. most of them) come up with a plan to improve the situation.

    Read the article

  • AT91SAM7X512's SPI peripheral gets disabled on write to SPI_TDR

    - by Dor
    My AT91SAM7X512's SPI peripheral gets disabled on the Xth write (X varies) to SPI_TDR. As a result, the processor hangs on the while loop that checks the TDRE flag in SPI_SR. This while loop is located in the function SPI_Write(), which belongs to the software package/library provided by ATMEL. The problem occurs arbitrarily - sometimes everything works OK and sometimes it fails on repeated attempts (attempt = downloading the same binary to the MCU and running the program). Configurations are (defined in the order of writing):
SPI_MR: MSTR = 1, PS = 0, PCSDEC = 0, PCS = 0111, DLYBCS = 0
SPI_CSR[3]: CPOL = 0, NCPHA = 1, CSAAT = 0, BITS = 0000, SCBR = 20, DLYBS = 0, DLYBCT = 0
SPI_CR: SPIEN = 1
After setting the configurations, the code verifies that the SPI is enabled by checking the SPIENS flag. I perform a transmission of bytes as follows:

const short int dataSize = 5;
// Filling array with random data
unsigned char data[dataSize] = {0xA5, 0x34, 0x12, 0x00, 0xFF};
short int i = 0;
volatile unsigned short dummyRead;

SetCS3(); // NPCS3 == PIOA15
while(i-- < dataSize)
{
    mySPI_Write(data[i]);
    while((AT91C_BASE_SPI0->SPI_SR & AT91C_SPI_TXEMPTY) == 0);
    dummyRead = SPI_Read(); // SPI_Read() from Atmel's library
}
ClearCS3();

/**********************************/

void mySPI_Write(unsigned char data)
{
    while ((AT91C_BASE_SPI0->SPI_SR & AT91C_SPI_TXEMPTY) == 0);
    AT91C_BASE_SPI0->SPI_TDR = data;
    while ((AT91C_BASE_SPI0->SPI_SR & AT91C_SPI_TDRE) == 0); // <-- This is where
    // the processor hangs, because the SPI peripheral is disabled
    // (SPIENS equals 0), which makes TDRE equal to 0 forever.
}

Questions:
1. What's causing the SPI peripheral to become disabled on the write to SPI_TDR?
2. Should I un-comment the line in SPI_Write() that reads the SPI_RDR register? That is, the 4th line in the following code (the 4th line is originally marked as a comment):

void SPI_Write(AT91S_SPI *spi, unsigned int npcs, unsigned short data)
{
    // Discard contents of RDR register
    //volatile unsigned int discard = spi->SPI_RDR;

    /* Send data */
    while ((spi->SPI_SR & AT91C_SPI_TXEMPTY) == 0);
    spi->SPI_TDR = data | SPI_PCS(npcs);
    while ((spi->SPI_SR & AT91C_SPI_TDRE) == 0);
}

3. Is there something wrong with the code above that transmits 5 bytes of data?
Please note: The NPCS line num. 3 is a GPIO line (meaning it is in PIO mode), and is not controlled by the SPI controller. I'm controlling this line myself in the code, by de/asserting the ChipSelect#3 (NPCS3) pin when needed. The reason I'm doing so is that problems occurred while trying to let the SPI controller control this pin. I didn't reset the SPI peripheral twice, because the errata says to reset it twice only if I perform a software reset, which I don't do. Quoting the errata:
If a software reset (SWRST in the SPI Control Register) is performed, the SPI may not work properly (the clock is enabled before the chip select.)
Problem Fix/Workaround
The SPI Control Register field, SWRST (Software Reset) needs to be written twice to be correctly set.
I noticed that sometimes, if I put a delay before the write to the SPI_TDR register (in SPI_Write()), then the code works perfectly and the communication succeeds.
Useful links:
AT91SAM7X Series Preliminary.pdf
ATMEL software package/library
spi.c from Atmel's library
spi.h from Atmel's library
An example of initializing the SPI and performing a transfer of 5 bytes would be highly appreciated and helpful.

    Read the article

  • CodePlex Daily Summary for Sunday, November 04, 2012

    CodePlex Daily Summary for Sunday, November 04, 2012Popular ReleasesZXMAK2: Version 2.6.8.4: fix tape autostop & tape iconProDinner - ASP.NET MVC Sample (EF4.4, N-Tier, jQuery): 8: update to ASP.net MVC Awesome 3.0 udpate to EntityFramework 4.4 update to MVC 4 added dinners grid on homepageASP.net MVC Awesome - jQuery Ajax Helpers: 3.0: added Grid helper added XML Documentation added textbox helper added Client Side API for AjaxList removed .SearchButton from AjaxList AjaxForm and Confirm helpers have been merged into the Form helper optimized html output for AjaxDropdown, AjaxList, Autocomplete works on MVC 3 and 4BlogEngine.NET: BlogEngine.NET 2.7: Cheap ASP.NET Hosting - $4.95/Month - Click Here!! Click Here for More Info Cheap ASP.NET Hosting - $4.95/Month - Click Here! If you want to set up and start using BlogEngine.NET right away, you should download the Web project. If you want to extend or modify BlogEngine.NET, you should download the source code. If you are upgrading from a previous version of BlogEngine.NET, please take a look at the Upgrading to BlogEngine.NET 2.7 instructions. If you looking for Web Application Project, ...Launchbar: Launchbar 4.2.2.0: This release is the first step in cleaning up the code and using all the latest features of .NET 4.5 Changes 4.2.2 (2012-11-02) Improved handling of left clicks 4.1.0 (2012-10-17) Removed tray icon Assembly renamed and signed with strong name Note When you upgrade, Launchbar will start with the default settings. You can import your previous settings by following these steps: Run Launchbar and just save the settings without configuring anything Shutdown Launchbar Go to the folder %LOCA...CommonLibrary.NET: CommonLibrary.NET 0.9.8.8: Releases notes for FluentScript located at http://fluentscript.codeplex.com/wikipage?title=Release%20Notes&referringTitle=Documentation Fluentscript - 0.9.8.8 - Final ReleaseApplication: FluentScript Version: 0.9.8.8 Build: 0.9.8.8 Changeset: 77368 ( CommonLibrary.NET ) Release date: November 2nd, 2012 Binaries: CommonLibrary.dll Namespace: ComLib.Lang Project site: http://fluentscript.codeplex.com/ Download: http://commonlibrarynet.codeplex.com/releases/view/90426 Source code: http://common...Mouse Jiggler: MouseJiggle-1.3: This adds the much-requested minimize-to-tray feature to Mouse Jiggler.Umbraco CMS: Umbraco 4.10.0 Release Candidate: This is a Release Candidate, which means that if we do not find any major issues in the next week, we will release this version as the final release of 4.10.0 on November 9th, 2012. The documentation for the MVC bits still lives in the Github version of the docs for now and will be updated on our.umbraco.org with the final release of 4.10.0. Browse the documentation here: https://github.com/umbraco/Umbraco4Docs/tree/4.8.0/Documentation/Reference/Mvc If you want to do only MVC then make sur...Skype Auto Recorder: SkypeAutoRecorder 1.3.4: New icon and images. Reworked settings window. Implemented high-quality sound encoding. Implemented a possibility to produce stereo records. Added buttons with system-wide hot keys for manual starting and canceling of recording. Added buttons for opening folder with records. Added Help button. Fixed an issue when recording is continuing after call end. Fixed an issue when recording doesn't start. Fixed several bugs and improved stability. Major refactoring and optimization...Access 2010 Application Platform - Build Your Own Database: Application Platform - 0.0.2: Release 0.0.2 Created two new users. 
One belongs to the Administrators group and the other to the Public group. User: admin Pass: admin User: guest Pass: guest Initial Release This is the first version of the database. At the moment is all contained in one file to make development easier, but the obvious idea would be to split it into Front and Back End for a production version of the tool. The features it contains at the moment are the "Core" features.Python Tools for Visual Studio: Python Tools for Visual Studio 1.5: We’re pleased to announce the release of Python Tools for Visual Studio 1.5 RTM. Python Tools for Visual Studio (PTVS) is an open-source plug-in for Visual Studio which supports programming with the Python language. PTVS supports a broad range of features including CPython/IronPython, Edit/Intellisense/Debug/Profile, Cloud, HPC, IPython, etc. support. For a quick overview of the general IDE experience, please watch this video There are a number of exciting improvement in this release comp...mangopollo: Mangopollo 1.1: New classes : CycleTileData, IconicTileData, FlipTileData to mimic Windows Phone 8 API,BCF.Net: BCF.Net: BCF.Net-20121024 source codeAssaultCube Reloaded: 2.5.5: Linux has Ubuntu 11.10 32-bit precompiled binaries and Ubuntu 10.10 64-bit precompiled binaries, but you can compile your own as it also contains the source. If you are using Mac or other operating systems, please wait while we try to package for those OSes. Try to compile it. If it fails, download a virtual machine. The server pack is ready for both Windows and Linux, but you might need to compile your own for Linux (source included) Changelog: Fixed potential bot bugs: Map change, OpenAL...DirectX Tool Kit: October 30, 2012 (add WP8 support): October 30, 2012 Added project files for Windows Phone 8MCEBuddy 2.x: MCEBuddy 2.3.6: Changelog for 2.3.6 (32bit and 64bit) 1. Fixed a bug in multichannel audio conversion failure. AAC does not support 6 channel audio, MCEBuddy now checks for it and force the output to 2 channel if AAC codec is specified 2. Fixed a bug in Original Broadcast Date and Time. Original Broadcast Date and Time is reported in UTC timezone in WTV metadata. TVDB and MovieDB dates are reported in network timezone. It is assumed the video is recorded and converted on the same machine, i.e. local timezone...MVVM Light Toolkit: MVVM Light Toolkit V4.1 for Visual Studio 2012: This version only supports Visual Studio 2012 (and all Express editions too). If you use Visual Studio 2010, please stay tuned, we will publish an update in a few days with support for VS10. V4.1 supports: Windows Phone 8 Windows 8 (Windows RT) Silverlight 5 Silverlight 4 WPF 4.5 WPF 4 WPF 3.5 And the following development environments: Visual Studio 2012 (Pro, Premium, Ultimate) Visual Studio 2012 Express for Windows 8 Visual Studio 2012 Express for Windows Phone 8 Visual...Microsoft Ajax Minifier: Microsoft Ajax Minifier 4.73: Fix issue in Discussion #401101 (unreferenced var in a for-in statement was getting removed). add the grouping operator to the parsed output so that unminified parsed code is closer to the original. Will still strip unneeded parens later, if minifying. 
more cleaning of references as they are minified out of the code.RiP-Ripper & PG-Ripper: PG-Ripper 1.4.03: changes NEW: Added Support for the phun.org forum FIXED: Kitty-Kats new Forum UrlLiberty: v3.4.0.1 Release 28th October 2012: Change Log -Fixed -H4 Fixed the save verification screen showing incorrect mission and difficulty information for some saves -H4 Hopefully fixed the issue where progress did not save between missions and saves would not revert correctly -H3 Fixed crashes that occurred when trying to load player information -Proper exception dialogs will now show in place of crashesNew ProjectsAzure Storage Extensions For Storage Client V2.0: This library add LINQ syntax to the method Where on TableQuery<T> This support actually Windows Azure Storage Library V2.0BekkGitTfsDemo: A demo project for a speech on git-tfs.Booky: Booky is a utility bookmarking service which allows you to manage, share and find bookmarks to ultimately store bookmarks online for later use.BSA.Net: BSA.Net bzureC# to C++/CX Converter: Gives you the power of C++ at a cost of the simplicity of C#.Cloud Clipboard Sync: Share the clipboard content via cloud (ex. Dropbox).CricketDataMining: Btech Project on Cricket Data MiningCthulhu Invaders: Estudy project about design patternsDALHelper: Connect to your SQL Server database easily, efficiently, writing minimal code. Dwarf Fortress 2010 Backup Assistant: (Inspired by Minecraft Backup Assistant - http://minecraftbackup.codeplex.com/) Allows quick and easy backup/management of your DF savegames. Planned Features: * Rename saves and autosaves * Backup to zip/7-zip with metadata * Restore backupsInmobiliaria: Proyecto de nosotrosInventário com Código de Barras para Windows CE / Mobile: Aplicativo para inventário de produtos com código de barras. Executa em coletores de dados Windows CE ou Mobile.JQuery MVC in ASP.Net WebForms: The purpose of this project is to try and strip back a lot of the "bloat" in asp.net, use jquery and to build effective MVC in to WebFormsMar3ek's Download Manager 2: Mar3ek's Download Manager 2 - a simple, yet powerful, download manager for windows.Martian Shrimp: A small, simple and modular game making framework for modern browsers.MOJ: Moj is program created to help users organize and browse their virtual movie colectionNetFluid Starter Kit: NetFluid sample collection. Including : - Wallen garden controller - JSON RPC - Background thread web page - Dynamic image generation www.netfluid.orgO Library: The O Library is a SQL Framework to use with SQL Server 2008 & 2012 that enables developer to add a .Net feeling to their T-SQL code.Scanner With WIA2.0: warp implement WIA 2.0 with c#Simplify Workflow: A set of custom activities and templates extending Windows Workflow Foundation to simplify the development process for BPM or Workflow application.T4 C# Constructor Generator: T4 C# Constructor Generator is a T4 template for Visual Studio C# projects that lowers the overhead of the C# compiler by generating constructors.Unity.Mvc.Wcf: Removes the cross-cutting concern of managing WCF service clients used by your MVC controllers.WorkflowCode - Workflow in a Code framework: Intended to use by developers. As useful and simple as possible. It is the Workflow framework for a fast and readable code. 
With samples and tips.XNeon Netmedia Player: A Media Player, Social Networking Notifier and Internet Browser from the Founder of Aza DOSXooFoo: This project intends sharing tools for the XOOPS communityXoops France: Publications Xoops francophones (traductions noyau xoops, documentations, modules françisés, thèmes, plugin smarty, hacks, ...)Yasher (Yet Another Hasher): Compute hash of files or text. Supported algorithms : * MD5 * SHA1 * SHA256 * SHA384 * SHA512 * RIPEMD160

    Read the article

  • uiscrollview not switching image subviews

    - by nickthedude
    I'm building a comic viewer app, that consists of two view controllers, the root viewcontroller basically displays a view where a user decides what comic they want to read by pressing a button. The second viewController actually displays the comic as a uiscrollview with a toolbar and a title at the top. So the problem I am having is that the comic image panels themselves are not changing from whatever the first comic you go to if you select another comic after viewing the first one. The way I set it up, and I admit it's not exactly mvc, so please don't hate, anyway the way I set it up is each comic uiscrollview consists of x number of jpg images where each comic set's image names have a common prefix and then a number like 'funny1.jpg', 'funny2.jpg', 'funny3.jpg' and 'soda1.jpg', 'soda2.jpg', 'soda3.jpg', etc... so when a user selects a comic to view in the root controller it makes a call to the delegate and sets ivars on instances of the comicviewcontroller that belongs to the delegate (mainDelegate.comicViewController.property) I set the number of panels in that comic, the comic name for the title label, and the image prefix. The number of images changes(or at least the number that you can scroll through), and the title changes but for some reason the images are the same ones as whatever comic you clicked on initially. I'm basing this whole app off of the 'scrolling' code sample from apple. I thought if I added a viewWillAppear:(BOOL) animated call to the comicViewController everytime the user clicked the button that would fix it but it didn't, after all that is where the scrollview is laid out. Anyway here is some code from each of the two controllers: RootController: -(IBAction) launchComic2{ AppDelegate *mainDelegate = [(AppDelegate *) [UIApplication sharedApplication] delegate]; mainDelegate.myViewController.comicPageCount = 3; mainDelegate.myViewController.comicTitle.text = @"\"Death by ETOH\""; mainDelegate.myViewController.comicImagePrefix = @"etoh"; [mainDelegate.myViewController viewWillAppear:YES]; [mainDelegate.window addSubview: mainDelegate.myViewController.view]; comicViewController: -(void) viewWillAppear:(BOOL)animated { self.view.backgroundColor = [UIColor viewFlipsideBackgroundColor]; // 1. setup the scrollview for multiple images and add it to the view controller // // note: the following can be done in Interface Builder, but we show this in code for clarity [scrollView1 setBackgroundColor:[UIColor whiteColor]]; [scrollView1 setCanCancelContentTouches:NO]; scrollView1.indicatorStyle = UIScrollViewIndicatorStyleWhite; scrollView1.clipsToBounds = YES; // default is NO, we want to restrict drawing within our scrollview scrollView1.scrollEnabled = YES; // pagingEnabled property default is NO, if set the scroller will stop or snap at each photo // if you want free-flowing scroll, don't set this property. 
scrollView1.pagingEnabled = YES; // load all the images from our bundle and add them to the scroll view NSUInteger i; for (i = 1; i <= self.comicPageCount; i++) { NSString *imageName = [NSString stringWithFormat:@"%@%d.jpg", self.comicImagePrefix, i]; NSLog(@"%@%d.jpg", self.comicImagePrefix, i); UIImage *image = [UIImage imageNamed:imageName]; UIImageView *imageView = [[UIImageView alloc] initWithImage:image]; // setup each frame to a default height and width, it will be properly placed when we call "updateScrollList" CGRect rect = imageView.frame; rect.size.height = kScrollObjHeight; rect.size.width = kScrollObjWidth; imageView.frame = rect; imageView.tag = i; // tag our images for later use when we place them in serial fashion [scrollView1 addSubview:imageView]; [imageView release]; } [self layoutScrollImages]; // now place the photos in serial layout within the scrollview } - (void)layoutScrollImages { UIImageView *view = nil; NSArray *subviews = [scrollView1 subviews]; // reposition all image subviews in a horizontal serial fashion CGFloat curXLoc = 0; for (view in subviews) { if ([view isKindOfClass:[UIImageView class]] && view.tag > 0) { CGRect frame = view.frame; frame.origin = CGPointMake(curXLoc, 0); view.frame = frame; curXLoc += (kScrollObjWidth); } } // set the content size so it can be scrollable [scrollView1 setContentSize:CGSizeMake((self.comicPageCount * kScrollObjWidth), [scrollView1 bounds].size.height)]; } Any help would be appreciated on this. Nick

    Read the article

< Previous Page | 35 36 37 38 39 40 41  | Next Page >