Search Results

Search found 10194 results on 408 pages for 'raw types'.


  • <%: %>, HtmlEncode, IHtmlString and MvcHtmlString

    - by Shaun
    One of my colleagues and friends, Robin, has been playing with (and struggling with) ASP.NET MVC 2 on a project these days, while I have been struggling with an annoying client. Since this is his first time using ASP.NET MVC he has been running into a lot of problems, and I was very happy to share my experience with him. Yesterday he told me that when he attempted to insert a <br /> element into his page, the page rendered badly: his <br /> appeared as part of the string rather than creating a new line. After checking his code a bit I found that it was because he had used a new ASP.NET markup syntax introduced in .NET 4.0 – "<%: %>". If you have been using ASP.NET MVC 1 in the .NET 3.5 world, it is very common to use <%= %> to show something on the page from the backend code. But when you do, you must ensure that the string to be displayed is HTML-safe, which means all HTML markup must be encoded; otherwise you may introduce an XSS (cross-site scripting) vulnerability. So you had better encode anything you display on the page. In .NET 4.0 Microsoft introduced a new markup syntax to solve this problem: <%: %>. It encodes the content automatically, so you no longer need to check and verify your code manually for the XSS issue mentioned above. But this also means that it encodes everything, including the HTML elements you want rendered as markup. So I changed his code accordingly and it worked well.

    After helping him solve this problem (and finishing a spreadsheet for my boring project), I thought a bit more about <%: %>. Since it encodes everything, why does it render correctly when we use "<%: Html.TextBox("name") %>" to show a text box? As you know, Html.TextBox renders an "<input name="name" id="name" type="text"/>" element on the page. If <%: %> encoded everything, it should not display a text box. So I dug into the MVC source code and found some comments in the MvcHtmlString class:

        // In ASP.NET 4, a new syntax <%: %> is being introduced in WebForms pages, where <%: expression %> is equivalent to
        // <%= HttpUtility.HtmlEncode(expression) %>. The intent of this is to reduce common causes of XSS vulnerabilities
        // in WebForms pages (WebForms views in the case of MVC). This involves the addition of an interface
        // System.Web.IHtmlString and a static method overload System.Web.HttpUtility::HtmlEncode(object). The interface
        // definition is roughly:
        //   public interface IHtmlString {
        //       string ToHtmlString();
        //   }
        // And the HtmlEncode(object) logic is roughly:
        //   - If the input argument is an IHtmlString, return argument.ToHtmlString(),
        //   - Otherwise, return HtmlEncode(Convert.ToString(argument)).
        //
        // Unfortunately this has the effect that calling <%: Html.SomeHelper() %> in an MVC application running on .NET 4
        // will end up encoding output that is already HTML-safe. As a result, we're changing our HTML helpers to return
        // MvcHtmlString where appropriate. <%= Html.SomeHelper() %> will continue to work in both .NET 3.5 and .NET 4, but
        // changing the return types to MvcHtmlString has the added benefit that <%: Html.SomeHelper() %> will also work
        // properly in .NET 4 rather than resulting in a double-encoded output. MVC developers in .NET 4 will then be able
        // to use the <%: %> syntax almost everywhere instead of having to remember where to use <%= %> and where to use
        // <%: %>. This should help developers craft more secure web applications by default.
        //
        // To create an MvcHtmlString, use the static Create() method instead of calling the protected constructor.

    According to the comment, the encoding rule of <%: %> is: if the type of the content is IHtmlString it will NOT be encoded, since IHtmlString indicates that it is already HTML-safe; otherwise HtmlEncode is used to encode the content. If we check the return type of the Html.TextBox method we will find that it is MvcHtmlString, which implements the IHtmlString interface. That is the reason the "<input name="name" id="name" type="text"/>" element was not encoded by <%: %>. So if we want to tell ASP.NET MVC – or rather, the ASP.NET runtime – that some content is HTML-safe and should not be encoded, we can convert the content to an IHtmlString. So that would be another resolution. We can also create an extension method for a better development experience:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Web;
        using System.Web.Mvc;

        namespace ShaunXu.Blogs.IHtmlStringIssue
        {
            public static class Helpers
            {
                public static MvcHtmlString IsHtmlSafe(this string content)
                {
                    return MvcHtmlString.Create(content);
                }
            }
        }

    Then the view can use this extension method, and the page renders correctly.

    Summary: In this post I explained a bit about the new markup syntax in .NET 4.0 – <%: %> – and its usage. I also explained a bit about how to control whether page content is encoded or not. We can see that ASP.NET MVC gives us more points of control over our web pages.

    Hope this helps, Shaun. All documents and related graphics and code are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.
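    As a rough illustration of that last step (the original post shows the view only in screenshots, so the exact markup below is an assumption rather than a quote from it), the extension method can be used in a view like this:

        <%-- The string contains markup we trust, so we mark it as HTML-safe; --%>
        <%-- the colon syntax then skips encoding because MvcHtmlString implements IHtmlString. --%>
        <%: ("First line<br />Second line").IsHtmlSafe() %>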

    Read the article

  • Projected Results

    - by Sylvie MacKenzie, PMP
    Excerpt from PROFIT - ORACLE - by Monica Mehta

    Yasser Mahmud has seen a revolution in project management over the past decade. During that time, the former Primavera product strategist (who joined Oracle when his company was acquired in 2008) has observed a transformation not only in the way IT systems support corporate projects but in the role project portfolio management (PPM) plays in the enterprise. "Fifteen years ago project management was the domain of the project management office (PMO)," Mahmud recalls of earlier days. "But over the course of the past decade, we've seen it transform into a mission-critical enterprise discipline that has made Primavera indispensable in the board room. Now, as a senior manager, a board member, or a C-level executive, you have direct and complete visibility into what's going on in the organization—at a level of detail at which you can actually consume that information." Now serving as Oracle's vice president of product strategy and industry marketing, Mahmud shares his thoughts on how Oracle's Primavera solutions have evolved and how best-in-class project portfolio management systems can help businesses stay competitive.

    Profit: What do you feel are the market dynamics that are changing project management today?

    Mahmud: First, the data explosion. We're generating data at twice the rate at which we can actually store it. The same concept applies to project-intensive organizations. A lot of data is gathered, but what are we really doing with it? Are we turning data into insight? Are we using that insight and turning it into foresight with analytics tools? This is a key driver that will separate the very good companies—the very competitive companies—from those that are not as competitive. Another trend is centered on the explosion of mobile computing. By the year 2013, an estimated 35 percent of the world's workforce is going to be mobile. That's one billion people. So the question is not if you're going to go mobile, it's how fast you are going to go mobile. What kind of impact does that have on how the workforce participates in projects? What worked ten to fifteen years ago is not going to work today. It requires a real rethink around the interfaces and how data is actually presented.

    Profit: What is the role of project management in this new landscape?

    Mahmud: We recently conducted a PPM study with the Economist Intelligence Unit to determine how important project management is considered within organizations. Our target was primarily CFOs, CIOs, and senior managers, and we discovered that while 95 percent of participants believed it critical to their business, only six percent were confident that projects were delivered on time and on budget. That's a huge gap. Most organizations are looking for efficiency, especially in these volatile financial times. But senior management can't keep track of every project in a large organization. As a result, executives are attempting to inventory the work being conducted under their watch. What is often needed is a very high-level assessment conducted at the board level to say, "Here are the 50 initiatives that we have underway. How do they line up with our strategic drivers?" This line of questioning can provide early warning that work and strategy are out of alignment, finding the gap between what the business needs to do and the actual performance scorecard. That's low-hanging fruit for any executive looking to increase efficiency and save money.
    But it can only be obtained through proper assessment of existing projects—and you need a project system of record to get that done. Over the next decade or so, project management is going to transform into holistic work management. Business leaders will want to make sure key projects align with corporate strategy, but they will also want the ability to drill down into daily activity and smaller projects to make sure those line up as well. Keeping employees from working on tasks—even for a few hours—that don't line up with corporate goals will, in many ways, become a competitive differentiator.

    Profit: How do all of these market challenges and shifting trends impact Oracle's Primavera solutions and meeting customers' needs?

    Mahmud: For Primavera, it's a transformation from being a project management application to a PPM system in the enterprise. It also means making that system a mission-critical application by connecting it to other key applications within the ecosystem, such as the enterprise resource planning (ERP), supply chain, and CRM systems. Analytics have also become a huge component. Business analytics have made Oracle's Primavera applications pertinent in the boardroom. Now, as a senior manager, a board member, a CXO, CIO, or CEO, you have direct visibility into what's going on in the organization at a level at which you're able to consume that information. In addition, all of this information pairs up really well with your financials and other data. Certainly, when you're an Oracle shop, you have visibility that you didn't have before from a project execution perspective.

    Profit: What new strategies and tools are being implemented to create a more efficient workplace for users?

    Mahmud: We believe very strongly that just because you call something an enterprise project portfolio management system doesn't make it so—you have to get people to want to participate in the system. This can't be mandated down from the top. It simply doesn't work that way. A truly adoptable solution is one that makes it super easy for all types of users to participate, by providing them interfaces where they live. Keeping that in mind, a major area of development has been alternative user interfaces. This is increasingly resulting in the creation of lighter-weight, targeted interfaces, such as native applications for smartphone platforms like iPhone and Android.

    Profit: How does this translate into the development of Oracle's Primavera solutions?

    Mahmud: Let me give you a few examples. We recently announced the launch of our Primavera P6 Team Member application, which is a native iOS application for the iPhone. This interface makes it easier for team members to do their jobs quickly and effectively. Similarly, we introduced the Primavera analytics application, which can be consumed via mobile devices and, when married with Oracle Spatial capabilities, gives users a geographical view of what's going on and which projects are occurring in various locations around the world. Lastly, we introduced advanced email integration that allows project team members to report the status of work via e-mail. This functionality leverages the fact that users are in their e-mail system throughout the day, and allows them to update the status of their work without the need to launch the Primavera application. It comes back to a mantra: provide as many alternative user interfaces as possible, so you can give people the ability to work, to participate, to raise issues, to create projects, in the places where they live.
Do it in such a way that it’s non-intrusive, do it in such a way that it’s easy and intuitive and they can get it done in a short amount of time. If you do that, workers can get back to doing what they're actually getting paid for.

    Read the article

  • Hosting and consuming WCF services without configuration files

    - by martinsj
    In this post, I'll demonstrate how to configure both the host and the client in code, without the need for configuring services in the <system.serviceModel> section of the config file. In fact, you don't need a <system.serviceModel> section at all. What you do need (and sometimes want) is the Uri of the service in the configuration file. Configuring the Uri of the service is actually only needed for the client or when self-hosting, not when hosting in IIS. So, exactly what do we need to configure?

    - The binding type and its constraints
    - The metadata behavior
    - The debug behavior

    You can of course configure more, and even more than that if you want to; WCF is, after all, the king of configuration… As an example I'll be hosting and consuming a service that removes most of the default constraints for WCF services, using a BasicHttpBinding. Of course, with regard to security, it is probably better to keep some constraints on the server, but this is only a demonstration. The ServiceConfig class in the code beneath is a static helper class that will be used in the examples. In this post, I'll be using this helper class for all configuration, for both the server and the client. In WCF, the client and the server each have their own configuration; with this piece of code, they will be sharing the same configuration.

        public static class ServiceConfig
        {
            public static Binding DefaultBinding
            {
                get
                {
                    var binding = new BasicHttpBinding();
                    Configure(binding);
                    return binding;
                }
            }

            public static void Configure(HttpBindingBase binding)
            {
                if (binding == null)
                {
                    throw new ArgumentException("Argument 'binding' cannot be null. Cannot configure binding.");
                }

                binding.SendTimeout = new TimeSpan(0, 0, 30, 0); // 30 minute timeout
                binding.MaxBufferSize = Int32.MaxValue;
                binding.MaxBufferPoolSize = 2147483647;
                binding.MaxReceivedMessageSize = Int32.MaxValue;
                binding.ReaderQuotas.MaxArrayLength = Int32.MaxValue;
                binding.ReaderQuotas.MaxBytesPerRead = Int32.MaxValue;
                binding.ReaderQuotas.MaxDepth = Int32.MaxValue;
                binding.ReaderQuotas.MaxNameTableCharCount = Int32.MaxValue;
                binding.ReaderQuotas.MaxStringContentLength = Int32.MaxValue;
            }

            public static ServiceMetadataBehavior ServiceMetadataBehavior
            {
                get
                {
                    return new ServiceMetadataBehavior
                    {
                        HttpGetEnabled = true,
                        MetadataExporter = { PolicyVersion = PolicyVersion.Policy15 }
                    };
                }
            }

            public static ServiceDebugBehavior ServiceDebugBehavior
            {
                get
                {
                    var sdb = new ServiceDebugBehavior();
                    Configure(sdb);
                    return sdb;
                }
            }

            public static void Configure(ServiceDebugBehavior behavior)
            {
                if (behavior == null)
                {
                    throw new ArgumentException("Argument 'behavior' cannot be null. Cannot configure debug behavior.");
                }

                behavior.IncludeExceptionDetailInFaults = true;
            }
        }

    Configuring the server

    There are basically two ways to host a WCF service: in IIS, and self-hosting. When hosting a WCF service in a production environment using a SOA architecture, you'll most likely be hosting it in IIS. When testing the service in integration tests, it's very handy to be able to self-host services in the unit tests. In fact, you can share the WCF configuration for self-hosted services and services hosted in IIS. And that is exactly what you want to do: test the same configuration you will use in the production environment.
    Configuring when self-hosting

    When self-hosting, in order to start the service you'll have to instantiate the ServiceHost class, configure the service and open it:

        // Create the service host.
        var host = new ServiceHost(typeof(MyService), endpoint);

        // Configure the binding.
        host.AddServiceEndpoint(typeof(IMyService), ServiceConfig.DefaultBinding, endpoint);

        // Configure the metadata behavior.
        host.Description.Behaviors.Add(ServiceConfig.ServiceMetadataBehavior);

        // Configure the debug behavior.
        ServiceConfig.Configure((ServiceDebugBehavior)host.Description.Behaviors[typeof(ServiceDebugBehavior)]);

        // Start listening.
        host.Open();

    Configuring when hosting in IIS

    When you create a WCF service application with the wizard in Visual Studio, you'll end up with bits and pieces of code in order to get the service running:

    - An .svc file with code-behind
    - An interface to the service
    - Web.config

    In order to get rid of the configuration in the <system.serviceModel> section, which the wizard has generated for us, we must tell the service that we have a factory that will create the service for us. We do this by changing the markup for the .svc file:

        <%@ ServiceHost Language="C#" Debug="true" Service="Namespace.MyService" Factory="Namespace.ServiceHostFactory" %>

    The markup tells IIS that we have a factory called ServiceHostFactory for this service. The service factory has a method we can override which will be called when someone asks IIS for the service. There are two overloads we can override:

        System.ServiceModel.ServiceHostBase CreateServiceHost(string constructorString, Uri[] baseAddresses)
        System.ServiceModel.ServiceHost CreateServiceHost(Type serviceType, Uri[] baseAddresses)

    In this example we'll be using the last one, so our implementation looks like this:

        public class ServiceHostFactory : System.ServiceModel.Activation.ServiceHostFactory
        {
            protected override System.ServiceModel.ServiceHost CreateServiceHost(Type serviceType, Uri[] baseAddresses)
            {
                var host = base.CreateServiceHost(serviceType, baseAddresses);
                host.Description.Behaviors.Add(ServiceConfig.ServiceMetadataBehavior);
                ServiceConfig.Configure((ServiceDebugBehavior)host.Description.Behaviors[typeof(ServiceDebugBehavior)]);
                return host;
            }
        }

    As you can see, we are using the same configuration helper we used when self-hosting. Now that we have a factory, the <system.serviceModel> section of the configuration can be removed, because the section is ignored when the service has a custom factory. If you want to configure something else in the config file, you can still do so in other sections.

    Configuring the client

    Microsoft has helpfully created a ChannelFactory class in order to create a proxy client. When using this approach, you don't have to generate those awful proxy classes for the client. If you share the contracts with the server in their own assembly, as in the layer diagram below, you can share the same piece of code.
    The contracts in WCF are the interface to the service and, if any, the data contracts (custom types) the service depends on. Using the ChannelFactory with our configuration helper class is very simple:

        var identity = EndpointIdentity.CreateDnsIdentity("localhost");
        var endpointAddress = new EndpointAddress(endPoint, identity);
        var factory = new ChannelFactory<IMyService>(ServiceConfig.DefaultBinding, endpointAddress);

        // CreateChannel() returns a proxy that also implements IClientChannel,
        // so it can be closed explicitly when we are done with it.
        var myService = factory.CreateChannel();
        myService.Hello();
        ((IClientChannel)myService).Close();
        factory.Close();

    Happy configuration!
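    As the introduction notes, the one thing you may still want in the config file is the Uri of the service. A minimal sketch of how that could look (the appSettings key name here is my own choice, not something prescribed by WCF):

        <appSettings>
          <add key="MyServiceUri" value="http://localhost:8732/MyService" />
        </appSettings>

    The client (or a self-hosting test) can then build the endpoint from it:

        var endPoint = new Uri(ConfigurationManager.AppSettings["MyServiceUri"]);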

    Read the article

  • Using IIS Logs for Performance Testing with Visual Studio

    - by Tarun Arora
    In this blog post I'll show you how you can play back IIS logs in Visual Studio to automatically generate web performance tests. You can also download the sample solution I am demoing in this post.

    Introduction

    Performance testing is as important for new websites as it is for evolving websites. If you already have your website running in production, you can mine the information available in the IIS logs to analyse the dense zones (the most used pages) and performance test those pages, rather than wasting time testing and tuning the least used pages in your application.

    What are IIS logs?

    To help with server use and analysis, IIS is integrated with several types of log files. These log file formats provide information on a range of websites and specific statistics, including Internet Protocol (IP) addresses, user information and site visits, as well as dates, times and queries. If you are using IIS 7 or above you will find the log files under C:\inetpub\logs\LogFiles by default.

    Walkthrough

    1. Download and install Log Parser from the Microsoft Download Centre. You should see LogParser.dll in the install folder; the default install location is C:\Program Files (x86)\Log Parser 2.2. LogParser.dll gives us a library to query the IIS log files programmatically. By the way, if you haven't used Log Parser in the past, it is a powerful, versatile tool that provides universal query access to text-based data such as log files, XML files and CSV files, as well as key data sources on the Windows operating system such as the Event Log, the Registry, the file system, and Active Directory. More details…

    2. Create a new test project in Visual Studio. Let's call it IISLogsToWebPerfTestDemo.

    3. Delete the UnitTest1.cs class that gets created by default. Right-click the solution and add a project of type class library; name it IISLogsToWebPerfTestEngine. Delete the default class that gets created with the project.

    4. Under the IISLogsToWebPerfTestEngine project add references to:
    - Microsoft.VisualStudio.QualityTools.WebTestFramework – c:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE\PublicAssemblies\Microsoft.VisualStudio.QualityTools.WebTestFramework.dll
    - LogParser, also called MSUtil – c:\users\tarora\documents\visual studio 2010\Projects\IisLogsToWebPerfTest\IisLogsToWebPerfTestEngine\obj\Debug\Interop.MSUtil.dll

    5. Right-click the IISLogsToWebPerfTestEngine project and add a new class – IISLogReader.cs. The IISLogReader class queries the IIS logs using Log Parser.
        using System;
        using System.Collections.Generic;
        using System.Text;
        using MSUtil;
        using LogQuery = MSUtil.LogQueryClassClass;
        using IISLogInputFormat = MSUtil.COMIISW3CInputContextClassClass;
        using LogRecordSet = MSUtil.ILogRecordset;
        using Microsoft.VisualStudio.TestTools.WebTesting;
        using System.Diagnostics;

        namespace IisLogsToWebPerfTestEngine
        {
            // By making use of Log Parser it is possible to query the IIS log using SELECT queries.
            public class IISLogReader
            {
                private string _iisLogPath;

                public IISLogReader(string iisLogPath)
                {
                    _iisLogPath = iisLogPath;
                }

                public IEnumerable<WebTestRequest> GetRequests()
                {
                    LogQuery logQuery = new LogQuery();
                    IISLogInputFormat iisInputFormat = new IISLogInputFormat();

                    // Currently these columns give us sufficient information to construct the web test requests.
                    string query = @"SELECT s-ip, s-port, cs-method, cs-uri-stem, cs-uri-query FROM " + _iisLogPath;
                    LogRecordSet recordSet = logQuery.Execute(query, iisInputFormat);

                    // Apply a bit of transformation.
                    while (!recordSet.atEnd())
                    {
                        ILogRecord record = recordSet.getRecord();
                        if (record.getValueEx("cs-method").ToString() == "GET")
                        {
                            string server = record.getValueEx("s-ip").ToString();
                            string path = record.getValueEx("cs-uri-stem").ToString();
                            string querystring = record.getValueEx("cs-uri-query").ToString();
                            StringBuilder urlBuilder = new StringBuilder();
                            urlBuilder.Append("http://");
                            urlBuilder.Append(server);
                            urlBuilder.Append(path);
                            if (!String.IsNullOrEmpty(querystring))
                            {
                                urlBuilder.Append("?");
                                urlBuilder.Append(querystring);
                            }

                            // You could make substitutions by introducing parameterized web tests.
                            WebTestRequest request = new WebTestRequest(urlBuilder.ToString());
                            Debug.WriteLine(request.UrlWithQueryString);
                            yield return request;
                        }
                        recordSet.moveNext();
                    }
                    Console.WriteLine(" That's it! Closing the reader");
                    recordSet.close();
                }
            }
        }

    6. Connect the dots by adding a project reference to IisLogsToWebPerfTestEngine from IisLogsToWebPerfTest. Right-click the IisLogsToWebPerfTest project and add a new class, WebTest1Coded.cs. WebTest1Coded inherits from the WebTest class. By overriding the GetRequestEnumerator method we can pass the log files to the IISLogReader class, which uses Log Parser to query the log file and extract the web requests, yielding back web test requests for playback when the test is run.

        namespace IisLogsToWebPerfTest
        {
            using System;
            using System.Collections.Generic;
            using System.Text;
            using Microsoft.VisualStudio.TestTools.WebTesting;
            using Microsoft.VisualStudio.TestTools.WebTesting.Rules;
            using IisLogsToWebPerfTestEngine;

            // This class is a coded web performance test implementation that simply passes
            // the path of the IIS logs to the IISLogReader class, which does the heavy
            // lifting of reading the contents of the log file and converting them to tests.
            // You could have multiple such classes that inherit from WebTest, implement the
            // GetRequestEnumerator method, and pass different log files for different tests.
            public class WebTest1Coded : WebTest
            {
                public WebTest1Coded()
                {
                    this.PreAuthenticate = true;
                }

                public override IEnumerator<WebTestRequest> GetRequestEnumerator()
                {
                    // Substitute the highlighted path with the path of the IIS log file.
                    IISLogReader reader = new IISLogReader(@"C:\Demo\iisLog1.log");
                    foreach (WebTestRequest request in reader.GetRequests())
                    {
                        yield return request;
                    }
                }
            }
        }

    7. It's time to fire off the test and see the IIS log play back as a web performance test.
    From the Test menu, choose Test View. In the Test View window you should see the WebTest1Coded test show up. Highlight the test and press Run Selection (you can also debug the test in case you face any failures during test execution).

    8. Optionally, you can create a Load Test using WebTest1Coded as the base test.

    Conclusion

    You have just helped your testing team; you have now become the coolest developer in your organization! Jokes apart, Log Parser and web performance tests together allow you to save a lot of time by not having to worry about what to test, or even about how to record the test. If you haven't already, download the solution from here. You can take this to the next level by using Log Parser to extract the log files to a database as part of an end-of-day batch. Over the longer term you can see usage trends by user, and have your tests consume the web requests now stored in the database to generate the web performance tests. If you like the post, don't forget to share … Keep RocKiNg!
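    That batch-export idea can be sketched with Log Parser's command line. This is a sketch only: the table name, database name and log path below are my own assumptions, not from the post.

        LogParser.exe "SELECT * INTO IisLogs FROM C:\inetpub\logs\LogFiles\W3SVC1\*.log" -i:IISW3C -o:SQL -server:localhost -database:WebPerf -createTable:ON

    With the requests stored in a table, the GetRequests() method above could be pointed at the database instead of a raw log file.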

    Read the article

  • Webcast Q&A: ING on How to Scale Role Management and Compliance

    - by Tanu Sood
    Thanks to all who attended the live webcast we hosted on "ING: Scaling Role Management and Access Certifications to Thousands of Applications" on Wednesday, April 11th. For those of you who couldn't join us, the webcast replay is now available. Many thanks to our guest speaker, Mark Robison, Enterprise Architect at ING, for walking us through ING's drivers and rationale for the platform approach, the phased implementation strategy, results and metrics, roadmap and recommendations. We greatly appreciate the insight he shared with us all on the deployment synergies between Oracle Identity Manager (OIM) and Oracle Identity Analytics (OIA) to enforce streamlined user and role management and scalable compliance. Mark was also kind enough to walk us through the specific solution features that helped ING manage the problem of role explosion and implement closed-loop remediation. Our host speaker, Neil Gandhi, Principal Product Manager at Oracle, rounded off the presentation by discussing common use cases and deployment scenarios we see organizations implement to automate user/identity administration and enforce closed-loop scalable compliance. Neil also called out the specific features in Oracle Identity Analytics 11gR1 that cater to expediting and streamlining compliance processes such as access certifications.

    While we tackled a few questions during the webcast, we have captured here the responses to those that we weren't able to get to; our sincere thanks to Mark Robison for taking the time to respond to questions specific to ING's implementation and strategy.

    Q. Did you include business-friendly entitlement descriptions, or is the business seeing application descriptors?
    A. We include very business-friendly descriptions. The OIA tool has the facility to allow this.

    Q. When doing attestation on job change, who is in the workflow to review and confirm that the employee should continue to have access? Is that a best practice?
    A. The new and old managers are in the workflow. The tool can check for any Separation of Duties (SOD) violations with both having similar access. It may not be a best practice, but it is a reality of doing your old and new job for a transition period on a transfer.

    Q. What versions of OIM and OIA are being used at ING?
    A. OIM 11gR1 and OIA 11gR1; the very latest versions available.

    Q. Are you using an entitlements / role catalog?
    A. Yes. We use both roles and entitlements.

    Q. What specific unexpected benefits did the Identity Warehouse provide ING?
    A. The most unanticipated was helping Legal Hold identify user IDs in the various applications. Other benefits included providing a one-stop shop for all aggregated ID information.

    Q. How fine-grained are your applications and entitlements? Did OIA and OIM support that level of granularity?
    A. We have some very fine-grained entitlements, but we roll these up into approved roles to allow for easier management. For managing very fine-grained entitlements, Oracle offers the Oracle Entitlement Server. We currently do not own this software but are considering it.

    Q. Do you allow any individual access, or is everything truly role-based?
    A. We are a hybrid environment with roles and individual positive and negative entitlements.

    Q. Did you use an Agile methodology like Scrum to deliver functionality during your project?
    A. We started with waterfall, but used an agile approach to provide benefits after the initial implementation.

    Q. How did you handle rolling out the standard ID format to existing users?
    A. We just used the standard IDs for new users. We have not taken on a project to address the existing nonstandard IDs.

    Q. To avoid role explosion, how do you deal with apps that require more than a couple of entitlement types? For example, an app may have different levels of access, and it may need to know the user's country/state to associate them with particular customers.
    A. We focus on the functional user and craft the role around their daily job requirements. The role captures the required application entitlements. To keep role explosion down, we use role mining in OIA and also meet and interview the business. It is an iterative process to reach role consensus.

    Q. Great presentation! How many rounds of certifications has ING performed so far?
    A. Around 7 quarters, plus constant certifications on transfer.

    Q. Did you have executive support from the top down?
    A. Yes. The executive support was key to our success.

    Q. For your cloud instance, are you using OIA or OIM as SaaS?
    A. No. We are just provisioning and deprovisioning to various cloud providers. (ServiceNow is an example.)

    Q. How do you ensure a role owner does not get more privileges than are intended and thus violate another role? E.g., a DBA role should not get the right to run something as root, as this would affect the root role.
    A. We have SOD checks. Also, all roles are initially approved by external audit, and the role owners have to certify the roles and any changes.

    Q. What is your ratio of employees to roles?
    A. We are still in the process of going through our various lines of business, so I do not have a final ratio. From what we have seen, the ratio varies greatly depending on the line of business and the diversity of job functions. For standardized lines of business such as call centers, the ratio is very good, where we can have a single role that covers many employees. For specialized lines of business like treasury, it can be one or two people per role.

    Q. Is ING using the Oracle On Demand service?
    A. No.

    Q. Do you have to implement or migrate to OIM in order to get the Identity Warehouse, or can OIA provide the Identity Warehouse as well if you haven't reached OIM yet?
    A. No, an OIM deployment is not required to implement OIA's Identity Warehouse, but as you heard during the webcast, there are tremendous deployment synergies in deploying OIA and OIM together.

    Q. When is the Security Governor product coming out?
    A. Oracle Security Governor for Healthcare is available today.

    Hope you enjoyed the webcast, and we look forward to having you join us for the next webcast in the "Customers Talk: Identity as a Platform" webcast series: Toyota: Putting Customers First – Identity Platform as a Business Enabler, Wednesday, May 16th at 10 am PST / 1 pm EST. Register today. You can also register for a live event at a city near you, where Aberdeen's Derek Brink will discuss the survey results from the recently published report "Analyzing Platform vs. Point Solution Approach in Identity". And you can do a quick (and free) online assessment of your identity programs by benchmarking them against the 160 organizations surveyed in the Aberdeen report, compliments of Oracle. Here's the slide deck from our ING webcast: ING webcast platform.

    Read the article

  • HTG Explains: Should You Buy Extended Warranties?

    - by Chris Hoffman
    Buy something at an electronics store and you’ll be confronted by a pushy salesperson who insists you need an extended warranty. You’ll also see extended warranties pushed hard when shopping online. But are they worth it? There’s a reason stores push extended warranties so hard. They’re almost always pure profit for the store involved. An electronics store may live on razor-thin product margins and make big profits on extended warranties and overpriced HDMI cables. You’re Already Getting Multiple Warranties First, back up. The product you’re buying already includes a warranty. In fact, you’re probably getting several different types of warranties. Store Return and Exchange: Most electronics stores allow you to return a malfunctioning product within the first 15 or 30 days and they’ll provide you with a new one. The exact period of time will vary from store to store. If you walk out of the store with a defective product and have to swap it for a new one within the first few weeks, this should be easy. Manufacturer Warranty: A device’s manufacturer — whether the device is a laptop, a television, or a graphics card — offers their own warranty period. The manufacturer warranty covers you after the store refuses to take the product back and exchange it. The length of this warranty depends on the type of product. For example, a cheap laptop may only offer a one-year manufacturer warranty, while a more expensive laptop may offer a two-year warranty. Credit Card Warranty Extension: Many credit cards offer free extended warranties on products you buy with that credit card. Credit card companies will often give you an additional year of warranty. For example, if you buy a laptop with a two year warranty and it fails in the third year, you could then contact your credit card company and they’d cover the cost of fixing or replacing it. Check your credit card’s benefits and fine print for more information. Why Extended Warranties Are Bad You’re already getting a fairly long warranty period, especially if you have a credit card that offers you a free extended warranty — these are fairly common. If the product you get is a “lemon” and has a manufacturing error, it will likely fail pretty soon — well within your warranty period. The extended warranty matters after all your other warranties are exhausted. In the case of a laptop with a two-year warranty that you purchase with a credit card giving you a one-year warranty extension, your extended warranty will kick in three years after you purchase the laptop. In that many years, your current laptop will likely feel pretty old and laptops that are as good — or better — will likely be pretty cheap. If it’s a television, better television displays will be available at a lower price point. You’ll either want to upgrade to a newer model or you’ll be able to buy a new, just-as-good product for very cheap. You’ll only have to pay out-of-pocket if your device fails after the normal warranty period — in over two or three years for typical laptops purchased with a decent credit card. Save the money you would have spent on the warranty and put it towards a future upgrade. How Much Do Extended Warranties Cost? Let’s look at an example from a typical pushy retail outlet, Best Buy. We went to Best Buy’s website and found a pretty standard $600 Samsung laptop. This laptop comes with a one-year warranty period. If purchased with a fairly common credit card, you can easily get a two-year warranty period on this laptop without spending an additional penny. 
(Yes, such credit cards are available with no yearly fees.) During the check-out process, Best Buy tries to sell you a Geek Squad “Accidental Protection Plan.” To get an additional year of Best Buy’s extended warranty, you’d have to pay $324.98 for a “3-Year Accidental Protection Plan”. You’d basically be paying more than half the price of your laptop for an additional year of warranty — remember, the standard warranties would cover you anyway for the first two years. If this laptop did break sometime between two and three years from now, we wouldn’t be surprised if you could purchase a comparable laptop for about $325 anyway. And, if you don’t need to replace it, you’ve saved that money. Best Buy would object that this isn’t a standard extended warranty. It’s a supercharged warranty plan that will also provide coverage if you spill something on your laptop or drop it and break it. You just have to ask yourself a question. What are the odds that you’ll drop your laptop or spill something on it? They’re probably pretty low if you’re a typical human being. Is it worth spending more than half the price of the laptop just in case you’ll make an uncommon mistake? Probably not. There may be occasional exceptions to this — some Apple users swear by Apple’s AppleCare, for example — but you should generally avoid buying these things. There’s a reason stores are so pushy about extended warranties, and it’s not because they want to help protect you. It’s because they’re making lots of profit from these plans, and they’re making so much profit because they’re not a good deal for customers. Image Credit: Philip Taylor on Flickr     

    Read the article

  • Oracle B2B - Synchronous Request Reply

    - by cdwright
    Introduction

    First off, let me say I didn't create this demo (although I did modify it some). I got it from a member of the B2B development technical staff. Since it came with only a simple readme file, I thought I would take some time and write a more detailed explanation of how it works. Beginning with Oracle SOA Suite PS5 (11.1.1.6), B2B supports synchronous request/reply over HTTP using the b2b/syncreceiver servlet. The demo attached to this blog includes a SOA composite archive that needs to be deployed using JDeveloper, a B2B repository with two agreements that need to be deployed using the B2B console, and a test XML file that gets sent to the b2b/syncreceiver servlet using your favorite SOAP test tool (I'm using Firefox Poster here). You can download the zip file containing the demo here.

    The demo works by sending the sample XML request file (req.xml) to http://<b2bhost>:8001/b2b/syncreceiver using the SOAP test tool. The syncreceiver servlet keeps the socket connection open between itself and the test tool so that it can synchronously send the reply message back. When B2B receives the inbound request message, it is passed to the SOA composite through the default B2B Fabric binding. A simple reply is created in BPEL and returned to B2B, which then sends the message back to the test tool using that same socket connection. I'll show you the B2B configuration first, then we'll look at the SOA composite.

    Configuring B2B

    No additional configuration is necessary in order to use the syncreceiver servlet; it is already running when you start SOA. After importing the GC_SyncReqRep.zip repository file into B2B, you'll have the typical GlobalChips host trading partner and the Acme remote trading partner.

    Document Management

    The repository contains two very simple custom XML document definitions called Orders and OrdersResponse. In order to determine the trading partner agreement needed to process the inbound Orders document, you need to know two things about it: what it is, and where it came from. So let's look at how B2B identifies the appropriate document definition for the message. The XSDs for these two document definitions are not themselves particularly interesting. Whenever you're dealing with custom XML documents, B2B identifies the appropriate document definition for each XML message using an XPath identification expression. The expression is entered for each of these document definitions under the document administration tab in the B2B console. The full XPath expression for the Orders document is //*[local-name()='shiporder']/*[local-name()='shipto']/*[local-name()='name']/text(). You can see this path in the XSD diagram below and how it uniquely identifies this message. The OrdersResponse document is identified in the same way. The XPath expression for it is //*[local-name()='Response']/*[local-name()='Status']/text(). You can see how its path differs, uniquely identifying the reply from the request.

    Trading Partner Profile

    The trading partner profiles are very simple too. For GlobalChips, a generic identifier is used to identify the sender of the response document using the host trading partner name. For Acme, a generic identifier is also used to identify the sender of the inbound request using the remote trading partner name. The document types are added for the remote trading partner as usual. So the remote trading partner Acme is the sender of the Orders document, and it is the receiver of the OrdersResponse document.
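    For reference, a minimal Orders request matching the identification expression above could look like the sketch below (the element values are placeholders of my own; only the shiporder/shipto/name path is dictated by the XPath):

        <shiporder>
          <shipto>
            <!-- The identification expression keys off the text of this element. -->
            <name>John Smith</name>
          </shipto>
        </shiporder>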
    For the remote trading partner only, there needs to be a dummy channel, which is referenced in the outbound response agreement. The channel is never actually used for delivery; it is just a necessary placeholder that has to be there when creating the agreement.

    Trading Partner Agreement

    The agreements are equally simple. There is no validation, and translation is not an option for a custom XML document type. For the InboundAgreement (request) the document definition is set to OrdersDef. In the Agreement Parameters section the generic identifiers have been added for the host and remote trading partners. That's all that is needed for the inbound transaction. For the OutboundAgreement (response), the document definition is set to OrdersResponseDef and the generic identifiers for the two trading partners are added. The remote trading partner's dummy delivery channel is also added to the agreement.

    SOA Composite

    Import the SOA composite archive into JDeveloper as an EJB JAR file. Open the composite and you should have a project that looks like this. In the composite, open the b2bInboundSyncSvc exposed service and advance through the setup wizard. Select your Application Server Connection and advance to the Operations window. Notice here that the B2B binding is set to Receive; it is not set for Synchronous Request Reply. Continue advancing through the wizard as you normally would and select Finish at the end.

    Now open BPELProcess1 in the composite. The BPEL process is set up as a synchronous request/reply, as you can see below. The while loop is there just to give the process something to do. The actual reply message is prepared in the assignResponseValues assignment, followed by an Invoke of the B2B binding. Open the replyResponse Invoke and go to the Properties tab. You'll see that the fromTradingPartnerId, toTradingPartner, documentTypeName, and documentProtocolRevision properties have been set.

    Testing the Configuration

    To test the configuration, I used Firefox Poster. Enter the URL for the b2b/syncreceiver servlet and browse for the req.xml file that contains the test request message. In the Headers tab, add the property 'from' and give it the value 'Acme'. This is how B2B knows where the message is coming from, and it will use that information along with the document type name to find the right trading partner agreement. Now post the message. You should get back a response with a status of '200 OK'. That's all there is to it.
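    If you prefer the command line to a browser add-on, the same test can be posted with curl. This is a sketch; the host, port and the Content-Type header are my own assumptions:

        curl -X POST "http://<b2bhost>:8001/b2b/syncreceiver" \
             -H "from: Acme" \
             -H "Content-Type: application/xml" \
             --data-binary @req.xml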

    Read the article

  • Notes on implementing Visual Studio 2010 Navigate To

    - by cyberycon
    One of the many neat functions added to Visual Studio in VS 2010 was the Navigate To feature. You can find it by clicking Edit, Navigate To, or by using the keyboard shortcut Ctrl+, (yes, that's Control plus the comma key). This pops up the Navigate To dialog that looks like this: As you type, Navigate To starts searching through a number of different search providers for your term. The entries in the list change as you type, with most providers doing some kind of fuzzy or at least substring matching. If you have C#, C++ or Visual Basic projects in your solution, all symbols defined in those projects are searched. There's also a file search provider, which displays all matching filenames from projects in the current solution as well. And, if you have a Visual Studio package of your own, you can implement a provider too. Micro Focus (where I work) provides the Visual COBOL language inside Visual Studio (http://visualstudiogallery.msdn.microsoft.com/ef9bc810-c133-4581-9429-b01420a9ea40), and we wanted to provide this functionality too. This post provides some notes on the things I discovered, mainly through trial and error, but also with some kind help from devs inside Microsoft. The expectation of Navigate To is that it searches across the whole solution, not just the current project. So in our case, we wanted to search for all COBOL symbols inside all of the Visual COBOL projects in the solution.

    First of all, here's the Microsoft documentation on Navigate To: http://msdn.microsoft.com/en-us/library/ee844862.aspx. It's the reference information on the Microsoft.VisualStudio.Language.NavigateTo.Interfaces namespace, and it lists all the interfaces you will need to implement to create your own Navigate To provider. Navigate To uses Visual Studio's latest mechanism for integrating external functionality and services, the Managed Extensibility Framework (MEF). MEF components don't require any registration with COM or any other registry entries to be found by Visual Studio. Visual Studio looks in several well-known locations for manifest files (extension.vsixmanifest). It then uses reflection to scan for MEF attributes on classes in the assembly to determine which functionality the assembly provides. MEF itself is actually part of the .NET framework, and you can learn more about it here: http://mef.codeplex.com/. To get started with Visual Studio and MEF you could do worse than look at some of the editor examples on the VSX page http://archive.msdn.microsoft.com/vsx. I've also written a small application to help with switching between development and production MEF assemblies, which you can find on CodeProject: http://www.codeproject.com/KB/miscctrl/MEF_Switch.aspx.

    The Navigate To interfaces

    Back to Navigate To, and summarizing the MSDN reference documentation, you need to implement the following interfaces:

    - INavigateToItemProviderFactory
      This is Visual Studio's entry point to your Navigate To implementation, and you must decorate your implementation with the following MEF export attribute: [Export(typeof(INavigateToItemProviderFactory))]
    - INavigateToItemProvider
      Your INavigateToItemProviderFactory needs to return your implementation of INavigateToItemProvider. This class implements StartSearch() and StopSearch(). StartSearch() is the guts of your provider, and we'll come back to it in a minute. This object also needs to implement IDisposable.
    - INavigateToItemDisplayFactory
      Your INavigateToItemProvider hands back NavigateToItem objects to the Navigate To framework.
      But to give you good control over what appears in the Navigate To dialog box, these items will be handed back to your INavigateToItemDisplayFactory, which must create objects implementing INavigateToItemDisplay.
    - INavigateToItemDisplay
      Each of these objects represents one result in the Navigate To dialog box. As well as providing the description and name of the item, this object also has a NavigateTo() method that should be capable of displaying the item in an editor when invoked.

    Carrying out the search

    The lifecycle of your INavigateToItemProvider is the same as that of the Navigate To dialog. This dialog is modal, which makes your implementation a little easier, because you know that the user can't be changing things in editors and the IDE while this dialog is up. But the Navigate To dialog DOES NOT run on the main UI thread of the IDE, so you need to be aware of that if you want to interact with editors or other parts of the IDE UI. When the user invokes the Navigate To dialog, your INavigateToItemProviderFactory gets sent a TryCreateNavigateToItemProvider() message. Instantiate your INavigateToItemProvider and hand it back. The sequence diagram below shows what happens next. Your INavigateToItemProvider will get called with StartSearch(), and passed an INavigateToCallback. StartSearch() is an asynchronous request: you must return from this method as soon as possible, and conduct your search on a separate thread. For each match to the search term, instantiate a NavigateToItem object and send it to INavigateToCallback.AddItem(). But as the user types in the search terms field, Navigate To will invoke your StartSearch() method repeatedly with the changing search term. When you receive the next StartSearch() message, you have to abandon your current search and start a new one. You can't rely on receiving a StopSearch() message every time. Finally, when the Navigate To dialog box is closed by the user, you will get a Dispose() message; that's your cue to abandon any uncompleted searches and dispose of any resources you might be using as part of your search. While you conduct your search, invoke INavigateToCallback.ReportProgress() occasionally to provide feedback about how close you are to completing the search. There does not appear to be any particular requirement for how often you invoke ReportProgress(), and you report your progress as the ratio of two integers. In my implementation I report progress in terms of the number of symbols I've searched over the total number of symbols in my dictionary, and send a progress report every 16 symbols.

    Displaying the Results

    The Navigate To framework invokes INavigateToItemDisplayFactory.CreateItemDisplay() once for each result you passed to the INavigateToCallback. CreateItemDisplay() is passed the NavigateToItem you handed to the callback, and must return an INavigateToItemDisplay object. NavigateToItem is a sealed class which has a few properties, including the name of the symbol. It also has a Tag property, of type object. This enables you to stash away all the information you will need to create your INavigateToItemDisplay, which must implement a NavigateTo() method to display the symbol in an editor when the user double-clicks an entry in the Navigate To dialog box. Since the tag is of type object, it is up to you, the implementor, to decide what kind of object you store there, and how it enables the retrieval of other information which is not included in the NavigateToItem properties.
    Some of the INavigateToItemDisplay properties are self-explanatory, but a couple of them are less obvious:

    - AdditionalInformation
      The string you return here is displayed inside brackets on the same line as the Name property. In English locales, Visual Studio includes the preposition "of". If you look at the first line in the Navigate To screenshot at the top of this article, Book_WebRole.Default is the additional information for textBookAuthor, and is the namespace-qualified type name the symbol appears in. For procedural COBOL code we display the program ID as the additional information.
    - DescriptionItems
      You can use this property to return any textual description you want about the item currently selected. You return a collection of DescriptionItem objects, each of which has a category and a description, both collections of DescriptionRun objects. A DescriptionRun enables you to specify some text and optional formatting, so you have some control over the appearance of the displayed text. The DescriptionItems property is displayed at the bottom of the Navigate To dialog box, with the categories on the left and the descriptions on the right. The Visual COBOL implementation uses it to display more information about the location of an item, making it easier for the user to disambiguate duplicate names (something there can be a lot of in large COBOL applications).

    Summary

    I hope this article is useful for anyone implementing Navigate To. It is a fantastic navigation feature that Microsoft has added to Visual Studio, but at the moment there still don't seem to be any examples of how to implement it, and the reference information on MSDN is a little brief for anyone attempting an implementation.
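    To pull the pieces together, here is a rough skeleton of the provider side. This is a sketch only: the symbol list, the matching logic and the threading policy are placeholder assumptions; the NavigateToItem construction is elided because its constructor takes several arguments not covered here; and Done() comes from my reading of the INavigateToCallback reference rather than from anything in this article.

        using System;
        using System.Collections.Generic;
        using System.ComponentModel.Composition;
        using System.Threading;
        using System.Threading.Tasks;
        using Microsoft.VisualStudio.Language.NavigateTo.Interfaces;

        [Export(typeof(INavigateToItemProviderFactory))]
        public class CobolNavigateToItemProviderFactory : INavigateToItemProviderFactory
        {
            public bool TryCreateNavigateToItemProvider(IServiceProvider serviceProvider,
                                                        out INavigateToItemProvider provider)
            {
                provider = new CobolNavigateToItemProvider();
                return true;
            }
        }

        public class CobolNavigateToItemProvider : INavigateToItemProvider
        {
            // Hypothetical symbol store; a real provider would query its language service.
            private readonly List<string> _symbols = new List<string>();
            private CancellationTokenSource _cts = new CancellationTokenSource();

            public void StartSearch(INavigateToCallback callback, string searchValue)
            {
                // A new search term can arrive without a StopSearch() in between,
                // so cancel any search still in flight.
                _cts.Cancel();
                _cts = new CancellationTokenSource();
                var token = _cts.Token;

                // StartSearch() must return as soon as possible; do the real
                // work on a separate thread.
                Task.Factory.StartNew(() =>
                {
                    int searched = 0;
                    foreach (var symbol in _symbols)
                    {
                        if (token.IsCancellationRequested) return;
                        if (symbol.IndexOf(searchValue, StringComparison.OrdinalIgnoreCase) >= 0)
                        {
                            // Create a NavigateToItem for this match and report it:
                            // callback.AddItem(new NavigateToItem(...));
                            // The item's Tag property carries whatever your
                            // INavigateToItemDisplay will need later.
                        }
                        // Report progress occasionally as a ratio of two integers.
                        if (++searched % 16 == 0)
                            callback.ReportProgress(searched, _symbols.Count);
                    }
                    callback.Done();
                }, token);
            }

            public void StopSearch()
            {
                _cts.Cancel();
            }

            public void Dispose()
            {
                // The dialog has closed; abandon any uncompleted search.
                _cts.Cancel();
            }
        }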

    Read the article

  • Screenshot Tour: Ubuntu Touch 14.04 on a Nexus 7

    - by Chris Hoffman
    Ubuntu 14.04 LTS will "form the basis of the first commercially available Ubuntu tablets," according to Canonical. We installed Ubuntu Touch 14.04 on our own hardware to see what those tablets will be like. We don't recommend installing this yourself, as it's still not a polished, complete experience. We're using "Ubuntu Touch" as shorthand here — apparently this project's new name is "Ubuntu For Devices."

    The Welcome Screen

    Ubuntu's touch interface is all about edge swipes and hidden interface elements — it has a lot in common with Windows 8, actually. You'll see the welcome screen when you boot up or unlock a Ubuntu tablet or phone. If you have new emails, text messages, or other information, it will appear on this screen along with the time and date. If you don't, you'll just see a message saying "No data sources available."

    The Dash

    Swipe in from the right edge of the welcome screen to access the Dash, or home screen. This is actually very similar to the Dash on Ubuntu's Unity desktop. This isn't a surprise — Canonical wants the desktop and touch versions of Ubuntu to use the same code. In the future, the desktop and touch versions of Ubuntu will use the same version of Unity, and Unity will adjust its interface depending on what type of device you're using. Here you'll find apps you have installed and apps available to install. Tap an installed app to launch it, or tap an available app to view more details and install it. Tap the My apps or Available headings to view a complete list of apps you have installed or apps you can install. Tap the Search box at the top of the screen to start searching — this is how you'd search for new apps to install. As you'd expect, a touch keyboard appears when you tap the Search field or any other text field. The launcher isn't just for apps. Tap the Apps heading at the top of the screen and you'll see hidden text appear — Music, Video, and Scopes. This hidden navigation is used throughout Ubuntu's different apps and can be easy to miss at first. Swipe to the left or right to move between these screens. These screens are also similar to the different panels in Unity on the desktop. The Scopes section allows you to view the different search scopes you have installed. These are used to search different sources when you start a search from the Dash. Search from the Music or Videos scopes to search for local media files on your device or media files online. For example, searching in the Music scope will show you music results from Grooveshark by default.

    Navigating Ubuntu Touch

    Swipe in from the left edge anywhere on the system to open the launcher, a bar with shortcuts to apps. This launcher is very similar to the launcher on the left of Ubuntu's Unity desktop — that's the whole idea, after all. Once you've opened an app, you can leave it by swiping in from the left. The launcher will appear — keep moving your finger towards the right edge of the screen. This will swipe the current app off the screen, taking you back to the Dash. Once back on the Dash, you'll see your open apps represented as thumbnails under Recent. Tap a thumbnail here to go back to a running app. To remove an app from here, long-press it and tap the X button that appears. Swipe in from the right edge in any app to quickly switch between recent apps. Swipe in from the right edge and hold your finger down to reveal an application switcher that shows all your recent apps and lets you choose between them. Swipe down from the top of the screen to access the indicator panel.
Here you can connect to Wi-Fi networks, view upcoming events, control GPS and Bluetooth hardware, adjust sound settings, see incoming messages, and more. This panel is for quick access to hardware settings and notifications, just like the indicators on Ubuntu’s Unity desktop. The Apps System settings not included in the pull-down panel are available in the System Settings app. To access it, tap My apps on the Dash and tap System Settings, search for the System Settings app, or open the launcher bar and tap the settings icon. The settings here are a bit limited compared to other operating systems, but many of the important options are available here. You can add Evernote, Ubuntu One, Twitter, Facebook, and Google accounts from here. A free Ubuntu One account is mandatory for downloading and updating apps. A Google account can be used to sync contacts and calendar events. Some apps on Ubuntu are native apps, while many are web apps. For example, the Twitter, Gmail, Amazon, Facebook, and eBay apps included by default are all web apps that open each service’s mobile website as an app. Other applications, such as the Weather, Calendar, Dialer, Calculator, and Notes apps, are native applications. Theoretically, both types of apps will be able to scale to different screen resolutions. Ubuntu Touch and Ubuntu desktop may one day share the same apps, which will adapt to different display sizes and input methods. Like Windows 8 apps, Ubuntu apps hide interface elements by default, providing you with a full-screen view of the content. Swipe up from the bottom of an app’s screen to view its interface elements. For example, swiping up from the bottom of the Web Browser app reveals Back, Forward, and Refresh buttons, along with an address bar and Activity button so you can view current and recent web pages. Swipe up even more from the bottom and you’ll see a button hovering in the middle of the app. Tap the button and you’ll see many more settings. This is an overflow area for application options and functions that can’t fit on the navigation bar. The Terminal app has a few surprising Easter eggs in this panel, including a “Hack into the NSA” option. Tap it and the following text will appear in the terminal: That’s not very nice, now tracing your location . . . . . . . . . . . .Trace failed You got away this time, but don’t try again. We’d expect to see such Easter eggs disappear before Ubuntu Touch actually ships on real devices. Ubuntu Touch has come a long way, but it’s still not something you want to use today. For example, it doesn’t even have a built-in email client — you’ll have to use your email service’s mobile website. Few apps are available, and many of the ones that do exist are just mobile websites. It’s not a polished operating system intended for normal users yet — it’s more of a preview for developers and device manufacturers. If you really want to try it yourself, you can install it on a Wi-Fi Nexus 7 (2013), Nexus 10, or Nexus 4 device. Follow Ubuntu’s installation instructions here.

    Read the article

  • A more concise example that illustrates that type inference can be very costly?

    - by mrrusof
    It was brought to my attention that the cost of type inference in a functional language like OCaml can be very high. The claim is that there is a sequence of expressions such that for each expression the length of the corresponding type is exponential in the length of the expression. I devised the sequence below. My question is: do you know of a sequence with more concise expressions that achieves the same types? # fun a -> a;; - : 'a -> 'a = <fun> # fun b a -> b a;; - : ('a -> 'b) -> 'a -> 'b = <fun> # fun c b a -> c b (b a);; - : (('a -> 'b) -> 'b -> 'c) -> ('a -> 'b) -> 'a -> 'c = <fun> # fun d c b a -> d c b (c b (b a));; - : ((('a -> 'b) -> 'b -> 'c) -> ('a -> 'b) -> 'c -> 'd) -> (('a -> 'b) -> 'b -> 'c) -> ('a -> 'b) -> 'a -> 'd = <fun> # fun e d c b a -> e d c b (d c b (c b (b a)));; - : (((('a -> 'b) -> 'b -> 'c) -> ('a -> 'b) -> 'c -> 'd) -> (('a -> 'b) -> 'b -> 'c) -> ('a -> 'b) -> 'd -> 'e) -> ((('a -> 'b) -> 'b -> 'c) -> ('a -> 'b) -> 'c -> 'd) -> (('a -> 'b) -> 'b -> 'c) -> ('a -> 'b) -> 'a -> 'e = <fun> # fun f e d c b a -> f e d c b (e d c b (d c b (c b (b a))));; - : ((((('a -> 'b) -> 'b -> 'c) -> ('a -> 'b) -> 'c -> 'd) -> (('a -> 'b) -> 'b -> 'c) -> ('a -> 'b) -> 'd -> 'e) -> ((('a -> 'b) -> 'b -> 'c) -> ('a -> 'b) -> 'c -> 'd) -> (('a -> 'b) -> 'b -> 'c) -> ('a -> 'b) -> 'e -> 'f) -> (((('a -> 'b) -> 'b -> 'c) -> ('a -> 'b) -> 'c -> 'd) -> (('a -> 'b) -> 'b -> 'c) -> ('a -> 'b) -> 'd -> 'e) -> ((('a -> 'b) -> 'b -> 'c) -> ('a -> 'b) -> 'c -> 'd) -> (('a -> 'b) -> 'b -> 'c) -> ('a -> 'b) -> 'a -> 'f = <fun>

    Read the article

  • Portable class libraries and fetching JSON

    - by Jeff
    After much delay, we finally have the Windows Phone 8 SDK to go along with the Windows 8 Store SDK, or whatever ridiculous name they’re giving it these days. (Seriously… that no one could come up with a suitable replacement for “metro” is disappointing in an otherwise exciting set of product launches.) One of the neat-o things is the potential for code reuse, particularly across Windows 8 and Windows Phone 8 apps. This is accomplished in part with portable class libraries, which allow you to share code between different types of projects. With some other techniques and quasi-hacks, you can share some amount of code, and I saw it mentioned in one of the Build videos that they’re seeing as much as 70% code reuse. Not bad. However, I’ve already hit a super annoying snag. It appears that the HttpClient class, with its idiot-proof async goodness, is not included in the Windows Phone 8 class libraries. Shock, gasp, horror, disappointment, etc. The delay in releasing it already caused dismay among developers, and I’m sure this won’t help. So I started refactoring some code I already had for a Windows 8 Store app (ugh) to accommodate the use of HttpWebRequest instead. I haven’t tried it in a Windows Phone 8 project beyond compiling, but it appears to work. I used this StackOverflow answer as a starting point since it’s been a long time since I used HttpWebRequest, and keep in mind that it has no exception handling. It needs refinement. The goal here is to new up the client, and call a method that returns some deserialized JSON objects from the Intertubes. Adding facilities for headers or cookies is probably a good next step. You need to use NuGet for a Json.NET reference. So here’s the start: using System.Net; using System.Threading.Tasks; using Newtonsoft.Json; using System.IO; namespace MahProject {     public class ServiceClient<T> where T : class     {         public ServiceClient(string url)         {             _url = url;         }         private readonly string _url;         public async Task<T> GetResult()         {             var response = await MakeAsyncRequest(_url);             var result = JsonConvert.DeserializeObject<T>(response);             return result;         }         public static Task<string> MakeAsyncRequest(string url)         {             var request = (HttpWebRequest)WebRequest.Create(url);             request.ContentType = "application/json";             Task<WebResponse> task = Task.Factory.FromAsync(                 request.BeginGetResponse,                 asyncResult => request.EndGetResponse(asyncResult),                 null);             return task.ContinueWith(t => ReadStreamFromResponse(t.Result));         }         private static string ReadStreamFromResponse(WebResponse response)         {             using (var responseStream = response.GetResponseStream())                 using (var reader = new StreamReader(responseStream))                 {                     var content = reader.ReadToEnd();                     return content;                 }         }     } } Calling it in some kind of repository class may look like this, if you wanted to return an array of Park objects (Park model class omitted because it doesn’t matter): public class ParkRepo {     public async Task<Park[]> GetAllParks()     {         var client = new ServiceClient<Park[]>("http://superfoo/endpoint");         return await client.GetResult();     } } And then from inside your WP8 or W8S app (see what I did there?), when you load state or do some kind of UI event handler 
(making sure the method uses the async keyword): var parkRepo = new ParkRepo(); var results = await parkRepo.GetAllParks(); // bind results to some UI or observable collection or something Hopefully this saves you a little time.
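The post flags exception handling as the missing refinement, so here is a minimal sketch of one way to bolt it on. To be clear, this is my own illustrative variant and not Jeff's code: the SafeRequests and MakeAsyncRequestSafe names, and the choice to wrap failures in InvalidOperationException, are assumptions.

    using System;
    using System.IO;
    using System.Net;
    using System.Threading.Tasks;

    public static class SafeRequests
    {
        // Hypothetical variant of MakeAsyncRequest with basic error handling.
        public static async Task<string> MakeAsyncRequestSafe(string url)
        {
            var request = (HttpWebRequest)WebRequest.Create(url);
            request.ContentType = "application/json";
            try
            {
                // Bridge the Begin/End pair to a Task, same as the original code.
                var response = await Task.Factory.FromAsync(
                    request.BeginGetResponse,
                    asyncResult => request.EndGetResponse(asyncResult),
                    null);
                using (var responseStream = response.GetResponseStream())
                using (var reader = new StreamReader(responseStream))
                {
                    return reader.ReadToEnd();
                }
            }
            catch (WebException ex)
            {
                // DNS failures, timeouts and HTTP error statuses all surface as
                // WebException; wrapping keeps the failing URL attached for the caller.
                throw new InvalidOperationException("Request to " + url + " failed.", ex);
            }
        }
    }

GetResult() could then await MakeAsyncRequestSafe(_url) unchanged and let whatever layer owns the UI decide how to report the wrapped exception.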

    Read the article

  • Java EE 7 Survey Results!

    - by reza_rahman
    On November 8th, the Java EE EG posted a survey to gather broad community feedback on a number of critical open issues. For reference, you can find the original survey here. We kept the survey open for about three weeks until November 30th. To our delight, over 1100 developers took time out of their busy lives to let their voices be heard! The results of the survey were sent to the EG on December 12th. The subsequent EG discussion is available here. The exact summary sent to the EG is available here. We would like to take this opportunity to thank each and every one of the individuals who took the survey. It is very appreciated, encouraging and worth its weight in gold. In particular, I tried to capture just some of the high-quality, intelligent, thoughtful and professional comments in the summary to the EG. I highly encourage you to continue to stay involved, perhaps through the Adopt-a-JSR program. We would also like to sincerely thank java.net, JavaLobby, TSS and InfoQ for helping spread the word about the survey. Below is a brief summary of the results... APIs to Add to Java EE 7 Full/Web Profile The first question asked which of the four new candidate APIs (WebSocket, JSON-P, JBatch and JCache) should be added to the Java EE 7 Full and Web profile respectively. As the following graph shows, there was significant support for adding all the new APIs to the full profile: Support is relatively the weakest for Batch 1.0, but still good. A lot of folks saw WebSocket 1.0 as a critical technology with comments such as this one: "A modern web application needs Web Sockets as first class citizens" While it is clearly seen as being important, a number of commenters expressed dissatisfaction with the lack of a higher-level JSON data binding API as illustrated by this comment: "How come we don't have a Data Binding API for JSON" JCache was also seen as being very important as expressed with comments like: "JCache should really be that foundational technology on which other specs have no fear to depend on" The results for the Web Profile are not surprising. While there is strong support for adding WebSocket 1.0 and JSON-P 1.0 to the Web Profile, support for adding JCache 1.0 and Batch 1.0 is relatively weak. There was actually significant opposition to adding Batch 1.0 (with 51.8% casting a 'No' vote). Enabling CDI by Default The second question asked was whether CDI should be enabled in Java EE environments by default. A significant majority of developers (73.3%) supported enabling CDI; only 13.8% opposed. Comments such as these two reflect a strong general support for CDI as well as a desire for better Java EE alignment with CDI: "CDI makes Java EE quite valuable!" "Would prefer to unify EJB, CDI and JSF lifecycles" There is, however, a palpable concern around the performance impact of enabling CDI by default as exemplified by this comment: "Java EE projects in most cases use CDI, hence it is sensible to enable CDI by default when creating a Java EE application. 
However, there are several issues if CDI is enabled by default: scanning can be slow - not all libs use CDI (hence, scanning is not needed)" Another significant concern appears to be around backwards compatibility and conflict with other JSR 330 implementations like Spring: "I am leaning towards yes, however can easily imagine situations where errors would be caused by automatically activating CDI, especially in cases of backward compatibility where another DI engine (such as Spring and the like) happens to use the same mechanics to inject dependencies and in that case there would be an overlap in injections and probably an uncertain outcome" Some commenters such as this one attempt to suggest solutions to these potential issues: "If you have Spring in use and use javax.inject.Inject then you might get some unexpected behavior that could be equally confusing. I guess there will be a way to switch CDI off. I'm tempted to say yes but am cautious for this reason" Consistent Usage of @Inject The third question was around using CDI/JSR 330 @Inject consistently vs. allowing JSRs to create their own injection annotations. A slight majority of developers (53.3%) supported using @Inject consistently across JSRs. 28.8% said using custom injection annotations is OK, while 18.0% were not sure. The vast majority of commenters were strongly supportive of CDI and general Java EE alignment with CDI as illustrated by these comments: "Dependency Injection should be standard from now on in EE. It should use CDI as that is the DI mechanism in EE and is quite powerful. Having a new JSR specific DI mechanism to deal with just means more reflection, more proxies. JSRs should also be constructed to allow some of their objects Injectable. @Inject @TransactionalCache or @Inject @JMXBean etc...they should define the annotations and stereotypes to make their code less procedural. Dog food it. If there is a shortcoming in CDI for a JSR fix it and we will all be grateful" "We're trying to make this a comprehensive platform, right? Injection should be a fundamental part of the platform; everything else should build on the same common infrastructure. Each-having-their-own is just a recipe for chaos and having to learn the same thing 10 different ways" Expanding the Use of @Stereotype The fourth question was about expanding CDI @Stereotype to cover annotations across Java EE beyond just CDI. A significant majority of developers (62.3%) supported expanding the use of @Stereotype; only 13.3% opposed. A majority of commenters supported the idea as well as the theme of general CDI/Java EE alignment as expressed in these examples: "Just like defining new types for (compositions of) existing classes, stereotypes can help make software development easier" "This is especially important if many EJB services are decoupled from the EJB component model and can be applied via individual annotations to Java EE components. @Stateless is a nicely compact annotation. Code will not improve if that will have to be applied in the future as @Transactional, @Pooled, @Secured, @Singlethreaded, @...." Some, however, expressed concerns around increased complexity such as this commenter: "Could be very convenient, but I'm afraid if it wouldn't make some important class annotations less visible" Expanding Interceptor Use The final set of questions was about expanding interceptors further across Java EE... A very solid 96.3% of developers wanted to expand interceptor use to all Java EE components. 
35.7% even wanted to expand interceptors to other Java EE managed classes. Most developers (54.9%) were not sure if there is any place that injection is supported that should not support interceptors. 32.8% thought any place that supports injection should also support interceptors. Only 12.2% were certain that there are places where injection should be supported but not interceptors. The comments reflected the diversity of opinions, generally supportive of interceptors: "I think interceptors are as fundamental as injection and should be available anywhere in the platform" "The whole usage of interceptors still needs to take hold in Java programming, but it is a powerful technology that needs some time in the Sun. Basically it should become part of Java SE, maybe the next step after lambdas?" A distinct chain of thought separated interceptors from filters and listeners: "I think that the Servlet API already provides a rich set of possibilities to hook yourself into different Servlet container events. I don't find a need to 'pollute' the Servlet model with the Interceptors API"

    Read the article

  • LINQ – SequenceEqual() method

    - by nmarun
    I have been looking at LINQ extension methods and have blogged about what I learned from them in my blog space. Next in line is the SequenceEqual() method. Here’s the description of this method: “Determines whether two sequences are equal by comparing the elements by using the default equality comparer for their type.” Let’s play with some code: 1: int[] numbers = { 5, 4, 1, 3, 9, 8, 6, 7, 2, 0 }; 2: // int[] numbersCopy = numbers; 3: int[] numbersCopy = { 5, 4, 1, 3, 9, 8, 6, 7, 2, 0 }; 4:  5: Console.WriteLine(numbers.SequenceEqual(numbersCopy)); This gives an output of ‘True’ – basically compares each of the elements in the two arrays and returns true in this case. The result is the same even if you uncomment line 2 and comment line 3 (I didn’t need to say that now did I?). So then what happens for custom types? For this, I created a Product class with the following definition: 1: class Product 2: { 3: public int ProductId { get; set; } 4: public string Name { get; set; } 5: public string Category { get; set; } 6: public DateTime MfgDate { get; set; } 7: public Status Status { get; set; } 8: } 9:  10: public enum Status 11: { 12: Active = 1, 13: InActive = 2, 14: OffShelf = 3, 15: } In my calling code, I’m just adding a few product items: 1: private static List<Product> GetProducts() 2: { 3: return new List<Product> 4: { 5: new Product 6: { 7: ProductId = 1, 8: Name = "Laptop", 9: Category = "Computer", 10: MfgDate = new DateTime(2003, 4, 3), 11: Status = Status.Active, 12: }, 13: new Product 14: { 15: ProductId = 2, 16: Name = "Compact Disc", 17: Category = "Water Sport", 18: MfgDate = new DateTime(2009, 12, 3), 19: Status = Status.InActive, 20: }, 21: new Product 22: { 23: ProductId = 3, 24: Name = "Floppy", 25: Category = "Computer", 26: MfgDate = new DateTime(1993, 3, 7), 27: Status = Status.OffShelf, 28: }, 29: }; 30: } Now for the actual check: 1: List<Product> products1 = GetProducts(); 2: List<Product> products2 = GetProducts(); 3:  4: Console.WriteLine(products1.SequenceEqual(products2)); This one returns ‘False’ and the reason is simple – this one checks for reference equality and the products in both the lists get different ‘memory addresses’ (sounds like I’m talking in ‘C’). In order to modify this behavior and return a ‘True’ result, we need to modify the Product class as follows: 1: class Product : IEquatable<Product> 2: { 3: public int ProductId { get; set; } 4: public string Name { get; set; } 5: public string Category { get; set; } 6: public DateTime MfgDate { get; set; } 7: public Status Status { get; set; } 8:  9: public override bool Equals(object obj) 10: { 11: return Equals(obj as Product); 12: } 13:  14: public bool Equals(Product other) 15: { 16: //Check whether the compared object is null. 17: if (ReferenceEquals(other, null)) return false; 18:  19: //Check whether the compared object references the same data. 20: if (ReferenceEquals(this, other)) return true; 21:  22: //Check whether the products' properties are equal. 23: return ProductId.Equals(other.ProductId) 24: && Name.Equals(other.Name) 25: && Category.Equals(other.Category) 26: && MfgDate.Equals(other.MfgDate) 27: && Status.Equals(other.Status); 28: } 29:  30: // If Equals() returns true for a pair of objects 31: // then GetHashCode() must return the same value for these objects. 
32: // read why in the following articles: 33: // http://geekswithblogs.net/akraus1/archive/2010/02/28/138234.aspx 34: // http://stackoverflow.com/questions/371328/why-is-it-important-to-override-gethashcode-when-equals-method-is-overriden-in-c 35: public override int GetHashCode() 36: { 37: //Get hash code for the ProductId field. 38: int hashProductId = ProductId.GetHashCode(); 39:  40: //Get hash code for the Name field if it is not null. 41: int hashName = Name == null ? 0 : Name.GetHashCode(); 42:  43: //Get hash code for the Category field. 44: int hashCategory = Category.GetHashCode(); 45:  46: //Get hash code for the MfgDate field. 47: int hashMfgDate = MfgDate.GetHashCode(); 48:  49: //Get hash code for the Status field. 50: int hashStatus = Status.GetHashCode(); 51: //Calculate the hash code for the product. 52: return hashProductId ^ hashName ^ hashCategory & hashMfgDate & hashStatus; 53: } 54:  55: public static bool operator ==(Product a, Product b) 56: { 57: // Enable a == b for null references to return the right value 58: if (ReferenceEquals(a, b)) 59: { 60: return true; 61: } 62: // If one is null and the other not. Remember a==null would lead to a stack overflow! 63: if (ReferenceEquals(a, null)) 64: { 65: return false; 66: } 67: return a.Equals((object)b); 68: } 69:  70: public static bool operator !=(Product a, Product b) 71: { 72: return !(a == b); 73: } 74: } Now THAT kinda looks overwhelming. But let’s take one simple step at a time. OK, the first thing you’ve noticed is that the class implements the IEquatable<Product> interface – the key step towards achieving our goal. This interface provides us with an ‘Equals’ method to perform the test for equality with another Product object, in this case. This method is called in the following situations: when you do a ProductInstance.Equals(AnotherProductInstance) and when you perform actions like Contains<T>, IndexOf() or Remove() on your collection Coming to the Equals method defined from line 14 onwards. The two ‘if’ blocks check for null and referential equality using the ReferenceEquals() method defined in the Object class. Line 23 is where I’m doing the actual check on the properties of the Product instances. This is what returns the ‘True’ for us when we run the application. I have also overridden the Object.Equals() method which calls the Equals() method of the interface. One thing to remember is that anytime you override the Equals() method, it’s a good practice to override the GetHashCode() method and overload the ‘==’ and the ‘!=’ operators. For detailed information on this, please read this and this. Since we’ve overloaded the operators as well, we get ‘True’ when we do actions like: 1: Console.WriteLine(products1.Contains(products2[0])); 2: Console.WriteLine(products1[0] == products2[0]); This completes the full circle on the SequenceEqual() method. See the code used in the article here.
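Worth noting: if you cannot (or would rather not) touch the Product class at all, SequenceEqual() also has an overload that accepts an IEqualityComparer<T>, so the comparison logic can live in its own class. A minimal sketch under that assumption follows; it relies on the Product class from the article, and the ProductComparer name is mine, not something from the original post:

    using System.Collections.Generic;
    using System.Linq;

    // External comparer: Product itself stays unmodified.
    class ProductComparer : IEqualityComparer<Product>
    {
        public bool Equals(Product x, Product y)
        {
            if (ReferenceEquals(x, y)) return true;
            if (x == null || y == null) return false;
            // Same property-by-property check as the IEquatable<Product> version.
            return x.ProductId == y.ProductId
                && x.Name == y.Name
                && x.Category == y.Category
                && x.MfgDate == y.MfgDate
                && x.Status == y.Status;
        }

        public int GetHashCode(Product p)
        {
            // Only needs to be consistent with Equals above:
            // equal products must produce equal hash codes.
            return p.ProductId.GetHashCode()
                ^ (p.Name == null ? 0 : p.Name.GetHashCode());
        }
    }

With that in place, Console.WriteLine(products1.SequenceEqual(products2, new ProductComparer())); prints True without any changes to Product, which comes in handy when the element type lives in an assembly you don't control.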

    Read the article

  • Beyond Cloud Technology, Enabling A More Agile and Responsive Organization

    - by sxkumar
    This is the second part of the blog “Clouds, Clouds Everywhere But not a Drop of Rain”. In the first part, I shared how a broad-based transformation makes cloud more than a technology initiative. In this section, I will describe how it also requires people (organizational) and process changes, and how these changes are as critical as the choice of the right tools and technology. People: Most IT organizations have a fairly complex organizational structure. There are different groups, managing different pieces of the puzzle, and yet, they don't always work together. Provisioning a new application therefore may require a request to float endlessly through system administrators, DBAs and middleware admin worlds – resulting in long delays and constant finger pointing. Cloud users expect end-to-end automation - which requires these silos to be greatly simplified, if not completely eliminated. Most customers I talk to acknowledge this problem but are quick to admit that such a transformation is hard. As hard as it may be, I am afraid that the status quo is no longer an option. Sticking to an organizational structure that was created ages ago will not only impede cloud adoption, it also risks making IT skills increasingly irrelevant in a world that is rapidly moving towards converged applications and infrastructure. Process: Most IT organizations today operate with a mindset that they must fully "control" access to any and all types of IT services. This in turn leads to people clinging to outdated manual approval processes. While requiring approvals for scarce resources makes sense, insisting that every single request must be manually approved defeats the very purpose of cloud. Not only does this cause delays, at least partially negating the agility benefits, it also results in gross inefficiency. In a cloud environment, self-service access should be governed by policies and quotas that administrators can define upfront. For a cloud initiative to be successful, IT organizations MUST be ready to empower users by giving them real control rather than insisting on brokering every single interaction between users and the cloud resources. Technology: From a technology perspective, cloud is about consolidation, standardization and automation. A consolidated and standardized infrastructure helps increase utilization and reduces cost. Additionally, it enables a much higher degree of automation - thereby providing users the required agility while minimizing operational costs. Obviously, automation is the key to cloud. Unfortunately it hasn’t received as much attention within enterprises as it should have. Many organizations are just now waking up to the criticality of automation, and it still often gets relegated to the back burner in favor of other "high priority" projects. However, it is important to understand that without the right type and level of automation, cloud will remain a distant dream for most enterprises. This in turn makes the choice of cloud management software extremely critical. For cloud management software to be effective in an enterprise environment, it must meet the following qualifications: Broad and Deep Solution It should offer a broad and deep solution to enable the kind of broad-based transformation we are talking about. Its footprint must cover physical and virtual systems, as well as infrastructure, database and application tiers. Too many enterprises choose to equate cloud with virtualization. 
While virtualization is a critical component of a cloud solution, it is just a component and not the whole solution. Similarly, too many people tend to equate cloud with Infrastructure-as-a-Service (IaaS). While it is perfectly reasonable to treat IaaS as a starting point, it is important to realize that it is just the first stepping stone - and on its own it can only provide limited business benefits. It is actually the higher level services, such as (application) platform and business applications, that will bring about a more meaningful transformation to your enterprise. Run and Manage Efficiently Your Mission Critical Applications It should not only be able to run your mission critical applications, it should do so better than before. For enterprises, applications and data are the critical business assets. As such, if you are building a cloud platform that cannot run your ERP application, it isn't truly an "enterprise cloud". Also, be wary of vendors who try to sell you the idea that your applications must be written in a certain way to be able to run on the cloud. That is nothing but a bogus, self-serving argument. For the cloud to be meaningful to enterprises, it should adapt to your applications - and not the other way around. Automated, Integrated Set of Cloud Management Capabilities At the root of many of the problems plaguing enterprise IT today is complexity. A complex maze of tools and technology, coupled with archaic processes, results in an environment which is inflexible, inefficient and simply too hard to manage. Management tool consolidation, therefore, is key to the success of your cloud, as tool proliferation adds to complexity, encourages compartmentalization and defeats the very purpose that you are building the cloud for. Decision makers ought to be extra cautious about vendors trying to sell them a "suite" of disparate and loosely integrated products as a cloud solution. An effective enterprise cloud management solution needs to provide a tightly integrated set of capabilities for all aspects of cloud lifecycle management. A simple question to ask: will your environment be more or less complex after you implement your cloud? More often than not, the answer will surprise you. At Oracle, we have understood these challenges and have been working hard to create cloud solutions that are relevant and meaningful for enterprises. And we have been doing it for much longer than you may think. Oracle was one of the very first enterprise software companies to make our products available on the Amazon Cloud. As far back as 2007, we created new cloud solutions such as Cloud Database Backup that are helping customers like Amazon save millions every year. Our cloud solution portfolio is also the broadest and deepest in the industry - covering public, private and hybrid clouds across infrastructure, platform and applications. It is no coincidence therefore that the Oracle Cloud today offers the most comprehensive set of public cloud services in the industry. And in large part, this has been made possible thanks to our years of investment in creating cloud enabling technologies. I will dedicate the third and final part of the blog “Clouds, Clouds Everywhere But not a Drop of Rain” to Oracle Cloud Technologies Building Blocks and how they map into our vision of Enterprise Cloud. Stay Tuned.

    Read the article

  • Android Remote Service Keeps Restarting

    - by user244190
    Ok so I've built an app that uses a remote service to do some real time GPS tracking. I am using the below code to start and bind to the service. The remote service uses AIDL, sets up a notification icon, runs the GPS and locationListener. In onLocationChanged, a handler sends data back to the caller via the callback. Pretty much straight out of the examples and resources online. I want to allow the service to continue running even if the app closes. When the app is restarted, I want the app to again bind to the service (using the existing service if running) and again receive data from the tracker. I currently have the app mode set to singleTask and cannot use singleInstance due to another issue. My problem is that quite often, even after the app and service are shut down either from the app itself, or from AdvancedTaskKiller, or a force close, the service will restart and initialize the GPS. Touching the notification will open the app. I again stop the tracking (which removes the notification and turns off the GPS), close the app, and again after a few seconds the service restarts. The only way to stop it is to power off the phone. What can I do to stop this from happening? Does it have to do with the mode of operation? START_NOT_STICKY or START_REDELIVER_INTENT? Or do I need to use stopSelf()? My understanding is that if the service is not running when I use bindService() that the service will be created...so do I really need to use start/stopService also? I thought I would need to use it if I want the service to run even after the app is closed. That is why I do not unbind/stop the service in onDestroy(). Is this correct? I've not seen any other info on this, so I'm not sure where to look. Please help! Thanks, Patrick //Remote Service Startup try{ startService(); }catch (Exception e) { Toast.makeText(ctx, e.getMessage().toString(), Toast.LENGTH_SHORT).show(); } } try{ bindService(); }catch (Exception e) { Toast.makeText(ctx, e.getMessage().toString(), Toast.LENGTH_SHORT).show(); } //Remote service shutdown try { unbindService(); }catch(Exception e) { Toast.makeText(ctx, e.getMessage().toString(), Toast.LENGTH_SHORT).show(); } try{ stopService(); }catch(Exception e) { Toast.makeText(ctx, e.getMessage().toString(), Toast.LENGTH_SHORT).show(); } private void startService() { if( myAdapter.trackServiceStarted() ) { if(SETTING_DEBUG_MODE) Toast.makeText(this, "Service already started", Toast.LENGTH_SHORT).show(); started = true; if(!myAdapter.trackDataExists()) insertTrackData(); updateServiceStatus(); } else { startService( new Intent ( "com.codebase.TRACKING_SERVICE" ) ); Log.d( "startService()", "startService()" ); started = true; updateServiceStatus(); } } private void stopService() { stopService( new Intent ( "com.codebase.TRACKING_SERVICE" ) ); Log.d( "stopService()", "stopService()" ); started = false; updateServiceStatus(); } private void bindService() { bindService(new Intent(ITrackingService.class.getName()), mConnection, Context.BIND_AUTO_CREATE); bindService(new Intent(ITrackingSecondary.class.getName()), mTrackingSecondaryConnection, Context.BIND_AUTO_CREATE); started = true; } private void unbindService() { try { mTrackingService.unregisterCallback(mCallback); } catch (RemoteException e) { // There is nothing special we need to do if the service // has crashed. 
e.getMessage(); } started = false; } private ServiceConnection mConnection = new ServiceConnection() { public void onServiceConnected(ComponentName className, IBinder service) { // This is called when the connection with the service has been // established, giving us the service object we can use to // interact with the service. We are communicating with our // service through an IDL interface, so get a client-side // representation of that from the raw service object. mTrackingService = ITrackingService.Stub.asInterface(service); // We want to monitor the service for as long as we are // connected to it. try { mTrackingService.registerCallback(mCallback); } catch (RemoteException e) { // In this case the service has crashed before we could even // do anything with it; we can count on soon being // disconnected (and then reconnected if it can be restarted) // so there is no need to do anything here. } } public void onServiceDisconnected(ComponentName className) { // This is called when the connection with the service has been // unexpectedly disconnected -- that is, its process crashed. mTrackingService = null; } }; private ServiceConnection mTrackingSecondaryConnection = new ServiceConnection() { public void onServiceConnected(ComponentName className, IBinder service) { // Connecting to a secondary interface is the same as any // other interface. mTrackingSecondaryService = ITrackingSecondary.Stub.asInterface(service); try{ mTrackingSecondaryService.setTimePrecision(SETTING_TIME_PRECISION); mTrackingSecondaryService.setDistancePrecision(SETTING_DISTANCE_PRECISION); } catch (RemoteException e) { // In this case the service has crashed before we could even // do anything with it; we can count on soon being // disconnected (and then reconnected if it can be restarted) // so there is no need to do anything here. } } public void onServiceDisconnected(ComponentName className) { mTrackingSecondaryService = null; } }; //TrackService onDestroy() public void onDestroy() { try{ if(lm != null) { lm.removeUpdates(this); } if(mNotificationManager != null) { mNotificationManager.cancel(R.string.local_service_started); } Toast.makeText(this, "Service stopped", Toast.LENGTH_SHORT).show(); }catch (Exception e){ Toast.makeText(this, e.getMessage(), Toast.LENGTH_SHORT).show(); } // Unregister all callbacks. mCallbacks.kill(); // Remove the next pending message to increment the counter, stopping // the increment loop. 
mHandler.removeMessages(REPORT_MSG); super.onDestroy(); } ServiceConnectionLeaked: I'm seeing a lot of these: 04-21 09:25:23.347: ERROR/ActivityThread(3246): Activity com.codebase.GPSTest has leaked ServiceConnection com.codebase.GPSTest$6@4482d428 that was originally bound here 04-21 09:25:23.347: ERROR/ActivityThread(3246): android.app.ServiceConnectionLeaked: Activity com.codebase.GPSTest has leaked ServiceConnection com.codebase.GPSTest$6@4482d428 that was originally bound here 04-21 09:25:23.347: ERROR/ActivityThread(3246): at android.app.ActivityThread$PackageInfo$ServiceDispatcher.<init>(ActivityThread.java:977) 04-21 09:25:23.347: ERROR/ActivityThread(3246): at android.app.ActivityThread$PackageInfo.getServiceDispatcher(ActivityThread.java:872) 04-21 09:25:23.347: ERROR/ActivityThread(3246): at android.app.ApplicationContext.bindService(ApplicationContext.java:796) 04-21 09:25:23.347: ERROR/ActivityThread(3246): at android.content.ContextWrapper.bindService(ContextWrapper.java:337) 04-21 09:25:23.347: ERROR/ActivityThread(3246): at com.codebase.GPSTest.bindService(GPSTest.java:2206) 04-21 09:25:23.347: ERROR/ActivityThread(3246): at com.codebase.GPSTest.onStartStopClick(GPSTest.java:1589) 04-21 09:25:23.347: ERROR/ActivityThread(3246): at com.codebase.GPSTest.onResume(GPSTest.java:1210) 04-21 09:25:23.347: ERROR/ActivityThread(3246): at android.app.Instrumentation.callActivityOnResume(Instrumentation.java:1149) 04-21 09:25:23.347: ERROR/ActivityThread(3246): at android.app.Activity.performResume(Activity.java:3763) 04-21 09:25:23.347: ERROR/ActivityThread(3246): at android.app.ActivityThread.performResumeActivity(ActivityThread.java:2937) 04-21 09:25:23.347: ERROR/ActivityThread(3246): at android.app.ActivityThread.handleResumeActivity(ActivityThread.java:2965) 04-21 09:25:23.347: ERROR/ActivityThread(3246): at android.app.ActivityThread.handleLaunchActivity(ActivityThread.java:2516) 04-21 09:25:23.347: ERROR/ActivityThread(3246): at android.app.ActivityThread.handleRelaunchActivity(ActivityThread.java:3625) 04-21 09:25:23.347: ERROR/ActivityThread(3246): at android.app.ActivityThread.access$2300(ActivityThread.java:119) 04-21 09:25:23.347: ERROR/ActivityThread(3246): at android.app.ActivityThread$H.handleMessage(ActivityThread.java:1867) 04-21 09:25:23.347: ERROR/ActivityThread(3246): at android.os.Handler.dispatchMessage(Handler.java:99) 04-21 09:25:23.347: ERROR/ActivityThread(3246): at android.os.Looper.loop(Looper.java:123) 04-21 09:25:23.347: ERROR/ActivityThread(3246): at android.app.ActivityThread.main(ActivityThread.java:4363) 04-21 09:25:23.347: ERROR/ActivityThread(3246): at java.lang.reflect.Method.invokeNative(Native Method) 04-21 09:25:23.347: ERROR/ActivityThread(3246): at java.lang.reflect.Method.invoke(Method.java:521) 04-21 09:25:23.347: ERROR/ActivityThread(3246): at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:860) 04-21 09:25:23.347: ERROR/ActivityThread(3246): at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:618) 04-21 09:25:23.347: ERROR/ActivityThread(3246): at dalvik.system.NativeStart.main(Native Method) And these: Is this OK, or do I need to make sure I deactivate/close? 04-21 09:58:55.487: INFO/dalvikvm(3440): Uncaught exception thrown by finalizer (will be discarded): 04-21 09:58:55.487: INFO/dalvikvm(3440): Ljava/lang/IllegalStateException;: Finalizing cursor android.database.sqlite.SQLiteCursor@447ef258 on gps_data that has not been deactivated or closed 04-21 09:58:55.487: INFO/dalvikvm(3440): at 
android.database.sqlite.SQLiteCursor.finalize(SQLiteCursor.java:596) 04-21 09:58:55.487: INFO/dalvikvm(3440): at dalvik.system.NativeStart.run(Native Method)

    Read the article

  • Please help me understand why my XSL Transform is not transforming

    - by Damovisa
    I'm trying to transform one XML format to another using XSL. Try as I might, I can't seem to get a result. I've hacked away at this for a while now and I've had no success. I'm not even getting any exceptions. I'm going to post the entire code and hopefully someone can help me work out what I've done wrong. I'm aware there are likely to be problems in the xsl I have in terms of selects and matches, but I'm not fussed about that at the moment. The output I'm getting is the input XML without any XML tags. The transformation is simply not occurring. Here's my XML Document: <?xml version="1.0"?> <Transactions> <Account> <PersonalAccount> <AccountNumber>066645621</AccountNumber> <AccountName>A Smith</AccountName> <CurrentBalance>-200125.96</CurrentBalance> <AvailableBalance>0</AvailableBalance> <AccountType>LOAN</AccountType> </PersonalAccount> </Account> <StartDate>2010-03-01T00:00:00</StartDate> <EndDate>2010-03-23T00:00:00</EndDate> <Items> <Transaction> <ErrorNumber>-1</ErrorNumber> <Amount>12000</Amount> <Reference>Transaction 1</Reference> <CreatedDate>0001-01-01T00:00:00</CreatedDate> <EffectiveDate>2010-03-15T00:00:00</EffectiveDate> <IsCredit>true</IsCredit> <Balance>-324000</Balance> </Transaction> <Transaction> <ErrorNumber>-1</ErrorNumber> <Amount>11000</Amount> <Reference>Transaction 2</Reference> <CreatedDate>0001-01-01T00:00:00</CreatedDate> <EffectiveDate>2010-03-14T00:00:00</EffectiveDate> <IsCredit>true</IsCredit> <Balance>-324000</Balance> </Transaction> </Items> </Transactions> Here's my XSLT: <xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" version="1.0"> <xsl:output method="xml" /> <xsl:param name="currentdate"></xsl:param> <xsl:template match="Transactions"> <xsl:element name="OFX"> <xsl:element name="SIGNONMSGSRSV1"> <xsl:element name="SONRS"> <xsl:element name="STATUS"> <xsl:element name="CODE">0</xsl:element> <xsl:element name="SEVERITY">INFO</xsl:element> </xsl:element> <xsl:element name="DTSERVER"><xsl:value-of select="$currentdate" /></xsl:element> <xsl:element name="LANGUAGE">ENG</xsl:element> </xsl:element> </xsl:element> <xsl:element name="BANKMSGSRSV1"> <xsl:element name="STMTTRNRS"> <xsl:element name="TRNUID">1</xsl:element> <xsl:element name="STATUS"> <xsl:element name="CODE">0</xsl:element> <xsl:element name="SEVERITY">INFO</xsl:element> </xsl:element> <xsl:element name="STMTRS"> <xsl:element name="CURDEF">AUD</xsl:element> <xsl:element name="BANKACCTFROM"> <xsl:element name="BANKID">RAMS</xsl:element> <xsl:element name="ACCTID"><xsl:value-of select="Account/PersonalAccount/AccountNumber" /></xsl:element> <xsl:element name="ACCTTYPE"><xsl:value-of select="Account/PersonalAccount/AccountType" /></xsl:element> </xsl:element> <xsl:element name="BANKTRANLIST"> <xsl:element name="DTSTART"><xsl:value-of select="StartDate" /></xsl:element> <xsl:element name="DTEND"><xsl:value-of select="EndDate" /></xsl:element> <xsl:for-each select="Items/Transaction"> <xsl:element name="STMTTRN"> <xsl:element name="TRNTYPE"><xsl:choose><xsl:when test="IsCredit">CREDIT</xsl:when><xsl:otherwise>DEBIT</xsl:otherwise></xsl:choose></xsl:element> <xsl:element name="DTPOSTED"><xsl:value-of select="EffectiveDate" /></xsl:element> <xsl:element name="DTUSER"><xsl:value-of select="CreatedDate" /></xsl:element> <xsl:element name="TRNAMT"><xsl:value-of select="Amount" /></xsl:element> <xsl:element name="FITID" /> <xsl:element name="NAME"><xsl:value-of select="Reference" /></xsl:element> <xsl:element name="MEMO"><xsl:value-of select="Reference" /></xsl:element> 
</xsl:element> </xsl:for-each> </xsl:element> <xsl:element name="LEDGERBAL"> <xsl:element name="BALAMT"><xsl:value-of select="Account/PersonalAccount/CurrentBalance" /></xsl:element> <xsl:element name="DTASOF"><xsl:value-of select="EndDate" /></xsl:element> </xsl:element> </xsl:element> </xsl:element> </xsl:element> </xsl:element> </xsl:template> </xsl:stylesheet> Here's my method to transform my XML: public string TransformToXml(XmlElement xmlElement, Dictionary<string, object> parameters) { string strReturn = ""; // Load the XSLT Document XslCompiledTransform xslt = new XslCompiledTransform(); xslt.Load(xsltFileName); // arguments XsltArgumentList args = new XsltArgumentList(); if (parameters != null && parameters.Count > 0) { foreach (string key in parameters.Keys) { args.AddParam(key, "", parameters[key]); } } //Create a memory stream to write to Stream objStream = new MemoryStream(); // Apply the transform xslt.Transform(xmlElement, args, objStream); objStream.Seek(0, SeekOrigin.Begin); // Read the contents of the stream StreamReader objSR = new StreamReader(objStream); strReturn = objSR.ReadToEnd(); return strReturn; } The contents of strReturn are an XML declaration (<?xml version="1.0" encoding="utf-8"?>) followed by a raw dump of the contents of the original XML document, stripped of XML tags. What am I doing wrong here?
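A diagnostic hint (my reading, not a confirmed answer): output that looks like the input with its tags stripped is exactly what XSLT's built-in template rules produce when none of your own templates match, so the match="Transactions" template is probably never firing for the node actually being passed in. A quick way to rule the stylesheet in or out is to transform the whole document rather than an XmlElement; the sketch below is illustrative only, and the file names are placeholders:

    using System;
    using System.IO;
    using System.Xml.XPath;
    using System.Xml.Xsl;

    class TransformCheck
    {
        static void Main()
        {
            var xslt = new XslCompiledTransform();
            xslt.Load("transform.xslt");           // the stylesheet shown above

            // Hand Transform() the document root, not a detached element.
            var input = new XPathDocument("transactions.xml");
            using (var writer = new StringWriter())
            {
                xslt.Transform(input, null, writer);
                Console.WriteLine(writer.ToString());
            }
        }
    }

If this version emits the expected OFX markup, the problem lies in which XmlElement is being passed (or in a default namespace on the real input document), not in the stylesheet itself.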

    Read the article

  • Solving embarrassingly parallel problems using Python multiprocessing

    - by gotgenes
    How does one use multiprocessing to tackle embarrassingly parallel problems? Embarrassingly parallel problems typically consist of three basic parts: Read input data (from a file, database, tcp connection, etc.). Run calculations on the input data, where each calculation is independent of any other calculation. Write results of calculations (to a file, database, tcp connection, etc.). We can parallelize the program in two dimensions: Part 2 can run on multiple cores, since each calculation is independent; order of processing doesn't matter. Each part can run independently. Part 1 can place data on an input queue, part 2 can pull data off the input queue and put results onto an output queue, and part 3 can pull results off the output queue and write them out. This seems like a most basic pattern in concurrent programming, but I am still lost in trying to solve it, so let's write a canonical example to illustrate how this is done using multiprocessing. Here is the example problem: Given a CSV file with rows of integers as input, compute their sums. Separate the problem into three parts, which can all run in parallel: Process the input file into raw data (lists/iterables of integers) Calculate the sums of the data, in parallel Output the sums Below is a traditional, single-process Python program which solves these three tasks: #!/usr/bin/env python # -*- coding: UTF-8 -*- # basicsums.py """A program that reads integer values from a CSV file and writes out their sums to another CSV file. """ import csv import optparse import sys def make_cli_parser(): """Make the command line interface parser.""" usage = "\n\n".join(["python %prog INPUT_CSV OUTPUT_CSV", __doc__, """ ARGUMENTS: INPUT_CSV: an input CSV file with rows of numbers OUTPUT_CSV: an output file that will contain the sums\ """]) cli_parser = optparse.OptionParser(usage) return cli_parser def parse_input_csv(csvfile): """Parses the input CSV and yields tuples with the index of the row as the first element, and the integers of the row as the second element. The index is zero-index based. :Parameters: - `csvfile`: a `csv.reader` instance """ for i, row in enumerate(csvfile): row = [int(entry) for entry in row] yield i, row def sum_rows(rows): """Yields a tuple with the index of each input list of integers as the first element, and the sum of the list of integers as the second element. The index is zero-index based. :Parameters: - `rows`: an iterable of tuples, with the index of the original row as the first element, and a list of integers as the second element """ for i, row in rows: yield i, sum(row) def write_results(csvfile, results): """Writes a series of results to an outfile, where the first column is the index of the original row of data, and the second column is the result of the calculation. The index is zero-index based. 
:Parameters: - `csvfile`: a `csv.writer` instance to which to write results - `results`: an iterable of tuples, with the index (zero-based) of the original row as the first element, and the calculated result from that row as the second element """ for result_row in results: csvfile.writerow(result_row) def main(argv): cli_parser = make_cli_parser() opts, args = cli_parser.parse_args(argv) if len(args) != 2: cli_parser.error("Please provide an input file and output file.") infile = open(args[0]) in_csvfile = csv.reader(infile) outfile = open(args[1], 'w') out_csvfile = csv.writer(outfile) # gets an iterable of rows that's not yet evaluated input_rows = parse_input_csv(in_csvfile) # sends the rows iterable to sum_rows() for results iterable, but # still not evaluated result_rows = sum_rows(input_rows) # finally evaluation takes place as a chain in write_results() write_results(out_csvfile, result_rows) infile.close() outfile.close() if __name__ == '__main__': main(sys.argv[1:]) Let's take this program and rewrite it to use multiprocessing to parallelize the three parts outlined above. Below is a skeleton of this new, parallelized program that needs to be fleshed out to address the parts in the comments: #!/usr/bin/env python # -*- coding: UTF-8 -*- # multiproc_sums.py """A program that reads integer values from a CSV file and writes out their sums to another CSV file, using multiple processes if desired. """ import csv import multiprocessing import optparse import sys NUM_PROCS = multiprocessing.cpu_count() def make_cli_parser(): """Make the command line interface parser.""" usage = "\n\n".join(["python %prog INPUT_CSV OUTPUT_CSV", __doc__, """ ARGUMENTS: INPUT_CSV: an input CSV file with rows of numbers OUTPUT_CSV: an output file that will contain the sums\ """]) cli_parser = optparse.OptionParser(usage) cli_parser.add_option('-n', '--numprocs', type='int', default=NUM_PROCS, help="Number of processes to launch [DEFAULT: %default]") return cli_parser def main(argv): cli_parser = make_cli_parser() opts, args = cli_parser.parse_args(argv) if len(args) != 2: cli_parser.error("Please provide an input file and output file.") infile = open(args[0]) in_csvfile = csv.reader(infile) outfile = open(args[1], 'w') out_csvfile = csv.writer(outfile) # Parse the input file and add the parsed data to a queue for # processing, possibly chunking to decrease communication between # processes. # Process the parsed data as soon as any (chunks) appear on the # queue, using as many processes as allotted by the user # (opts.numprocs); place results on a queue for output. # # Terminate processes when the parser stops putting data in the # input queue. # Write the results to disk as soon as they appear on the output # queue. # Ensure all child processes have terminated. # Clean up files. infile.close() outfile.close() if __name__ == '__main__': main(sys.argv[1:]) These pieces of code, as well as another piece of code that can generate example CSV files for testing purposes, can be found on GitHub. I would appreciate any insight here as to how you concurrency gurus would approach this problem. Here are some questions I had when thinking about this problem. Bonus points for addressing any/all: Should I have child processes for reading in the data and placing it into the queue, or can the main process do this without blocking until all input is read? Likewise, should I have a child process for writing the results out from the processed queue, or can the main process do this without having to wait for all the results? 
Should I use a process pool for the sum operations? If yes, what method do I call on the pool to get it to start processing the results coming into the input queue, without blocking the input and output processes, too? apply_async()? map_async()? imap()? imap_unordered()? Suppose we didn't need to siphon off the input and output queues as data entered them, but could wait until all input was parsed and all results were calculated (e.g., because we know all the input and output will fit in system memory). Should we change the algorithm in any way (e.g., not run any processes concurrently with I/O)?

    Read the article

  • socket operation on nonsocket or bad file descriptor

    - by Magn3s1um
    I'm writing a pthread server which takes requests from clients and sends them back a bunch of .ppm files. Everything seems to go well, but sometimes when I have just one client connected, when trying to read from the file descriptor (for the file), it says Bad file descriptor. This doesn't make sense, since my int fd isn't -1, and the file most certainly exists. Other times, I get this "Socket operation on nonsocket" error. This is weird because other times, it doesn't give me this error and everything works fine. When trying to connect multiple clients, for some reason, it will only send correctly to one, and then the other client gets the bad file descriptor or "nonsocket" error, even though both threads are processing the same messages and do the same routines. Anyone have an idea why? Here's the code that is giving me that error: while(mqueue.head != mqueue.tail && count < dis_m){ printf("Sending to client %s: %s\n", pointer->id, pointer->message); int fd; fd = open(pointer->message, O_RDONLY); char buf[58368]; int bytesRead; printf("This is fd %d\n", fd); bytesRead=read(fd,buf,58368); send(pointer->socket,buf,bytesRead,0); perror("Error:\n"); fflush(stdout); close(fd); mqueue.mcount--; mqueue.head = mqueue.head->next; free(pointer->message); free(pointer); pointer = mqueue.head; count++; } printf("Sending %s\n", pointer->message); int fd; fd = open(pointer->message, O_RDONLY); printf("This is fd %d\n", fd); printf("I am hhere2\n"); char buf[58368]; int bytesRead; bytesRead=read(fd,buf,58368); send(pointer->socket,buf,bytesRead,0); perror("Error:\n"); close(fd); mqueue.mcount--; if(mqueue.head != mqueue.tail){ mqueue.head = mqueue.head->next; } else{ mqueue.head->next = malloc(sizeof(struct message)); mqueue.head = mqueue.head->next; mqueue.head->next = malloc(sizeof(struct message)); mqueue.tail = mqueue.head->next; mqueue.head->message = NULL; } free(pointer->message); free(pointer); pthread_mutex_unlock(&numm); pthread_mutex_unlock(&circ); pthread_mutex_unlock(&slots); The messages for both threads are the same, being of the form ./path/imageXX.ppm where XX is the number that should go to the client. The file size of each image is 58368 bytes. Sometimes, this code hangs on the read, and stops execution. I don't know why this would be, either, because the file descriptor comes back as valid. Thanks in advance. Edit: Here's some sample output: Sending to client a: ./support/images/sw90.ppm This is fd 4 Error: : Socket operation on non-socket Sending to client a: ./support/images/sw91.ppm This is fd 4 Error: : Socket operation on non-socket Sending ./support/images/sw92.ppm This is fd 4 I am hhere2 Error: : Socket operation on non-socket My dispatcher has defeated evil Sample with 2 clients (client b was serviced first) Sending to client b: ./support/images/sw87.ppm This is fd 6 Error: : Success Sending to client b: ./support/images/sw88.ppm This is fd 6 Error: : Success Sending to client b: ./support/images/sw89.ppm This is fd 6 Error: : Success This is fd 6 Error: : Bad file descriptor Sending to client a: ./support/images/sw85.ppm This is fd 6 Error: As you can see, whoever is serviced first in this instance can open the files, but the second client cannot. Edit2: Full code. Sorry, it's pretty long and terribly formatted. 
#include <netinet/in.h> #include <netinet/in.h> #include <netdb.h> #include <arpa/inet.h> #include <sys/types.h> #include <sys/socket.h> #include <errno.h> #include <stdio.h> #include <unistd.h> #include <pthread.h> #include <stdlib.h> #include <string.h> #include <sys/types.h> #include <sys/stat.h> #include <fcntl.h> #include "ring.h" /* Version 1 Here is what is implemented so far: The threads are created from the arguments specified (number of threads that is) The server will lock and update variables based on how many clients are in the system and such. The socket that is opened when a new client connects, must be passed to the threads. To do this, we need some sort of global array. I did this by specifying an int client and main_pool_busy, and two pointers poolsockets and nonpoolsockets. My thinking on this was that when a new client enters the system, the server thread increments the variable client. When a thread is finished with this client (after it sends it the data), the thread will decrement client and close the socket. HTTP servers act this way sometimes (they terminate the socket as soon as one transmission is sent). *Note down at bottom After the server portion increments the client counter, we must open up a new socket (denoted by new_sd) and get this value to the appropriate thread. To do this, I created global array poolsockets, which will hold all the socket descriptors for our pooled threads. The server portion gets the new socket descriptor, and places the value in the first spot of the array that has a 0. We only place a value in this array IF: 1. The variable main_pool_busy < worknum (If we have more clients in the system than in our pool, it doesn't mean we should always create a new thread. At the end of this, the server signals on the condition variable clientin that a new client has arrived. In our pooled thread, we then must walk this array and check the array until we hit our first non-zero value. This is the socket we will give to that thread. The thread then changes the array to have a zero here. What if all the threads in our pool are busy? If this is the case, then we will know it because our threads in this pool will increment main_pool_busy by one when they are working on a request and decrement it when they are done. If main_pool_busy >= worknum, then we must dynamically create a new thread. Then, we must realloc the size of our nonpoolsockets array by 1 int. We then add the new socket descriptor to our pool. Here's what we need to figure out: NOTE* Each worker should generate 100 messages which specify the worker thread ID, client socket descriptor and a copy of the client message. Additionally, each message should include a message number, starting from 0 and incrementing for each subsequent message sent to the same client. I don't know how to keep track of how many messages were sent to the same client. Maybe we shouldn't close the socket descriptor, but rather keep an array of structs for each socket that includes how many messages they have been sent. Then, the server adds the struct, the threads remove it, then the threads add it back once they've serviced one request (unless the count is 100). ------------------------------------------------------------- CHANGES Version 1 ---------- NONE: this is the first version. 
*/ #define MAXSLOTS 30 #define dis_m 15 //problems with dis_m ==1 //Function prototypes void inc_clients(); void init_mutex_stuff(pthread_t*, pthread_t*); void *threadpool(void *); void server(int); void add_to_socket_pool(int); void inc_busy(); void dec_busy(); void *dispatcher(); void create_message(long, int, int, char *, char *); void init_ring(); void add_to_ring(char *, char *, int, int, int); int socket_from_string(char *); void add_to_head(char *); void add_to_tail(char *); struct message * reorder(struct message *, struct message *, int); int get_threadid(char *); void delete_socket_messages(int); struct message * merge(struct message *, struct message *, int); int get_request(char *, char *, char*); ///////////////////// //Global mutexes and condition variables pthread_mutex_t startservice; pthread_mutex_t numclients; pthread_mutex_t pool_sockets; pthread_mutex_t nonpool_sockets; pthread_mutex_t m_pool_busy; pthread_mutex_t slots; pthread_mutex_t numm; pthread_mutex_t circ; pthread_cond_t clientin; pthread_cond_t m; /////////////////////////////////////// //Global variables int clients; int main_pool_busy; int * poolsockets, nonpoolsockets; int worknum; struct ring mqueue; /////////////////////////////////////// int main(int argc, char ** argv){ //error handling if not enough arguments to program if(argc != 3){ printf("Not enough arguments to server: ./server portnum NumThreadsinPool\n"); _exit(-1); } //Convert arguments from strings to integer values int port = atoi(argv[1]); worknum = atoi(argv[2]); //Start server portion server(port); } /////////////////////////////////////////////////////////////////////////////////////////////// //The listen server thread///////////////////////////////////////////////////////////////////// /////////////////////////////////////////////////////////////////////////////////////////////// void server(int port){ int sd, new_sd; struct sockaddr_in name, cli_name; int sock_opt_val = 1; int cli_len; pthread_t threads[worknum]; //create our pthread id array pthread_t dis[1]; //create our dispatcher array (necessary to create thread) init_mutex_stuff(threads, dis); //initialize mutexes and stuff //Server setup /////////////////////////////////////////////////////// if ((sd = socket (AF_INET, SOCK_STREAM, 0)) < 0) { perror("(servConn): socket() error"); _exit (-1); } if (setsockopt (sd, SOL_SOCKET, SO_REUSEADDR, (char *) &sock_opt_val, sizeof(sock_opt_val)) < 0) { perror ("(servConn): Failed to set SO_REUSEADDR on INET socket"); _exit (-1); } name.sin_family = AF_INET; name.sin_port = htons (port); name.sin_addr.s_addr = htonl(INADDR_ANY); if (bind (sd, (struct sockaddr *)&name, sizeof(name)) < 0) { perror ("(servConn): bind() error"); _exit (-1); } listen (sd, 5); //End of server Setup ////////////////////////////////////////////////// for (;;) { cli_len = sizeof (cli_name); new_sd = accept (sd, (struct sockaddr *) &cli_name, &cli_len); printf ("Assigning new socket descriptor: %d\n", new_sd); inc_clients(); //New client has come in, increment clients add_to_socket_pool(new_sd); //Add client to the pool of sockets if (new_sd < 0) { perror ("(servConn): accept() error"); _exit (-1); } } pthread_exit(NULL); //Quit } //Adds the new socket to the array designated for pthreads in the pool void add_to_socket_pool(int socket){ pthread_mutex_lock(&m_pool_busy); //Lock so that we can check main_pool_busy int i; //If not all our main pool is busy, then allocate to one of them if(main_pool_busy < worknum){ pthread_mutex_unlock(&m_pool_busy); //unlock busy, we no 
longer need to hold it pthread_mutex_lock(&pool_sockets); //Lock the socket pool array so that we can edit it without worry for(i = 0; i < worknum; i++){ //Find a poolsocket that is -1; then we should put the real socket there. This value will be changed back to -1 when the thread grabs the sockfd if(poolsockets[i] == -1){ poolsockets[i] = socket; pthread_mutex_unlock(&pool_sockets); //unlock our pool array, we don't need it anymore inc_busy(); //Incrememnt busy (locks the mutex itself) pthread_cond_signal(&clientin); //Signal first thread waiting on a client that a client needs to be serviced break; } } } else{ //Dynamic thread creation goes here pthread_mutex_unlock(&m_pool_busy); } } //Increments the client number. If client number goes over worknum, we must dynamically create new pthreads void inc_clients(){ pthread_mutex_lock(&numclients); clients++; pthread_mutex_unlock(&numclients); } //Increments busy void inc_busy(){ pthread_mutex_lock(&m_pool_busy); main_pool_busy++; pthread_mutex_unlock(&m_pool_busy); } //Initialize all of our mutexes at the beginning and create our pthreads void init_mutex_stuff(pthread_t * threads, pthread_t * dis){ pthread_mutex_init(&startservice, NULL); pthread_mutex_init(&numclients, NULL); pthread_mutex_init(&pool_sockets, NULL); pthread_mutex_init(&nonpool_sockets, NULL); pthread_mutex_init(&m_pool_busy, NULL); pthread_mutex_init(&circ, NULL); pthread_cond_init (&clientin, NULL); main_pool_busy = 0; poolsockets = malloc(sizeof(int)*worknum); int threadreturn; //error checking variables long i = 0; //Loop and create pthreads for(i; i < worknum; i++){ threadreturn = pthread_create(&threads[i], NULL, threadpool, (void *) i); poolsockets[i] = -1; if(threadreturn){ perror("Thread pool created unsuccessfully"); _exit(-1); } } pthread_create(&dis[0], NULL, dispatcher, NULL); } ////////////////////////////////////////////////////////////////////////////////////////// /////////Main pool routines ///////////////////////////////////////////////////////////////////////////////////////// void dec_busy(){ pthread_mutex_lock(&m_pool_busy); main_pool_busy--; pthread_mutex_unlock(&m_pool_busy); } void dec_clients(){ pthread_mutex_lock(&numclients); clients--; pthread_mutex_unlock(&numclients); } //This is what our threadpool pthreads will be running. void *threadpool(void * threadid){ long id = (long) threadid; //Id of this thread int i; int socket; int counter = 0; //Try and gain access to the next client that comes in and wait until server signals that a client as arrived while(1){ pthread_mutex_lock(&startservice); //lock start service (required for cond wait) pthread_cond_wait(&clientin, &startservice); //wait for signal from server that client exists pthread_mutex_unlock(&startservice); //unlock mutex. 
pthread_mutex_lock(&pool_sockets); //Lock the pool socket so we can get the socket fd unhindered/interrupted for(i = 0; i < worknum; i++){ if(poolsockets[i] != -1){ socket = poolsockets[i]; poolsockets[i] = -1; pthread_mutex_unlock(&pool_sockets); } } printf("Thread #%d is past getting the socket\n", id); int incoming = 1; while(counter < 100 && incoming != 0){ char buffer[512]; bzero(buffer,512); int startcounter = 0; incoming = read(socket, buffer, 512); if(buffer[0] != 0){ //client ID:priority:request:arguments char id[100]; long prior; char request[100]; char arg1[100]; char message[100]; char arg2[100]; char * point; point = strtok(buffer, ":"); strcpy(id, point); point = strtok(NULL, ":"); prior = atoi(point); point = strtok(NULL, ":"); strcpy(request, point); point = strtok(NULL, ":"); strcpy(arg1, point); point = strtok(NULL, ":"); if(point != NULL){ strcpy(arg2, point); } int fd; if(strcmp(request, "start_movie") == 0){ int count = 1; while(count <= 100){ char temp[10]; snprintf(temp, 50, "%d\0", count); strcpy(message, "./support/images/"); strcat(message, arg1); strcat(message, temp); strcat(message, ".ppm"); printf("This is message %s to %s\n", message, id); count++; add_to_ring(message, id, prior, counter, socket); //Adds our created message to the ring counter++; } printf("I'm out of the loop\n"); } else if(strcmp(request, "seek_movie") == 0){ int count = atoi(arg2); while(count <= 100){ char temp[10]; snprintf(temp, 10, "%d\0", count); strcpy(message, "./support/images/"); strcat(message, arg1); strcat(message, temp); strcat(message, ".ppm"); printf("This is message %s\n", message); count++; } } //create_message(id, socket, counter, buffer, message); //Creates our message from the input from the client. Stores it in buffer } else{ delete_socket_messages(socket); break; } } counter = 0; close(socket);//Zero out counter again } dec_clients(); //client serviced, decrement clients dec_busy(); //thread finished, decrement busy } //Creates a message void create_message(long threadid, int socket, int counter, char * buffer, char * message){ snprintf(message, strlen(buffer)+15, "%d:%d:%d:%s", threadid, socket, counter, buffer); } //Gets the socket from the message string (maybe I should just pass in the socket to another method) int socket_from_string(char * message){ char * substr1 = strstr(message, ":"); char * substr2 = substr1; substr2++; int occurance = strcspn(substr2, ":"); char sock[10]; strncpy(sock, substr2, occurance); return atoi(sock); } //Adds message to our ring buffer's head void add_to_head(char * message){ printf("Adding to head of ring\n"); mqueue.head->message = malloc(strlen(message)+1); //Allocate space for message strcpy(mqueue.head->message, message); //copy bytes into allocated space } //Adds our message to our ring buffer's tail void add_to_tail(char * message){ printf("Adding to tail of ring\n"); mqueue.tail->message = malloc(strlen(message)+1); //allocate space for message strcpy(mqueue.tail->message, message); //copy bytes into allocated space mqueue.tail->next = malloc(sizeof(struct message)); //allocate space for the next message struct } //Adds a message to our ring void add_to_ring(char * message, char * id, int prior, int mnum, int socket){ //printf("This is message %s:" , message); pthread_mutex_lock(&circ); //Lock the ring buffer pthread_mutex_lock(&numm); //Lock the message count (will need this to make sure we can't fill the buffer over the max slots) if(mqueue.head->message == NULL){ add_to_head(message); //Adds it to head mqueue.head->socket = 
socket; //Set message socket mqueue.head->priority = prior; //Set its priority (thread id) mqueue.head->mnum = mnum; //Set its message number (used for sorting) mqueue.head->id = malloc(sizeof(id)); strcpy(mqueue.head->id, id); } else if(mqueue.tail->message == NULL){ //This is the problem for dis_m 1 I'm pretty sure add_to_tail(message); mqueue.tail->socket = socket; mqueue.tail->priority = prior; mqueue.tail->mnum = mnum; mqueue.tail->id = malloc(sizeof(id)); strcpy(mqueue.tail->id, id); } else{ mqueue.tail->next = malloc(sizeof(struct message)); mqueue.tail = mqueue.tail->next; add_to_tail(message); mqueue.tail->socket = socket; mqueue.tail->priority = prior; mqueue.tail->mnum = mnum; mqueue.tail->id = malloc(sizeof(id)); strcpy(mqueue.tail->id, id); } mqueue.mcount++; pthread_mutex_unlock(&circ); if(mqueue.mcount >= dis_m){ pthread_mutex_unlock(&numm); pthread_cond_signal(&m); } else{ pthread_mutex_unlock(&numm); } printf("out of add to ring\n"); fflush(stdout); } ////////////////////////////////// //Dispatcher routines ///////////////////////////////// void *dispatcher(){ init_ring(); while(1){ pthread_mutex_lock(&slots); pthread_cond_wait(&m, &slots); pthread_mutex_lock(&numm); pthread_mutex_lock(&circ); printf("Dispatcher to the rescue!\n"); mqueue.head = reorder(mqueue.head, mqueue.tail, mqueue.mcount); //printf("This is the head %s\n", mqueue.head->message); //printf("This is the tail %s\n", mqueue.head->message); fflush(stdout); struct message * pointer = mqueue.head; int count = 0; while(mqueue.head != mqueue.tail && count < dis_m){ printf("Sending to client %s: %s\n", pointer->id, pointer->message); int fd; fd = open(pointer->message, O_RDONLY); char buf[58368]; int bytesRead; printf("This is fd %d\n", fd); bytesRead=read(fd,buf,58368); send(pointer->socket,buf,bytesRead,0); perror("Error:\n"); fflush(stdout); close(fd); mqueue.mcount--; mqueue.head = mqueue.head->next; free(pointer->message); free(pointer); pointer = mqueue.head; count++; } printf("Sending %s\n", pointer->message); int fd; fd = open(pointer->message, O_RDONLY); printf("This is fd %d\n", fd); printf("I am hhere2\n"); char buf[58368]; int bytesRead; bytesRead=read(fd,buf,58368); send(pointer->socket,buf,bytesRead,0); perror("Error:\n"); close(fd); mqueue.mcount--; if(mqueue.head != mqueue.tail){ mqueue.head = mqueue.head->next; } else{ mqueue.head->next = malloc(sizeof(struct message)); mqueue.head = mqueue.head->next; mqueue.head->next = malloc(sizeof(struct message)); mqueue.tail = mqueue.head->next; mqueue.head->message = NULL; } free(pointer->message); free(pointer); pthread_mutex_unlock(&numm); pthread_mutex_unlock(&circ); pthread_mutex_unlock(&slots); printf("My dispatcher has defeated evil\n"); } } void init_ring(){ mqueue.head = malloc(sizeof(struct message)); mqueue.head->next = malloc(sizeof(struct message)); mqueue.tail = mqueue.head->next; mqueue.mcount = 0; } struct message * reorder(struct message * begin, struct message * end, int num){ //printf("I am reordering for size %d\n", num); fflush(stdout); int i; if(num == 1){ //printf("Begin: %s\n", begin->message); begin->next = NULL; return begin; } else{ struct message * left = begin; struct message * right; int middle = num/2; for(i = 1; i < middle; i++){ left = left->next; } right = left -> next; left -> next = NULL; //printf("Begin: %s\nLeft: %s\nright: %s\nend:%s\n", begin->message, left->message, right->message, end->message); left = reorder(begin, left, middle); if(num%2 != 0){ right = reorder(right, end, middle+1); } else{ right = 
reorder(right, end, middle); } return merge(left, right, num); } } struct message * merge(struct message * left, struct message * right, int num){ //printf("I am merginging! left: %s %d, right: %s %dnum: %d\n", left->message,left->priority, right->message, right->priority, num); struct message * start, * point; int lenL= 0; int lenR = 0; int flagL = 0; int flagR = 0; int count = 0; int middle1 = num/2; int middle2; if(num%2 != 0){ middle2 = middle1+1; } else{ middle2 = middle1; } while(lenL < middle1 && lenR < middle2){ count++; //printf("In here for count %d\n", count); if(lenL == 0 && lenR == 0){ if(left->priority < right->priority){ start = left; //Set the start point point = left; //set our enum; left = left->next; //move the left pointer point->next = NULL; //Set the next node to NULL lenL++; } else if(left->priority > right->priority){ start = right; point = right; right = right->next; point->next = NULL; lenR++; } else{ if(left->mnum < right->mnum){ ////printf("This is where we are\n"); start = left; //Set the start point point = left; //set our enum; left = left->next; //move the left pointer point->next = NULL; //Set the next node to NULL lenL++; } else{ start = right; point = right; right = right->next; point->next = NULL; lenR++; } } } else{ if(left->priority < right->priority){ point->next = left; left = left->next; //move the left pointer point = point->next; point->next = NULL; //Set the next node to NULL lenL++; } else if(left->priority > right->priority){ point->next = right; right = right->next; point = point->next; point->next = NULL; lenR++; } else{ if(left->mnum < right->mnum){ point->next = left; //set our enum; left = left->next; point = point->next;//move the left pointer point->next = NULL; //Set the next node to NULL lenL++; } else{ point->next = right; right = right->next; point = point->next; point->next = NULL; lenR++; } } } if(lenL == middle1){ flagL = 1; break; } if(lenR == middle2){ flagR = 1; break; } } if(flagL == 1){ point->next = right; point = point->next; for(lenR; lenR< middle2-1; lenR++){ point = point->next; } point->next = NULL; mqueue.tail = point; } else{ point->next = left; point = point->next; for(lenL; lenL< middle1-1; lenL++){ point = point->next; } point->next = NULL; mqueue.tail = point; } //printf("This is the start %s\n", start->message); //printf("This is mqueue.tail %s\n", mqueue.tail->message); return start; } void delete_socket_messages(int a){ }
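
    A useful first debugging step for errors like these is to stop calling perror() unconditionally: errno is only meaningful immediately after a call has actually failed, so "Error: : Success" lines are noise. Below is a minimal sketch, not from the original post (the helper name send_ppm_file is hypothetical), of checking each return value separately so a "Bad file descriptor" or "non-socket" error can be traced to the exact call that produced it:

        #include <errno.h>
        #include <fcntl.h>
        #include <stdio.h>
        #include <string.h>
        #include <sys/socket.h>
        #include <unistd.h>

        /* Hypothetical helper: returns 0 on success, -1 on error, reporting
           which call failed instead of calling perror() after every statement. */
        static int send_ppm_file(int sockfd, const char *path) {
            int fd = open(path, O_RDONLY);
            if (fd < 0) {
                fprintf(stderr, "open(%s): %s\n", path, strerror(errno));
                return -1;
            }
            char buf[58368];
            ssize_t n = read(fd, buf, sizeof(buf));
            if (n < 0) {
                fprintf(stderr, "read(%s): %s\n", path, strerror(errno));
                close(fd);
                return -1;
            }
            /* send() may transmit fewer bytes than requested; loop until done. */
            ssize_t off = 0;
            while (off < n) {
                ssize_t sent = send(sockfd, buf + off, n - off, 0);
                if (sent < 0) {
                    fprintf(stderr, "send(fd %d): %s\n", sockfd, strerror(errno));
                    close(fd);
                    return -1;
                }
                off += sent;
            }
            close(fd);
            return 0;
        }

    With checks like these in place, a "Socket operation on non-socket" reported by send() rather than by read() would point at pointer->socket holding a stale or already-freed value, rather than at the .ppm file itself.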


  • PE Header Requirements

    - by Pindatjuh
    What are the requirements of a PE file (PE/COFF)? What fields should be set, which value, at a bare minimum for enabling it to "run" on Windows (i.e. executing "ret" instruction and then close, without error). The library I am building first is the linker: Now, the problem I have is the PE file (PE/COFF). I don't know what is "required" for a PE file before it can actually execute on my platform. My testing platform is Vista. I get an error message, saying "This is not a valid Win32 executable." when I execute it by double-clicking, and I get an "Access Denied." when executing it with CLI cmd. I have two sections, .text and .data. I've implemented the PE headers as provided by several online documents, i.e. MSDN and some other thirdparty documentation. If I use a hex-editor, it looks almost like a regular PE file. I don't use any imports, nor IAT, nor any directories in the PE header. Edit: I've added an import table, still not a valid .exe-file, says my Windows. I've tried to use values which are also mentioned at the smallest PE-file guide. No luck. Really the only thing I can't seem to figure out is what is required and what isn't. Some guides tell me everything is required, whilst others say about deprications: and it can be zero. I hope this is enough information. Thank you, in advance. Raw data (as requested) of current PE header: 4D 5A 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 40 00 00 00 50 45 00 00 4C 01 02 00 C8 7A 55 4B 00 00 00 00 00 00 00 00 E0 00 82 01 0B 01 0D 25 00 10 00 00 00 10 00 00 00 00 00 00 00 10 00 00 00 10 00 00 00 20 00 00 00 00 40 00 00 10 00 00 00 02 00 00 01 00 0B 00 00 00 00 00 03 00 0A 00 00 00 00 00 00 22 00 00 38 01 00 00 00 00 00 00 03 00 00 00 00 40 00 00 00 40 00 00 00 40 00 00 00 40 00 00 00 00 00 00 0E 00 00 00 00 00 00 00 00 00 00 00 00 20 00 00 24 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 2E 74 65 78 74 00 00 00 00 00 00 00 00 10 00 00 00 02 00 00 00 02 00 00 00 00 00 00 00 00 00 00 00 00 00 00 20 00 00 60 2E 69 64 61 74 61 00 00 00 00 00 00 00 20 00 00 00 02 00 00 00 04 00 00 00 00 00 00 00 00 00 00 00 00 00 00 40 00 00 C0 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 3C 20 00 00 00 00 00 00 00 00 00 00 24 20 00 00 34 20 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 4B 45 52 4E 45 4C 33 32 2E 64 6C 6C 00 00 00 00 01 00 00 80 00 00 00 00 01 00 00 80 00 00 00 00
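
    One way to narrow down which field Windows is rejecting is to verify the few offsets the loader reads first. Below is a minimal sketch, not from the original question (written as portable C with fixed offsets rather than the winnt.h structs; it assumes a little-endian host), that checks the MZ signature, the e_lfanew field at offset 0x3C, and the PE\0\0 signature it points to:

        #include <stdio.h>
        #include <stdint.h>

        /* Sketch: validate the two signatures the Windows loader checks first.
           Usage: ./pecheck my.exe */
        int main(int argc, char **argv) {
            if (argc != 2) { fprintf(stderr, "usage: %s file.exe\n", argv[0]); return 1; }
            FILE *f = fopen(argv[1], "rb");
            if (!f) { perror("fopen"); return 1; }

            unsigned char mz[2] = {0};
            uint32_t e_lfanew = 0;   /* read as little-endian, like the file */
            unsigned char pe[4] = {0};

            fread(mz, 1, 2, f);        /* offset 0x00: must be 'M','Z' */
            fseek(f, 0x3C, SEEK_SET);
            fread(&e_lfanew, 4, 1, f); /* offset 0x3C: file offset of the PE header */
            fseek(f, (long) e_lfanew, SEEK_SET);
            fread(pe, 1, 4, f);        /* must be 'P','E',0,0 */

            printf("MZ: %s, e_lfanew: 0x%X, PE\\0\\0: %s\n",
                   (mz[0] == 'M' && mz[1] == 'Z') ? "ok" : "BAD",
                   e_lfanew,
                   (pe[0] == 'P' && pe[1] == 'E' && pe[2] == 0 && pe[3] == 0) ? "ok" : "BAD");
            fclose(f);
            return 0;
        }

    In the dump above, both signatures already look right (4D 5A at offset 0, e_lfanew = 0x40 pointing at 50 45 00 00), so the loader's complaint is presumably about a later field in the optional header or the section table.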


  • Uploading multiple files using Spring MVC 3.0.2 after HiddenHttpMethodFilter has been enabled

    - by Tiny
    I'm using Spring version 3.0.2. I need to upload multiple files using the multiple="multiple" attribute of a file browser such as,

        <input type="file" id="myFile" name="myFile" multiple="multiple"/>

    (and not using multiple file inputs like the one stated by this answer, which does work; I tried it). Although no version of Internet Explorer supports this approach unless an appropriate jQuery plugin/widget is used, I don't care about that right now (since most other browsers support it). This works fine with commons fileupload, but in addition to the RequestMethod.POST and RequestMethod.GET methods, I also want to use the other request methods supported and suggested by Spring, like RequestMethod.PUT and RequestMethod.DELETE, in their own appropriate places. For this to be so, I have configured Spring with HiddenHttpMethodFilter, which goes fine as this question indicates, but it can upload only one file at a time even when multiple files are chosen in the file browser. In the Spring controller class, a method is mapped as follows:

        @RequestMapping(method={RequestMethod.POST}, value={"admin_side/Temp"})
        public String onSubmit(@RequestParam("myFile") List<MultipartFile> files,
                               @ModelAttribute("tempBean") TempBean tempBean,
                               BindingResult error, Map model,
                               HttpServletRequest request, HttpServletResponse response)
                throws IOException, FileUploadException {
            for(MultipartFile file : files) {
                System.out.println(file.getOriginalFilename());
            }
        }

    Even with the request parameter @RequestParam("myFile") List<MultipartFile> files, which is a List of type MultipartFile, it always holds only one file at a time. I could find a strategy which is likely to work with multiple files on this blog. I have gone through it carefully. The solution below the section SOLUTION 2 – USE THE RAW REQUEST says,

        If however the client insists on using the same form input name such as 'files[]' or 'files' and then populating that name with multiple files then a small hack is necessary as follows. As noted above Spring 2.5 throws an exception if it detects the same form input name of type file more than once. CommonsFileUploadSupport - the class which throws that exception is not final and the method which throws that exception is protected so using the wonders of inheritance and subclassing one can simply fix/modify the logic a little bit as follows. The change I've made is literally one word representing one method invocation which enables us to have multiple files incoming under the same form input name.
    It attempts to override the method protected MultipartParsingResult parseFileItems(List fileItems, String encoding) of the abstract class CommonsFileUploadSupport by extending the class CommonsMultipartResolver, such as:

        package multipartResolver;

        import java.io.UnsupportedEncodingException;
        import java.util.HashMap;
        import java.util.Iterator;
        import java.util.List;
        import java.util.Map;
        import javax.servlet.ServletContext;
        import org.apache.commons.fileupload.FileItem;
        import org.springframework.util.StringUtils;
        import org.springframework.web.multipart.MultipartException;
        import org.springframework.web.multipart.MultipartFile;
        import org.springframework.web.multipart.commons.CommonsMultipartFile;
        import org.springframework.web.multipart.commons.CommonsMultipartResolver;

        final public class MultiCommonsMultipartResolver extends CommonsMultipartResolver {

            public MultiCommonsMultipartResolver() {
            }

            public MultiCommonsMultipartResolver(ServletContext servletContext) {
                super(servletContext);
            }

            @Override
            @SuppressWarnings("unchecked")
            protected MultipartParsingResult parseFileItems(List fileItems, String encoding) {
                Map<String, MultipartFile> multipartFiles = new HashMap<String, MultipartFile>();
                Map multipartParameters = new HashMap();

                // Extract multipart files and multipart parameters.
                for (Iterator it = fileItems.iterator(); it.hasNext();) {
                    FileItem fileItem = (FileItem) it.next();
                    if (fileItem.isFormField()) {
                        String value = null;
                        if (encoding != null) {
                            try {
                                value = fileItem.getString(encoding);
                            } catch (UnsupportedEncodingException ex) {
                                if (logger.isWarnEnabled()) {
                                    logger.warn("Could not decode multipart item '" + fileItem.getFieldName() +
                                            "' with encoding '" + encoding + "': using platform default");
                                }
                                value = fileItem.getString();
                            }
                        } else {
                            value = fileItem.getString();
                        }
                        String[] curParam = (String[]) multipartParameters.get(fileItem.getFieldName());
                        if (curParam == null) {
                            // simple form field
                            multipartParameters.put(fileItem.getFieldName(), new String[] { value });
                        } else {
                            // array of simple form fields
                            String[] newParam = StringUtils.addStringToArray(curParam, value);
                            multipartParameters.put(fileItem.getFieldName(), newParam);
                        }
                    } else {
                        // multipart file field
                        CommonsMultipartFile file = new CommonsMultipartFile(fileItem);
                        if (multipartFiles.put(fileItem.getName(), file) != null) {
                            throw new MultipartException("Multiple files for field name [" + file.getName() +
                                    "] found - not supported by MultipartResolver");
                        }
                        if (logger.isDebugEnabled()) {
                            logger.debug("Found multipart file [" + file.getName() + "] of size " + file.getSize() +
                                    " bytes with original filename [" + file.getOriginalFilename() + "], stored " +
                                    file.getStorageDescription());
                        }
                    }
                }
                return new MultipartParsingResult(multipartFiles, multipartParameters);
            }
        }

    What happens is that the last line in the method parseFileItems() (the return statement), i.e.
        return new MultipartParsingResult(multipartFiles, multipartParameters);

    causes a compile-time error, because the first parameter multipartFiles is a Map implemented by HashMap, but in reality it requires a parameter of type MultiValueMap<String, MultipartFile>. It is a constructor of a static class inside the abstract class CommonsFileUploadSupport:

        public abstract class CommonsFileUploadSupport {
            protected static class MultipartParsingResult {
                public MultipartParsingResult(MultiValueMap<String, MultipartFile> mpFiles,
                                              Map<String, String[]> mpParams) {
                }
            }
        }

    The reason might be that this solution targets Spring version 2.5 while I'm using Spring version 3.0.2, for which it might be inappropriate. I however tried to replace the Map with MultiValueMap in various ways, such as the one shown in the following segment of code,

        MultiValueMap<String, MultipartFile> mul = new LinkedMultiValueMap<String, MultipartFile>();
        for(Entry<String, MultipartFile> entry : multipartFiles.entrySet()) {
            mul.add(entry.getKey(), entry.getValue());
        }
        return new MultipartParsingResult(mul, multipartParameters);

    but with no success. I'm not sure how to replace Map with MultiValueMap, nor whether doing so would even work. After doing this, the browser shows the HTTP response:

        HTTP Status 400 -
        type Status report
        message
        description The request sent by the client was syntactically incorrect ().
        Apache Tomcat/6.0.26

    I have tried to shorten the question as much as I could and I haven't included unnecessary code. How could it be made possible to upload multiple files after Spring has been configured with HiddenHttpMethodFilter? That blog indicates that it is a long-standing, high-priority bug. If there is no solution for version 3.0.2 (3 or higher), then I have to disable Spring support forever and continue to use commons-fileupload as suggested by the third solution on that blog, omitting the PUT, DELETE and other request methods forever. Just curiously waiting for a solution and/or suggestion. Very small changes to the code in the parseFileItems() method inside the class MultiCommonsMultipartResolver might make it upload multiple files, but I couldn't succeed in my attempts (again with Spring version 3.0.2 (3 or higher)).
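
    For what it's worth, the 2.5-era hack collides with exactly the part of the 3.0 API that changed: MultipartParsingResult now takes a MultiValueMap, whose add() method appends to a per-key list instead of overwriting. Below is a minimal sketch of an override built around that (assuming Spring 3.0.x on the classpath; the class name is hypothetical and the ordinary form-field handling is abbreviated):

        package multipartResolver;

        import java.util.HashMap;
        import java.util.Iterator;
        import java.util.List;
        import java.util.Map;

        import org.apache.commons.fileupload.FileItem;
        import org.springframework.util.LinkedMultiValueMap;
        import org.springframework.util.MultiValueMap;
        import org.springframework.web.multipart.MultipartFile;
        import org.springframework.web.multipart.commons.CommonsMultipartFile;
        import org.springframework.web.multipart.commons.CommonsMultipartResolver;

        public class MultiFileMultipartResolver extends CommonsMultipartResolver {

            @Override
            @SuppressWarnings("unchecked")
            protected MultipartParsingResult parseFileItems(List fileItems, String encoding) {
                // add() appends to a value list instead of overwriting, so several
                // files arriving under the same form input name all survive
                MultiValueMap<String, MultipartFile> multipartFiles =
                        new LinkedMultiValueMap<String, MultipartFile>();
                Map<String, String[]> multipartParameters = new HashMap<String, String[]>();

                for (Iterator it = fileItems.iterator(); it.hasNext();) {
                    FileItem fileItem = (FileItem) it.next();
                    if (!fileItem.isFormField()) {
                        // keyed by the form field name ("myFile"), not the filename
                        multipartFiles.add(fileItem.getFieldName(), new CommonsMultipartFile(fileItem));
                    }
                    // ordinary form fields omitted for brevity; handle them as in
                    // the original class above
                }
                return new MultipartParsingResult(multipartFiles, multipartParameters);
            }
        }

    Since add() never reports a previous mapping, the "Multiple files for field name" exception and the one-file-per-name limitation both disappear; whether this also clears the HTTP 400 depends on what HiddenHttpMethodFilter hands to the resolver, so it would need testing against this exact setup.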


  • Apache 2 Virtual Hosts no working on OSX 10.6

    - by matt_lethargic
    This is my first MacBook and I'm trying to get virtual hosts up and running, since it's going to be my dev machine. I've got apache/php/mysql running fine; the problem is that whatever address I go to, I just get one of the virtual hosts I've set up. I can't even get to the root site anymore. I had phpmyadmin set up on http://localhost/pma but now that comes up with an error. If I take out the vhosts config file it seems to work again. I've put all the configs I think you'll need below.

    ############## httpd config #############

        ServerRoot "/usr"
        Listen 80
        LoadModule authn_file_module libexec/apache2/mod_authn_file.so
        LoadModule authn_dbm_module libexec/apache2/mod_authn_dbm.so
        LoadModule authn_anon_module libexec/apache2/mod_authn_anon.so
        LoadModule authn_dbd_module libexec/apache2/mod_authn_dbd.so
        LoadModule authn_default_module libexec/apache2/mod_authn_default.so
        LoadModule authz_host_module libexec/apache2/mod_authz_host.so
        LoadModule authz_groupfile_module libexec/apache2/mod_authz_groupfile.so
        LoadModule authz_user_module libexec/apache2/mod_authz_user.so
        LoadModule authz_dbm_module libexec/apache2/mod_authz_dbm.so
        LoadModule authz_owner_module libexec/apache2/mod_authz_owner.so
        LoadModule authz_default_module libexec/apache2/mod_authz_default.so
        LoadModule auth_basic_module libexec/apache2/mod_auth_basic.so
        LoadModule auth_digest_module libexec/apache2/mod_auth_digest.so
        LoadModule cache_module libexec/apache2/mod_cache.so
        LoadModule disk_cache_module libexec/apache2/mod_disk_cache.so
        LoadModule mem_cache_module libexec/apache2/mod_mem_cache.so
        LoadModule dbd_module libexec/apache2/mod_dbd.so
        LoadModule dumpio_module libexec/apache2/mod_dumpio.so
        LoadModule reqtimeout_module libexec/apache2/mod_reqtimeout.so
        LoadModule ext_filter_module libexec/apache2/mod_ext_filter.so
        LoadModule include_module libexec/apache2/mod_include.so
        LoadModule filter_module libexec/apache2/mod_filter.so
        LoadModule substitute_module libexec/apache2/mod_substitute.so
        LoadModule deflate_module libexec/apache2/mod_deflate.so
        LoadModule log_config_module libexec/apache2/mod_log_config.so
        LoadModule log_forensic_module libexec/apache2/mod_log_forensic.so
        LoadModule logio_module libexec/apache2/mod_logio.so
        LoadModule env_module libexec/apache2/mod_env.so
        LoadModule mime_magic_module libexec/apache2/mod_mime_magic.so
        LoadModule cern_meta_module libexec/apache2/mod_cern_meta.so
        LoadModule expires_module libexec/apache2/mod_expires.so
        LoadModule headers_module libexec/apache2/mod_headers.so
        LoadModule ident_module libexec/apache2/mod_ident.so
        LoadModule usertrack_module libexec/apache2/mod_usertrack.so
        LoadModule setenvif_module libexec/apache2/mod_setenvif.so
        LoadModule version_module libexec/apache2/mod_version.so
        LoadModule proxy_module libexec/apache2/mod_proxy.so
        LoadModule proxy_connect_module libexec/apache2/mod_proxy_connect.so
        LoadModule proxy_ftp_module libexec/apache2/mod_proxy_ftp.so
        LoadModule proxy_http_module libexec/apache2/mod_proxy_http.so
        LoadModule proxy_scgi_module libexec/apache2/mod_proxy_scgi.so
        LoadModule proxy_ajp_module libexec/apache2/mod_proxy_ajp.so
        LoadModule proxy_balancer_module libexec/apache2/mod_proxy_balancer.so
        LoadModule ssl_module libexec/apache2/mod_ssl.so
        LoadModule mime_module libexec/apache2/mod_mime.so
        LoadModule dav_module libexec/apache2/mod_dav.so
        LoadModule status_module libexec/apache2/mod_status.so
        LoadModule autoindex_module libexec/apache2/mod_autoindex.so
        LoadModule asis_module libexec/apache2/mod_asis.so
        LoadModule info_module libexec/apache2/mod_info.so
        LoadModule cgi_module libexec/apache2/mod_cgi.so
        LoadModule dav_fs_module libexec/apache2/mod_dav_fs.so
        LoadModule vhost_alias_module libexec/apache2/mod_vhost_alias.so
        LoadModule negotiation_module libexec/apache2/mod_negotiation.so
        LoadModule dir_module libexec/apache2/mod_dir.so
        LoadModule imagemap_module libexec/apache2/mod_imagemap.so
        LoadModule actions_module libexec/apache2/mod_actions.so
        LoadModule speling_module libexec/apache2/mod_speling.so
        LoadModule userdir_module libexec/apache2/mod_userdir.so
        LoadModule alias_module libexec/apache2/mod_alias.so
        LoadModule rewrite_module libexec/apache2/mod_rewrite.so
        LoadModule bonjour_module libexec/apache2/mod_bonjour.so
        LoadModule php5_module libexec/apache2/libphp5.so
        <IfModule !mpm_netware_module>
            <IfModule !mpm_winnt_module>
                User _www
                Group _www
            </IfModule>
        </IfModule>
        ServerAdmin [email protected]
        ServerName localhost:80
        DocumentRoot "/Library/WebServer/Documents"
        <Directory />
            Options FollowSymLinks
            AllowOverride All
            Order allow,deny
            Allow from all
        </Directory>
        <Directory "/Library/WebServer/Documents">
            Options Indexes FollowSymLinks MultiViews
            AllowOverride All
            Order allow,deny
            Allow from all
        </Directory>
        <IfModule dir_module>
            DirectoryIndex index.html
        </IfModule>
        <FilesMatch "^\.([Hh][Tt]|[Dd][Ss]_[Ss])">
            Order allow,deny
            Deny from all
            Satisfy All
        </FilesMatch>
        <Files "rsrc">
            Order allow,deny
            Deny from all
            Satisfy All
        </Files>
        <DirectoryMatch ".*\.\.namedfork">
            Order allow,deny
            Deny from all
            Satisfy All
        </DirectoryMatch>
        ErrorLog "/private/var/log/apache2/error_log"
        LogLevel warn
        <IfModule log_config_module>
            LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined
            LogFormat "%h %l %u %t \"%r\" %>s %b" common
            <IfModule logio_module>
                LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\" %I %O" combinedio
            </IfModule>
            CustomLog "/private/var/log/apache2/access_log" common
        </IfModule>
        <IfModule alias_module>
            ScriptAliasMatch ^/cgi-bin/((?!(?i:webobjects)).*$) "/Library/WebServer/CGI-Executables/$1"
        </IfModule>
        <Directory "/Library/WebServer/CGI-Executables">
            AllowOverride None
            Options None
            Order allow,deny
            Allow from all
        </Directory>
        DefaultType text/plain
        <IfModule mime_module>
            TypesConfig /private/etc/apache2/mime.types
            AddType application/x-compress .Z
            AddType application/x-gzip .gz .tgz
        </IfModule>
        TraceEnable off
        Include /private/etc/apache2/extra/httpd-mpm.conf
        Include /private/etc/apache2/extra/httpd-autoindex.conf
        Include /private/etc/apache2/extra/httpd-languages.conf
        Include /private/etc/apache2/extra/httpd-userdir.conf
        Include /private/etc/apache2/extra/httpd-vhosts.conf
        Include /private/etc/apache2/extra/httpd-manual.conf
        <IfModule ssl_module>
            SSLRandomSeed startup builtin
            SSLRandomSeed connect builtin
        </IfModule>
        Include /private/etc/apache2/other/*.conf

    ############# httpd-vhosts ################

        NameVirtualHost *:80
        <VirtualHost *:80>
            ServerAdmin [email protected]
            DocumentRoot "/Users/matt/Workspace/farmers-arms/website/farmers_arms"
            ServerName dev.farmers
            ServerAlias www.dev.farmers
            ErrorLog "/private/var/log/apache2/localhost.farmers-error_log"
            CustomLog "/private/var/log/apache2/localhost.farmers-access_log" common
            <Directory "/Users/matt/Workspace/farmers-arms/website/farmers_arms">
                Options FollowSymLinks
                AllowOverride All
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>

    Hosts file:

        127.0.0.1       localhost
        255.255.255.255 broadcasthost
        ::1             localhost
        fe80::1%lo0     localhost
        127.0.0.1       dev.farmers
        127.0.0.1       dev.hft

    Help!!!
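
    One detail worth noting about the config above (an observation plus an untested sketch, not a confirmed fix): with NameVirtualHost *:80 active, Apache sends every request whose Host header matches no ServerName or ServerAlias to the first <VirtualHost> in the file, which would explain why localhost and phpmyadmin requests all land on dev.farmers. Declaring a vhost for localhost first restores a default pointing at the stock DocumentRoot:

        NameVirtualHost *:80

        # First vhost wins for any unmatched Host header, so make it the old root site
        <VirtualHost *:80>
            ServerName localhost
            DocumentRoot "/Library/WebServer/Documents"
        </VirtualHost>

        <VirtualHost *:80>
            ServerName dev.farmers
            ServerAlias www.dev.farmers
            DocumentRoot "/Users/matt/Workspace/farmers-arms/website/farmers_arms"
        </VirtualHost>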


  • How to use BCDEdit to dual boot Windows installations?

    - by Ian Boyd
    What are the bcdedit commands necessary to set up dual boot between different installations of Windows?5

    Background

    i recently installed Windows 8 onto a separate hard drive1. Now that Windows 8 is installed, i want to dual-boot back to Windows 7. i have my two2 hard drives: So you can see that i have my two disks, with the partitions containing Windows:

        Windows 7: \\PhysicalDisk0 (partition 03)
        Windows 8: \\PhysicalDisk2 (partition 1)

    What i'm trying to figure out is how to use bcdedit to instruct the thing that boots Windows that there is another Windows installation out there. Running bcdedit now, it shows the current configuration:

        C:\WINDOWS\system32>bcdedit

        Windows Boot Manager
        --------------------
        identifier              {bootmgr}
        device                  partition=\Device\HarddiskVolume2
        description             Windows Boot Manager
        locale                  en-US
        inherit                 {globalsettings}
        integrityservices       Enable
        default                 {current}
        resumeobject            {ce153eb7-3786-11e2-87c0-e740e123299f}
        displayorder            {current}
        toolsdisplayorder       {memdiag}
        timeout                 30

        Windows Boot Loader
        -------------------
        identifier              {current}
        device                  partition=C:
        path                    \WINDOWS\system32\winload.exe
        description             Windows 8
        locale                  en-US
        inherit                 {bootloadersettings}
        recoverysequence        {ce153eb9-3786-11e2-87c0-e740e123299f}
        integrityservices       Enable
        recoveryenabled         Yes
        allowedinmemorysettings 0x15000075
        osdevice                partition=C:
        systemroot              \WINDOWS
        resumeobject            {ce153eb7-3786-11e2-87c0-e740e123299f}
        nx                      OptIn
        bootmenupolicy          Standard
        hypervisorlaunchtype    Auto

    i cannot find any documentation on the difference between Windows Boot Manager and Windows Boot Loader.

    Documentation

    There is some documentation on Bcdedit:

        Technet: Command Line Reference - Bcdedit
        Technet: Windows Automated Installation Kit - BCDEdit Command Line Options
        Whitepaper - BCDEdit Commands for Boot Environment (Word Document)

    But they don't explain how to edit the binary boot configuration data. If i had to guess, i would think that a Windows Boot Manager instructs the BIOS what program it should run. That program would give the user a set of boot choices. That leaves Windows Boot Loader to be a particular boot choice, one that represents a particular installation of Windows. If that is the case i would need to create a new Windows Boot Loader entry. This means i might want to use the /create parameter:

        /create
        Creates a new boot entry:
        bcdedit [/store filename] /create [id] /d description [/application apptype | /inherit [apptype] | /inherit DEVICE | /device]

    So i assume a syntax of:

        >bcdedit /create /d "The old Windows 7" /application osloader

    where application can be one of the following types:

        Apptype     Description
        BOOTSECTOR  The boot sector application
        OSLOADER    The Windows boot loader
        RESUME      A resume application

    Unfortunately, the only documentation about osloader is "The Windows boot loader". i don't see how that can differentiate between Windows 8 on one hard drive and Windows 7 on another. The other possible parameter when you /create a boot loader is:

        >bcdedit /create /D "Windows Vista" /device "The Quick Brown Fox"

    Unfortunately the documentation is missing for /device:

        /device
        Optional. If id is not set to a well-known identifier, the option that is used to specify the new boot entry as an additional device options entry.

    Since i did not set id to a well-known identifier, i must set /device to "the option that is used to specify the new boot entry as an additional device options entry". i know all those words; they're all English. But i have no idea what it is saying; those words in that order seem nonsensical.
    So i'm somewhat stymied. i don't want to be like Dan Stolts from Microsoft:

        I found no content that was particularly helpful when I hosed my machine by playing with BCDEdit. This post would have been ok if there was much more detail especially on the /set command OSDevice, etc. So once I got my machine fixed, I documented the solution and the information is here....

    i mean, if a Microsoft guy can't even figure out how to use BCDEdit to edit his BCD, then what chance do i have?

    Bonus Reading

        BCDEdit Command-Line Options
        Bcdedit
        Server 2008 R2 or Windows 7 System Will NOT Boot After Making Changes To Boot Manager Using BCDEdit
        Visual BCD Editor4
        Windows 7 and Windows 8 RTM Dual Boot Setup

    Footnotes

    1 Since the Windows 8 installer would have damaged my Windows 7 install, i decided to unplug my "main" hard drive during the install. Which is a long-winded explanation of why the Windows 8 installer didn't detect the existing Windows 7 install. Normally the installer would have automatically created the required entries for dual-boot. Not that the reason i'm asking the question is important.

    2 Really there's three drives, but the third is just bulk storage. The existence of a 3rd hard drive is irrelevant to the question. i only mention it in case someone wants to know why the screenshot has 3 hard drives when i only mention two.

    3 i arbitrarily started numbering partitions at "zero"; not to imply that partitions are numbered starting at zero. i only mention partitions because i don't see how any boot-loader could do its job without knowing which partition, and which folder, an installation of Windows is located in.

    4 i'm asking about BCDEdit. i tried Visual BCD Editor. It seems to be a visual BCD editor. That is to say that it's a GUI, but it still uses the same terminology as BCDEdit, and requires the same knowledge that BCD doesn't document.

    5 For simplicity's sake we'll assume that all installations of Windows i want to dual-boot between are Windows Vista or later, making them all compatible with BCDEdit and the binary boot loader. The alternative would require delving into the intricacies of the old ntloader. Nor am i asking about dual booting to Linux, or how to boot to a Virtual Hard Drive (vhd) image. Just modern versions of Windows on existing hard drives in the same machine.

    Note: You can ignore everything after the word Background. It's all pointless exposition to satisfy some people's need for "research effort" before they'll consider being helpful. Some people have even been known to summarily close questions unless there is research effort. Some people have been known to close questions if there is too much research effort. Some people close questions when i put the note saying that they can ignore everything after the Background out of spite. Some people are just grumpy.
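
    For the record, one commonly documented route sidesteps /create entirely by cloning the existing loader entry and repointing it. This is a sketch, not a tested recipe: D: stands in for whichever partition actually holds the Windows 7 \Windows folder, and {new-guid} is a placeholder for the GUID that /copy prints.

        :: Clone the current (Windows 8) loader entry; bcdedit prints the new GUID
        bcdedit /copy {current} /d "Windows 7"

        :: Point the clone at the Windows 7 partition (substitute the printed GUID)
        bcdedit /set {new-guid} device partition=D:
        bcdedit /set {new-guid} osdevice partition=D:

        :: Make the new entry appear in the boot menu
        bcdedit /displayorder {new-guid} /addlast

    Because the copy inherits path \WINDOWS\system32\winload.exe and systemroot \WINDOWS from the Windows 8 entry, this only works when the Windows 7 installation uses the same folder layout, which the listing above suggests it does.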


  • Using nginx + wordpress with all wordpress files in a subdirectory

    - by GorillaPatch
    My setup: I am running nginx 0.7.67 on Debian Lenny as a webserver, not as a reverse proxy. I am using php5-fpm to handle my PHP requests, which works fine.

    My aim: I would like to have a wordpress installation that is laid out as described here: clean wordpress subversion installation. I would like a clean wordpress installation without cluttering my server root directory with all the wordpress files. That means that my wordpress installation would be in /wordpress and my themes and plugins inside /wordpress-content. The important point, however, is that if you navigate to my domain www.example.com then you would be taken directly to the wordpress blog, without having to specify the subdirectory where wordpress lives. I found a how-to at the nginx site, installing wordpress, but unfortunately this is for moving the entire wordpress directory instead of redirecting the traffic to it. I tried the following configuration:

    example.conf in sites-available:

        server {
            listen 80;
            server_name www.example.com;
            access_log /var/log/nginx/www.example.com.access.log main;
            root /var/www/example/htdocs;
            location / {
                try_files $uri $uri/ /wordpress/index.php?q=$uri&$args;
            }
            include /etc/nginx/includes/php5-wordpress.conf;
            include /etc/nginx/includes/deny.conf;
        }

    php5-wordpress.conf in includes:

        location /wordpress {
            try_files $uri $uri/ /wordpress/index.php?q=$uri&$args;
        }
        location ~ \.php$ {
            fastcgi_split_path_info ^(/wordpress)(/.*)$;
            fastcgi_ignore_client_abort on;
            fastcgi_pass unix:/var/run/php5-fpm.socket;
            fastcgi_index index.php;
            include /etc/nginx/fastcgi_params;
        }

    fastcgi_params:

        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param QUERY_STRING $query_string;
        fastcgi_param REQUEST_METHOD $request_method;
        fastcgi_param CONTENT_TYPE $content_type;
        fastcgi_param CONTENT_LENGTH $content_length;
        fastcgi_param REQUEST_URI $request_uri;
        fastcgi_param DOCUMENT_URI $document_uri;
        fastcgi_param DOCUMENT_ROOT $document_root;
        fastcgi_param SERVER_PROTOCOL $server_protocol;
        fastcgi_param GATEWAY_INTERFACE CGI/1.1;
        fastcgi_param SERVER_SOFTWARE nginx/$nginx_version;
        fastcgi_param REMOTE_ADDR $remote_addr;
        fastcgi_param REMOTE_PORT $remote_port;
        fastcgi_param SERVER_ADDR $server_addr;
        fastcgi_param SERVER_PORT $server_port;
        fastcgi_param SERVER_NAME $server_name;
        # PHP only, required if PHP was built with --enable-force-cgi-redirect
        fastcgi_param REDIRECT_STATUS 200;

    The problems I have: when I go to the address http://www.example.com I get a 403 error, as I have disabled directory listing. Instead I would like my wordpress installation to appear there. Also, if I navigate to http://www.example.com/wordpress I get a "file not found" error. However, if I comment out the fastcgi_split_path_info line in my php5-wordpress.conf, at least the wordpress installation works inside /wordpress. I need help debugging this behavior, or pointers to where I can find more information. Thanks a lot.

    Update: Added error log entry for the 403 error.
    In the error.log I get the following entry for the 403 error:

        2010/12/11 07:54:24 [error] 9496#0: *1 directory index of "/var/www/example/htdocs/" is forbidden, client: XXX.XXX.XXX.XXX, server: www.example.com, request: "GET / HTTP/1.1", host: "www.example.com"

    Update 2: Added the nginx.conf below:

        user www-data;
        worker_processes 1;
        error_log /var/log/nginx/error.log;
        pid /var/run/nginx.pid;
        events {
            worker_connections 1024;
        }
        http {
            include /etc/nginx/mime.types;
            default_type application/octet-stream;
            log_format main '$remote_addr - $remote_user [$time_local] $status '
                            '"$request" $body_bytes_sent "$http_referer" '
                            '"$http_user_agent" "$http_x_forwarded_for"';
            access_log /var/log/nginx/access.log;
            sendfile on;
            #tcp_nopush on;
            keepalive_timeout 65;
            tcp_nodelay on;
            gzip on;
            index index.php index.html;
            include /etc/nginx/conf.d/*.conf;
            include /etc/nginx/sites-enabled/*;
        }
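
    A plausible reading of that log line (an untested sketch follows, not a verified fix): the $uri/ clause in try_files matches the document root as an existing directory, nginx then looks for an index file there, finds none, and returns 403 before the /wordpress fallback is ever consulted. Dropping the directory clause at the root forces the fallback:

        location / {
            # no $uri/ clause, so an index-less root directory falls through
            # to the WordPress front controller instead of returning 403
            try_files $uri /wordpress/index.php?q=$uri&$args;
        }

    The /wordpress "file not found" looks like a separate issue: with fastcgi_split_path_info set to ^(/wordpress)(/.*)$, $fastcgi_script_name is reduced to /wordpress, so SCRIPT_FILENAME points at a directory rather than index.php; that would also explain why commenting the line out makes /wordpress work.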


  • localhost/phpmyadmin pulls blank page

    - by Atul Modi
    I am configuring my local machine as an Internet gateway with website-development capabilities, and I have installed all the required software on it. I have also disabled SELinux. The PROBLEM is that when I go to http://localhost/phpMyAdmin (or all lower case), the page comes up blank. I am pasting the code from the httpd.conf file as well as from the phpMyAdmin.conf file below. I am using Fedora 16 for this.

    httpd.conf:

        ServerTokens OS
        ServerRoot "/etc/httpd"
        PidFile run/httpd.pid
        Timeout 60
        KeepAlive Off
        MaxKeepAliveRequests 100
        KeepAliveTimeout 5
        StartServers 8
        MinSpareServers 5
        MaxSpareServers 20
        ServerLimit 256
        MaxClients 256
        MaxRequestsPerChild 4000
        StartServers 4
        MaxClients 300
        MinSpareThreads 25
        MaxSpareThreads 75
        ThreadsPerChild 25
        MaxRequestsPerChild 0
        Listen 80
        LoadModule auth_basic_module modules/mod_auth_basic.so
        LoadModule auth_digest_module modules/mod_auth_digest.so
        LoadModule authn_file_module modules/mod_authn_file.so
        LoadModule authn_alias_module modules/mod_authn_alias.so
        LoadModule authn_anon_module modules/mod_authn_anon.so
        LoadModule authn_dbm_module modules/mod_authn_dbm.so
        LoadModule authn_default_module modules/mod_authn_default.so
        LoadModule authz_host_module modules/mod_authz_host.so
        LoadModule authz_user_module modules/mod_authz_user.so
        LoadModule authz_owner_module modules/mod_authz_owner.so
        LoadModule authz_groupfile_module modules/mod_authz_groupfile.so
        LoadModule authz_dbm_module modules/mod_authz_dbm.so
        LoadModule authz_default_module modules/mod_authz_default.so
        LoadModule authn_dbd_module modules/mod_authn_dbd.so
        LoadModule dbd_module modules/mod_dbd.so
        LoadModule ldap_module modules/mod_ldap.so
        LoadModule authnz_ldap_module modules/mod_authnz_ldap.so
        LoadModule include_module modules/mod_include.so
        LoadModule log_config_module modules/mod_log_config.so
        LoadModule logio_module modules/mod_logio.so
        LoadModule env_module modules/mod_env.so
        LoadModule ext_filter_module modules/mod_ext_filter.so
        LoadModule mime_magic_module modules/mod_mime_magic.so
        LoadModule cern_meta_module modules/mod_cern_meta.so
        LoadModule expires_module modules/mod_expires.so
        LoadModule deflate_module modules/mod_deflate.so
        LoadModule headers_module modules/mod_headers.so
        LoadModule usertrack_module modules/mod_usertrack.so
        LoadModule setenvif_module modules/mod_setenvif.so
        LoadModule mime_module modules/mod_mime.so
        LoadModule dav_module modules/mod_dav.so
        LoadModule status_module modules/mod_status.so
        LoadModule autoindex_module modules/mod_autoindex.so
        LoadModule info_module modules/mod_info.so
        LoadModule dav_fs_module modules/mod_dav_fs.so
        LoadModule vhost_alias_module modules/mod_vhost_alias.so
        LoadModule negotiation_module modules/mod_negotiation.so
        LoadModule dir_module modules/mod_dir.so
        LoadModule actions_module modules/mod_actions.so
        LoadModule speling_module modules/mod_speling.so
        LoadModule userdir_module modules/mod_userdir.so
        LoadModule alias_module modules/mod_alias.so
        LoadModule substitute_module modules/mod_substitute.so
        LoadModule rewrite_module modules/mod_rewrite.so
        LoadModule proxy_module modules/mod_proxy.so
        LoadModule proxy_balancer_module modules/mod_proxy_balancer.so
        LoadModule proxy_ftp_module modules/mod_proxy_ftp.so
        LoadModule proxy_http_module modules/mod_proxy_http.so
        LoadModule proxy_ajp_module modules/mod_proxy_ajp.so
        LoadModule proxy_connect_module modules/mod_proxy_connect.so
        LoadModule cache_module modules/mod_cache.so
        LoadModule suexec_module modules/mod_suexec.so
        LoadModule disk_cache_module modules/mod_disk_cache.so
        LoadModule cgi_module modules/mod_cgi.so
        LoadModule version_module modules/mod_version.so
        Include conf.d/*.conf
        User apache
        Group apache
        ServerAdmin root@localhost
        UseCanonicalName Off
        DocumentRoot "/var/www/html"
        Options FollowSymLinks
        AllowOverride None
        Options Indexes FollowSymLinks
        AllowOverride None
        Order allow,deny
        Allow from all
        UserDir disabled
        DirectoryIndex index.html index.htm index.php
        AccessFileName .htaccess
        Order allow,deny
        Deny from all
        Satisfy All
        TypesConfig /etc/mime.types
        DefaultType text/plain
        MIMEMagicFile conf/magic
        HostnameLookups Off
        ErrorLog logs/error_log
        LogLevel warn
        LogFormat "%h %l %u %t \"%r\" %s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined
        LogFormat "%h %l %u %t \"%r\" %s %b" common
        LogFormat "%{Referer}i - %U" referer
        LogFormat "%{User-agent}i" agent
        CustomLog logs/access_log combined
        ServerSignature On
        Alias /icons/ "/var/www/icons/"
        Options Indexes MultiViews FollowSymLinks
        AllowOverride None
        Order allow,deny
        Allow from all
        # Location of the WebDAV lock database.
        DAVLockDB /var/lib/dav/lockdb
        ScriptAlias /cgi-bin/ "/var/www/cgi-bin/"
        AllowOverride None
        Options None
        Order allow,deny
        Allow from all
        IndexOptions FancyIndexing VersionSort NameWidth=* HTMLTable Charset=UTF-8
        AddIconByEncoding (CMP,/icons/compressed.gif) x-compress x-gzip
        AddIconByType (TXT,/icons/text.gif) text/*
        AddIconByType (IMG,/icons/image2.gif) image/*
        AddIconByType (SND,/icons/sound2.gif) audio/*
        AddIconByType (VID,/icons/movie.gif) video/*
        AddIcon /icons/binary.gif .bin .exe
        AddIcon /icons/binhex.gif .hqx
        AddIcon /icons/tar.gif .tar
        AddIcon /icons/world2.gif .wrl .wrl.gz .vrml .vrm .iv
        AddIcon /icons/compressed.gif .Z .z .tgz .gz .zip
        AddIcon /icons/a.gif .ps .ai .eps
        AddIcon /icons/layout.gif .html .shtml .htm .pdf
        AddIcon /icons/text.gif .txt
        AddIcon /icons/c.gif .c
        AddIcon /icons/p.gif .pl .py
        AddIcon /icons/f.gif .for
        AddIcon /icons/dvi.gif .dvi
        AddIcon /icons/uuencoded.gif .uu
        AddIcon /icons/script.gif .conf .sh .shar .csh .ksh .tcl
        AddIcon /icons/tex.gif .tex
        AddIcon /icons/bomb.gif core
        AddIcon /icons/back.gif ..
        AddIcon /icons/hand.right.gif README
        AddIcon /icons/folder.gif ^^DIRECTORY^^
        AddIcon /icons/blank.gif ^^BLANKICON^^
        DefaultIcon /icons/unknown.gif
        ReadmeName README.html
        HeaderName HEADER.html
        IndexIgnore .??* *~ # HEADER README* RCS CVS *,v *,t
        AddLanguage ca .ca
        AddLanguage cs .cz .cs
        AddLanguage da .dk
        AddLanguage de .de
        AddLanguage el .el
        AddLanguage en .en
        AddLanguage eo .eo
        AddLanguage es .es
        AddLanguage et .et
        AddLanguage fr .fr
        AddLanguage he .he
        AddLanguage hr .hr
        AddLanguage it .it
        AddLanguage ja .ja
        AddLanguage ko .ko
        AddLanguage ltz .ltz
        AddLanguage nl .nl
        AddLanguage nn .nn
        AddLanguage no .no
        AddLanguage pl .po
        AddLanguage pt .pt
        AddLanguage pt-BR .pt-br
        AddLanguage ru .ru
        AddLanguage sv .sv
        AddLanguage zh-CN .zh-cn
        AddLanguage zh-TW .zh-tw
        LanguagePriority en ca cs da de el eo es et fr he hr it ja ko ltz nl nn no pl pt pt-BR ru sv zh-CN zh-TW
        ForceLanguagePriority Prefer Fallback
        AddDefaultCharset UTF-8
        AddType application/x-tar .tgz
        AddType application/x-httpd-php .php
        AddType application/x-httpd-php .xml
        AddHandler application/x-httpd-php .xml
        AddType application/x-compress .Z
        AddType application/x-gzip .gz .tgz
        AddType application/x-x509-ca-cert .crt
        AddType application/x-pkcs7-crl .crl
        AddHandler type-map var
        AddType text/html .shtml
        AddOutputFilter INCLUDES .shtml
        Alias /error/ "/var/www/error/"
        AllowOverride None
        Options IncludesNoExec
        AddOutputFilter Includes html
        AddHandler type-map var
        Order allow,deny
        Allow from all
        LanguagePriority en
        ForceLanguagePriority Prefer Fallback
        ErrorDocument 400 /error/HTTP_BAD_REQUEST.html.var
        ErrorDocument 401 /error/HTTP_UNAUTHORIZED.html.var
        ErrorDocument 403 /error/HTTP_FORBIDDEN.html.var
        ErrorDocument 404 /error/HTTP_NOT_FOUND.html.var
        ErrorDocument 405 /error/HTTP_METHOD_NOT_ALLOWED.html.var
        ErrorDocument 408 /error/HTTP_REQUEST_TIME_OUT.html.var
        ErrorDocument 500 /error/HTTP_INTERNAL_SERVER_ERROR.html.var
        ErrorDocument 503 /error/HTTP_SERVICE_UNAVAILABLE.html.var
        BrowserMatch "Mozilla/2" nokeepalive
        BrowserMatch "MSIE 4.0b2;" nokeepalive downgrade-1.0 force-response-1.0
        BrowserMatch "RealPlayer 4.0" force-response-1.0
        BrowserMatch "Java/1.0" force-response-1.0
        BrowserMatch "JDK/1.0" force-response-1.0
        BrowserMatch "Microsoft Data Access Internet Publishing Provider" redirect-carefully
        BrowserMatch "MS FrontPage" redirect-carefully
        BrowserMatch "^WebDrive" redirect-carefully
        BrowserMatch "^WebDAVFS/1.[0123]" redirect-carefully
        BrowserMatch "^gnome-vfs/1.0" redirect-carefully
        BrowserMatch "^XML Spy" redirect-carefully
        BrowserMatch "^Dreamweaver-WebDAV-SCM1" redirect-carefully
        Order allow,deny
        Allow from all

    # phpMyAdmin.conf

        Alias /phpMyAdmin /usr/share/phpMyAdmin
        Alias /phpmyadmin /usr/share/phpMyAdmin
        Order Allow,Deny
        Allow from All
        Allow from 127.0.0.1
        Allow from ::1
        Order Allow,Deny
        Allow from All
        Allow from 127.0.0.1
        Allow from ::1
        Order Deny,Allow
        Deny from All
        Allow from None
        Order Deny,Allow
        Deny from All
        Allow from None
        Order Deny,Allow
        Deny from All
        Allow from None

    Can anyone please help with this? An urgent reply would be appreciated, because I have been struggling with this for one and a half months. Thank you, Atul
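
    A blank page from phpMyAdmin is often a fatal PHP error with display_errors turned off, so a quick way to split the problem in half is to check whether PHP runs at all. Below is a hypothetical test file, not from the original post: save it as /var/www/html/info.php and browse to http://localhost/info.php.

        <?php
        // If even this page comes up blank, PHP itself is failing (or not being
        // handled by Apache) before phpMyAdmin is ever involved; if it renders,
        // the problem is inside the phpMyAdmin setup instead.
        error_reporting(E_ALL);
        ini_set('display_errors', '1');
        phpinfo();

    Checking /var/log/httpd/error_log right after loading the blank page would likewise show the fatal error, if there is one.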


  • nginx can't load images,css,js

    - by EquinoX
    When I point nginx at a URL with an image extension, such as http://50.56.81.42/phpMyAdmin/themes/original/img/logo_right.png (as an example), it gives me a 404 error because it can't find the file, but the file is actually there. What is potentially wrong?

    UPDATE: Here's the error log that I was able to pull out:

        2011/02/27 05:53:29 [error] 18679#0: *225 open() "/usr/local/nginx/html/phpMyAdmin/js/mooRainbow/mooRainbow.css" failed (2: No such file or directory), client: 70.176.18.156, server: localhost, request: "GET /phpMyAdmin/js/mooRainbow/mooRainbow.css HTTP/1.1", host: "50.56.81.42", referrer: "http://50.56.81.42/phpMyAdmin/main.php"
        2011/02/27 05:53:29 [error] 18679#0: *226 open() "/usr/local/nginx/html/phpMyAdmin/print.css" failed (2: No such file or directory), client: 70.176.18.156, server: localhost, request: "GET /phpMyAdmin/print.css HTTP/1.1", host: "50.56.81.42", referrer: "http://50.56.81.42/phpMyAdmin/main.php"
        2011/02/27 05:53:29 [error] 18679#0: *228 open() "/usr/local/nginx/html/phpMyAdmin/themes/original/img/logo_right.png" failed (2: No such file or directory), client: 70.176.18.156, server: localhost, request: "GET /phpMyAdmin/themes/original/img/logo_right.png HTTP/1.1", host: "50.56.81.42", referrer: "http://50.56.81.42/phpMyAdmin/main.php"
        2011/02/27 05:53:29 [error] 18679#0: *223 open() "/usr/local/nginx/html/phpMyAdmin/themes/original/img/b_help.png" failed (2: No such file or directory), client: 70.176.18.156, server: localhost, request: "GET /phpMyAdmin/themes/original/img/b_help.png HTTP/1.1", host: "50.56.81.42", referrer: "http://50.56.81.42/phpMyAdmin/main.php"
        2011/02/27 05:53:29 [error] 18679#0: *227 open() "/usr/local/nginx/html/phpMyAdmin/themes/original/img/s_warn.png" failed (2: No such file or directory), client: 70.176.18.156, server: localhost, request: "GET /phpMyAdmin/themes/original/img/s_warn.png HTTP/1.1", host: "50.56.81.42", referrer: "http://50.56.81.42/phpMyAdmin/main.php"
        2011/02/27 05:53:29 [error] 18679#0: *227 open() "/usr/local/nginx/html/phpMyAdmin/favicon.ico" failed (2: No such file or directory), client: 70.176.18.156, server: localhost, request: "GET /phpMyAdmin/favicon.ico HTTP/1.1", host: "50.56.81.42"
        2011/02/27 05:54:39 [error] 18679#0: *237 open() "/usr/local/nginx/html/phpMyAdmin/print.css" failed (2: No such file or directory), client: 70.176.18.156, server: localhost, request: "GET /phpMyAdmin/print.css HTTP/1.1", host: "50.56.81.42", referrer: "http://50.56.81.42/phpMyAdmin/main.php"
        2011/02/27 05:54:39 [error] 18679#0: *235 open() "/usr/local/nginx/html/phpMyAdmin/js/mooRainbow/mooRainbow.css" failed (2: No such file or directory), client: 70.176.18.156, server: localhost, request: "GET /phpMyAdmin/js/mooRainbow/mooRainbow.css HTTP/1.1", host: "50.56.81.42", referrer: "http://50.56.81.42/phpMyAdmin/main.php"
        2011/02/27 05:54:39 [error] 18679#0: *238 open() "/usr/local/nginx/html/phpMyAdmin/themes/original/img/logo_right.png" failed (2: No such file or directory), client: 70.176.18.156, server: localhost, request: "GET /phpMyAdmin/themes/original/img/logo_right.png HTTP/1.1", host: "50.56.81.42", referrer: "http://50.56.81.42/phpMyAdmin/main.php"
        2011/02/27 05:54:39 [error] 18679#0: *239 open() "/usr/local/nginx/html/phpMyAdmin/themes/original/img/b_help.png" failed (2: No such file or directory), client: 70.176.18.156, server: localhost, request: "GET /phpMyAdmin/themes/original/img/b_help.png HTTP/1.1", host: "50.56.81.42", referrer: "http://50.56.81.42/phpMyAdmin/main.php"
        2011/02/27 05:54:39 [error] 18679#0: *233 open() "/usr/local/nginx/html/phpMyAdmin/themes/original/img/s_warn.png" failed (2: No such file or directory), client: 70.176.18.156, server: localhost, request: "GET /phpMyAdmin/themes/original/img/s_warn.png HTTP/1.1", host: "50.56.81.42", referrer: "http://50.56.81.42/phpMyAdmin/main.php"
        2011/02/27 05:54:39 [error] 18679#0: *233 open() "/usr/local/nginx/html/phpMyAdmin/favicon.ico" failed (2: No such file or directory), client: 70.176.18.156, server: localhost, request: "GET /phpMyAdmin/favicon.ico HTTP/1.1", host: "50.56.81.42"

    Here's my nginx.conf file, in case I am missing something:

        #user nobody;
        worker_processes 1;
        #error_log logs/error.log;
        #error_log logs/error.log notice;
        #error_log logs/error.log info;
        #pid logs/nginx.pid;
        events {
            worker_connections 1024;
        }
        http {
            include mime.types;
            default_type application/octet-stream;
            #log_format main '$remote_addr - $remote_user [$time_local] "$request" '
            #                '$status $body_bytes_sent "$http_referer" '
            #                '"$http_user_agent" "$http_x_forwarded_for"';
            #access_log logs/access.log main;
            sendfile on;
            #tcp_nopush on;
            #keepalive_timeout 0;
            keepalive_timeout 65;
            #gzip on;
            server {
                listen 80;
                server_name localhost;
                #charset koi8-r;
                #access_log logs/host.access.log main;
                location / {
                    root html;
                    index index.html index.htm;
                }
                #error_page 404 /404.html;
                # redirect server error pages to the static page /50x.html
                error_page 500 502 503 504 /50x.html;
                location = /50x.html {
                    root html;
                }
                location ~ \.(js|css|png|jpg|jpeg|gif|ico|html)$ {
                    expires max;
                }
                # proxy the PHP scripts to Apache listening on 127.0.0.1:80
                #location ~ \.php$ {
                #    proxy_pass http://127.0.0.1;
                #}
                # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
                location ~ \.php$ {
                    root /usr/share/nginx/html;
                    fastcgi_pass 127.0.0.1:9000;
                    fastcgi_index index.php;
                    fastcgi_param SCRIPT_FILENAME /usr/share/nginx/html$fastcgi_script_name;
                    include fastcgi_params;
                }
                # deny access to .htaccess files, if Apache's document root
                # concurs with nginx's one
                location ~ /\.ht {
                    deny all;
                }
            }
            # another virtual host using mix of IP-, name-, and port-based configuration
            #server {
            #    listen 8000;
            #    listen somename:8080;
            #    server_name somename alias another.alias;
            #    location / {
            #        root html;
            #        index index.html index.htm;
            #    }
            #}
            # HTTPS server
            #server {
            #    listen 443;
            #    server_name localhost;
            #    ssl on;
            #    ssl_certificate cert.pem;
            #    ssl_certificate_key cert.key;
            #    ssl_session_timeout 5m;
            #    ssl_protocols SSLv2 SSLv3 TLSv1;
            #    ssl_ciphers ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv2:+EXP;
            #    ssl_prefer_server_ciphers on;
            #    location / {
            #        root html;
            #        index index.html index.htm;
            #    }
            #}
        }

    What does this mean? It can't pull out the .css, etc....
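
    Reading the log against the config suggests a root mismatch (an untested sketch follows, assuming phpMyAdmin really lives under /usr/share/nginx/html): the regex location for static files declares no root, so it inherits nothing and falls back to the compiled-in default under /usr/local/nginx/html, while the PHP location serves the same site from /usr/share/nginx/html. Declaring one server-wide root keeps both in the same tree:

        server {
            listen 80;
            server_name localhost;
            # one root for the whole server, matching the PHP location's path,
            # so static requests resolve in the same tree as the scripts
            root /usr/share/nginx/html;
            location / {
                index index.html index.htm;
            }
            location ~ \.(js|css|png|jpg|jpeg|gif|ico|html)$ {
                expires max;
            }
            location ~ \.php$ {
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                include fastcgi_params;
            }
        }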

