Search Results

Search found 52214 results on 2089 pages for 'partial application'.


  • Partial Classes - are they bad design?

    - by dferraro
    Hello, I'm wondering why the 'partial class' concept even exists in .NET. I'm working on an application and we are reading a (actually very good) book relevant to the development platform we are implementing at work. In the book he provides a large code base / wrapper around the platform API and explains how he developed it as he teaches different topics about the platform development. Anyway, long story short - he uses partial classes, all over the place, as a way to fake multiple inheritance in C# (IMO). Why he didn't just split the classes up into multiple ones and use composition is beyond me. He will have 3 'partial class' files to make up his base class, each with 3-500 lines of code... And does this several times in his API. Do you find this justifiable? If it were me, I'd have followed the S.R.P. and created multiple classes to handle different required behaviors, then created a base class that has instances of these classes as members (e.g. composition). Why did MS even put partial class into the framework? They removed the ability to expand/collapse all code at each scope level in C# (this was allowed in C++) because it was obviously just allowing bad habits - partial class is IMO the same thing. I guess my question is: can you explain to me when there would be a legitimate reason to ever use a partial class? I do not mean this to be a rant / war thread. I'm honestly looking to learn something here. Thanks
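
    For context, here is a minimal sketch (not from the question) of the scenario the feature was designed for: one class split across a generated file and a hand-written file, so tooling can regenerate its half without touching yours. The type and member names are hypothetical.

      // CustomerForm.Designer.cs - the half a code generator owns and may rewrite at any time
      public partial class CustomerForm
      {
          private void InitializeComponent()
          {
              // designer/tool-emitted setup code would live here
          }
      }

      // CustomerForm.cs - the hand-written half of the same class
      public partial class CustomerForm
      {
          public CustomerForm()
          {
              InitializeComponent();
          }

          public void LoadCustomer(int customerId)
          {
              // application logic added by the developer
          }
      }

    The compiler merges all partial declarations into a single type, so this is purely a source-file split - it adds no new member, visibility or inheritance semantics, which is why using it in place of composition buys nothing at runtime.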

    Read the article

  • AJAX Partial Rendering issues for the default page in IIS 7 when using custom http module

    - by WiseGuyEh
    The problem When I try to make an AJAX partial update request (using the UpdatePanel control) from the default page of an IIS7 web site, it fails - instead of returning the HTML to be updated, it returns the entire page, which then causes the MS AJAX JavaScript to throw a parsing fit. The suspected cause I have narrowed the cause down to two issues - making an AJAX request to the default page when I have a certain custom HTTP module registered. A partial rendering request to http://localhost will fail, but a partial rendering request to http://localhost/default.aspx will work fine. Also, if I remove the following line in my custom HttpModule: _application.PreRequestHandlerExecute += OnPreRequestHandlerExecute; the AJAX partial render will work correctly. Weird, huh? Another weird thing... if I look at trace.axd, I can see that when a partial rendering request fails, two POST requests are logged for the one partial rendering request - one where the default.aspx page executes successfully (trace information such as page_load is logged) but no content is produced, and a second that doesn't seem to actually execute (no trace information is logged) but produces content (HTTP_CONTENT_LENGTH is greater than 0). Please help! If anyone with a good knowledge of HTTP modules or the MS AJAX HTTP module could explain why this is occurring I would be very grateful. As it is, the obvious workaround is to just redirect to default.aspx if the request url is "/" but I would really like to understand why this is occurring.
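
    For reference, a minimal sketch of the kind of module hook being described; the class name is hypothetical and whatever the real module does inside the handler is omitted.

      using System;
      using System.Web;

      // Hypothetical custom module showing the PreRequestHandlerExecute subscription
      // mentioned above; it would be registered in web.config under <httpModules>/<modules>.
      public class MyCustomModule : IHttpModule
      {
          private HttpApplication _application;

          public void Init(HttpApplication application)
          {
              _application = application;
              // Removing this subscription is what made the partial render work again.
              _application.PreRequestHandlerExecute += OnPreRequestHandlerExecute;
          }

          private void OnPreRequestHandlerExecute(object sender, EventArgs e)
          {
              // Runs before the page handler executes - including for the async
              // postbacks issued by UpdatePanel, which is why it is a suspect here.
              HttpContext context = _application.Context;
              context.Trace.Write("MyCustomModule", "PreRequestHandlerExecute fired");
          }

          public void Dispose() { }
      }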

    Read the article

  • Where should I store my application data?

    - by joebeazelman
    I have an application that needs to store data. Currently I am using the built-in Application Settings to do it, but that only gives me two choices: application and user scope. Ideally, I want a "local" scope that allows the application to run under another user and still find its data rather than recreate it for that user. The application scope can do this, but it's read-only. The application data will be changed by the user. It's OK if only the administrator is allowed to make changes to the data. As you can probably guess, I have an administration tool that allows the user to change the data and a Windows service runner that reads the data and does something with it. It would be great if the Windows service runner could access the data created by the administration tool.
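
    Not from the question, but to make the alternative concrete: one commonly used option is to keep the shared data in the machine-wide CommonApplicationData folder, which resolves to the same path for every account, including a service account. The company, product and file names below are hypothetical.

      using System;
      using System.IO;

      class SharedSettingsSketch
      {
          static void Main()
          {
              // Typically C:\ProgramData\MyCompany\MyApp - identical for all users.
              string folder = Path.Combine(
                  Environment.GetFolderPath(Environment.SpecialFolder.CommonApplicationData),
                  @"MyCompany\MyApp");
              Directory.CreateDirectory(folder);
              string settingsFile = Path.Combine(folder, "settings.xml");

              // The administration tool writes the file (NTFS permissions can restrict
              // writes to administrators)...
              File.WriteAllText(settingsFile, "<settings><pollInterval>30</pollInterval></settings>");

              // ...and the Windows service runner only ever reads it.
              Console.WriteLine(File.ReadAllText(settingsFile));
          }
      }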

    Read the article

  • Stand-alone application with JBoss or Tomcat

    - by sufoid
    Hello, I have a more specific question about deploying a Java application. I have created a Java application; it is a WAR file and can be installed on any Java application server. This works perfectly. Now, for users who do not have Java experience, I want to somehow package my application together with the application server and distribute it as a stand-alone version. Question 1: Is this possible? Question 2: Which application server would be best for this? Question 3: Where should I start to learn how to do this? Do you have any experience you can share with me? Thanks.

    Read the article

  • Android Template Application?

    - by stormin986
    I have built an application that I want to use as the foundation for a few other variants. The variants will come from assets / resource files and a unique AndroidManifest.xml. However, I want to be able to leave all the application code alone (without modifying the package of all my classes, etc.). I'm having a hard time figuring out how to do so. My first thought was to simply have my main application in its own package, and then specify the specific application package in the manifest. However, this gives me issues with the generated R.java class, since it is generated to be in the main application's package. Anyone have any thoughts on how to accomplish this? To have a code baseline, and have the application variants happen in resources/assets and the manifest?

    Read the article

  • IIS, multiple CPU cores, application pools and worker processes - best configuration for a single site

    - by Egghead Design
    Hi, we use Kentico CMS and I've exchanged emails with them about a web garden deployment. We have a single site running on a server with 8 CPU cores. In line with Kentico's advice, we have not altered the application pool's web garden setting from the default, i.e. it is set to a maximum number of worker processes of 1. Our experience is that the site only uses one of the CPU cores - the others are idling. When I emailed them about this, their response was that the OS/IIS would handle this and use other cores as necessary even though the application pool only has a single worker process. Now, I've a lot of respect for the guys at Kentico, but this doesn't seem right to me. Surely, if we want to use all cores, we need to permit eight worker processes (and implement session state storage in SQL Server)? Many thanks, Tony
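
    For reference, the web garden setting in question is the application pool's maximum worker process count. Below is a sketch of changing it programmatically with Microsoft.Web.Administration (the pool name is hypothetical, and the same value can also be set in IIS Manager), though whether to change it at all is exactly what is being debated above.

      using Microsoft.Web.Administration;

      class WebGardenSketch
      {
          static void Main()
          {
              using (var serverManager = new ServerManager())
              {
                  // 1 (the default) means a single w3wp.exe; a larger value turns the
                  // pool into a web garden with that many worker processes.
                  ApplicationPool pool = serverManager.ApplicationPools["KenticoAppPool"];
                  pool.ProcessModel.MaxProcesses = 8;
                  serverManager.CommitChanges();
              }
          }
      }

    Note that a single worker process is not pinned to one core - its request threads can be scheduled on any CPU - so the single-core utilisation observed above may have a cause other than the worker process count.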

    Read the article

  • .NET Framework version in Application Pools of IIS 7 on windows 2008

    - by Rodnower
    Hello, I have a web service on IIS 7 on Windows 2008. This web service needs DLLs from .NET Framework 3.5 (I get an error about the System.Linq using directive when I try to browse the web site). The only place I found where it is possible to change the .NET Framework version is the application pools management, but the only two options I have are: No Managed Code and .NET Framework 2. In Add/Remove Programs I have .NET Framework 3.5 installed, and I have even done a repair on it and an iisreset, but I still have only two options in application pools management. Any ideas? Thank you in advance.

    Read the article

  • JavaMail application won't send email to external SMTP server

    - by Luiz Cruz
    This is actually a question from an exam, but I believe it could help others troubleshooting a similar situation. In a system, an e-mail needs to be sent to a certain mailbox. The following Java code, which is part of a larger system, was developed for that. Assume that "example.com" corresponds to a valid registered internet domain. public void sendEmail(){ String s1="Warning"; String b1="Contact IT support."; String r1="[email protected]"; String d1="[email protected]"; String h1="mx.intranet"; Properties p1 = new Properties(); p1.put("mail.host", h1); Session session = Session.getDefaultInstance(p1, null); MimeMessage message = new MimeMessage(session); try { message.setFrom(new InternetAddress(r1)); message.addRecipient(Message.RecipientType.TO, new InternetAddress(d1)); message.setSubject(s1); message.setText(b1); Transport.send(message); } catch (MessagingException e){ System.err.println(e); } } The execution of this code, within the testing environment of an application server, does NOT work as expected. The mailbox of the "example.com" server never receives the email, even though all string values in the code are correctly attributed. The output of the command "netstat -np TCP" on the application server during execution is shown below:
      Src Add      Src Port  Dest Add         Dest Port  State
      192.168.5.5  54395     192.168.7.1      25         SYN_SENT
      192.168.5.5  54390     192.168.7.1      110        TIME_WAIT
      192.168.5.5  52001     200.218.208.118  80         CLOSE_WAIT
      192.168.5.5  52050     200.218.208.118  80         ESTABLISHED
      192.168.5.5  50001     200.255.94.202   25         TIME_WAIT
      192.168.5.5  50000     200.255.94.202   25         ESTABLISHED
    With the exception of the lines that were NAT'd, all others are associated with the Java application server, which created them after the execution of the code above. The e-mail server used in this environment is the production server, which is online and does not require any authentication for internal connections. Based on this situation, point out three possible causes for the problem.

    Read the article

  • Partial recalculation of visibility on a 2D uniform grid

    - by Martin Källman
    Problem Imagine that we have a 2D uniform grid of dimensions N x N. For this grid we have also pre-computed a visibility look-up table, e.g. with DDA, which answers the boolean query "is cell X visible from cell Y?". The look-up table is a complete graph K_N of the cells V in the grid, with each edge E being a binary value denoting the visibility between its vertices. Question If any given cell has its visibility modified, is it possible to extract the subset E_delta of edges which must have their visibility recomputed due to the change, so as to avoid a full recomputation for the entire grid? (Which is N(N-1)/2 or N^2 depending on the implementation.) Update If it is not possible to solve this in closed form, then maintaining a separate mapping of each cell to every cell pair whose line intersects said cell might also be an option (see the sketch below). This obviously consumes more memory, but the data is static. The increased memory requirement could be reduced by introducing a hierarchy, subdividing the grid into smaller parts, and by doing so the above mapping can be reused for each sub-grid. This would come at a cost in terms of increased computation relative to the number of subdivisions; it also requires a resumable ray-casting algorithm.
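
    A minimal sketch of that mapping, assuming cells are flattened to integer indices and that the DDA traversal can report every cell a sight-line passes through; the class and member names are made up for illustration.

      using System.Collections.Generic;
      using System.Linq;

      // For every cell, remember which (source, target) pairs have a sight-line
      // crossing it. When one cell changes, only those edges need re-evaluating.
      class VisibilityIndex
      {
          private readonly Dictionary<int, List<KeyValuePair<int, int>>> _edgesThroughCell =
              new Dictionary<int, List<KeyValuePair<int, int>>>();

          // Called while the full look-up table is first built, once per traversed line.
          public void Register(int sourceCell, int targetCell, IEnumerable<int> cellsOnLine)
          {
              foreach (int cell in cellsOnLine)
              {
                  List<KeyValuePair<int, int>> edges;
                  if (!_edgesThroughCell.TryGetValue(cell, out edges))
                  {
                      edges = new List<KeyValuePair<int, int>>();
                      _edgesThroughCell[cell] = edges;
                  }
                  edges.Add(new KeyValuePair<int, int>(sourceCell, targetCell));
              }
          }

          // E_delta: the only edges whose visibility can have changed after 'changedCell' was modified.
          public IEnumerable<KeyValuePair<int, int>> EdgesAffectedBy(int changedCell)
          {
              List<KeyValuePair<int, int>> edges;
              return _edgesThroughCell.TryGetValue(changedCell, out edges)
                  ? edges
                  : Enumerable.Empty<KeyValuePair<int, int>>();
          }
      }

    Memory grows with the total number of cell/line incidences, which is the trade-off the update paragraph describes; the hierarchical subdivision mentioned there reduces it at the cost of resuming the traversal across sub-grids.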

    Read the article

  • Integration Patterns with Azure Service Bus Relay, Part 3: Anonymous partial-trust consumer

    - by Elton Stoneman
    This is the third in the IPASBR series, see also: Integration Patterns with Azure Service Bus Relay, Part 1: Exposing the on-premise service Integration Patterns with Azure Service Bus Relay, Part 2: Anonymous full-trust .NET consumer As the patterns get further from the simple .NET full-trust consumer, all that changes is the communication protocol and the authentication mechanism. In Part 3 the scenario is that we still have a secure .NET environment consuming our service, so we can store shared keys securely, but the runtime environment is locked down so we can't use Microsoft.ServiceBus to get the nice WCF relay bindings. To support this we will expose a RESTful endpoint through the Azure Service Bus, and require the consumer to send a security token with each HTTP service request. Pattern applicability This is a good fit for scenarios where: the runtime environment is secure enough to keep shared secrets the consumer can execute custom code, including building HTTP requests with custom headers the consumer cannot use the Azure SDK assemblies the service may need to know who is consuming it the service does not need to know who the end-user is Note there isn't actually a .NET requirement here. By exposing the service in a REST endpoint, anything that can talk HTTP can be a consumer. We'll authenticate through ACS which also gives us REST endpoints, so the service is still accessed securely. Our real-world example would be a hosted cloud app, where we we have enough room in the app's customisation to keep the shared secret somewhere safe and to hook in some HTTP calls. We will be flowing an identity through to the on-premise service now, but it will be the service identity given to the consuming app - the end user's identity isn't flown through yet. In this post, we’ll consume the service from Part 1 in ASP.NET using the WebHttpRelayBinding. The code for Part 3 (+ Part 1) is on GitHub here: IPASBR Part 3. Authenticating and authorizing with ACS We'll follow the previous examples and add a new service identity for the namespace in ACS, so we can separate permissions for different consumers (see walkthrough in Part 1). I've named the identity partialTrustConsumer. We’ll be authenticating against ACS with an explicit HTTP call, so we need a password credential rather than a symmetric key – for a nice secure option, generate a symmetric key, copy to the clipboard, then change type to password and paste in the key: We then need to do the same as in Part 2 , add a rule to map the incoming identity claim to an outgoing authorization claim that allows the identity to send messages to Service Bus: Issuer: Access Control Service Input claim type: http://schemas.xmlsoap.org/ws/2005/05/identity/claims/nameidentifier Input claim value: partialTrustConsumer Output claim type: net.windows.servicebus.action Output claim value: Send As with Part 2, this sets up a service identity which can send messages into Service Bus, but cannot register itself as a listener, or manage the namespace. RESTfully exposing the on-premise service through Azure Service Bus Relay The part 3 sample code is ready to go, just put your Azure details into Solution Items\AzureConnectionDetails.xml and “Run Custom Tool” on the .tt files.  But to do it yourself is very simple. We already have a WebGet attribute in the service for locally making REST calls, so we are just going to add a new endpoint which uses the WebHttpRelayBinding to relay that service through Azure. 
It's as easy as adding this endpoint to Web.config for the service:         <endpoint address="https://sixeyed-ipasbr.servicebus.windows.net/rest"                   binding="webHttpRelayBinding"                    contract="Sixeyed.Ipasbr.Services.IFormatService"                   behaviorConfiguration="SharedSecret">         </endpoint> - and adding the webHttp attribute in your endpoint behavior:           <behavior name="SharedSecret">             <webHttp/>             <transportClientEndpointBehavior credentialType="SharedSecret">               <clientCredentials>                 <sharedSecret issuerName="serviceProvider"                               issuerSecret="gl0xaVmlebKKJUAnpripKhr8YnLf9Neaf6LR53N8uGs="/>               </clientCredentials>             </transportClientEndpointBehavior>           </behavior> Where's my WSDL? The metadata story for REST is a bit less automated. In our local webHttp endpoint we've enabled WCF's built-in help, so if you navigate to: http://localhost/Sixeyed.Ipasbr.Services/FormatService.svc/rest/help - you'll see the uri format for making a GET request to the service. The format is the same over Azure, so this is where you'll be connecting: https://[your-namespace].servicebus.windows.net/rest/reverse?string=abc123 Build the service with the new endpoint, open that in a browser and you'll get an XML version of an HTTP status code - a 401 with an error message stating that you haven’t provided an authorization header: <?xml version="1.0"?><Error><Code>401</Code><Detail>MissingToken: The request contains no authorization header..TrackingId:4cb53408-646b-4163-87b9-bc2b20cdfb75_5,TimeStamp:10/3/2012 8:34:07 PM</Detail></Error> By default, the setup of your Service Bus endpoint as a relying party in ACS expects a Simple Web Token to be presented with each service request, and in the browser we're not passing one, so we can't access the service. Note that this request doesn't get anywhere near your on-premise service, Service Bus only relays requests once they've got the necessary approval from ACS. Why didn't the consumer need to get ACS authorization in Part 2? It did, but it was all done behind the scenes in the NetTcpRelayBinding. By specifying our Shared Secret credentials in the consumer, the service call is preceded by a check on ACS to see that the identity provided is a) valid, and b) allowed access to our Service Bus endpoint. By making manual HTTP requests, we need to take care of that ACS check ourselves now. We do that with a simple WebClient call to the ACS endpoint of our service; passing the shared secret credentials, we will get back an SWT: var values = new System.Collections.Specialized.NameValueCollection(); values.Add("wrap_name", "partialTrustConsumer"); //service identity name values.Add("wrap_password", "suCei7AzdXY9toVH+S47C4TVyXO/UUFzu0zZiSCp64Y="); //service identity password values.Add("wrap_scope", "http://sixeyed-ipasbr.servicebus.windows.net/"); //this is the realm of the RP in ACS var acsClient = new WebClient(); var responseBytes = acsClient.UploadValues("https://sixeyed-ipasbr-sb.accesscontrol.windows.net/WRAPv0.9/", "POST", values); rawToken = System.Text.Encoding.UTF8.GetString(responseBytes); With a little manipulation, we then attach the SWT to subsequent REST calls in the authorization header; the token contains the Send claim returned from ACS, so we will be authorized to send messages into Service Bus. 
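
That "little manipulation" is just pulling the wrap_access_token field out of the form-encoded WRAP response and presenting it in the Authorization header. A hedged sketch follows, reusing the rawToken variable and the REST URI from above (error handling omitted, and the exact header handling in the sample code may differ):

  using System;
  using System.Linq;
  using System.Net;
  using System.Web;

  class SwtCallSketch
  {
      static string CallRelayedService(string rawToken)
      {
          // rawToken looks like "wrap_access_token=...&wrap_access_token_expires_in=1199"
          string swt = HttpUtility.UrlDecode(
              rawToken.Split('&')
                      .First(p => p.StartsWith("wrap_access_token=", StringComparison.OrdinalIgnoreCase))
                      .Substring("wrap_access_token=".Length));

          using (var client = new WebClient())
          {
              // Service Bus checks this WRAP header before relaying the request on-premise.
              client.Headers[HttpRequestHeader.Authorization] =
                  string.Format("WRAP access_token=\"{0}\"", swt);
              return client.DownloadString(
                  "https://sixeyed-ipasbr.servicebus.windows.net/rest/reverse?string=abc123");
          }
      }
  }
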
Running the sample Navigate to http://localhost:2028/Sixeyed.Ipasbr.WebHttpClient/Default.cshtml, enter a string and hit Go! - your string will be reversed by your on-premise service, routed through Azure: Using shared secret client credentials in this way means ACS is the identity provider for your service, and the claim which allows Send access to Service Bus is consumed by Service Bus. None of the authentication details make it through to your service, so your service is not aware who the consumer is (MSDN calls this "anonymous authentication").

    Read the article

  • Setting jQuery after an ASP.NET AJAX partial postback

    - by Steve Clements
    OK, so for some reason you have a mega mashup solution with ASP.NET AJAX, jQuery and Web Forms.  Perhaps you are just on the migration from AjaxControlToolkit to the jQuery UI framework – who knows!! Anyway, the problem is that when you post back with something like an UpdatePanel, you will find that your nicely set-up jQuery stuff, like the datepicker for example, will no longer work. You may have something like this… $(document).ready(function () {     $(".date-edit").datepicker({ dateFormat: "dd/mm/yy", firstDay: 1, showOtherMonths: true, selectOtherMonths: true }); });   When your ASP.NET UpdatePanel posts back, you will find that your datepicker has gone.  Bugger! Well you need to add this little gem to set it back up again once the UpdatePanel comes back to the page. var prm = Sys.WebForms.PageRequestManager.getInstance(); prm.add_endRequest(function () {     $(".date-edit").datepicker({ dateFormat: "dd/mm/yy", firstDay: 1, showOtherMonths: true, selectOtherMonths: true }); });   Or, like me, you could have a JavaScript function, something like InitPage(), do all your work in there and call that on document.ready and endRequest. Your choice…you have the power

    Read the article

  • ASP Response.Flush flushes partial data

    - by Anshu
    I am developing a web app with an ASP.NET server side and I use an iframe for data push. Every once in a while, an ASP.NET handler flushes some JavaScript to the iframe: context.Response.Write("<script language='javascript'>top.update('lala');</script>"); context.Response.Flush(); My problem is that sometimes, when I receive the data, I don't get the full text. For example I will receive this: update('lala'); One workaround I have is to have a thread flushing '..........' every 500ms. (Then I will receive script...... which will complete my JavaScript.) However I am sure there must be a way to have Response.Flush() send the whole chunk of data. Does someone have an idea on how to use Response.Flush() properly? Thank you!
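
    For context, a stripped-down sketch of the push pattern being described - a long-lived handler writing complete script elements into the hidden iframe and flushing after each one. Turning buffering off here is my assumption, not something stated in the question.

      using System.Web;

      // Hypothetical handler sitting behind the iframe's src.
      public class PushHandler : IHttpHandler
      {
          public bool IsReusable { get { return false; } }

          public void ProcessRequest(HttpContext context)
          {
              context.Response.ContentType = "text/html";
              context.Response.BufferOutput = false; // assumption: emit each write as its own chunk

              context.Response.Write("<script language='javascript'>top.update('lala');</script>");
              context.Response.Flush();

              // ...later writes and flushes follow as new data becomes available...
          }
      }

    Even with an explicit Flush, intermediaries (IIS compression, proxies, antivirus scanners) are free to re-chunk the stream, so the receiving page generally has to tolerate a script element arriving in pieces - which is effectively what the padding workaround above compensates for.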

    Read the article

  • Disable .NET completely in an IIS6 Application Pool

    - by David L.-Pratte
    We're managing some web sites for our clients on our servers, some running Windows Server 2003 R2 and others running 2008 R2. In Windows Server 2008 R2, we can completely disable .NET Framework usage for some application pools, which is great since most of our websites are still using classic ASP. After some issues with classic ASP applications being configured to run as ASP.NET 4 in a CLR 2.0 pool, we wanted to do the same thing in IIS6 - that is, have application pools without any .NET support. Is this a supported scenario in IIS6? Thanks

    Read the article

  • Partial Shader Signatures HLSL D3D11 C++

    - by ThePhD
    I had been debugging a problem I was having in a single shader file with 2 functions in it. I'm using DirectX 11, vs_5_0 and ps_5_0. I have stripped it down to its basic components to understand what was going wrong with the shaders, because the different named components of the Pixel and Vertex shaders were swapping the data being input: void QuadVertex ( inout float4 position : SV_Position, inout float4 color : COLOR0, inout float2 tex : TEXCOORD0 ) { // ViewProject is a 4x4 matrix, // just included here to show the simple passthrough of the data position = mul(position, ViewProjection); } And a Pixel Shader: float4 QuadPixel ( float4 color : COLOR0, float2 tex : TEXCOORD0 ) : SV_Target0 { // Color is filled with position data and tex is // filled with color values from the Vertex Shader return color; } The ID3D11InputLayout and associated C++ code correctly compiles the shaders and sets them up with some simple primitive data: data[0].Position.x = 0.0f * 210; data[0].Position.y = 1.0f * 160; data[0].Position.z = 0.0f; data[1].Position.x = 0.0f * 210; data[1].Position.y = 0.0f * 160; data[1].Position.z = 0.0f; data[2].Position.x = 1.0f * 210; data[2].Position.y = 1.0f * 160; data[2].Position.z = 0.0f; data[0].Colour = Colors::Red; data[1].Colour = Colors::Red; data[2].Colour = Colors::Red; data[0].Texture = Vector2::Zero; data[1].Texture = Vector2::Zero; data[2].Texture = Vector2::Zero; When used with the shader, the float4 color always ended up with the position data, and the float2 tex always ended up with the color data. After a moment, I figured out that the shader's input and output signatures needed to be in the correct order and the correct format and be laid out in the exact order of the output from the Vertex Shader, regardless of the semantics: float4 QuadPixel ( float4 pos : SV_Position, float4 color : COLOR0, float2 tex : TEXCOORD0 ) : SV_Target0 { return color; } After finding this out, My question is: Why don't the semantics map the appropriate components when going from Vertex Shader to Pixel Shader? Is there any way that I can make it so certain semantics are always mapped to other semantics, or do I always have to follow the rigid Shader Signature (in this case, Position, Color, and Texture) ? As a side note for why I'm asking: I know that when using XNA, my shader signatures for functions could differ in position and even drop items from Vertex Shader to Pixel Shader function parameters, having only the COLOR0 and TEXCOORD0 components being used (and it would still match up correctly). However, I also know that XNA relied on DX9 (and maybe a little DX10) implementation, and that maybe this kind of flexibility no longer exists in DX11?

    Read the article

  • Ubuntu 12.04 partial freeze

    - by user1550594
    While working on the system - suddenly, if I open a random file or start browsing, and at other random moments - the system starts hanging or responding slowly. In this situation, I am able to open a virtual console using Ctrl + Alt + F1-F6, run top and list the processes. Due to the urgency, I kill the long-running processes or the processes using too much memory to recover the screen. Can anyone give me a permanent fix?

    Read the article

  • Securing an ADF Application using OES11g: Part 1

    - by user12587121
    Future releases of the Oracle stack should allow ADF applications to be secured natively with Oracle Entitlements Server (OES). In a sequence of postings here I explore one way to achive this with the current technology, namely OES 11.1.1.5 and ADF 11.1.1.6. ADF Security Basics ADF Bascis The Application Development Framework (ADF) is Oracle’s preferred technology for developing GUI based Java applications.  It can be used to develop a UI for Swing applications or, more typically in the Oracle stack, for Web and J2EE applications.  ADF is based on and extends the Java Server Faces (JSF) technology.  To get an idea, Oracle provides an online demo to showcase ADF components. ADF can be used to develop just the UI part of an application, where, for example, the data access layer is implemented using some custom Java beans or EJBs.  However ADF also has it’s own data access layer, ADF Business Components (ADF BC) that will allow rapid integration of data from data bases and Webservice interfaces to the ADF UI component.   In this way ADF helps implement the MVC  approach to building applications with UI and data components. The canonical tutorial for ADF is to open JDeveloper, define a connection to a database, drag and drop a table from the database view to a UI page, build and deploy.  One has an application up and running very quickly with the ability to quickly integrate changes to, for example, the DB schema. ADF allows web pages to be created graphically and components like tables, forms, text fields, graphs and so on to be easily added to a page.  On top of JSF Oracle have added drag and drop tooling with JDeveloper and declarative binding of the UI to the data layer, be it database, WebService or Java beans.  An important addition is the bounded task flow which is a reusable set of pages and transitions.   ADF adds some steps to the page lifecycle defined in JSF and adds extra widgets including powerful visualizations. It is worth pointing out that the Oracle Web Center product (portal, content management and so on) is based on and extends ADF. ADF Security ADF comes with it’s own security mechanism that is exposed by JDeveloper at development time and in the WLS Console and Enterprise Manager (EM) at run time. The security elements that need to be addressed in an ADF application are: authentication, authorization of access to web pages, task-flows, components within the pages and data being returned from the model layer. One  typically relies on WLS to handle authentication and because of this users and groups will also be handled by WLS.  Typically in a Dev environment, users and groups are stored in the WLS embedded LDAP server. One has a choice when enabling ADF security (Application->Secure->Configure ADF Security) about whether to turn on ADF authorization checking or not: In the case where authorization is enabled for ADF one defines a set of roles in which we place users and then we grant access to these roles to the different ADF elements (pages or task flows or elements in a page). An important notion here is the difference between Enterprise Roles and Application Roles. The idea behind an enterprise role is that is defined in terms of users and LDAP groups from the WLS identity store.  “Enterprise” in the sense that these are things available for use to all applications that use that store.  The other kind of role is an Application Role and the idea is that  a given application will make use of Enterprise roles and users to build up a set of roles for it’s own use.  
These application roles will be available only to that application.   The general idea here is that the enterprise roles are relatively static (for example an Employees group in the LDAP directory) while application roles are more dynamic, possibly depending on time, location, accessed resource and so on.  One of the things that OES adds that is that we can define these dynamic membership conditions in Role Mapping Policies. To make this concrete, here is how, at design time in Jdeveloper, one assigns these rights in Jdeveloper, which puts them into a file called jazn-data.xml: When the ADF app is deployed to a WLS this JAZN security data is pushed to the system-jazn-data.xml file of the WLS deployment for the policies and application roles and to the WLS backing LDAP for the users and enterprise roles.  Note the difference here: after deploying the application we will see the users and enterprise roles show up in the WLS LDAP server.  But the policies and application roles are defined in the system-jazn-data.xml file.  Consult the embedded WLS LDAP server to manage users and enterprise roles by going to the domain console and then Security Realms->myrealm->Users and Groups: For production environments (or in future to share this data with OES) one would then perform the operation of “reassociating” this security policy and application role data to a DB schema (or an LDAP).  This is done in the EM console by reassociating the Security Provider.  This blog posting has more explanations and references on this reassociation process. If ADF Authentication and Authorization are enabled then the Security Policies for a deployed application can be managed in EM.  Our goal is to be able to manage security policies for the applicaiton rather via OES and it's console. Security Requirements for an ADF Application With this package tour of ADF security we can see that to secure an ADF application with we would expect to be able to take care of at least the following items: Authentication, including a user and user-group store Authorization for page access Authorization for bounded Task Flow access.  A bounded task flow has only one point of entry and so if we protect that entry point by calling to OES then all the pages in the flow are protected.  Authorization for viewing data coming from the data access layer In the next posting we will describe a sample ADF application and required security policies. 
References ADF Dev Guide: Fusion Middleware Fusion Developer's Guide for Oracle Application Development Framework: Enabling ADF Security in a Fusion Web Application Oracle tutorial on securing a sample ADF application, appears to require ADF 11.1.2

    Read the article

  • Installing .NET application on IIS 7.5 issues

    - by Juw
    Really need some help here. I am at a loss. I am trying to install a webservice that some other guy wrote in .NET. I have some basic IIS understanding. The webservice works just fine on my dev computer. But now i try to move the webservice to a production server and bad things happens. The webservice has been located in C:\inetpub\wwwroot\ dir on the dev server. But on this production server it is to be located in D:\services\ I have managed to install an application on the production server and everything seems fine and dandy. But when i "Test Settings" in the initial setup i get "Invalid application path" error. But i can just close it down and still install it. But when i try to access the webservice with: http://myserver.com/webservice/GetData nothing happens. Just a blank page and when i check the response headers...500 error. I don´t know what is going on here or where the problem is. I post the config file here so someone hopefully might notice something odd. Thanx in advance! EDIT: The config file is from my dev server. I just copied it to my production server...but that obviously didn´t work :-) UPDATE: I noticed that my dev server run in an Application pool with Net 4 and in "classic" "mode". On the production server it was in NET 4 but in "integrated" mode. So i changed it to "classic". I still get a blank page. But checking the log will output this: 2012-10-03 14:57:00 ip removed GET /boo/GetData - 80 - ip removed Mozilla/5.0+(Windows+NT+6.1;+WOW64;+rv:15.0)+Gecko/20100101+Firefox/15.0.1 404 2 1260 203 <?xml version="1.0" encoding="UTF-8"?> <configuration> <system.web> <identity impersonate="true" /> <!-- Impersonate NT AUTHORITY/IUSR --> <compilation targetFramework="4.0"> <assemblies> <add assembly="System.Data.Entity, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b7735c561131e089" /> </assemblies> </compilation> <pages controlRenderingCompatibilityVersion="3.5" clientIDMode="AutoID" /> </system.web> <system.webServer> <modules runAllManagedModulesForAllRequests="true" /> <httpErrors existingResponse="PassThrough" /> <httpProtocol> <customHeaders> <add name="Access-Control-Allow-Origin" value="*" /> </customHeaders> </httpProtocol> <directoryBrowse enabled="false" /> </system.webServer> <system.serviceModel> <serviceHostingEnvironment aspNetCompatibilityEnabled="true" /> <standardEndpoints> <webHttpEndpoint> <!-- Configure the WCF REST service base address via the global.asax.cs file and the default endpoint via the attributes on the <standardEndpoint> element below --> <standardEndpoint name="" helpEnabled="true" automaticFormatSelectionEnabled="true" /> </webHttpEndpoint> </standardEndpoints> </system.serviceModel> <connectionStrings> <add name="Entities" connectionString="metadata=res://*/DataModel.csdl|res://*/DataModel.ssdl|res://*/DataModel.msl;provider=System.Data.SqlClient;provider connection string=&quot;data source=someip;initial catalog=db_90;User ID=user1;Password=access2;multipleactiveresultsets=True;App=EntityFramework&quot;" providerName="System.Data.EntityClient" /> </connectionStrings> </configuration>

    Read the article

  • How to set up a Zend_Application with an application.ini and a user.ini

    - by Peter Smit
    I am using Zend_Application and it does not feel right that I am mixing both application and user configuration in my application.ini. What I mean by this is the following. For example, my application needs some library classes in the namespace MyApp_. So in application.ini I put autoloaderNamespaces[] = "MyApp_". This is pure application configuration; no-one except a programmer would change these. On the other hand I put a database configuration there, something that a sysadmin would change. My idea is that I would split the options between an application.ini and a user.ini, where the options in user.ini take preference (so I can define standard values in application.ini). Is this a good idea? How can I best implement this? The ideas I have are: extending Zend_Application to take multiple config files; making an init function in my Bootstrap that loads the user.ini; parsing the config files in my index.php and passing these to Zend_Application (sounds ugly). What shall I do? I would like to have the 'cleanest' solution, one which is prepared for the future (newer ZF versions, and other developers working on the same app).

    Read the article

  • Using Client Application Services in windows forms not working

    - by Nickson
    I am trying to implement ASP.NET membership, profile and role-based security in a Windows application by configuring Client Application Services for my Windows Forms application. I have followed both these articles http://www.dotnetbips.com/articles/e863aa3c-0dd6-468d-bd35-120a334c5030.aspx and http://msdn.microsoft.com/en-us/library/bb546195.aspx step by step, but for some reason I can't get the authentication working. I have a deployed intranet ASP.NET website which is already using an ASP.NET membership database for authentication, and I want to use that same database for authentication in my Windows Forms application. The site URL is http://myServer_Name:My_Port and I am specifying that URL as both the authentication service location and the roles service location in the Windows application's services property tab. But in the Windows application login form, when I say Dim msg As String = "Welcome " If Not Membership.ValidateUser(UsernameTextBox.Text, PasswordTextBox.Text) Then MessageBox.Show("Invalid User ID or Password!") Else msg = msg + UsernameTextBox.Text End If I get my "Invalid User ID or Password!" message even when I supply a valid user name with the corresponding password. I am able to log in with the same credentials from the ASP.NET site. How can I test whether the authentication service location is being reached from the Windows application? Or what other information can I provide here such that one is able to help me get this working?

    Read the article

  • Running an MVC Application as a Sub-Application?

    - by ZafarYousafi
    Hi, I am facing a problem creating an MVC application as a sub-application of an ASP.NET application. My MVC application is doing fine in the development environment and even when it is deployed normally. However, whenever I try to deploy it as a sub-application of an ASP.NET application, like http://mainapplication/mvcsubapplication, I get an error: The view 'Index' or its master could not be found. The following locations were searched: ~/Views/Home/Index.aspx ~/Views/Home/Index.ascx ~/Views/Shared/Index.aspx ~/Views/Shared/Index.ascx There is no problem with the view naming, since the application is well tested in the development environment. It only happens when I try to deploy it as a sub-app. Remember I am deploying on a server with IIS 7.x installed on it. Any response will be appreciated. Thanx

    Read the article

  • IIS 6.0 Application pool crash

    - by David
    One application pool on one of our webservers crashed and we found this in the event log; where can we find more information about it? Event Type: Error Event Source: W3SVC Event Category: None Event ID: 1101 Date: 11/23/2009 Time: 10:57:55 AM User: N/A Computer: ID-WEB Description: The World Wide Web Publishing Service failed to create app pool 'Global'. The data field contains the error number. For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp. Data: 0000: b7 00 07 80 ·..? Attempting to manually start the application pool gives the following in the event log: Event Type: Error Event Source: W3SVC Event Category: None Event ID: 1107 Date: 11/23/2009 Time: 3:53:13 PM User: N/A Computer: ID-WEB Description: The World Wide Web Publishing Service failed to modify app pool 'Global'. The data field contains the error number. For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp. Data: 0000: 05 40 00 80 .@.? We are running IIS 6.0 on Windows Server 2003 R2, 32-bit.

    Read the article

  • What permission(s) does an application pool identity require to manage other application pools?

    - by Mr Shoubs
    I have a web site (used to manage various parts of our software) that needs the permissions required to start/stop other application pools. I've created a user and set the app pool identity to custom; however, the web app still can't start/stop the app pools. I get the following error: System.UnauthorizedAccessException: Filename: redirection.config Error: Cannot read configuration file due to insufficient permissions at Microsoft.Web.Administration.Interop.AppHostWritableAdminManager.GetAdminSection(String bstrSectionName, String bstrSectionPath) at Microsoft.Web.Administration.Configuration.GetSectionInternal(ConfigurationSection section, String sectionPath, String locationPath) at Microsoft.Web.Administration.ServerManager.get_ApplicationPoolsSection() at Microsoft.Web.Administration.ServerManager.get_ApplicationPools() Discussion here suggests setting the application pool to Local System or Administrator; this does work, but I don't want to do this for security reasons (external support will need access to this site). I did give the user higher permissions (as suggested here), starting by making it part of the local Administrators group, but initially this didn't work, and giving the user read/write/modify permission on C:\Windows\System32\inetsrv\config also didn't work. I must have done something wrong, as local administrator now works; however, this still isn't what I want. So can anyone suggest the permissions I need to add to this user, and how can I apply them? An answer to my problem (but a different question) is here, but to clarify, I think I need to give an individual user "IIS Runtime Operation Permissions" - does anyone know how to do this, if indeed this is the permission I require?
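
    For reference, a sketch of the start/stop call such a management site would typically make with Microsoft.Web.Administration (the pool name is hypothetical); it is this path that has to read redirection.config and applicationHost.config, which is where the exception above is raised when the pool identity lacks read access.

      using Microsoft.Web.Administration;

      class AppPoolControlSketch
      {
          static void Restart(string poolName)
          {
              // Enumerating ApplicationPools triggers the configuration reads shown
              // in the stack trace above.
              using (var serverManager = new ServerManager())
              {
                  ApplicationPool pool = serverManager.ApplicationPools[poolName];
                  if (pool.State == ObjectState.Started)
                  {
                      pool.Stop();
                  }
                  pool.Start();
              }
          }
      }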

    Read the article
