Search Results

Search found 15798 results on 632 pages for 'authentication required'.

Page 481/632

  • Axis2 SOAP Envelope Header Information

    - by BigZig
    I'm consuming a web service that places an authentication token in the SOAP envelope header. It appears (through looking at the samples that came with the WS WSDL) that if the stub is generated in .NET, this header information is exposed through a member variable in the stub class. However, when I generate my Axis2 Java stub using WSDL2Java it doesn't appear to be exposed anywhere. What is the correct way to extract this information from the SOAP envelope header?

    WSDL: http://www.vbar.com/zangelo/SecurityService.wsdl

    C# Sample:

        using System;
        using SignInSample.Security; // web service
        using SignInSample.Document; // web service

        namespace SignInSample
        {
            class SignInSampleClass
            {
                [STAThread]
                static void Main(string[] args)
                {
                    // login to the Vault and set up the document service
                    SecurityService secSvc = new SecurityService();
                    secSvc.Url = "http://localhost/AutodeskDM/Services/SecurityService.asmx";
                    secSvc.SecurityHeaderValue = new SignInSample.Security.SecurityHeader();
                    secSvc.SignIn("Administrator", "", "Vault");

                    DocumentServiceWse docSvc = new DocumentServiceWse();
                    docSvc.Url = "http://localhost/AutodeskDM/Services/DocumentService.asmx";
                    docSvc.SecurityHeaderValue = new SignInSample.Document.SecurityHeader();
                    docSvc.SecurityHeaderValue.Ticket = secSvc.SecurityHeaderValue.Ticket;
                    docSvc.SecurityHeaderValue.UserId = secSvc.SecurityHeaderValue.UserId;
                }
            }
        }

    The sample illustrates what I'd like to do. Notice how the secSvc instance has a SecurityHeaderValue member variable that is populated after a successful secSvc.SignIn() invocation. Here's some relevant API documentation regarding the SignIn method: "Although there is no return value, a successful sign in will populate the SecurityHeaderValue of the security service. The SecurityHeaderValue information is then used for other web service calls."
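
    What follows is a rough Java sketch of one way to read that header after a call has returned, assuming an Axis2-generated stub (which extends org.apache.axis2.client.Stub); the QNames for the header block and its child element are placeholders and would have to be taken from the actual WSDL:

        import javax.xml.namespace.QName;
        import org.apache.axiom.om.OMElement;
        import org.apache.axiom.soap.SOAPHeader;
        import org.apache.axis2.AxisFault;
        import org.apache.axis2.client.Stub;
        import org.apache.axis2.context.MessageContext;
        import org.apache.axis2.wsdl.WSDLConstants;

        public class SoapHeaderReader {
            // Pulls a text value out of the SOAP header of the last response received by an Axis2 stub.
            public static String readHeaderValue(Stub stub, QName headerBlock, QName child) throws AxisFault {
                MessageContext inMessage = stub._getServiceClient()
                        .getLastOperationContext()
                        .getMessageContext(WSDLConstants.MESSAGE_LABEL_IN_VALUE);
                SOAPHeader header = inMessage.getEnvelope().getHeader();
                OMElement block = header.getFirstChildWithName(headerBlock);
                return block.getFirstChildWithName(child).getText();
            }

            // Hypothetical usage after the generated stub's sign-in call, with ns taken from the WSDL:
            //   String ticket = readHeaderValue(secStub, new QName(ns, "SecurityHeader"), new QName(ns, "Ticket"));
        }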

    Read the article

  • JasperReports reports access image via authenticated URL

    - by user363115
    Hi all, I am hoping that this is a simple issue with a simple solution and that I have missed something obvious. Let me explain the problem: we have an application that generates PDF reports (using Jasper). These reports contain data from our database, as well as imagery (photographs). These photographs are stored in S3. We use signed URLs to access these photographs, and we link these photographs into our Jasper reports using these S3 URLs. Because the S3 URLs are signed and time-limited (by design), the process is as follows:

    1. User requests a report to be generated,
    2. Report is filled, and goes to our database (at which time UUIDs to any required images are retrieved),
    3. For each UUID an S3 signed URL must be generated,
    4. To do this the URL behind each report image is a call to an authenticated URL in our app (/get_img?uuid=foo),
    5. The controller behind this URL generates a signed S3 URL and returns it,
    6. Report loads the image.

    The problem is with step (4) - the call to the authenticated URL fails because Jasper does not pass any authentication information with the request. Is there a solution here? Thanks all for your time. Ben
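
    One way to sidestep step (4) entirely is to resolve the signed S3 URLs before the report is filled and hand them to Jasper as parameters, so the report never calls back into the authenticated /get_img endpoint. A rough Java sketch under that assumption; signUrl() stands in for whatever the /get_img controller already does, and PHOTO_URL is an illustrative parameter name:

        import java.sql.Connection;
        import java.util.HashMap;
        import java.util.Map;
        import net.sf.jasperreports.engine.JasperFillManager;
        import net.sf.jasperreports.engine.JasperPrint;
        import net.sf.jasperreports.engine.JasperReport;

        public class ReportWithSignedImages {

            // The report's image expression would then reference $P{PHOTO_URL} instead of the /get_img URL.
            public JasperPrint fill(JasperReport report, Connection conn, String photoUuid) throws Exception {
                Map<String, Object> params = new HashMap<String, Object>();
                params.put("PHOTO_URL", signUrl(photoUuid));
                return JasperFillManager.fillReport(report, params, conn);
            }

            private String signUrl(String uuid) {
                // Placeholder: generate and return the time-limited S3 URL for this UUID,
                // reusing the same signing code the /get_img controller already has.
                return "https://s3.amazonaws.com/bucket/" + uuid + "?signature=...";
            }
        }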

    Read the article

  • DotNetOpenId - Using OpenIdButton for Google Apps

    - by JediPotPie
    I am an ASP.NET newbie and I am trying to design an OpenID/SSO system for an internal web application. The web application is pretty simple and the authentication is currently being managed by a database with usernames and passwords. I want to replace the existing accounts stored in the database with Google Apps accounts.

    I have downloaded the latest DotNetOpenAuth-3.4.3.10103 package and got the OpenIdRelyingPartyWebForms sample up and running on IIS. I have built my own login page using just an OpenIdButton object that points to a development Google domain. The button seems to work fine in Firefox, at least it is forwarding me to the Google Apps login, but nothing happens when I load the same page in IE. When I click the Google button, nothing happens, zip. The same is true for the Yahoo button in the login.aspx page given in the sample.

    Here is the .aspx code I am using:

        <rp:OpenIdButton runat="server"
            ImageUrl="http://www.google.com/accounts/google_transparent.gif"
            Text="Login with Google!"
            ID="googleLoginButton"
            Identifier="https://www.google.com/accounts/o8/site-xrds?hd=dev.connexcloud.com" />

    Read the article

  • Firefox proxy dilemma

    - by Mike L.
    Any idea why, when using system proxy settings in Firefox, it cannot accept a proxy such as user:password@server:port? IE will allow and connect to a proxy in this format. Not only does Firefox not work, but it does not prompt for the password, nor attempt to make a connection to the proxy; I basically get a "proxy server not found" error. Anybody know a way around this?

    I am working on a proxy switching program for IE & Firefox, and I would like to use system-wide proxy settings. If I just store the server:port combination, Firefox prompts for the password, as does IE. Then the credentials can be cached and it will not ask again. Maybe my only option is to programmatically cache the user/pass? Anybody know a way to do this? I am pretty sure IE stores them as HTTP basic authentication passwords and I can add them with AddCredential. After saving a password for a proxy in Firefox, it shows up in saved passwords in a format like "moz-proxy://server:port" - anybody know how to programmatically add a saved password to Firefox? Thanks

    Read the article

  • SQL Server MDF file database attachment

    - by jnsohnumr
    Hello all, I'm having a bear of a time getting Visual Studio 2010 (Ultimate, I think) to properly attach to my database. It was moved from its original spot to #MYAPP#/#MYAPP#.Web/App_Data/#MDF_FILE#.mdf. I have three instances of SQL Server running on this machine. I have tried to replace the old MDF file with my new one and cannot get the connection string right for it. What I'm really wanting to do is just open some DB instance and run a DB create script, so I can have a DB that was generated via my edmx (generate database from model) in a Silverlight business application (C#).

    Right now, when I go to Server Explorer in VS, choose Add New Connection, choose MS SQL Server Database File (SqlClient), choose my file location (the App_Data directory), use Windows authentication, and hit the Test Connection button, I get the following error:

        Unable to open the physical file "". Operating system error 5: "5(Access Denied.)".
        An attempt to attach an auto-named database for file "" failed. A database with the same
        name exists, or specified file cannot be opened, or it is located on UNC share.

    The MDF file was created on the same machine by connecting to (local) in SQL Server Management Studio, opening a new query, pasting in the SQL from the generated DDL file, adding a CREATE DATABASE [NcrCarDatabase]; GO before the pasted SQL, and executing the query. I then disconnected from the DB in Management Studio, closed Management Studio, navigated to the DATA directory for that instance, and copied the MDF and LDF files to my application's App_Data folder. I am then trying to connect to the same file inside Visual Studio. I hope that gives more clarity to my problems :).

    The connection string is:

        Data Source=.\SQLEXPRESS;AttachDbFilename=C:\SourceCode\NcrCarDatabase\NcrCarDatabase.Web\App_Data\NcrCarDatabase.mdf;Integrated Security=True;Connect Timeout=30;User Instance=True

    Read the article

  • What is a root directory in IIS 6, and how do I make one of the subfolders in my ASP.NET website the root directory?

    - by R_Coder
    I need to integrate a third-party plugin into my ASP.NET website. To install the plugin, the instructions say: "Create an application through your IIS control panel with root directory at (some path inside my website folder)". I am not very familiar with IIS and have rarely worked with it. Though I have tried every way I could think of in IIS, I am not able to work it out.

    After installation, there is a test page provided by the plugin which I have to run to check the setup, but when I run it, it shows this error:

        It is an error to use a section registered as allowDefinition='MachineToApplication' beyond application level.
        This error can be caused by a virtual directory not being configured as an application in IIS.

    I searched this error too and found that it is caused by the two Web.config files, one from the main project and another from the plugin folder. The only way to deal with this is to make the plugin folder they specified the root directory in IIS. Could someone kindly tell me some easy steps to do this? What I was doing is: in IIS 6, I added a new website pointing at the main folder of my ASP.NET website, then I right-clicked, chose Add Application, and picked the given path, thinking it would become the root directory, but it didn't. Help would be appreciated.

    Also note that I have to keep the plugin folder inside my main website folder, so there are two Web.config files. I tried renaming one of them too; it solved the above error but gave other errors, but I think the main problem is the root directory. P.S. The above error is reported in the plugin folder's Web.config file on this line: "Line 51: <authentication mode="Windows" />"

    Read the article

  • How can I use a custom configured RememberMeAuthenticationFilter in spring security?

    - by Sebastian
    I want to use a slightly customized remember-me functionality with Spring Security (3.1.0). I declare the remember-me tag like this:

        <security:remember-me key="JNJRMBM" user-service-ref="gymUserDetailService" />

    As I have my own remember-me service I need to inject that into the RememberMeAuthenticationFilter, which I define like this:

        <bean id="rememberMeFilter" class="org.springframework.security.web.authentication.rememberme.RememberMeAuthenticationFilter">
            <property name="rememberMeServices" ref="gymRememberMeService"/>
            <property name="authenticationManager" ref="authenticationManager" />
        </bean>

    I have Spring Security integrated in the standard way in my web.xml:

        <filter-name>springSecurityFilterChain</filter-name>
        <filter-class>org.springframework.web.filter.DelegatingFilterProxy</filter-class>

    Everything works fine, except that the RememberMeAuthenticationFilter uses the standard RememberMeServices, so I think that my defined RememberMeAuthenticationFilter is not being used. How can I make sure that my definition of the filter is being used? Do I need to create a custom filter chain? And if so, how can I see my current "implicit" filter chain and make sure I use the same one, except with my RememberMeAuthenticationFilter instead of the default one? Thanks for any advice and/or pointers!

    Read the article

  • How do you submit an authenticated HTML form using XUL (Firefox extension) Javascript?

    - by machineghost
    I am working on a Firefox extension, and in that extension I am trying to use AJAX to submit a form on a webpage. I am using:

        var request = Components.classes["@mozilla.org/xmlextras/xmlhttprequest;1"]
                                .createInstance(Components.interfaces.nsIXMLHttpRequest);
        request.onload = loadHandler;
        request.open("POST", url, true);
        request.send(values);

    to make the request, and it works ... mostly. The one problem is that the form has an authentication token on it, and I need to submit that token with my POST. I tried doing a GET separately to get this token, but by the time I made my second (POST) request my session had (evidently) changed, and the authenticity token was considered invalid.

    Does anyone know of a way to use XUL/chrome JavaScript to maintain a constant session across multiple requests (all "behind the scenes") for something like this? I'm still a XUL n00b, so there may be a totally obvious alternative that I'm missing (e.g. a hidden IFRAME; I tried that briefly but couldn't get it to work).

    Read the article

  • Sorcery/Capybara: Cannot log in with :js => true

    - by PlankTon
    I've been using Capybara for a while, but I'm new to Sorcery. I have a very odd problem whereby if I run the specs without Capybara's :js => true functionality I can log in fine, but if I try to specify :js => true on a spec, the username/password cannot be found.

    Here's the authentication macro:

        module AuthenticationMacros
          def sign_in
            user = FactoryGirl.create(:user)
            user.activate!
            visit new_sessions_path
            fill_in 'Email Address', :with => user.email
            fill_in 'Password', :with => 'foobar'
            click_button 'Sign In'
            user
          end
        end

    Which is called in specs like this:

        feature "project setup" do
          include AuthenticationMacros

          background do
            sign_in
          end

          scenario "creating a project" do
            "my spec here"
          end
        end

    The above code works fine. However, if I change the scenario spec from (in this case)

        scenario "adding questions to a project" do

    to

        scenario "adding questions to a project", :js => true do

    login fails with an 'incorrect username/password' combination. Literally the only change is that :js => true. I'm using the default Capybara JavaScript driver (it loads up Firefox). Any ideas what could be going on here? I'm completely stumped.

    I'm using Capybara 2.0.1, Sorcery 0.7.13. There is no JavaScript on the sign-in page, and save_and_open_page before clicking 'Sign In' confirms that the correct details are entered into the username/password fields. Any suggestions really appreciated - I'm at a loss.

    Read the article

  • Cross-Origin Resource Sharing (CORS) - am I missing something here?

    - by David Semeria
    I was reading about CORS (https://developer.mozilla.org/en/HTTP_access_control) and I think the implementation is both simple and effective. However, unless I'm missing something, I think there's a big part missing from the spec.

    As I understand, it's the foreign site that decides, based on the origin of the request (and optionally including credentials), whether to allow access to its resources. This is fine. But what if malicious code on the page wants to POST a user's sensitive information to a foreign site? The foreign site is obviously going to authenticate the request. Hence, again if I'm not missing something, CORS actually makes it easier to steal sensitive information.

    I think it would have made much more sense if the original site could also supply an immutable list of servers its page is allowed to access. So the expanded sequence would be:

    1) Supply a page with a list of acceptable CORS servers (abc.com, xyz.com, etc.)
    2) Page wants to make an XHR request to abc.com - the browser allows this because it's in the allowed list, and authentication proceeds as normal
    3) Page wants to make an XHR request to malicious.com - request rejected locally (i.e. by the browser) because the server is not in the list

    I know that malicious code could still use JSONP to do its dirty work, but I would have thought that a complete implementation of CORS would imply the closing of the script-tag multi-site loophole. I also checked out the official CORS spec (http://www.w3.org/TR/cors) and could not find any mention of this issue.

    Read the article

  • Implementing intelligent file transfer software in Java over TCP/IP

    - by whyjava
    Hello, I am working on a proposal where we have to implement software which can move files from one source to a destination. The overall goal of this project is to create intelligent file transfer. This software will have three components:

    1) Broker: the module that communicates with other brokers, monitors files, moves files, retrieves configurations from the Configuration Manager, supplies process information for the Monitor, archives files, writes all process data to log files, and escalates issues if necessary.
    2) Configuration Manager: a web-based application used to configure and deploy the configuration to all brokers.
    3) Monitor: a web-based application used to monitor each Broker in the environment.

    This project has to be built in Java, with the file transfer protocol over TCP/IP; the client does not want to use FTP. File transfer seems very easy, until there are several processes waiting to pick the file up automatically. Several problems arise:

    - How can we guarantee the file is received at the destination?
    - If a file isn't received the first time, we should try it again (even after a restart or power breakdown)?
    - How does the receiver know the file that is received is complete?
    - How can we transfer multiple files synchronously?
    - How can we protect the bandwidth, so file transfer isn't blocking other processes?
    - How does one interoperate between multiple OS platforms?
    - What about authentication?
    - How can we monitor the workflow?
    - Auditing / logging
    - Archiving

    Can you please provide answers to some of these? Thanks
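
    For the delivery-guarantee questions above (is the file received, is it complete), a common pattern is to frame each transfer with its length and a digest and only acknowledge once both check out. A minimal Java sketch of the receiving side, under an assumed framing of name/length/bytes/SHA-256 rather than any particular product's protocol:

        import java.io.DataInputStream;
        import java.io.FileOutputStream;
        import java.io.IOException;
        import java.security.MessageDigest;
        import java.security.NoSuchAlgorithmException;
        import java.util.Arrays;

        public class FileReceiver {

            // Reads one framed file: [UTF name][long length][length bytes][32-byte SHA-256].
            // Returns true only if the computed digest matches the one the sender transmitted.
            public static boolean receiveOne(DataInputStream in) throws IOException, NoSuchAlgorithmException {
                String name = in.readUTF();
                long length = in.readLong();
                MessageDigest sha = MessageDigest.getInstance("SHA-256");
                try (FileOutputStream out = new FileOutputStream(name + ".part")) {
                    byte[] buf = new byte[8192];
                    long remaining = length;
                    while (remaining > 0) {
                        int n = in.read(buf, 0, (int) Math.min(buf.length, remaining));
                        if (n < 0) throw new IOException("stream ended before " + length + " bytes arrived");
                        sha.update(buf, 0, n);
                        out.write(buf, 0, n);
                        remaining -= n;
                    }
                }
                byte[] expected = new byte[32];
                in.readFully(expected);
                // Only rename .part to the real name (and ack the sender) after the digest matches,
                // so a crash or partial transfer never leaves a file that looks complete to the pickup process.
                return Arrays.equals(expected, sha.digest());
            }
        }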

    Read the article

  • Creating a HTTP handler for IIS that transparently forwards request to different port?

    - by Lasse V. Karlsen
    I have a public web server with the following software installed:

    - IIS7 on port 80
    - Subversion over Apache on port 81
    - TeamCity over Apache on port 82

    Unfortunately, both Subversion and TeamCity come with their own web server installations, and they work flawlessly, so I don't really want to try to move them all to run under IIS, if that is even possible. However, I was looking at IIS and I noticed the HTTP redirect part, and I was wondering: would it be possible for me to create an HTTP handler and install it on a sub-domain under IIS7, so that all requests to, say, http://svn.vkarlsen.no/anything/here are passed to my HTTP handler, which then subsequently creates a request to http://localhost:81/anything/here, retrieves the data, and passes it on to the original requester?

    In other words, I would like IIS to handle transparent forwards to ports 81 and 82, without using the redirection features. For instance, Subversion doesn't like HTTP redirects and just says that the repository has been moved and I need to relocate my working copy. That's not what I want.

    If anyone thinks this can be done, does anyone have any links to topics I need to read up on? I think I can manage the actual request parts, even with authentication, but I have no idea how to create an HTTP handler. Also bear in mind that I need to handle sub-paths and documents beneath the top-level domain, so http://svn.vkarlsen.no/whatever/here needs to be handled by a single handler; I cannot create copies of the handler for all sub-directories, since paths are created from time to time.

    Read the article

  • Protecting sensitive entity data

    - by Andreas
    Hi, I'm looking for some advice on architecture for a client/server solution with some peculiarities. The client is a fairly thick one, leaving the server mostly to persistence, concurrency and infrastructure concerns. The server contains a number of entities which contain both sensitive and public information. Think for example that the entities are persons; assume that social security number and name are sensitive and age is publicly viewable.

    When starting the client, the user is presented with a number of entities, not disclosing any sensitive information. At any time the user can choose to log in and authenticate against the server; given the authentication is successful, the user is granted access to the sensitive information.

    The client is hosting a domain model and I was thinking of implementing this as some kind of "lazy loading", with the first request instantiating the entities and a later one refreshing them with sensitive data. The entity getters would throw exceptions on sensitive information when it has not been disclosed, e.g.:

        class PersonImpl : PersonEntity
        {
            private bool undisclosed;

            public override string SocialSecurityNumber
            {
                get
                {
                    if (undisclosed)
                        throw new UndisclosedDataException();
                    return base.SocialSecurityNumber;
                }
            }
        }

    Another, more friendly approach could be to have a value object indicating that the value is undisclosed:

        get
        {
            if (undisclosed)
                return undisclosedValue;
            return base.SocialSecurityNumber;
        }

    Some concerns:

    - What if the user logs in and then out? The sensitive data has been loaded but must be undisclosed once again.
    - One could argue that this type of functionality belongs within the domain and not in some infrastructural implementation (i.e. repository implementations).
    - As always when dealing with a larger number of properties, there's a risk that this type of functionality clutters the code.

    Any insights or discussion is appreciated!
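
    The value-object variant sketched above can also be expressed as a small wrapper type, which keeps the not-yet-disclosed state explicit in the API instead of relying on exceptions. A purely illustrative Java equivalent of that idea:

        public final class Sensitive<T> {
            private final T value;
            private final boolean disclosed;

            private Sensitive(T value, boolean disclosed) {
                this.value = value;
                this.disclosed = disclosed;
            }

            // The entity starts out with undisclosed fields and swaps in disclosed ones after authentication.
            public static <T> Sensitive<T> undisclosed() { return new Sensitive<T>(null, false); }
            public static <T> Sensitive<T> of(T value)   { return new Sensitive<T>(value, true); }

            public boolean isDisclosed() { return disclosed; }

            // Callers decide what to show while the value is still hidden, instead of catching exceptions.
            public T orElse(T placeholder) { return disclosed ? value : placeholder; }
        }

        // e.g. person.getSocialSecurityNumber().orElse("*** hidden ***")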

    Read the article

  • Using awk to return only certain chunks of data

    - by Koriar
    I'm not 100% certain how to phrase my question simply, so I apologize if this has been answered somewhere and I was just unable to find it. What I have are debug logs with authentication packets in them, along with a bunch of other output. I need to search through about 2 million lines of logs to find every packet that contains a certain MAC address. The packets look something like this (slightly censored):

        -----------------[ header ]-----------------
        Event: Authd-Response (1900)
        Sequence: -54
        Timestamp: 1969-12-31 19:30:00 (0)
        ---------------[ attributes ]---------------
        Auth-Result = Auth-Accept
        Service-Profile-SID = 53
        Service-Profile-SID = 49
        RADIUS-Access-Accept-Attr/WiMAX-Capability = 0x(numbers)
        Session-Timeout = 3600
        Service-Profile-SID = 4
        Service-Profile-SID = 29
        Chargeable-User-Identity = "(Numbers)"
        User-Password = "(the MAC address I'm looking for)"
        --------------------------------------------

    However, there are about 10 different possible types with different possible lengths. They all start with the header line and end with the all-dashes line. I've had success using awk to get the code blocks themselves using this:

        awk '/-----------------\[ header \]-----------------/,/--------------------------------------------/' filename.txt

    But I was hoping to be able to use it to return only the packets which contain the MAC address that I need. I've been trying to figure this out for a few days now and I'm pretty stuck. I could try and write a bash script, but I could swear that I've used awk to do something like this before...

    Read the article

  • Can I change the identity of the logged-in user in ASP.NET?

    - by Rising Star
    I have an ASP.NET application I'm developing authentication for. I am using an existing cookie-based log-on system to log users in to the system. The application runs as an anonymous account and then checks the cookie when the user wants to do something restricted. This is working fine.

    However, there is one caveat: I've been told that for each page that connects to our SQL server, I need to make it so that the user connects using an Active Directory account. Because the system I'm using is cookie-based, the user isn't logged in to Active Directory. Therefore, I use impersonation to connect to the server as a specific account. However, the powers that be here don't like impersonation; they say that it clutters up the code. I agree, but I've found no way around this.

    It seems that the only way a user can be logged in to an ASP.NET application is by either connecting with Internet Explorer from a machine where the user is logged in with their Active Directory account, or by typing an Active Directory username and password. Neither of these two is workable in my application.

    I think it would be nice if I could make it so that when a user logs in and receives the cookie (which actually comes from a separate log-on application, by the way), there could be some code run which tells the application to perform all network operations as the user's Active Directory account, just as if they had typed an Active Directory username and password. It seems like this ought to be possible somehow, but the solution evades me. How can I make this work?

    Read the article

  • How would a user stay logged in to a REST-based website?

    - by unforgiven3
    A year or so ago I asked this question: Can you help me understand this? "Common REST Mistakes: Sessions are irrelevant". My question was essentially this: okay, I get that HTTP authentication is done automatically on every message - but how? Is the username/password sent with every request? Doesn't that just increase attack surface area? I feel like I'm missing part of the puzzle.

    The answers I received made perfect sense in the context of a mobile (iPhone, Android, WP7) app - when talking to a REST service, the app would just send user credentials along with each request. That worked great for me.

    But now, I would like to better understand how one would secure a REST-like website, like StackOverflow itself or something like Reddit. How would things work if it was a user logged in via a web browser instead of logged in via an iPhone app? What happens when a user logs in? Are the credentials saved in the browser somehow? How would the browser know what credentials to send with subsequent REST requests? What if it's a JavaScript call to a web service? How would the JavaScript call include user credentials?

    I'll be quite frank: my understanding of security when it comes to websites is pretty limited. I enjoyed working with REST services from an app perspective, but now I want to try and build a website that is based on REST principles, and I'm finding myself to be pretty lost. If there is anything in the above question that is unclear that you'd like me to clarify, please leave a comment and I'll address it.
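
    For the browser case, the usual mechanism is that the server sets a cookie at login (a session id or a signed token), the browser then attaches that cookie automatically to every subsequent request to the same site, including JavaScript calls, and the server maps it back to a user. A bare-bones servlet-filter sketch of the checking side, with a hypothetical TokenStore standing in for whatever maps tokens to users; this is illustrative, not how StackOverflow or Reddit actually implement it:

        import java.io.IOException;
        import javax.servlet.Filter;
        import javax.servlet.FilterChain;
        import javax.servlet.FilterConfig;
        import javax.servlet.ServletException;
        import javax.servlet.ServletRequest;
        import javax.servlet.ServletResponse;
        import javax.servlet.http.Cookie;
        import javax.servlet.http.HttpServletRequest;
        import javax.servlet.http.HttpServletResponse;

        public class AuthCookieFilter implements Filter {
            public void init(FilterConfig cfg) {}
            public void destroy() {}

            public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
                    throws IOException, ServletException {
                HttpServletRequest http = (HttpServletRequest) req;
                String user = null;
                Cookie[] cookies = http.getCookies();
                if (cookies != null) {
                    for (Cookie c : cookies) {
                        if ("auth_token".equals(c.getName())) {
                            user = TokenStore.lookupUser(c.getValue()); // hypothetical: token -> user, null if invalid
                        }
                    }
                }
                if (user == null) {
                    ((HttpServletResponse) res).sendError(HttpServletResponse.SC_UNAUTHORIZED);
                    return;
                }
                req.setAttribute("currentUser", user);
                chain.doFilter(req, res);
            }
        }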

    Read the article

  • Can't serve HTML5 video through PHP on Safari/Mac (5.0)

    - by JKS
    I'm encountering a strange bug in Safari where, when I serve MP4 video through PHP (to obfuscate the file beneath the document root with a token-based authentication system), Safari for some reason fires the <video>'s onerror event, and the video never loads (I can't get any useful information out of the event object sent to onerror - everything is undefined). When I access the PHP script directly (i.e., the video is not embedded in a page), the video controls appear momentarily before flashing to a QuickTime question mark. When I access the MP4 file directly, it works as expected. What's bizarre is that the embedded video works perfectly in the latest version of Chrome for Mac.

    Here are the headers when accessed through PHP:

        Connection: Keep-Alive
        Content-Disposition: inline; filename="test.mp4"
        Content-Length: 5558749
        Content-Type: video/mp4
        Date: Tue, 22 Jun 2010 01:24:25 GMT
        Keep-Alive: timeout=10, max=29
        Server: Apache/2.2.15 (CentOS) mod_ssl/2.2.15 0.9.8l DAV/2 mod_auth_passthrough/2.1 FrontPage/5.0.2.2635
        X-Powered-By: PHP/5.2.13

    And here are the headers when test.mp4 is accessed directly:

        Accept-Ranges: bytes
        Connection: Keep-Alive
        Content-Length: 5558749
        Content-Type: video/mp4
        Date: Tue, 22 Jun 2010 01:26:45 GMT
        Etag: "1c04757-54d1dd-489944c5a6400"
        Keep-Alive: timeout=10, max=30
        Last-Modified: Tue, 22 Jun 2010 01:25:36 GMT
        Server: Apache/2.2.15 (CentOS) mod_ssl/2.2.15 0.9.8l DAV/2 mod_auth_passthrough/2.1 FrontPage/5.0.2.2635

    The only differing headers are: Accept-Ranges (which I don't think is necessary), Etag, Last-Modified, Content-Disposition, and X-Powered-By. Not only can Chrome handle the PHP-served video fine, but when I use the same script to load the MP4 through a Flash player, it also works fine. I just can't figure out what Safari is choking on.

    EDIT: Also, when I change the content disposition to "attachment", Safari will download the MP4 file just fine.

    Read the article

  • How to access a web service behind a NAT?

    - by jr
    We have a product we are deploying to some small businesses. It is basically a RESTful API over SSL using Tomcat. This is installed on the server in the small business and is accessed via an iPhone or other portable device, so the devices connecting to the server could come from any number of IP addresses.

    The problem comes with the installation. When we install this service, setting up port forwarding so the outside world can gain access to Tomcat always seems to become a problem. It seems most of the time the owner doesn't know the router password, etc. I am trying to research other ways we can accomplish this. I've come up with the following and would like to hear other thoughts on the topic.

    1. Set up an SSH tunnel from each client office to a central server. Basically the remote devices would connect to that central server on a port and that traffic would be tunneled back to Tomcat in the office. Seems kind of redundant to have SSH and then SSL, but really no other way to accomplish it, since end-to-end I need SSL (from device to office). Not sure of the performance implications here, but I know it would work. We would need to monitor the tunnel and bring it back up if it goes down, handle SSH key exchanges, etc.
    2. Set up UPnP to try and configure the hole for me. Would likely work most of the time, but UPnP isn't guaranteed to be turned on. May be a good next step.
    3. Come up with some type of NAT traversal scheme. I'm just not familiar with these and uncertain of how exactly they work.

    We have access to a centralized server, which is required for the authentication, if that makes it any easier. What else should I be looking at to get this accomplished?

    Read the article

  • Lightweight HTTP application/server for static content

    - by PartlyCloudy
    Hi, I am in need of a scalable and performant HTTP application/server that will be used for static file serving/uploading, so I only need support for GET and PUT operations. However, there are a few extra features that I need:

    - Custom authentication: I need to check credentials against a database for each request. Thus I must be able to integrate proprietary database interaction.
    - Support for signed access keys: access to resources via PUT should be signed using a key like http://uri/?key=foo. The key then contains information about the request, like md5(user + path + secret), which allows me to block unwanted requests. The application/server should allow me to check for this.
    - Performance: I'd like to avoid piping content as much as possible. Otherwise the whole application could be implemented in Perl/etc. in a few lines as CGI.

    Perlbal (in webserver mode) looks nice, however the single-threaded model does not fit with my database lookup, and it also does not support query strings. lighttpd/Nginx/… have some modules for these tasks, however it is not feasible to put everything together without ending up writing my own extensions/modules.

    So how would you solve this? Are there other lightweight web servers available for this? Should I implement an application inside of a web server (i.e. CGI)? How can I avoid/speed up piping content between the web server and my application? Thanks in advance!
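
    For the signed-access-key requirement above, the server-side check can be as small as recomputing the same digest and comparing it to the key query parameter. A short Java sketch of that check under the md5(user + path + secret) scheme described in the post; an HMAC with a constant-time comparison would be the more robust variant of the same idea:

        import java.security.MessageDigest;

        public class AccessKey {

            // Recomputes md5(user + path + secret) as lowercase hex and compares it to the key from ?key=...
            public static boolean isValid(String user, String path, String secret, String presentedKey)
                    throws Exception {
                MessageDigest md5 = MessageDigest.getInstance("MD5");
                byte[] digest = md5.digest((user + path + secret).getBytes("UTF-8"));
                StringBuilder hex = new StringBuilder();
                for (byte b : digest) {
                    hex.append(String.format("%02x", b));
                }
                // MessageDigest.isEqual avoids leaking information through early-exit comparison timing.
                return MessageDigest.isEqual(hex.toString().getBytes("UTF-8"),
                                             presentedKey.getBytes("UTF-8"));
            }
        }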

    Read the article

  • How to get at JSON in grails 2.0

    - by Mikey
    I am sending myself JSON like so with jQuery:

        $.ajax({
            type: "POST",
            url: 'http://localhost:8080/myproject/myController/myAction',
            dataType: 'json',
            async: false,
            // json object to send to the authentication url
            data: {"stuff": "yes", "listThing": [1,2,3], "listObjects": [{"one": "thing"}, {"two": "thing2"}]},
            success: function () {
                alert("Thanks!");
            }
        })

    I send this to a controller and do println params, and I know I'm already in trouble:

        [stuff:yes, listObjects[1][two]:thing2, listObjects[0][one]:thing, listThing[]:[1, 2, 3], action:myAction, controller:myController]

    I cannot figure out how to get at most of these values... I can get "yes" with params.stuff, but I can't do params.listThing.each{} or params.listObjects.each{}. What am I doing wrong?

    UPDATE: I make the controller do this to try the two suggestions so far:

        println params
        println params.stuff
        println params.list('listObjects')
        println params.listThing
        def thisWontWork = JSON.parse(params.listThing)
        render("omg l2json")

    Look how weird the parameters look at the end of the null pointer exception when I try the answers:

        [stuff:yes, listObjects[1][two]:thing2, listObjects[0][one]:thing, listThing[]:[1, 2, 3], action:l2json, controller:rateAPI]
        yes
        []
        null
        | Error 2012-03-25 22:16:13,950 ["http-bio-8080"-exec-7] ERROR errors.GrailsExceptionResolver - NullPointerException occurred when processing request: [POST] /myproject/myController/myAction - parameters:
        stuff: yes
        listObjects[1][two]: thing2
        listObjects[0][one]: thing
        listThing[]: 1
        listThing[]: 2
        listThing[]: 3

    UPDATE 2: I am learning things, but this can't be right:

        println params['listThing[]']
        println params['listObjects[0][one]']

    prints

        [1, 2, 3]
        thing

    It seems like this is some part of Grails' new JSON marshaling. This is somewhat inconvenient for my purposes of hacking around with the values. How would I get all these individual params back into a big Groovy object of nested maps and lists? Maybe I am not doing what I want with jQuery?

    Read the article

  • Need Help with .NET OpenId HttpHandler

    - by Mark E
    I'm trying to use a single HttpHandler to authenticate a user's OpenID and receive a claims response. The initial authentication works, but the claims response does not. The error I receive is "This webpage has a redirect loop." What am I doing wrong?

        public class OpenIdLogin : IHttpHandler
        {
            private HttpContext _context = null;

            public void ProcessRequest(HttpContext context)
            {
                _context = context;
                var openid = new OpenIdRelyingParty();
                var response = openid.GetResponse();
                if (response == null)
                {
                    // Stage 2: user submitting Identifier
                    openid.CreateRequest(context.Request.Form["openid_identifier"]).RedirectToProvider();
                }
                else
                {
                    // Stage 3: OpenID Provider sending assertion response
                    switch (response.Status)
                    {
                        case AuthenticationStatus.Authenticated:
                            //FormsAuthentication.RedirectFromLoginPage(response.ClaimedIdentifier, false);
                            string identifier = response.ClaimedIdentifier;

                            //****** TODO only proceed if we don't have the user's extended info in the database **************
                            ClaimsResponse claim = response.GetExtension<ClaimsResponse>();
                            if (claim == null)
                            {
                                //IAuthenticationRequest req = openid.CreateRequest(identifier);
                                IAuthenticationRequest req = openid.CreateRequest(Identifier.Parse(identifier));
                                var fields = new ClaimsRequest();
                                fields.Email = DemandLevel.Request;
                                req.AddExtension(fields);
                                req.RedirectingResponse.Send(); //Is this correct?
                            }
                            else
                            {
                                context.Response.ContentType = "text/plain";
                                context.Response.Write(claim.Email); //claim.FullName;
                            }
                            break;
                        case AuthenticationStatus.Canceled:
                            //TODO
                            break;
                        case AuthenticationStatus.Failed:
                            //TODO
                            break;
                    }
                }
            }
        }

    Read the article

  • IIS to SQL Server kerberos auth issues

    - by crosan
    We have a 3rd-party product that allows some of our users to manipulate data in a database (on what we'll call SvrSQL) via a website on a separate server (SvrWeb). On SvrWeb, we have a specific, non-default website set up for this application, so instead of going to http://SvrWeb.company.com to get to the website we use http://application.company.com, which resolves to SvrWeb, and the host headers resolve to the correct website. There is also a specific application pool set up for this site which uses an Active Directory account identity we'll call "company\SrvWeb_iis". We're set up to allow delegation on this account and to allow it to impersonate another login, which we want it to do (we want this account to pass along the AD credentials of the person signed into the website to SQL Server instead of a service account).

    We also set up the SPNs for the SrvWeb_iis account via the following command:

        setspn -A HTTP/SrvWeb.company.com SrvWeb_iis

    The website pulls up, but the section of the website that makes the call to the database returns the message:

        Cannot execute database query. Login failed for user 'NT AUTHORITY\ANONYMOUS LOGON'.

    I thought we had the SPN information set up correctly, but when I check the security event log on SrvWeb I see entries of my logging in, and it seems to be using NTLM and not Kerberos:

        Logon Type: 3
        Logon Process: NtLmSsp
        Authentication Package: NTLM

    Any ideas or articles that cover this setup in detail would be extremely appreciated! If it helps, we are using SQL Server 2005, and both the web and SQL servers are Windows 2003.

    Read the article

  • Do I need to Salt and Hash a randomly generated token?

    - by wag2639
    I'm using Adam Griffiths's Authentication Library for CodeIgniter and I'm tweaking the user model. I came across a generate function that he uses to generate tokens. His preferred approach is to reference a value from random.org, but I considered that superfluous. I'm using his fallback approach of randomly generating a 20-character-long string:

        $length = 20;
        $characters = '0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ';
        $token = '';
        for ($i = 0; $i < $length; $i++) {
            $token .= $characters[mt_rand(0, strlen($characters)-1)];
        }

    He then hashes this token using a salt (I'm combining code from different functions):

        sha1($this->CI->config->item('encryption_key').$str);

    I was wondering if there's any reason to run the token through the salted hash? I've read that simply randomly generating strings is a naive way of making random passwords, but is the sha1 hash and salt necessary?

    Note: I got my encryption_key from https://www.grc.com/passwords.htm (63 random alpha-numeric characters).
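
    Worth noting for context: mt_rand() is not a cryptographically secure generator, which is the main argument for post-processing its output; drawing the token straight from a CSPRNG makes the extra salted hash largely redundant. A short Java sketch of that approach, purely illustrative and not part of the CodeIgniter library:

        import java.security.SecureRandom;

        public class TokenGenerator {
            private static final String ALPHABET =
                    "0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ";
            private static final SecureRandom RNG = new SecureRandom();

            // Each character is drawn from a cryptographically strong source,
            // so a 20-character token already carries plenty of unpredictability.
            public static String newToken(int length) {
                StringBuilder token = new StringBuilder(length);
                for (int i = 0; i < length; i++) {
                    token.append(ALPHABET.charAt(RNG.nextInt(ALPHABET.length())));
                }
                return token.toString();
            }
        }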

    Read the article

  • How can I debug a session?

    - by Organ Grinding Monkey
    I have been asked to work on a very large web application and deploy it. The problem that I'm facing is that when I deploy the application and more than one user logs into the system, the sessions seem to cross over, i.e.: Person A logs in and works on the site, all good. When Person B logs in, Person A will then be logged in as Person B as well.

    If anyone has experienced this behaviour before and can steer me in the right direction, that would be first prize. Second prize would be to show me how I can debug this situation so that I can find out where the problem is and fix it.

    Some information about the application: from what I've been told and what I've seen within the app, it started as a .NET 1.1 application and got upgraded to .NET 2, and that's why the log-in system was done the way it is. (The application is huge and now complete, and that's why I can't rewrite the whole user authentication process; it will just take too long and I don't know what effect it might have.) All the logged-in user information is stored in properties that have been added in the Global.asax.vb file. (Could this be the problem?) Any help here would be greatly appreciated.

    Read the article
