Search Results

Search found 1580 results on 64 pages for 'scheme'.

  • UIViewController mysteriously getting deallocated

    - by Dan Ray
    I have a UINavigation scheme with a "welcome" page, a middle page, and a detail page. In the middle page, there's a segmented controller that can swap the main body of that page between a table, a calendar, and an MKMapView, each implemented with its own view controller class. Today I implemented the MapView and its annotations and all of that. It's nice. And a detail disclosure on each callout takes you to the detail page just the same way as if you'd gotten there via the table. Lovely.

    I also have a right-button-bar button that pushes in a "search" view. From there you can search the data I'm navigating. When it's done filtering the data (an array of objects I'm keeping in a data singleton), it makes the table reload its data, calls my annotation-clearer-and-builder methods on the map view, and then pops itself out, so the "middle" page (including whatever view was in its guts) is back on the screen.

    Problem is, if I go back and forth between the map and the search a couple of times, any mention of the table view causes us to crash with:

        *** -[CALayer retain]: message sent to deallocated instance 0x710b810

    (I obviously have NSZombies turned on.) I put an NSLog in the dealloc method of my table view controller. That thing's never getting called. I don't know if we're ditching it behind the scenes for memory purposes, or if I'm leaking it and can't get my hands back on it, or what. I'm sort of at a loss about where to look. Any hints?

    Read the article

  • What is the right way to make a new XMLHttpRequest from an RJS response in Ruby on Rails?

    - by Yuri Baranov
    I'm trying to come closer to a solution for the problem of my previous question. The scheme I would like to try is the following:

    The user requests an action from a RoR controller. The action makes some database queries, does some calculations, sets some session variable(s), and returns some RJS code as the response. This code could either:

    update a progress bar and make another AJAX request, or
    display the final result (e.g. a chart graphic) if all the processing is finished.

    The browser evaluates the javascript representation of the RJS. It may make another (recursive? Is recursion allowed at all?) request, or just display the result for the user.

    So, my question this time is: how can I embed an XMLHttpRequest call into RJS code properly? Some things I'd like to know are:

    Should I create a new thread to avoid stack overflow?
    What Rails helpers (if any) should I use?
    Has anybody ever done something similar before on Rails or with other frameworks?
    Is my idea sane?

    Read the article

  • Self-hosted WCF server and SSL

    - by jitm
    Hello, there is a self-hosted WCF server (not IIS), and certificates were generated (on Windows XP) from the command line like this:

        makecert.exe -sr CurrentUser -ss My -a sha1 -n CN=SecureClient -sky exchange -pe
        makecert.exe -sr CurrentUser -ss My -a sha1 -n CN=SecureServer -sky exchange -pe

    These certificates were added to the server code like this:

        serviceCred.ServiceCertificate.SetCertificate(StoreLocation.LocalMachine, StoreName.My, X509FindType.FindBySubjectName, "SecureServer");
        serviceCred.ClientCertificate.SetCertificate(StoreLocation.LocalMachine, StoreName.My, X509FindType.FindBySubjectName, "SecureClient");

    After all the previous operations I created a simple client to check the SSL connection to the server. Client configuration:

        <?xml version="1.0" encoding="utf-8" ?>
        <configuration>
          <system.serviceModel>
            <bindings>
              <basicHttpBinding>
                <binding name="BasicHttpBinding_IAdminContract" closeTimeout="00:01:00"
                         openTimeout="00:01:00" receiveTimeout="00:10:00" sendTimeout="00:01:00"
                         allowCookies="false" bypassProxyOnLocal="false"
                         hostNameComparisonMode="StrongWildcard" maxBufferSize="65536"
                         maxBufferPoolSize="524288" maxReceivedMessageSize="65536"
                         messageEncoding="Text" textEncoding="utf-8" transferMode="Buffered"
                         useDefaultWebProxy="true">
                  <readerQuotas maxDepth="32" maxStringContentLength="8192" maxArrayLength="16384"
                                maxBytesPerRead="4096" maxNameTableCharCount="16384" />
                  <security mode="TransportCredentialOnly">
                    <transport clientCredentialType="Basic"/>
                  </security>
                </binding>
              </basicHttpBinding>
            </bindings>
            <client>
              <endpoint address="https://myhost:8002/Admin" binding="basicHttpBinding"
                        bindingConfiguration="BasicHttpBinding_IAdminContract"
                        contract="Admin.IAdminContract" name="BasicHttpBinding_IAdminContract" />
            </client>
          </system.serviceModel>
        </configuration>

    Code:

        Admin.AdminContractClient client = new AdminContractClient("BasicHttpBinding_IAdminContract");
        client.ClientCredentials.UserName.UserName = "user";
        client.ClientCredentials.UserName.Password = "pass";
        var result = client.ExecuteMethod();

    During execution I receive the following error:

        The provided URI scheme 'https' is invalid; expected 'http'.
        Parameter name: via

    Question: how do I enable SSL for a self-hosted server, and where should I set up the certificates for client and server? Thanks.

    Read the article

  • PL/pgSQL: How to return a record from a function executed by an INSERT/UPDATE rule?

    - by seas
    I have the following schema for my database:

        create sequence data_sequence;

        create table data_table (
          id integer primary key,
          field varchar(100)
        );

        create view data_view as
          select id, field from data_table;

        create function data_insert(_new data_view) returns data_view as $$
        declare
          _id integer;
          _result data_view%rowtype;
        begin
          _id := nextval('data_sequence');
          insert into data_table(id, field) values(_id, _new.field);
          select * into _result from data_view where id = _id;
          return _result;
        end;
        $$ language plpgsql;

        create rule insert as on insert to data_view do instead select data_insert(new);

    Then type in psql:

        insert into data_view(field) values('abc');

    I would like to see something like:

         id | field
        ----+-------
          1 | abc

    Instead I see:

         data_insert
        -------------
         (1,"abc")

    Is it possible to fix this somehow? Thanks for any ideas.

    The ultimate idea is to use this in other functions, so that I could obtain the id of a just-inserted record without selecting for it from scratch. Something like:

        insert into data_view(field) values('abc') returning id into my_variable;

    would be nice, but it doesn't work, failing with this error:

        ERROR: cannot perform INSERT RETURNING on relation "data_view"
        HINT: You need an unconditional ON INSERT DO INSTEAD rule with a RETURNING clause.

    I don't really understand that HINT. I use PostgreSQL 8.4.

    Read the article

  • Silverlight 3: long-running WCF call triggers 401.1 (Access Denied)

    - by sympatric greg
    I have a WCF service consumed by a Silverlight 3 control. The Silverlight client uses a basicHttpBinding that is constructed at runtime from the control's initialization parameters, like this:

        public static T GetServiceClient<T>(string serviceURL)
        {
            BasicHttpBinding binding = new BasicHttpBinding(
                Application.Current.Host.Source.Scheme.Equals("https", StringComparison.InvariantCultureIgnoreCase)
                    ? BasicHttpSecurityMode.Transport
                    : BasicHttpSecurityMode.None);
            binding.MaxReceivedMessageSize = int.MaxValue;
            binding.MaxBufferSize = int.MaxValue;
            binding.Security.Mode = BasicHttpSecurityMode.TransportCredentialOnly;
            return (T)Activator.CreateInstance(typeof(T), new object[] { binding, new EndpointAddress(serviceURL) });
        }

    The service implements Windows security. Calls were returning as expected until the result set increased to several thousand rows, at which time HTTP 401.1 errors were received. The service's HttpBinding defines closeTimeout, openTimeout, receiveTimeout and sendTimeout of 10 minutes. If I limit the size of the result set, the call succeeds.

    Additional observations from Fiddler. When Method2 is modified to return a smaller result set (and avoid the problem), control initialization consists of 4 calls:

        Service1/Method1 -- result: 401
        Service1/Method1 -- result: 401 (this time the header includes the element "Authorization: Negotiate TlRMTV...")
        Service1/Method1 -- result: 200
        Service1/Method2 -- result: 200 (1.25 seconds)

    When Method2 is configured to return the larger result set, we get:

        Service1/Method1 -- result: 401
        Service1/Method1 -- result: 401 (this time the header includes the element "Authorization: Negotiate TlRMTV...")
        Service1/Method1 -- result: 200
        Service1/Method2 -- result: 401.1 (7.5 seconds)
        Service1/Method2 -- result: 401.1 (15 ms)
        Service1/Method2 -- result: 401.1 (7.5 seconds)

    Read the article

  • Why is it assumed that send may return with less than requested data transmitted on a blocking socket?

    - by Ernelli
    The standard method to send data on a stream socket has always been to call send with a chunk of data to write, check the return value to see if all data was sent, and then keep calling send again until the whole message has been accepted. For example, this is a simple example of a common scheme:

        int send_all(int sock, unsigned char *buffer, int len)
        {
            int nsent;
            while (len > 0) {
                nsent = send(sock, buffer, len, 0);
                if (nsent == -1)  /* error */
                    return -1;
                buffer += nsent;
                len -= nsent;
            }
            return 0;  /* ok, all data sent */
        }

    Even the BSD manpage mentions that

        ...If no messages space is available at the socket to hold the message to be transmitted, then send() normally blocks...

    which indicates that we should assume that send may return without sending all data. Now I find this rather broken, but even W. Richard Stevens assumes this in his standard reference book about network programming -- not in the beginning chapters, but the more advanced examples use his own writen ("write all data") function instead of calling write.

    Now I consider this still to be more or less broken, since if send is not able to transmit all data or accept the data in the underlying buffer and the socket is blocking, then send should block and return only when the whole send request has been accepted.

    I mean, in the code example above, what will happen if send returns with less data sent? It will be called right away again with a new request. What has changed since the last call? At most a few hundred CPU cycles have passed, so the buffer is still full. If send now accepts the data, why couldn't it accept it before? Otherwise we end up with an inefficient loop where we are trying to send data on a socket that cannot accept data, and keep trying -- or else?

    So it seems like the workaround, if needed, results in heavily inefficient code, and in those circumstances blocking sockets should be avoided altogether, and non-blocking sockets together with select should be used instead.

    Read the article

  • How to refactor this database to allow sync with smart clients?

    - by Anton Zvgny
    Howdy, I have inherited the following database: http://i46.tinypic.com/einvjr.png (EDIT: I cannot post images, so here goes the link.)

    This currently runs 100% online through an ASP.NET web app, but some employees will need offline access to this database, either to get documents or to insert new documents for later synchronization when back at the office. What changes need to be made to this database schema to support such a scenario (syncing smart clients with a central web database)? Thanks!

    ps. I'm not a developer/DBA, I'm just a mechanical engineer tasked with changing this app, so take it easy on me :D
    ps1. We're using MS SQL Server for the online central database and MS SQL Compact for the smart client.
    ps2. The ImageGuid field of the images table is used to save the images to the file system. The web app takes the guid and creates folders according to the first 3 letters to persist the image to the file system.
    ps3. The users of the smart client don't always get server data first and then modify it for later sync. Most of the time they start working completely offline (with a blank local database) and later sync the data to the server.
    ps4. All the users of the smart client app have an account in the web app.

    Read the article

  • Which languages support *recursive* function literals / anonymous functions?

    - by Hugh Allen
    It seems quite a few mainstream languages support function literals these days. They are also called anonymous functions, but I don't care if they have a name. The important thing is that a function literal is an expression which yields a function which hasn't already been defined elsewhere, so for example in C, &printf doesn't count.

    EDIT to add: if you have a genuine function literal expression <exp>, you should be able to pass it to a function f(<exp>) or immediately apply it to an argument, i.e. <exp>(5).

    I'm curious which languages let you write function literals which are recursive. Wikipedia's "anonymous recursion" article doesn't give any programming examples. Let's use the recursive factorial function as the example. Here are the ones I know.

    JavaScript / ECMAScript can do it with callee:

        function(n){ if (n<2) {return 1;} else {return n * arguments.callee(n-1);} }

    It's easy in languages with letrec, e.g. Haskell (which calls it let):

        let fac x = if x<2 then 1 else fac (x-1) * x in fac

    and there are equivalents in Lisp and Scheme. Note that the binding of fac is local to the expression, so the whole expression is in fact an anonymous function.

    Are there any others?
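
    One case not on the question's list, offered as a hedged aside: in Java, an anonymous class instantiation is itself an expression, and inside its body this refers to the instance being defined, so the "literal" can call itself without any surrounding named definition. A minimal sketch (using the java.util.function types of later Java versions):

        import java.util.function.IntUnaryOperator;

        public class RecursiveLiteral {
            public static void main(String[] args) {
                // The whole `new IntUnaryOperator() { ... }` expression is the
                // function literal; this.applyAsInt is the recursive call.
                IntUnaryOperator fac = new IntUnaryOperator() {
                    public int applyAsInt(int n) {
                        return n < 2 ? 1 : n * this.applyAsInt(n - 1);
                    }
                };
                System.out.println(fac.applyAsInt(5)); // prints 120
            }
        }

    Strictly speaking it fails the <exp>(5) test, since application has to go through applyAsInt rather than bare call syntax, but the expression itself is both anonymous and recursive.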

    Read the article

  • How to pass binary data between two apps using Content Provider?

    - by Viktor
    I need to pass some binary data between two Android apps using a Content Provider (sharedUserId is not an option). I would prefer not to pass the data (a savegame stored as a file, small in size, < 20k) as a file (i.e. overriding openFile()), since this would necessitate some complicated temp-file scheme to cope with concurrency between several content provider accesses and a running game.

    I would like to read the file into memory under a mutex lock and then pass the binary array in the simplest way possible. How do I do this?

    It seems creating a file in memory is not a possibility due to the return type of openFile(). query() needs to return a Cursor. Using MatrixCursor is not possible, since it applies toString() to all stored objects when reading it. What do I need to do? Implement a custom Cursor? This class has 30 abstract methods. Do I read the file, put it in a SQLite db and return the cursor? The complexity of this seemingly simple task is mind-boggling.
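
    For what it's worth, openFile() does not have to be backed by a real file: since API level 9, ParcelFileDescriptor.createPipe() lets a provider stream an in-memory buffer to the client, which would avoid the temp-file scheme entirely. A rough sketch, where loadGameBytes() is a hypothetical stand-in for the mutex-guarded read described above:

        import java.io.FileNotFoundException;
        import java.io.IOException;
        import java.io.OutputStream;
        import android.net.Uri;
        import android.os.ParcelFileDescriptor;

        // Inside the ContentProvider subclass:
        @Override
        public ParcelFileDescriptor openFile(Uri uri, String mode) throws FileNotFoundException {
            try {
                final byte[] data = loadGameBytes(); // hypothetical: snapshot taken under the mutex
                final ParcelFileDescriptor[] pipe = ParcelFileDescriptor.createPipe();
                new Thread(new Runnable() {
                    public void run() {
                        OutputStream out = new ParcelFileDescriptor.AutoCloseOutputStream(pipe[1]);
                        try {
                            out.write(data);   // push the snapshot through the write end
                        } catch (IOException ignored) {
                        } finally {
                            try { out.close(); } catch (IOException ignored) { }
                        }
                    }
                }).start();
                return pipe[0]; // the client app reads from the other end
            } catch (IOException e) {
                throw new FileNotFoundException(e.getMessage());
            }
        }

        // Hypothetical helper; replace with the mutex-guarded file read.
        private byte[] loadGameBytes() throws IOException {
            return new byte[0];
        }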

    Read the article

  • Authenticated WCF: Getting the Current Security Context

    - by bradhe
    I have the following scenario: I have various users' data stored in my database. This data was entered via a web app. We'd like to expose this data back to the users over a web service so that they can integrate their data with their applications. We would also like to expose some business logic over these services. As such, we do not want to use OData.

    This is a multi-tenant application, so I only want to expose each user's data back to them and not to other users. Likewise, the business logic we expose should be relative to the authenticated user.

    I would like to let the user use an OASIS scheme to authenticate with the web service -- WCF already allows for this out of the box, as far as I understand -- or perhaps we can issue them certificates to authenticate with. That bit hasn't really been worked out yet.

    Here is a bit of pseudo-code of how I envision this would work within the service:

        function GetUsersData(id)
            var user := Lookup User based on Username from Auth Context
            var data := Get Data From Repository based on "user"
            return data
        end function

    For the business logic scenario I think it would look something like this:

        function PerformBusinessLogic(someData)
            var user := Lookup User based on Username from Auth Context
            var returnValue := Perform some logic based on supplied data
            return returnValue
        end function

    The hard bit here is getting the current username (or cert info, in the cert scenario) that the user authenticated with! Does WCF even enable this scenario? If not, would WSE3 enable it? Thanks,

    Read the article

  • Setting up SSL (self-signed cert) with Tomcat

    - by Danny
    I am mostly following this page: http://tomcat.apache.org/tomcat-6.0-doc/ssl-howto.html

    I used this command to create the keystore, and answered the prompts:

        keytool -genkey -alias tomcat -keyalg RSA -keystore /etc/tomcat6/keystore

    Then I edited my server.xml file and uncommented/edited this line:

        <Connector port="8443" protocol="HTTP/1.1" SSLEnabled="true"
                   maxThreads="150" scheme="https" secure="true"
                   clientAuth="false" sslProtocol="TLS"
                   keystoreFile="/etc/tomcat6/keystore" keystorePass="tomcat" />

    Then I went to the web.xml file for my project and added this:

        <security-constraint>
          <web-resource-collection>
            <web-resource-name>Security</web-resource-name>
            <url-pattern>/*</url-pattern>
          </web-resource-collection>
          <user-data-constraint>
            <transport-guarantee>CONFIDENTIAL</transport-guarantee>
          </user-data-constraint>
        </security-constraint>

    When I try to run my webapp I am met with this:

        Unable to connect
        Firefox can't establish a connection to the server at localhost:8443.
        * The site could be temporarily unavailable or too busy. Try again in a few moments.
        * If you are unable to load any pages, check your computer's network connection.

    If I comment out the lines I've added to my web.xml file, the webapp works fine. My log file in /var/lib/tomcat6/logs says nothing. I can't figure out if this is a problem with my keystore file, my server.xml file, or my web.xml file... Any assistance is appreciated. I am using Tomcat 6 on Ubuntu.
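
    One way to split the three suspects apart (a suggestion, not from the post): check the keystore in isolation first. A connector whose keystorePass does not match the keystore password given to keytool can fail to start, producing exactly this "unable to connect" symptom. The sketch below loads /etc/tomcat6/keystore with the password "tomcat" from the server.xml above; if it throws, the problem is the keystore, not server.xml or web.xml:

        import java.io.FileInputStream;
        import java.security.KeyStore;

        public class KeystoreCheck {
            public static void main(String[] args) throws Exception {
                // Load the JKS keystore with the password server.xml expects.
                KeyStore ks = KeyStore.getInstance("JKS");
                ks.load(new FileInputStream("/etc/tomcat6/keystore"), "tomcat".toCharArray());
                // The connector needs a private-key entry under the default alias "tomcat".
                System.out.println("alias 'tomcat' present: " + ks.containsAlias("tomcat"));
                System.out.println("is a key entry: " + ks.isKeyEntry("tomcat"));
            }
        }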

    Read the article

  • WCF Discovery finds endpoint but host is "localhost"

    - by Flo
    I am trying to use the Discovery feature in WCF, using http://msdn.microsoft.com/en-us/library/dd456783(v=VS.100).aspx as a starting point. It works fine on my machine, but then I wanted to run the service on a different machine. The service was discovered properly, but the hostname of the found service is always "localhost", which is of course not much use.

    Service endpoint:

        var endpointAddress = new EndpointAddress(new UriBuilder { Scheme = Uri.UriSchemeNetTcp, Port = port }.Uri);
        var endpoint = new ServiceEndpoint(ContractDescription.GetContract(typeof(IServiceInterface)),
            new NetTcpBinding(), endpointAddress);

    Client:

        static EndpointAddress FindServiceAddress<T>()
        {
            Stopwatch stopwatch = new Stopwatch();
            stopwatch.Start();
            DiscoveryClient discoveryClient = new DiscoveryClient(new UdpDiscoveryEndpoint());

            // Find endpoints
            FindResponse findResponse = discoveryClient.Find(new FindCriteria(typeof(T)));

            Console.WriteLine(string.Format("Searched for {0} seconds. Found {1} Endpoint(s).",
                stopwatch.ElapsedMilliseconds / 1000, findResponse.Endpoints.Count));

            if (findResponse.Endpoints.Count > 0)
            {
                return findResponse.Endpoints[0].Address;
            }
            return null;
        }

    Should I simply set the Host to System.Environment.MachineName?

    Read the article

  • Designing a secure consumer BlackBerry application

    - by Kiran Kuppa
    I am evaluating a requirement for a consumer BlackBerry application that places a high premium on the security of the user's data. It seems like it is an insurance company. Here are my ideas on how I could go about it; I am sure this would be useful for others who are looking for similar stuff.

    Force the user to use a device password. (I am guessing that this would be possible, though I haven't checked it yet.) The application can request notifications when the device is about to be locked and just after it has been unlocked. Encryption of application-specific data can be managed at those times.
    Application data would be encrypted with the user's password. The user's credentials would be encrypted with the device password.
    Remote backup of the data could be done over HTTPS (any better ideas are appreciated).

    Questions:

    What if the user forgets his device password?
    If the user forgets his application password, what is the best and most secure way to reset it?
    If the user loses the phone, a remote backup must be done and the application data must be cleaned up.

    I have some ideas on how to achieve (3) and shall share them. There must be an off-line verification of the user's identity, and the administrator must provide a channel through which the user can send a command to the device to perform the wiping of application data. The idea is that the user is ALWAYS in control of his data. Without the user's consent, even the admin must not be able to do activities such as cleaning up the data.

    In the above scheme of things, it appears as if the user's password need not be sent over the air to the server. Am I correct?

    Thanks, --Kiran Kumar
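
    To make "application data would be encrypted with the user's password" concrete, here is a plain-Java sketch of password-based encryption (PBKDF2-derived AES key). The BlackBerry Java platform ships its own crypto classes under net.rim.device.api.crypto, so treat the exact calls, iteration count and key size here as illustrative assumptions rather than the platform API:

        import java.security.SecureRandom;
        import javax.crypto.Cipher;
        import javax.crypto.SecretKey;
        import javax.crypto.SecretKeyFactory;
        import javax.crypto.spec.IvParameterSpec;
        import javax.crypto.spec.PBEKeySpec;
        import javax.crypto.spec.SecretKeySpec;

        public class PasswordCrypto {
            // Derive an AES key from the user's password; the salt must be stored
            // alongside the ciphertext so the key can be re-derived at unlock time.
            static SecretKey deriveKey(char[] password, byte[] salt) throws Exception {
                SecretKeyFactory f = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA1");
                byte[] keyBytes = f.generateSecret(new PBEKeySpec(password, salt, 10000, 128)).getEncoded();
                return new SecretKeySpec(keyBytes, "AES");
            }

            static byte[] encrypt(SecretKey key, byte[] iv, byte[] plaintext) throws Exception {
                Cipher c = Cipher.getInstance("AES/CBC/PKCS5Padding");
                c.init(Cipher.ENCRYPT_MODE, key, new IvParameterSpec(iv));
                return c.doFinal(plaintext);
            }

            public static void main(String[] args) throws Exception {
                byte[] salt = new byte[16], iv = new byte[16];
                SecureRandom rnd = new SecureRandom();
                rnd.nextBytes(salt);
                rnd.nextBytes(iv);
                SecretKey key = deriveKey("user-password".toCharArray(), salt);
                byte[] ct = encrypt(key, iv, "policy data".getBytes("UTF-8"));
                System.out.println("ciphertext bytes: " + ct.length);
            }
        }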

    Read the article

  • How to verify the SSL connection when calling a URI?

    - by robertokl
    Hello, I am developing a web application that is authenticated using CAS (a single-sign-on solution: http://www.ja-sig.org/wiki/display/CAS/Home). For security reasons, I need two things to work:

    The communication between CAS and my application needs to be secure.
    My application needs to accept the certificate coming from CAS, so that I can guarantee that the CAS responding is the real CAS server.

    This is what I've got so far:

        uri = URI.parse("https://www.google.com/accounts")
        https = Net::HTTP.new(uri.host, uri.port)
        https.use_ssl = (uri.scheme == 'https')
        https.verify_mode = (OpenSSL::SSL::VERIFY_PEER)
        raw_res = https.start do |conn|
          conn.get("#{uri.path}?#{uri.query}")
        end

    This works just great on Mac OS X. When I try to reach an insecure URI, it raises an exception, and when I try to reach a secure URI, it allows me normally, just as expected. The problem starts when I deploy my application on my Linux server. I tried both Ubuntu and Red Hat. Independent of what URI I try to reach, it always raises this exception:

        OpenSSL::SSL::SSLError: SSL_connect returned=1 errno=0 state=SSLv3 read server certificate B: certificate verify failed
            from /usr/local/lib/ruby/1.8/net/http.rb:586:in `connect'
            from /usr/local/lib/ruby/1.8/net/http.rb:586:in `connect'
            from /usr/local/lib/ruby/1.8/net/http.rb:553:in `do_start'
            from /usr/local/lib/ruby/1.8/net/http.rb:542:in `start'
            from (irb):7

    I think this has something to do with my installed OpenSSL package, but I can't be sure. These are my installed OpenSSL packages:

        openssl.x86_64 0.9.8e-12.el5 installed
        openssl-devel.x86_64 0.9.8e-12.el5 installed

    I tried using HTTParty as well, but it just ignores the SSL certificate. I hope someone can help me, or can tell me about a gem that works the way I need. Thanks.

    Read the article

  • Analyzing Windows crash dumps generated on XP/32 machines with Win7/64?

    - by Martin
    We have a problem with analyzing, on our development machines, Windows crash dumps that were created on customer Windows XP/32 boxes. Many of our development machines are now Win7/64 boxes, but it appears that the crash dumps generated under Windows XP cannot fully resolve their binary dependencies, thereby leading to warnings when displaying the call stacks in Visual Studio (2005).

    For example, msvcr80.dll cannot be resolved when loaded from a Win7 machine when the dump was generated on Windows XP. On XP, the WinSxS path appears to be:

        C:\WINDOWS\WinSxS\x86_Microsoft.VC80.CRT_1fc8b3b9a1e18e3b_8.0.50727.4053_x-ww_e6967989\msvcr80.dll

    On Win7, the WinSxS path to the same DLL version seems to be:

        x86_microsoft.vc80.crt_1fc8b3b9a1e18e3b_8.0.50727.4053_none_d08d7da0442a985d

    (I got this info from a forum thread on CodeGuru that links to an MSDN article.) Visual Studio (2005) can now no longer correctly resolve the binaries for the crash dump. How can I get Visual Studio to resolve all the correct binaries for my dump file?

    Note: I have already correctly set up the symbol server. The public symbols for most system DLLs (kernel32.dll, etc.) and the symbols of our own DLLs are correctly loaded. It is just that the symbols of DLLs that reside in the WinSxS folder are not loaded, because it appears that Vista/7 uses a different path scheme for these DLLs than XP does, and therefore Visual Studio cannot find the dll (not the pdb) on the local dev machine and so cannot load the corresponding symbols for the dump file.

    Read the article

  • Google Chrome Extension - Help needed

    - by Jim-Y
    I'm new to coding Google Chrome extensions, and I have some basic questions. I want to make a Chrome extension, and the scheme is the following:

    a popup window, containing buttons and result fields (popup.html)
    when a button is clicked, I want to trigger an event; this event should connect to a webserver (I'll make the servlet too) and gather information from the server (XMLHttpRequest())
    after that, I want my extension to load the gathered information into one of the result fields.

    Simple, isn't it? But I have several problems, right at the beginning :( I started developing by reading tutorials, but I'm still foggy on the main structure of an extension. Now, I've started an app containing a popup.html, a manifest.json... In popup.html there's a result field and a button:

        <div id="extension_container">
          <div id="header">
            <p id="intro">Result here</p>
            <button type="button" id="button">Click Me!</button>
          </div> <!-- END header -->
          <div id="content">
          </div> <!-- END content -->

    When the button is clicked, I trigger an event, handled with jQuery. Code here:

        <script>
          $(document).ready(function(){
            $("#button").click(function(){
              $("#intro").text("Hello, im added");
              alert("Clicked");
            });
          });
        </script>

    And here comes the problem: in popup.html this doesn't work; if I load it into Chrome, nothing happens. On the other hand, if I open popup.html in the browser, not as an extension, everything works fine. So I think I have some basic misunderstandings of extension structure, starting with background pages, background javascript and so on... :( Could anyone help me?

    Read the article

  • Tomcat 4.1.31 - HTTPS not working intermittently | "Page Cannot be Displayed" problems

    - by cedar715
    We are facing this error intermittently. If we restart the server it works for some time, and then the problem starts again. We also have another load-balanced server with a similar configuration, and that one is working fine. The server is running on a Linux box. If we do a "ps -ef", it lists the Tomcat process.

    URL: https://xyz.abc.com:9234/axis/servlet/AxisServlet

    Following is the configuration in the server.xml file:

        <Connector className="org.apache.coyote.tomcat4.CoyoteConnector"
                   port="9234" minProcessors="5" maxProcessors="75"
                   enableLookups="true" acceptCount="100" debug="0"
                   scheme="https" secure="true" useURIValidationHack="false"
                   disableUploadTimeout="true">
          <Factory className="org.apache.coyote.tomcat4.CoyoteServerSocketFactory"
                   clientAuth="false" protocol="TLS" />
        </Connector>

    Is it a problem with our load balancer, which is forwarding most requests to this server? Is it in any way related to the "maxProcessors" or "acceptCount" attributes defined in the above configuration? Is it a problem with the port number? Does it have anything to do with the certificate? (The certificate is generated using the Java keytool; however, the other load-balanced server is also using the same certificate and working fine.)

    Please suggest how to resolve this issue. Thanks.
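
    One way to narrow this down while the problem is occurring (a diagnostic sketch of my own, not from the post): bypass the load balancer and attempt a TLS handshake directly against the connector. If the TCP connect succeeds but the handshake hangs or times out, suspicion shifts toward the connector's processor pool (maxProcessors="75") being exhausted rather than the balancer or the certificate. Hostname and port below are taken from the URL above; since the certificate is self-signed, run with -Djavax.net.ssl.trustStore pointing at a truststore that contains it, or expect a path-validation error once the handshake actually starts:

        import javax.net.ssl.SSLSocket;
        import javax.net.ssl.SSLSocketFactory;

        public class TlsProbe {
            public static void main(String[] args) throws Exception {
                SSLSocketFactory factory = (SSLSocketFactory) SSLSocketFactory.getDefault();
                // Connect straight to the connector, skipping the load balancer.
                SSLSocket socket = (SSLSocket) factory.createSocket("xyz.abc.com", 9234);
                socket.setSoTimeout(10000); // fail fast instead of hanging forever
                socket.startHandshake();
                System.out.println("handshake ok: " + socket.getSession().getCipherSuite());
                socket.close();
            }
        }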

    Read the article

  • Will rel=canonical break site: queries?

    - by Justin Grant
    Our company publishes our software product's documentation using a custom-built content management system with a dynamic URL namespace like this:

        http://ourproduct.com/documentation/version/pageid

    where "version" is the version number to which the documentation applies, and "pageid" is a unique string which identifies that page in our back-end content management system. For example, if content (e.g. a page about configuration best practices) is unchanged from version 3.0 to 4.0 of our product, it'd be reachable by two different URLs:

        http://ourproduct.com/documentation/3.0/configuration-best-practices
        http://ourproduct.com/documentation/4.0/configuration-best-practices

    This URL scheme allows us to scope Google search results to see only the documentation for a particular product version, like this:

        configuration site:ourproduct.com/documentation/4.0

    But when the user is searching across all versions, we don't want Google to arbitrarily choose one of the URLs to show in results. Instead, we always want the latest version to show up. Hence our planned use of rel=canonical, so we can prescriptively tell Google which URL we want to show up if multiple versions are being searched. (Users who do oddball things like searching 2 versions but not all of them are a corner case, so we don't care which version(s) show up in that case -- the primary use-cases we care about are searching one version or searching all versions.)

    But what will happen to scoped searches if we do this? If my rel=canonical URL points to version 4.0, but my search is scoped to 3.0, will Google return a result? Even if you don't know the answer offhand, do you know a site which uses rel=canonical to redirect across folders in a URL namespace? If so, I could run a few Google searches and figure out the answer.

    Read the article

  • Windows Azure ASP.NET MVC Role behaves strangely when redirecting from HTTP to HTTPS

    - by Rinat Abdullin
    Subj. I've got an ASP.NET MVC 2 worker role application that does not differ much from the default template. When attempting a redirect from HTTP to HTTPS (this happens when we access a controller secured by the usual RequireSSL attribute implementation), we get a blank page with a "Bad Request" message.

    IntelliTrace shows this thrown: "The file '/Views/Home/LogOnUserControl.aspx' does not exist." (System.Web.HttpException)

    The call stack is really short:

        [External Code]
        App_Web_vfahw7gz.dll!ASP.views_shared_site_master.__Render__control1(System.Web.UI.HtmlTextWriter __w = {unknown}, System.Web.UI.Control parameterContainer = {unknown})
        [External Code]
        App_Web_bsbqxr44.dll!ASP.views_home_index_aspx.ProcessRequest(System.Web.HttpContext context = {unknown})
        [External Code]

    The user control reference is the usual one in /Views/Shared/Site.Master:

        <div id="logindisplay">
          <% Html.RenderPartial("LogOnUserControl"); %>
        </div>

    And the partial view LogOnUserControl.ashx is located in Views/Shared (and it is ASHX, not ASPX). The problem shows up when we try to access site pages that require auth and redirect. These pages are secured by a RequireSSL attribute (Redirect == true):

        [AttributeUsage(AttributeTargets.Method | AttributeTargets.Class, Inherited = true, AllowMultiple = false)]
        public sealed class RequireSslAttribute : FilterAttribute, IAuthorizationFilter
        {
            public bool Redirect { get; set; }

            // Methods
            public void OnAuthorization(AuthorizationContext filterContext)
            {
                // this gets messy when we are running custom ports
                // within the local dev fabric,
                // hence we disable this code in debug builds
        #if !DEBUG
                if (filterContext == null)
                {
                    throw new ArgumentNullException("filterContext");
                }
                if (filterContext.HttpContext.Request.IsSecureConnection) return;

                var canRedirect = string.Equals(filterContext.HttpContext.Request.HttpMethod, "GET", StringComparison.OrdinalIgnoreCase);
                if (canRedirect && Redirect)
                {
                    var builder = new UriBuilder
                    {
                        Scheme = "https",
                        Host = filterContext.HttpContext.Request.Url.Host,
                        Path = filterContext.HttpContext.Request.RawUrl
                    };
                    filterContext.Result = new RedirectResult(builder.ToString());
                }
                else
                {
                    throw new HttpException(0x193, "Access forbidden. The requested resource requires an SSL connection.");
                }
        #endif
            }
        }

    Obviously we compile in RELEASE for this case. Does anybody have any idea what could cause this strange exception and how to get rid of it?

    Read the article

  • Running a GWT application (including Applets) inside an IFRAME from an ASP.NET 3.5 app?

    - by Jay Stevens
    We are looking at integrating a full-blown GWT (Google Web Toolkit 2.0) application with an existing ASP.NET 3.5 application. My first gut reaction is that this is a horrible Frankenstein idea. However, the customer has insisted that we use this application, which was developed by a third party. I have almost NO CONTROL over the development of the GWT app.

    My first thought is to attempt to embed it in an IFRAME. Because GWT is running under Tomcat/Jakarta, it is hosted on a different server from the .NET app, so the iframe src will be a URL on the other machine. I need to utilize our own ASP.NET authorization scheme to restrict access to the embedded GWT application. The GWT app also uses embedded Java applets, which don't seem to be working right now inside the iframe. The GWT app makes calls to a backend server (using GWT-RPC?).

    Any major problems with this approach that anyone can see? Will GWT work in an iframe while hosted on a different machine?

    NOTE: SIMPLY ADDING A DIV WITH THE SAME NAME DOES NOT WORK FOR THIS!

    Read the article

  • Self-referencing object in JPA

    - by geoaxis
    Hello, I am trying to save a SystemUser entity in JPA. I also want to record certain things, like who created the SystemUser and who last modified it:

        @ManyToOne(targetEntity = SystemUser.class)
        @JoinColumn
        private SystemUser userWhoCreated;

        @Temporal(TemporalType.TIMESTAMP)
        @DateTimeFormat(iso=ISO.DATE_TIME)
        private Date timeCreated;

        @ManyToOne(targetEntity = SystemUser.class)
        @JoinColumn
        private SystemUser userWhoLastModified;

        @Temporal(TemporalType.TIMESTAMP)
        @DateTimeFormat(iso=ISO.DATE_TIME)
        private Date timeLastModified;

    I also want to ensure that these values are not null when persisted, so if I use the NotNull JPA annotation, that is easily solved (along with the reference to another entity).

    The problem description is simple: I cannot save rootuser without already having rootuser in the system, if I am to use a DataLoader class to persist the JPA entity. Every other, later user can easily be persisted with userWhoModified set to "systemuser", but systemuser itself cannot be added in this scheme. Is there a way to persist this first system user (I am thinking with SQL)? This is a typical bootstrap (chicken-or-egg) problem, I suppose.
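
    One escape from the chicken-and-egg that doesn't require dropping to SQL (an assumption on my part; the setter names below are guesses derived from the field names): most JPA providers will happily persist an entity that references itself, so the root user can be recorded as its own creator. A minimal sketch:

        import java.util.Date;
        import javax.persistence.EntityManager;
        import javax.persistence.EntityTransaction;

        public class DataLoader {
            // Bootstraps the very first SystemUser as its own creator/modifier.
            public static void bootstrapRootUser(EntityManager em) {
                EntityTransaction tx = em.getTransaction();
                tx.begin();
                SystemUser root = new SystemUser();
                root.setUserWhoCreated(root);        // self-reference breaks the cycle
                root.setUserWhoLastModified(root);
                Date now = new Date();
                root.setTimeCreated(now);
                root.setTimeLastModified(now);
                em.persist(root);
                tx.commit();
            }
        }

    With a sequence-style ID generator the id is known before the INSERT, so the self-referencing foreign key is satisfied within the single row; with IDENTITY-style generation the id isn't known until after the INSERT, so a deferred or initially-nullable constraint (or a one-off native SQL insert) may still be needed.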

    Read the article

  • Is it a JAXB bug?

    - by wd-shuang
    I have a schema whose element definition is as follows:

        <xs:complexType name="OriginalMessageContents1">
          <xs:sequence>
            <xs:any namespace="##any" processContents="skip"/>
            <xs:any namespace="##any" processContents="skip" minOccurs="0"/>
          </xs:sequence>
        </xs:complexType>

    I use an xjb file to generate the Java classes. The xjb is as follows:

        <jxb:bindings version="2.0" xmlns:jxb="http://java.sun.com/xml/ns/jaxb"
                      xmlns:xs="http://www.w3.org/2001/XMLSchema">
          <jxb:bindings schemaLocation="ibps.706.001.01.xsd" node="/xs:schema">
            <jxb:bindings node="//xs:complexType[@name='OriginalMessageContents1']/xs:sequence">
              <jxb:bindings node=".//xs:any[position()=1]">
                <jxb:property name="anyOne"/>
              </jxb:bindings>
              <jxb:bindings node=".//xs:any[position()=2]">
                <jxb:property name="anyTwo"/>
              </jxb:bindings>
            </jxb:bindings>
          </jxb:bindings>
        </jxb:bindings>

    The generated Java is:

        @XmlAccessorType(XmlAccessType.FIELD)
        @XmlType(name = "OriginalMessageContents1", propOrder = {
            "anyOne",
            "anyTwo"
        })
        public class OriginalMessageContents1 {

            @XmlAnyElement
            protected Element anyOne;

            @XmlAnyElement
            protected Element anyTwo;

            public Element getAnyOne() { return anyOne; }
            public void setAnyOne(Element value) { this.anyOne = value; }
            public Element getAnyTwo() { return anyTwo; }
            public void setAnyTwo(Element value) { this.anyTwo = value; }
        }

    When I unmarshal this object, getAnyOne gives me the contents of the second element, and getAnyTwo gives me null. Is it a bug? Can anybody help me?
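
    Possibly relevant rather than a definitive answer: the JAXB spec treats @XmlAnyElement as a catch-all and allows at most one such property per class, so two separately bound wildcard properties may simply be outside what the runtime supports, which would explain the wildcard content all funnelling into one property. A workaround sketch under that assumption is to map both xs:any particles into a single list-valued catch-all and index into it:

        import java.util.ArrayList;
        import java.util.List;
        import javax.xml.bind.annotation.XmlAccessType;
        import javax.xml.bind.annotation.XmlAccessorType;
        import javax.xml.bind.annotation.XmlAnyElement;
        import javax.xml.bind.annotation.XmlType;
        import org.w3c.dom.Element;

        @XmlAccessorType(XmlAccessType.FIELD)
        @XmlType(name = "OriginalMessageContents1")
        public class OriginalMessageContents1 {

            // One catch-all for both wildcard elements, kept in document order.
            @XmlAnyElement
            protected List<Element> contents = new ArrayList<Element>();

            public Element getAnyOne() {
                return contents.isEmpty() ? null : contents.get(0);
            }

            public Element getAnyTwo() {
                return contents.size() > 1 ? contents.get(1) : null;
            }
        }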

    Read the article

  • What are some good ways to store performance statistics in a database for querying later?

    - by Nathan
    Goal: store arbitrary performance statistics of stuff that you care about (how many customers are currently logged on, how many widgets are being processed, etc.) in a database, so that you can understand how your servers are doing over time.

    Assumptions: a database is already available, and you already know how to gather the information you want and are capable of putting it in the database however you like.

    Some ideal attributes of a solution:

    Causes no noticeable performance hit on the server being monitored
    Has a very high precision of measurement
    Does not store useless or redundant information
    Is easy to query (lends itself to gathering/displaying useful information)
    Lends itself to being graphed easily
    Is accurate
    Is elegant

    Primary questions:

    1) What is a good design/method/scheme for triggering the storing of statistics?
    2) What is a good database design for how to actually store the data?

    Example answers... that are sort of vague and lame:

    1) I could, once per [fixed time interval], store a row of data with all the performance measurements I care about in each column of one big flat table, indexed by timestamp and/or server.
    2) I could have a daemon monitoring the performance stuff I care about, and add a row whenever something changes (instead of at fixed time intervals) to a flat table as in #1.
    3) I could trigger as in #2, but store information about each aspect of performance that I'm measuring in separate tables, opening up the possibility of adding tons of rows for often-changing items, and few rows for seldom-changing items.

    Etc. In the end, I will implement something, even if it's some super-braindead approach I make up myself, but I'm betting there are some really smart people out there willing to share their experiences and bright ideas!
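
    As one concrete reading of example answer #1, here is a hedged Java/JDBC sketch of a fixed-interval sampler. The narrow metric_sample(recorded_at, server, metric, value) table, the JDBC URL, and the sampleLoggedOnCustomers() gatherer are all assumptions of mine; a narrow table trades more rows for easy per-metric queries and graphing (WHERE metric = '...'), compared with the wide one-column-per-statistic layout the answer describes:

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.PreparedStatement;
        import java.sql.Timestamp;

        public class MetricSampler {
            public static void main(String[] args) throws Exception {
                Connection conn = DriverManager.getConnection(
                        "jdbc:postgresql://dbhost/stats", "stats", "secret"); // assumed DSN
                PreparedStatement insert = conn.prepareStatement(
                        "INSERT INTO metric_sample (recorded_at, server, metric, value) VALUES (?, ?, ?, ?)");
                while (true) {
                    insert.setTimestamp(1, new Timestamp(System.currentTimeMillis()));
                    insert.setString(2, "web01");                     // assumed server name
                    insert.setString(3, "customers_logged_on");
                    insert.setDouble(4, sampleLoggedOnCustomers());   // hypothetical measurement
                    insert.executeUpdate();
                    Thread.sleep(60 * 1000L);                         // fixed 60-second interval
                }
            }

            // Stub standing in for however the real statistic is gathered.
            static double sampleLoggedOnCustomers() { return 42.0; }
        }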

    Read the article

  • Are projects with a high developer turnover rate really a bad thing?

    - by John
    I've inherited a lot of web projects that experienced high developer turnover rates. Sometimes these web projects are a horrible patchwork of band-aid solutions. Other times they can be somewhat maintainable mosaics of half-done features, each built in a different architectural style. Every time I inherit these projects, I wish the previous developers could explain to me why things got so bad.

    What puzzles me is the reaction of the owners (either a manager, a middleman company, or a client). They seem to think, "Well, if you leave, I'll just find another developer." Or they think, "Oh, it costs that much money to refactor the system? I know another developer who can do it at half the price. I'll hire him if I can't afford you."

    I'm guessing that the high developer turnover rate is related to the owner's mentality of "if you think it's a bad idea to build this, I'll just find another (possibly cheaper) developer to do what I want". For the owners, the approach seems to work, because their business is thriving. Unfortunately, it's no fun for the developers who go AWOL 3-4 months after working with poor code, strict timelines, and little feedback.

    So my question is the following: are these symptoms of a project really such a bad thing for business?

    a high developer turnover rate
    poorly built technology -- often a patchwork of different and inappropriately used architectural styles
    owners without a clear roadmap for their web project, who request features on a whim

    I've seen numerous businesses prosper while experiencing the symptoms above. So as a programmer, even though my instincts tell me the above points are terrible, I'm forced to take a step back and ask, "are things really that bad in the grand scheme of things?" If not, I will re-evaluate my approach to these projects.

    Read the article
