Search Results

Search found 9658 results on 387 pages for 'authentication provider'.


  • CSS rules not getting applied in JSPs?

    - by lakshmanan
    I'm building a web application using Struts2. I have an authentication interceptor at the top of the interceptor stack (I have set it up as such) which checks whether a user is logged in; if not, it returns a login page as the result. The login page uses a set of standard CSS rules that I use for all the JSPs in the app, but when the login page is served via that interceptor the CSS rules are not applied, and I don't know why. Here is my JSP login page. <%@ page language="java" contentType="text/html; charset=ISO-8859-1" pageEncoding="ISO-8859-1"%> <%@ taglib prefix="s" uri="/struts-tags" %> <!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd"> <html> <head> <link rel="stylesheet" href="styles/styles.css" type="text/css" /> <meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1"> <title>Projit Login </title> </head> <body> <div id="header"> <h1>Projit</h1> </div> <div id="col1"> <s:form action="/login.action"> <s:textfield name="some field" label="Enter somefield" /> <s:textfield name="username" label="Username" /> <s:password name="password" label="Password" /> <s:submit value="Sign In" /> </s:form> </div> <div id="col2"> <h3>Login</h3> </div> <div id="footer"> some footer </div> </body> </html> The CSS file is present in WebContent/styles/styles.css and I am using Eclipse for development. This problem of CSS rules not being applied also happens occasionally for other JSPs, in spite of the CSS file being in the correct location. I suspect the relative URL is behind this problem. I have tried hard but have not solved it yet. Please help.

    Read the article

  • How to get hibernate3-maven-plugin hbm2ddl to find JDBC driver?

    - by HDave
    I have a Java project I am building with Maven. I am now trying to get the hibernate3-maven-plugin to run the hbm2ddl tool to generate a schema.sql file I can use to create the database schema from my annotated domain classes. This is a JPA application that uses Hibernate as the provider. In my persistence.xml file I call out the mysql driver: <property name="hibernate.dialect" value="org.hibernate.dialect.MySQLDialect"/> <property name="hibernate.connection.driver_class" value="com.mysql.jdbc.Driver"/> When I run Maven, I see it processing all my classes, but when it goes to output the schema, I get the following error: ERROR org.hibernate.connection.DriverManagerConnectionProvider - JDBC Driver class not found: com.mysql.jdbc.Driver java.lang.ClassNotFoundException: com.mysql.jdbc.Driver I have the MySQL driver as a dependency of this module. However it seems like the hbm2ddl tool cannot find it. I would have guessed that the Maven plugin would have known to search the local Maven file repository for this driver. What gives? The relevant part of my pom.xml is this: <plugin> <groupId>org.codehaus.mojo</groupId> <artifactId>hibernate3-maven-plugin</artifactId> <executions> <execution> <phase>process-classes</phase> <goals> <goal>hbm2ddl</goal> </goals> </execution> </executions> <configuration> <components> <component> <name>hbm2ddl</name> <implementation>jpaconfiguration</implementation> </component> </components> <componentProperties> <persistenceunit>my-unit</persistenceunit> </componentProperties> </configuration> </plugin>

    Read the article

  • Using VCL for the web (intraweb) as a trick for adding web interface to a legacy non-tiered (2 tiers

    - by user193655
    My team is maintaining a huge client/server Win32 Delphi application. It is a client/server application (thick client) that uses DevArt (SDAC) components to connect to SQL Server. The business logic is often "trapped" in component event handlers; still, with some degree of refactoring it is doable to move the business logic into common units (a big part of this work has already been done during refactoring... maintaining legacy applications someone else wrote is very frustrating, but it is a very common job). Now there is a request for a web interface. I have several options of course; in this question I want to focus on the VCL for the Web (IntraWeb) option. The idea is to use the common code (the same .pas files) for both the client/server application and the web application. I have heard of many people who moved legacy apps from Delphi to IntraWeb, but here I am trying to keep the thick client too. The idea is to use common code, maybe with some compiler directives for the specific parts: {$IFDEF CLIENTSERVER} {here goes the thick client specific code} {$ELSE} {here goes the Intraweb specific code} {$ENDIF} Then another problem is the "migration plan". Let's say I have 300 features and on the first release only 50 of them will be available in the web application. How do I keep track of that? I was thinking of (ab)using Delphi interfaces to handle this. For example, for user authentication I could move all the related code into a procedure and declare an interface like: type IUserAuthentication= interface['{0D57624C-CDDE-458B-A36C-436AE465B477}'] procedure UserAuthentication; end; In this way, once I implement the IUserAuthentication interface in both applications (thick client and IntraWeb), I know that that feature has been "ported" to the web. Anyway, I don't know if this approach makes sense. I made a prototype to simulate the whole process. It works for a "Hello world" application, but I wonder whether it makes sense for a large application or whether the interface idea is only counter-productive and could backfire. My question is: does this approach make sense? (The interface idea is just an extra; it is not as important as the common-code part described above.) Is it a viable option? I understand it depends a lot on the kind of application; to be concrete, mine is in the CRM/accounting domain, and the number of concurrent users on a single installation is typically fewer than 20, with peaks of 50. EXTRA COMMENT (UPDATE): I ask this question because, since I don't have an n-tier application, I see IntraWeb as the only option for having a web application that shares code with the thick client. Developing web services from the Delphi code makes no sense in my specific case, so the alternative is to write the web interface using ASP.NET (duplicating the business logic), but in that case I cannot take advantage of the common code in an easy way. Yes, I could maybe use DLLs, but my code is not suitable for that.

    Read the article

  • Reporting Services as PDF through WebRequest in C# 3.5 "Not Supported File Type"

    - by Heath Allison
    I've inherited a legacy application that is supposed to grab an on-the-fly PDF from a Reporting Services server. Everything works fine up until the point where you try to open the PDF being returned, and Adobe Acrobat tells you: Adobe Reader could not open 'thisStoopidReport.pdf' because it is either not a supported file type or because the file has been damaged (for example, it was sent as an email attachment and wasn't correctly decoded). I've done some initial troubleshooting on this. If I replace the URL in the WebRequest.Create() call with a valid PDF file on my local machine (e.g. @"C:temp/validpdf.pdf") then I get a valid PDF. The report itself seems to work fine: if I manually type the URL of the Reporting Services report that should generate the PDF, I am prompted for user authentication, but after supplying it I get a valid PDF file. I've replaced the actual URL, username, password and domain strings in the code below with bogus values for obvious reasons. WebRequest request = WebRequest.Create(@"http://x.x.x.x/reportServer?/reports/reportNam&rs:format=pdf&rs:command=render&rc:parameters=blahblahblah"); int totalSize = 0; request.Credentials = new NetworkCredential("validUser", "validPass", "validDomain"); request.Timeout = 360000; // 6 minutes in milliseconds. request.Method = WebRequestMethods.Http.Post; request.ContentLength = 0; WebResponse response = request.GetResponse(); Response.Clear(); BinaryReader reader = new BinaryReader(response.GetResponseStream()); Byte[] buffer = new byte[2048]; int count = reader.Read(buffer, 0, 2048); while (count > 0) { totalSize += count; Response.OutputStream.Write(buffer, 0, count); count = reader.Read(buffer, 0, 2048); } Response.ContentType = "application/pdf"; Response.Cache.SetCacheability(HttpCacheability.Private); Response.CacheControl = "private"; Response.Expires = 30; Response.AddHeader("Content-Disposition", "attachment; filename=thisStoopidReport.pdf"); Response.AddHeader("Content-Length", totalSize.ToString()); reader.Close(); Response.Flush(); Response.End();
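
    One way to narrow this down (a diagnostic sketch, not the poster's code; the server name, report path, credentials and the NTLM scheme are assumptions) is to check what the report server actually returned before streaming it on. If the ContentType is text/html, the bytes being written out are an error or login page rather than a PDF, which would produce exactly the Adobe Reader message above. URL-access rendering also works as a plain GET, so the POST with ContentLength = 0 in the original code is another thing worth trying without.

    ```csharp
    using System;
    using System.IO;
    using System.Net;
    using System.Web;

    // Diagnostic sketch only. The URL, credentials and NTLM scheme are placeholders.
    static void StreamReport(HttpResponse pageResponse)
    {
        var request = (HttpWebRequest)WebRequest.Create(
            "http://reportserver/ReportServer?/Reports/MyReport&rs:Format=PDF&rs:Command=Render");

        var credentials = new CredentialCache();
        credentials.Add(new Uri("http://reportserver/"), "NTLM",
                        new NetworkCredential("validUser", "validPass", "validDomain"));
        request.Credentials = credentials;
        request.PreAuthenticate = true;
        request.Method = "GET";          // URL-access rendering works as a plain GET

        using (var response = (HttpWebResponse)request.GetResponse())
        using (Stream body = response.GetResponseStream())
        {
            // text/html here means SSRS sent back an error or login page, not the report.
            if (response.ContentType.IndexOf("pdf", StringComparison.OrdinalIgnoreCase) < 0)
                throw new InvalidOperationException("Report server returned: " + response.ContentType);

            pageResponse.ContentType = "application/pdf";
            byte[] buffer = new byte[8192];
            int read;
            while ((read = body.Read(buffer, 0, buffer.Length)) > 0)
                pageResponse.OutputStream.Write(buffer, 0, read);
        }
    }
    ```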

    Read the article

  • .NET 2.0 Process Elevation for App Installation

    - by Brian Gillespie
    We have an application written in both C++ and .NET that installs for all users in the Program Files folder. This application downloads new versions of itself (as MSI installers) and spawns the new installer process to replace itself. The install process as it exists today: Copy an install manager app (C#, .NET 2.0) to the temp directory. Call this 'Manager' Manager is executed with elevated privs per this article. The original application exits. Manager spawns the MSI installer (with elevated privs, since the copy is elevated) Manager spawns the new version of the app. The bug: The newly installed app is running in an elevated state. This causes problems I won't enumerate here. Ideally, the launch of the newly installed app would be run with the permissions of the original user. I can't figure out how to demote the app back to being the standard user after elevation. An inelegant hack: (yeah, yeah, this whole process is inelegant anyway) Copy the install manager to the temp directory Run the install manager with standard user privs. Lets call this instance 'LowlyManager'. Original application exits. LowlyManager spawns the app again, this time with elevated privs. Let's name this instance 'UpperManagement' UpperManagement spawns the installer UpperManagement exits gracefully, returning the exit code of the installer. LowlyManager interprets the error code from UpperManagement, and spawns the newly installed application. This time as the original invoker. Is there a better way to do this? (I've left out a bunch of other details before and after these steps that make the process smoother for the user, but this should be enough to understand the core of the problem I'm trying to solve.) Other requirements: We can't install as a per-user app The user shouldn't be presented with an authentication dialog box if UAC would have simply asked "are you sure you want to allow this?". I think this might kill a solution using WindowsImpersonationContext, but I'm not sure. The system needs to work on XP, Vista, and Windows 7 (even if there is a separate process for XP).
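
    One commonly cited workaround for the final launch step, sketched below under the assumption that the installed path is known: ask the non-elevated desktop shell (explorer.exe) to start the application, so the new process inherits the shell's medium integrity level rather than the elevated installer's token. It avoids any credential prompt, but it relies on explorer.exe behaviour rather than a documented API, so treat it as a sketch, not a guaranteed solution.

    ```csharp
    using System;
    using System.Diagnostics;
    using System.IO;

    static class DeElevatedLauncher
    {
        // Workaround sketch (.NET 2.0 friendly): have the non-elevated shell start the app
        // so it runs with the shell's token instead of the elevated installer's.
        // The target path below is a placeholder.
        public static void Launch(string exePath)
        {
            ProcessStartInfo psi = new ProcessStartInfo();
            psi.FileName = Path.Combine(
                Environment.GetEnvironmentVariable("SystemRoot"), "explorer.exe");
            psi.Arguments = "\"" + exePath + "\"";
            psi.UseShellExecute = true;
            Process.Start(psi);
        }
    }

    // e.g. DeElevatedLauncher.Launch(@"C:\Program Files\MyApp\MyApp.exe");
    ```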

    Read the article

  • No value given for one or more required parameters in connection initialisation

    - by DarkJaff
    Hi everyone, I have a C# Windows Forms application that uses an Access database. The application works perfectly in debug and release, and on every version of Windows, but it crashes on one computer running Windows 7. The message I get is: System.Data.OleDb.OleDbException: No value given for one or more required parameters. The function that is supposedly failing is this: public void InitConnection(string strFile) { string strConnection = String.Format("Provider=Microsoft.Jet.OLEDB.4.0;Data Source={0};User Id=admin;Password=;", strFile); m_conn = new OleDbConnection(strConnection); try { // Check that the connection is not already open if (m_conn.State != ConnectionState.Open) { m_conn.Open(); m_VCoeffModele = GetModeleCoeff(); } } catch (Exception err) { throw err; } } I think it's something related to the connection string, but why only on that computer? Thanks for your help! DarkJaff EDIT: Here is the complete error message: See the end of this message for details on invoking just-in-time (JIT) debugging instead of this dialog box. ***** Exception Text ******* System.Data.OleDb.OleDbException: No value given for one or more required parameters. at System.Data.OleDb.OleDbCommand.ExecuteCommandTextErrorHandling(OleDbHResult hr) at System.Data.OleDb.OleDbCommand.ExecuteCommandTextForSingleResult(tagDBPARAMS dbParams, Object& executeResult) at System.Data.OleDb.OleDbCommand.ExecuteCommandText(Object& executeResult) at System.Data.OleDb.OleDbCommand.ExecuteCommand(CommandBehavior behavior, Object& executeResult) at System.Data.OleDb.OleDbCommand.ExecuteReaderInternal(CommandBehavior behavior, String method) at System.Data.OleDb.OleDbCommand.ExecuteReader(CommandBehavior behavior) at System.Data.OleDb.OleDbCommand.ExecuteReader() at DatabaseLayer.DatabaseFacade.GetModeleCoeff() at DatabaseLayer.DatabaseFacade.InitConnection(String strFile) at CalculatriceCHW.ListeMesure.OuvrirFichier(String strFichier) at CalculatriceCHW.ListeMesure.nouveauFichierMenu_Click(Object sender, EventArgs e) at System.Windows.Forms.ToolStripItem.RaiseEvent(Object key, EventArgs e) at System.Windows.Forms.ToolStripMenuItem.OnClick(EventArgs e) at System.Windows.Forms.ToolStripItem.HandleClick(EventArgs e) at System.Windows.Forms.ToolStripItem.HandleMouseUp(MouseEventArgs e) at System.Windows.Forms.ToolStripItem.FireEventInteractive(EventArgs e, ToolStripItemEventType met) at System.Windows.Forms.ToolStripItem.FireEvent(EventArgs e, ToolStripItemEventType met) at System.Windows.Forms.ToolStrip.OnMouseUp(MouseEventArgs mea) at System.Windows.Forms.ToolStripDropDown.OnMouseUp(MouseEventArgs mea) at System.Windows.Forms.Control.WmMouseUp(Message& m, MouseButtons button, Int32 clicks) at System.Windows.Forms.Control.WndProc(Message& m) at System.Windows.Forms.ScrollableControl.WndProc(Message& m) at System.Windows.Forms.ToolStrip.WndProc(Message& m) at System.Windows.Forms.ToolStripDropDown.WndProc(Message& m) at System.Windows.Forms.Control.ControlNativeWindow.OnMessage(Message& m) at System.Windows.Forms.Control.ControlNativeWindow.WndProc(Message& m) at System.Windows.Forms.NativeWindow.Callback(IntPtr hWnd, Int32 msg, IntPtr wparam, IntPtr lparam)
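
    A diagnostic worth running on the failing machine (a sketch; the table name is a placeholder): Jet raises "No value given for one or more required parameters" when the SQL run by GetModeleCoeff() references a column or identifier that does not exist in the .mdb it actually opened, so dumping the columns of that table shows whether the copy of the database on that Windows 7 machine is simply out of date.

    ```csharp
    using System;
    using System.Data;
    using System.Data.OleDb;

    // Diagnostic sketch: list the columns Jet actually sees on the failing machine.
    // "Coefficients" is a placeholder for the table queried by GetModeleCoeff().
    static void DumpColumns(OleDbConnection conn)
    {
        DataTable schema = conn.GetOleDbSchemaTable(
            OleDbSchemaGuid.Columns,
            new object[] { null, null, "Coefficients", null });

        foreach (DataRow row in schema.Rows)
            Console.WriteLine(row["TABLE_NAME"] + "." + row["COLUMN_NAME"]);
    }
    ```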

    Read the article

  • The remote server returned an error: NotFound.

    - by xscape
    Hi, I'm trying to retrieve a string from my old web service, but it gives me the error "The remote server returned an error: NotFound." and its InnerException is {System.Net.WebException: The remote server returned an error: NotFound. --- System.Net.WebException: The remote server returned an error: NotFound. at System.Net.Browser.BrowserHttpWebRequest.InternalEndGetResponse(IAsyncResult asyncResult) at System.Net.Browser.BrowserHttpWebRequest.<>c__DisplayClass5.<>b__4(Object sendState) at System.Net.Browser.AsyncHelper.<>c__DisplayClass2.<>b__0(Object sendState) --- End of inner exception stack trace --- at System.Net.Browser.AsyncHelper.BeginOnUI(SendOrPostCallback beginMethod, Object state) at System.Net.Browser.BrowserHttpWebRequest.EndGetResponse(IAsyncResult asyncResult) at System.ServiceModel.Channels.HttpChannelFactory.HttpRequestChannel.HttpChannelAsyncRequest.CompleteGetResponse(IAsyncResult result)} This is the method where the error occurs; it returns a string: void client_ValidateUserEncryptedCompleted(object sender, DummyWS.ValidateUserEncryptedCompletedEventArgs e) { object token = e.Result; client = new DummyWS.MachineHistoryWSSoapClient(); if (token != null) { client.GetSummaryXMLAsync(token, "", ""); } } I am currently using Silverlight 4.0 and my ServiceReferences.ClientConfig is <configuration> <system.serviceModel> <bindings> <basicHttpBinding> <binding name="MachineHistoryWSSoap" maxBufferSize="2147483647" maxReceivedMessageSize="2147483647"> <security mode="None" /> </binding> </basicHttpBinding> </bindings> <client> <endpoint address="http://localhost/MHVwsModified/MachineHistoryWS.asmx" binding="basicHttpBinding" bindingConfiguration="MachineHistoryWSSoap" contract="DummyWS.MachineHistoryWSSoap" name="MachineHistoryWSSoap" /> </client> </system.serviceModel> My Web.config in my web service is <configuration xmlns="http://schemas.microsoft.com/.NetConfiguration/v2.0"> <system.web> <compilation debug="true"> <assemblies> <add assembly="System.Windows.Forms, Version=2.0.0.0, Culture=neutral, PublicKeyToken=B77A5C561934E089" /></assemblies></compilation> <authentication mode="Windows" /> </system.web> <system.webServer> <directoryBrowse enabled="true" /> </system.webServer> Any help will be appreciated, thank you.
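
    A sketch of one thing to try (this goes in the existing App constructor of the Silverlight project): switch the proxies to the client HTTP stack. The browser stack reports almost every failure, including HTTP 500s from the service, a missing clientaccesspolicy.xml, or an endpoint address that is only valid on the developer machine (the config above points at localhost), as the same generic "NotFound", while the client stack surfaces the real status code.

    ```csharp
    using System.Net;
    using System.Net.Browser;
    using System.Windows;

    public partial class App : Application
    {
        public App()
        {
            // Available since Silverlight 3: route HTTP traffic through the client
            // stack so the real status code is reported instead of a blanket NotFound.
            // Must run before any service proxy is created.
            WebRequest.RegisterPrefix("http://", WebRequestCreator.ClientHttp);

            InitializeComponent();
            // ... existing startup wiring ...
        }
    }
    ```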

    Read the article

  • AD - DirectoryServices: VBNET2.0 - Speaking architecture...

    - by Will Marcouiller
    I've been mandated to write an application to migrate the Active Directory access models to another environment. Here's the context: I'm stuck with VB.NET 2005 and .NET Framework 2.0; The application must use the Windows authenticated user to manage AD; The objects I have to handle are Groups, Users and OrganizationalUnits; I intend to use the Façade design pattern to provider ease of use and a fully reusable code; I plan to write a factory for each of the objects managed (group, ou, user); The use of Attributes should be useful here, I guess; As everything is about the DirectoryEntry class when accessing the AD, it seems a good candidate for generic types. Obligatory features: User creates new OUs manually; User creates new group manually; User creates new user (these users are services accounts) manually; Application reads an XML file which contains the OUs, groups and users to create; Application informs the user about the OUs, groups and users that shall be created; User specifies the domain environment where to migrate the XML input file designated objects; User makes changes if needed, and launches the task operations; Application performs required by the XML input file operations against the underlying AD as specified by the user; Application informs the user upon completion. Linear features: User fetches OUs, groups, users; User changes OUs, groups, users; User deletes OUs, groups, users; The application logs AD entries and operations performed, plus errors and exceptions; Nice-to-have features: Application rollbacks operations on error or exception. I've been working for weeks now to get acquainted with the AD and the System.DirectoryServices assembly. But I don't seem to find a way to be fully satisfied with what I'm doing and always looking for better. I have studied Bret de Smet's Linq to AD on CodePlex, but then again, I can't use it as I'm stuck with .NET 2.0, so no Linq! But I've learned about Attributes, and seen that he's working with generic types as he codes a DirectorySource class to perform the operations for OUs, groups and users. Any suggestions? Thanks for any help, code sample, ideas, architural solution, everything!
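
    For scale, the ADSI calls that the proposed façade and factories would wrap are quite small; a minimal sketch in C# (the VB.NET 2005 translation is mechanical, and the LDAP path and description text are placeholders):

    ```csharp
    using System;
    using System.DirectoryServices;

    // Minimal sketch of the call an OU factory would wrap.
    public static class OrganizationalUnitFactory
    {
        public static string Create(string parentLdapPath, string name)
        {
            // e.g. parentLdapPath = "LDAP://OU=Apps,DC=example,DC=local" (placeholder)
            using (DirectoryEntry parent = new DirectoryEntry(parentLdapPath))
            using (DirectoryEntry ou = parent.Children.Add("OU=" + name, "organizationalUnit"))
            {
                ou.Properties["description"].Value = "Created by the migration tool";
                ou.CommitChanges();
                return ou.Path;          // hand back the new entry's LDAP path
            }
        }
    }
    ```

    Because no credentials are passed to DirectoryEntry, the bind runs as the current Windows user, which matches the second requirement in the list above.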

    Read the article

  • java.sql.SQLWarning: [Microsoft][SQLServer 2000 Driver for JDBC]Database changed to X

    - by adisembiring
    Hi all, I'm using Hibernate 3.2.1 and database SQLServer2000 while I'm try to insert some data using my dao, some warning occurred like this: java.sql.SQLWarning: [Microsoft][SQLServer 2000 Driver for JDBC]Database changed to BTN_SPP_DB at com.microsoft.jdbc.base.BaseWarnings.createSQLWarning(Unknown Source) at com.microsoft.jdbc.base.BaseWarnings.get(Unknown Source) at com.microsoft.jdbc.base.BaseConnection.getWarnings(Unknown Source) at org.hibernate.util.JDBCExceptionReporter.logAndClearWarnings(JDBCExceptionReporter.java:22) at org.hibernate.jdbc.ConnectionManager.closeConnection(ConnectionManager.java:443) at org.hibernate.jdbc.ConnectionManager.aggressiveRelease(ConnectionManager.java:400) at org.hibernate.jdbc.ConnectionManager.afterTransaction(ConnectionManager.java:287) at org.hibernate.jdbc.JDBCContext.afterTransactionCompletion(JDBCContext.java:221) at org.hibernate.transaction.JDBCTransaction.commit(JDBCTransaction.java:119) at co.id.hanoman.btnmw.spp.dao.TagihanDao.save(TagihanDao.java:43) at co.id.hanoman.btnmw.spp.dao.TagihanDao.save(TagihanDao.java:1) at co.id.hanoman.btnmw.spp.dao.test.TagihanDaoTest.testSave(TagihanDaoTest.java:81) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source) at java.lang.reflect.Method.invoke(Unknown Source) at org.junit.internal.runners.TestMethodRunner.executeMethodBody(TestMethodRunner.java:99) at org.junit.internal.runners.TestMethodRunner.runUnprotected(TestMethodRunner.java:81) at org.junit.internal.runners.BeforeAndAfterRunner.runProtected(BeforeAndAfterRunner.java:34) at org.junit.internal.runners.TestMethodRunner.runMethod(TestMethodRunner.java:75) at org.junit.internal.runners.TestMethodRunner.run(TestMethodRunner.java:45) at org.junit.internal.runners.TestClassMethodsRunner.invokeTestMethod(TestClassMethodsRunner.java:66) at org.junit.internal.runners.TestClassMethodsRunner.run(TestClassMethodsRunner.java:35) at org.junit.internal.runners.TestClassRunner$1.runUnprotected(TestClassRunner.java:42) at org.junit.internal.runners.BeforeAndAfterRunner.runProtected(BeforeAndAfterRunner.java:34) at org.junit.internal.runners.TestClassRunner.run(TestClassRunner.java:52) at org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:45) at org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38) at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:460) at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:673) at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:386) at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:196) my hibernate initialization log is: 2010-04-26 22:54:05,203 INFO [Version] Hibernate Annotations 3.3.0.GA 2010-04-26 22:54:05,234 INFO [Environment] Hibernate 3.2.1 2010-04-26 22:54:05,234 INFO [Environment] hibernate.properties not found 2010-04-26 22:54:05,234 INFO [Environment] Bytecode provider name : cglib 2010-04-26 22:54:05,234 INFO [Environment] using JDK 1.4 java.sql.Timestamp handling 2010-04-26 22:54:05,343 INFO [Configuration] configuring from resource: /hibernate.cfg.xml 2010-04-26 22:54:05,343 INFO [Configuration] Configuration resource: /hibernate.cfg.xml 2010-04-26 22:54:05,406 DEBUG [DTDEntityResolver] trying to resolve system-id 
[http://hibernate.sourceforge.net/hibernate-configuration-3.0.dtd] 2010-04-26 22:54:05,406 DEBUG [DTDEntityResolver] recognized hibernate namespace; attempting to resolve on classpath under org/hibernate/ 2010-04-26 22:54:05,406 DEBUG [DTDEntityResolver] located [http://hibernate.sourceforge.net/hibernate-configuration-3.0.dtd] in classpath 2010-04-26 22:54:05,453 DEBUG [Configuration] hibernate.dialect=org.hibernate.dialect.SQLServerDialect 2010-04-26 22:54:05,453 DEBUG [Configuration] hibernate.connection.driver_class=com.microsoft.jdbc.sqlserver.SQLServerDriver 2010-04-26 22:54:05,453 DEBUG [Configuration] hibernate.connection.url=jdbc:microsoft:sqlserver://12.56.11.65:1433;databaseName=BTN_SPP_DB 2010-04-26 22:54:05,453 DEBUG [Configuration] hibernate.connection.username=spp 2010-04-26 22:54:05,453 DEBUG [Configuration] hibernate.connection.password=spp

    Read the article

  • Set maven to use archiva repositories WITHOUT using activeByDefault?

    - by Sam Levin
    I am very close to finally having a working setup with archiva and maven. The last thing that's really boggling me, is how to set up my internal and snapshot repositories - without using a profile which contains activeByDefault set to true. I am using a SUPER super pom - a company-wide pom which contains distributionManagement information for releases. I was thinking that I could specify the repositories in this pom, and configure the authentication settings in settings.xml? Can I use repositories tag without a profile? There should be no "profile" for my internal and snapshot repositories, as they will never change... What I'm trying to steer clear from, is using a "default" profile, which is active all the time. I hear activeByDefault is NOT a best practice and I don't intend to use it. With that said, how should I go about doing this? My internal repo is a mirror of the maven central repo, so I would like to lock down my developers to ONLY use our internal artifact server. Remember - I do NOT want a profile with activeByDefault set to true. I cannot stress this enough! Should I use Maven mirrors? Should I "add" additional repositories? If I take the repositories tag instead of the mirrors tag, will maven force builds to use ONLY my archiva settings, instead of the default maven central? Or is what I seek to accomplish able to be done using only the mirrors tag in maven? I know how to configure repo credentials when using repositories tag, but not with mirrors. How is this done? Is providing credentials for anything in mirrors tags the same as for anything in repositories tags? Am I missing something obvious? I've had it up to here with getting things up and running using maven. I know it will be worthwhile in the end, but it is surely causing me a ton of aggravation and resources seem to be sparse. Either that, or people are content using it however they please without regard to best-practices. Thank you

    Read the article

  • Logging in with WebFinger and OpenID

    - by Ryan
    I would like to apologize in advance for the ugly formatting. In order to talk about the problem, I need to be posting a bunch of URLs, but the excessive URLs and my lack of reputation makes StackOverflow think I could be a spammer. Any instance of 'ht~tp' is supposed to be 'http'. '{dot}' is supposed to be '.' and '{colon}' is supposed to be ':'. Also, my lack of reputation has prevented me from tagging my question with 'webfinger' and 'google-profiles'. Onto my question: I am messing around with WebFinger and trying to create a small rails app that enables a user to log in using nothing but their WebFinger account. I can succesfully finger myself, and I get back an XRD file with the following snippet: Link rel="ht~tp://specs{dot}openid{dot}net/auth/2.0/provider" href="ht~tp://www{dot}google{dot}com/profiles/{redacted}"/ Which, to me, reads, "I have an OpenID 2.0 login at the url: ht~tp://www{dot}google{dot}com/profiles/{redacted}". But when I try to use that URL to log in, I get the following error OpenID::DiscoveryFailure (Failed to fetch identity URL ht~tp://www{dot}google{dot}com/profiles/{redacted} : Error encountered in redirect from ht~tp://www{dot}google{dot}com/profiles/{redacted}: Error fetching /profiles/{Redacted}: Connection refused - connect(2)): When I replace the profile URL with 'ht~tps://www{dot}google{dot}com/accounts/o8/id', the login works perfectly. here is the code that I am using (I'm using RedFinger as a plugin, and JanRain's ruby-openid, installed without the gem) require "openid" require 'openid/store/filesystem.rb' class SessionsController < ApplicationController def new @session = Session.new #render a textbox requesting a webfinger address, and a submit button end def create ####################### # # Pay Attention to this section right here # ####################### #use given webfinger address to retrieve openid login finger = Redfinger.finger(params[:session][:webfinger_address]) openid_url = finger.open_id.first.to_s #openid_url is now: ht~tp://www{dot}google{dot}com/profiles/{redacted} #Get needed info about the acquired OpenID login file_store = OpenID::Store::Filesystem.new("./noncedir/") consumer = OpenID::Consumer.new(session,file_store) response = consumer.begin(openid_url) #ERROR HAPPENS HERE #send user to OpenID login for verification redirect_to response.redirect_url('ht~tp://localhost{colon}3000/','ht~tp://localhost{colon}3000/sessions/complete') end def complete #interpret return parameters file_store = OpenID::Store::Filesystem.new("./noncedir/") consumer = OpenID::Consumer.new(session,file_store) response = consumer.complete params case response.status when OpenID::SUCCESS session[:openid] = response.identity_url #redirect somehwere here end end end Is it possible for me to use the URL I received from my WebFinger to log in with OpenID?

    Read the article

  • server|configuration problem, a php script just die with no error log & no reason

    - by Roberto
    Hi (first of all, thanks for your attention and sorry for my bad English; also, this is not a programming error, or that's what I think. I believe it is an error in some configuration of the server, or something else, but I don't know what.) I have a PHP script (running as a Linux process, not in the web browser) that sends SMS via SMPP on port 2055 (using sockets in PHP) and then inserts about 10,000 rows into a MySQL database; the script gets the data from an XML file. At first it was running on a shared server (HostGator is our hosting provider) and it worked fine, with no trouble. But five months later a problem appeared: the process just dies for no apparent reason. The script had only sent and inserted 700 rows, the process didn't show any warning or error, nothing appears in the error logs, and I didn't make any change to the script. HostGator never helped us, so we decided to move the script from the shared server to a dedicated server. I thought it was a memory problem or something like that, but when we moved the script to the dedicated server the problem got worse: the script dies when it has sent and inserted only 40 to 50 rows. Some information about this error: the shared server runs Red Hat 4.1.2-46 and the dedicated server runs CentOS 5.4. I have commented out the line that sends the SMS, and the problem remains on the shared server; at the beginning the script was fine, then it started to die after inserting about 700 rows, and now it dies after about 2,500 rows, which is better, but we didn't change anything. On the dedicated server the script dies after inserting about 40 rows. Before it dies, the script becomes a zombie process and we don't know why. Memory usage appears to be 0.3%, and CPU 0.7% to 1%. I have changed the PHP memory limit to 128 MB, and even to -1 (so PHP has no limit), but the problem remains. We have a limit of 50 simultaneous MySQL connections, so I don't think that is the problem. I am using mysqli to connect from PHP to MySQL. HostGator reports that they haven't made any change or update on the servers. What could the problem be? What should I do? What should I search for? Is there something in the logic I am missing? What steps should I follow when managing and debugging processes on Linux? Thank you very much; I think this is not a programming problem, but you have more experience than me. Thanks! Bye! :)

    Read the article

  • Can't read excel file after creating it using File.WriteAllText() function

    - by Srikanth Mattihalli
    public void ExportDataSetToExcel(DataTable dt) { HttpResponse response = HttpContext.Current.Response; response.Clear(); response.Charset = "utf-8"; response.ContentEncoding = Encoding.GetEncoding("utf-8"); response.ContentType = "application/vnd.ms-excel"; Random Rand = new Random(); int iNum = Rand.Next(10000, 99999); string extension = ".xls"; string filenamepath = AppDomain.CurrentDomain.BaseDirectory + "graphs\\" + iNum + ".xls"; string file_path = "graphs/" + iNum + extension; response.AddHeader("Content-Disposition", "attachment;filename=\"" + iNum + "\""); string query = "insert into graphtable(graphtitle,graphpath,creategraph,year) VALUES('" + iNum.ToString() + "','" + file_path + "','" + true + "','" + DateTime.Now.Year.ToString() + "')"; try { int n = connect.UpdateDb(query); if (n > 0) { resultLabel.Text = "Merge Successfull"; } else { resultLabel.Text = " Merge Failed"; } resultLabel.Visible = true; } catch { } using (StringWriter sw = new StringWriter()) { using (HtmlTextWriter htw = new HtmlTextWriter(sw)) { // instantiate a datagrid DataGrid dg = new DataGrid(); dg.DataSource = dt; //ds.Tables[0]; dg.DataBind(); dg.RenderControl(htw); File.WriteAllText(filenamepath, sw.ToString()); // File.WriteAllText(filenamepath, sw.ToString(), Encoding.UTF8); response.Write(sw.ToString()); response.End(); } } } Hi all, I have created an excel sheet from datatable using above function. I want to read the excel sheet programatically using the below connectionstring. This string works fine for all other excel sheets but not for the one i created using the above function. I guess it is because of excel version problem. OleDbConnection conn= new OleDbConnection("Data Source='" + path +"';provider=Microsoft.Jet.OLEDB.4.0;Extended Properties=Excel 8.0;";); Can anyone suggest a way by which i can create an excel sheet such that it is readable again using above query. I cannot use Microsoft InterOp library as it is not supported by my host.
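
    The file written by File.WriteAllText() here is really an HTML table (the rendered DataGrid) saved with an .xls extension, which Excel will open but the Jet Excel provider cannot parse. One alternative that keeps everything file-based and needs no Interop (a sketch; paths and the naive comma handling are placeholders): write the DataTable as a real CSV and read it back with the Jet text driver.

    ```csharp
    using System;
    using System.Collections.Generic;
    using System.Data;
    using System.Data.OleDb;
    using System.IO;

    // Sketch: export as CSV, then read it back with the Jet text driver.
    static void ExportCsv(DataTable dt, string path)
    {
        List<string> lines = new List<string>();
        List<string> header = new List<string>();
        foreach (DataColumn c in dt.Columns)
            header.Add(c.ColumnName);
        lines.Add(string.Join(",", header.ToArray()));

        foreach (DataRow r in dt.Rows)
        {
            List<string> fields = new List<string>();
            foreach (object v in r.ItemArray)
                fields.Add(Convert.ToString(v).Replace(",", " "));   // naive escaping
            lines.Add(string.Join(",", fields.ToArray()));
        }
        File.WriteAllLines(path, lines.ToArray());
    }

    static DataTable ReadCsv(string folder, string fileName)
    {
        // Data Source is the directory; the file name goes in the FROM clause.
        string connStr = "Provider=Microsoft.Jet.OLEDB.4.0;Data Source=" + folder +
                         ";Extended Properties=\"text;HDR=Yes;FMT=Delimited\"";
        using (OleDbConnection conn = new OleDbConnection(connStr))
        using (OleDbDataAdapter adapter =
               new OleDbDataAdapter("SELECT * FROM [" + fileName + "]", conn))
        {
            DataTable result = new DataTable();
            adapter.Fill(result);   // Fill opens and closes the connection itself
            return result;
        }
    }

    // e.g. ExportCsv(dt, Path.Combine(folder, "12345.csv")); ReadCsv(folder, "12345.csv");
    ```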

    Read the article

  • Best Practices / Patterns for Enterprise Protection/Remediation of SSNs (Social Security Numbers)

    - by Erik Neu
    I am interested in hearing about enterprise solutions for SSN handling. (I looked pretty hard for any pre-existing post on SO, including reviewing the terriffic SO automated "Related Questions" list, and did not find anything, so hopefully this is not a repeat.) First, I think it is important to enumerate the reasons systems/databases use SSNs: (note—these are reasons for de facto current state—I understand that many of them are not good reasons) Required for Interaction with External Entities. This is the most valid case—where external entities your system interfaces with require an SSN. This would typically be government, tax and financial. SSN is used to ensure system-wide uniqueness. SSN has become the default foreign key used internally within the enterprise, to perform cross-system joins. SSN is used for user authentication (e.g., log-on) The enterprise solution that seems optimum to me is to create a single SSN repository that is accessed by all applications needing to look up SSN info. This repository substitutes a globally unique, random 9-digit number (ASN) for the true SSN. I see many benefits to this approach. First of all, it is obviously highly backwards-compatible—all your systems "just" have to go through a major, synchronized, one-time data-cleansing exercise, where they replace the real SSN with the alternate ASN. Also, it is centralized, so it minimizes the scope for inspection and compliance. (Obviously, as a negative, it also creates a single point of failure.) This approach would solve issues 2 and 3, without ever requiring lookups to get the real SSN. For issue #1, authorized systems could provide an ASN, and be returned the real SSN. This would of course be done over secure connections, and the requesting systems would never persist the full SSN. Also, if the requesting system only needs the last 4 digits of the SSN, then that is all that would ever be passed. Issue #4 could be handled the same way as issue #1, though obviously the best thing would be to move away from having users supply an SSN for log-on. There are a couple of papers on this: UC Berkely: http://bit.ly/bdZPjQ Oracle Vault: bit.ly/cikbi1
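
    To make the repository idea concrete, a deliberately small interface sketch (the name and method split are assumptions, not an established standard): every consuming system stores only the alternate number, and the two lookups that return real data can be authorized and audited separately.

    ```csharp
    // Illustrative sketch only; ISsnVault and its members are made-up names.
    // Consumers persist the alternate number (ASN); only this service maps it back.
    public interface ISsnVault
    {
        // Used during the one-time data-cleansing exercise to swap real SSNs for ASNs.
        string GetOrCreateAlternate(string ssn);

        // Full SSN, restricted to systems that feed external entities (case 1 above).
        string RevealSsn(string alternate);

        // Partial value for display or verification screens.
        string GetLastFour(string alternate);
    }
    ```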

    Read the article

  • Silverlight with MVVM Inheritance: ModelView and View matching the Model

    - by moonground.de
    Hello Stackoverflowers! :) Today I have a special question on Silverlight (4 RC) MVVM and inheritance concepts and looking for a best practice solution... I think that i understand the basic idea and concepts behind MVVM. My Model doesn't know anything about the ViewModel as the ViewModel itself doesn't know about the View. The ViewModel knows the Model and the Views know the ViewModels. Imagine the following basic (example) scenario (I'm trying to keep anything short and simple): My Model contains a ProductBase class with a few basic properties, a SimpleProduct : ProductBase adding a few more Properties and ExtendedProduct : ProductBase adding another properties. According to this Model I have several ViewModels, most essential SimpleProductViewModel : ViewModelBase and ExtendedProductViewModel : ViewModelBase. Last but not least, according Views SimpleProductView and ExtendedProductView. In future, I might add many product types (and matching Views + VMs). 1. How do i know which ViewModel to create when receiving a Model collection? After calling my data provider method, it will finally end up having a List<ProductBase>. It containts, for example, one SimpleProduct and two ExtendedProducts. How can I transform the results to an ObservableCollection<ViewModelBase> having the proper ViewModel types (one SimpleProductViewModel and two ExtendedProductViewModels) in it? I might check for Model type and construct the ViewModel accordingly, i.e. foreach(ProductBase currentProductBase in resultList) if (currentProductBase is SimpleProduct) viewModels.Add( new SimpleProductViewModel((SimpleProduct)currentProductBase)); else if (currentProductBase is ExtendedProduct) viewModels.Add( new ExtendedProductViewModels((ExtendedProduct)currentProductBase)); ... } ...but I consider this very bad practice as this code doesn't follow the object oriented design. The other way round, providing abstract Factory methods would reduce the code to: foreach(ProductBase currentProductBase in resultList) viewModels.Add(currentProductBase.CreateViewModel()) and would be perfectly extensible but since the Model doesn't know the ViewModels, that's not possible. I might bring interfaces into game here, but I haven't seen such approach proven yet. 2. How do i know which View to display when selecting a ViewModel? This is pretty the same problem, but on a higher level. Ended up finally having the desired ObservableCollection<ViewModelBase> collection would require the main view to choose a matching View for the ViewModel. In WPF, there is a DataTemplate concept which can supply a View upon a defined DataType. Unfortunately, this doesn't work in Silverlight and the only replacement I've found was the ResourceSelector of the SLExtensions toolkit which is buggy and not satisfying. Beside that, all problems from Question 1 apply as well. Do you have some hints or even a solution for the problems I describe, which you hopefully can understand from my explanation? Thank you in advance! Thomas
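
    For question 1, one way to avoid the type-checking chain without making the Model aware of ViewModels is to keep the mapping in a registry on the ViewModel side. A minimal sketch (type names follow the question; the registry class itself is an assumption):

    ```csharp
    using System;
    using System.Collections.Generic;
    using System.Collections.ObjectModel;

    // Sketch: the Model stays ignorant of ViewModels; the mapping lives in a small
    // registry owned by the ViewModel layer.
    public class ViewModelRegistry
    {
        private readonly Dictionary<Type, Func<ProductBase, ViewModelBase>> factories =
            new Dictionary<Type, Func<ProductBase, ViewModelBase>>();

        public void Register<TModel>(Func<TModel, ViewModelBase> factory) where TModel : ProductBase
        {
            factories[typeof(TModel)] = m => factory((TModel)m);
        }

        public ObservableCollection<ViewModelBase> Wrap(IEnumerable<ProductBase> models)
        {
            ObservableCollection<ViewModelBase> result = new ObservableCollection<ViewModelBase>();
            foreach (ProductBase model in models)
                result.Add(factories[model.GetType()](model));   // unregistered types throw here
            return result;
        }
    }

    // Registered once at startup:
    //   registry.Register<SimpleProduct>(m => new SimpleProductViewModel(m));
    //   registry.Register<ExtendedProduct>(m => new ExtendedProductViewModel(m));
    ```

    Adding a new product type then means adding one Register call rather than touching every if/else chain, which is the same extensibility the factory-method idea aims for.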

    Read the article

  • How to add an XML parameter to a stored procedure in C#?

    - by salvationishere
    I am developing a C# web application in VS 2008 which interacts with my Adventureworks database in my SQL Server 2008. Now I am trying to add new records to one of the tables which has an XML column in it. How do I do this? This is the error I'm getting: System.Data.SqlClient.SqlException was caught Message="XML Validation: Text node is not allowed at this location, the type was defined with element only content or with simple content. Location: /" Source=".Net SqlClient Data Provider" ErrorCode=-2146232060 Class=16 LineNumber=22 Number=6909 Procedure="AppendDataC" Server="." State=1 StackTrace: at System.Data.SqlClient.SqlConnection.OnError(SqlException exception, Boolean breakConnection) at System.Data.SqlClient.SqlInternalConnection.OnError(SqlException exception, Boolean breakConnection) at System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject stateObj) at System.Data.SqlClient.TdsParser.Run(RunBehavior runBehavior, SqlCommand cmdHandler, SqlDataReader dataStream, BulkCopySimpleResultSet bulkCopyHandler, TdsParserStateObject stateObj) at System.Data.SqlClient.SqlCommand.FinishExecuteReader(SqlDataReader ds, RunBehavior runBehavior, String resetOptionsString) at System.Data.SqlClient.SqlCommand.RunExecuteReaderTds(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, Boolean async) at System.Data.SqlClient.SqlCommand.RunExecuteReader(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, String method, DbAsyncResult result) at System.Data.SqlClient.SqlCommand.InternalExecuteNonQuery(DbAsyncResult result, String methodName, Boolean sendToPipe) at System.Data.SqlClient.SqlCommand.ExecuteNonQuery() at ADONET_namespace.ADONET_methods.AppendDataC(DataRow d, Hashtable ht) in C:\Documents and Settings\Admin\My Documents\Visual Studio 2008\Projects\AddFileToSQL\AddFileToSQL\ADONET methods.cs:line 212 InnerException: And this is a portion of my code in C#: try { SqlConnection conn2 = new SqlConnection(connString); SqlCommand cmd = conn2.CreateCommand(); cmd.CommandText = "dbo.AppendDataC"; cmd.CommandType = CommandType.StoredProcedure; cmd.Connection = conn2; ... sqlParam10.SqlDbType = SqlDbType.VarChar; SqlParameter sqlParam11 = cmd.Parameters.AddWithValue("@" + ht["@col11"], d[10]); sqlParam11.SqlDbType = SqlDbType.VarChar; SqlParameter sqlParam12 = cmd.Parameters.AddWithValue("@" + ht["@col12"], d[11]); sqlParam12.SqlDbType = SqlDbType.Xml; ... conn2.Open(); cmd.ExecuteNonQuery(); //This is the line it fails on and then jumps //to the Catch statement conn2.Close(); errorMsg = "The Person.Contact table was successfully updated!"; } catch (Exception ex) { Right now in my text input MDF file I have the XML parameter as: '<Products><id>3</id><id>6</id><id>15</id></Products>' Is this valid format for XML?
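
    The error text ("the type was defined with element only content") suggests the value is reaching SQL Server as text rather than parsed XML, which is what happens if the surrounding single quotes from the input file are sent along, or if a schema-typed XML column receives something it cannot validate. A sketch of binding the value as SqlXml instead (the helper and its name are illustrative; cmd is the same SqlCommand used for dbo.AppendDataC):

    ```csharp
    using System.Data;
    using System.Data.SqlClient;
    using System.Data.SqlTypes;
    using System.IO;
    using System.Xml;

    // Sketch: bind the XML column as SqlXml built from an XmlReader,
    // without the surrounding single quotes from the input file.
    static void AddXmlParameter(SqlCommand cmd, string parameterName, string xml)
    {
        XmlReader reader = XmlReader.Create(new StringReader(xml));
        SqlParameter p = cmd.Parameters.Add(parameterName, SqlDbType.Xml);
        p.Value = new SqlXml(reader);
    }

    // e.g. AddXmlParameter(cmd, "@" + ht["@col12"],
    //                      "<Products><id>3</id><id>6</id><id>15</id></Products>");
    ```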

    Read the article

  • NO FBML is working in iframe in facebook using PHP (ie or ff or anywhere else!)

    - by Cyprus106
    I have tried for three days now and gotten nowhere on this.... I absolutely can not get any "fb:" code to render anything! I've tried the exact code in the sandbox and it works fine. I've read through every search result I could find and gotten nowhere... I'm using a standard xd_receiver page, and in the body there's this line: < script src="http://static.ak.connect.facebook.com/js/api_lib/v0.4/XdCommReceiver.js" type="text/javascript"></script> Here's my index page. It's basically the stock facebook example code.... <?php require_once 'facebook-platform/php/facebook.php'; //Authentication Keys $appapikey = 'MY_KEY'; // obviously this is my real key $appsecret = 'MY_SECRET'; // same thing $facebook = new Facebook($appapikey, $appsecret); $user_id = $facebook->require_login(); ?> <html xmlns="http://www.w3.org/1999/xhtml" xmlns:fb="http://www.facebook.com/2008/fbml"> <head></head> <body> <script src="http://static.ak.connect.facebook.com/js/api_lib/v0.4/FeatureLoader.js.php" type="text/javascript"></script> <? echo "<p>Hello, <fb:name uid=\"$user_id\" useyou=\"false\"></fb:name>!</p>"; ?> <script type="text/javascript"> FB_RequireFeatures(["XFBML"], function(){ FB.Facebook.init("<?php echo $appapikey; ?>", "xd_receiver.htm"); }); </script> </body> </html> oddly enough, when I put this code below where it echos the logged in user's name, it does show the id numbers of friends. But again, it won't render their names <?php friends.get API method echo "<p>Friends:"; $friends = $facebook->api_client->friends_get(); $friends = array_slice($friends, 0, 25); foreach ($friends as $friend) { echo "<br>".$friend." - <fb:name uid=\".$user_id.\" useyou=\"false\"></fb:name>"; } echo "</p>"; ?> Here's my settings: Canvas Callback URL http://www.my-actual-website.com/test/ Canvas URL http://apps.facebook.com/gogre_testapp/ FBML/iframe iframe Application Type Website Post-Remove URL http://www.my-actual-website.com/test/ Post-Authorize URL http://www.my-actual-website.com/test/ Please, somebody help me out! I've been trying unsuccessfully for days

    Read the article

  • Accidental Complexity in OpenSSL HMAC functions

    - by Hassan Syed
    SSL Documentation Analysis. This question pertains to the usage of the HMAC routines in OpenSSL. Since the OpenSSL documentation is a tad on the weak side in certain areas, profiling has revealed that using: unsigned char *HMAC(const EVP_MD *evp_md, const void *key, int key_len, const unsigned char *d, int n, unsigned char *md, unsigned int *md_len); (from here) means 40% of my library runtime is devoted to creating and tearing down HMAC_CTXs behind the scenes. There are also two additional functions to create and destroy an HMAC_CTX explicitly: HMAC_CTX_init() initialises a HMAC_CTX before first use. It must be called. HMAC_CTX_cleanup() erases the key and other data from the HMAC_CTX and releases any associated resources. It must be called when an HMAC_CTX is no longer required. These two functions are prefaced in the documentation with: "The following functions may be used if the message is not completely stored in memory." My data fits entirely in memory, so I chose the HMAC function -- the one whose signature is shown above. The context, as described by the man page, is used via the following two functions: HMAC_Update() can be called repeatedly with chunks of the message to be authenticated (len bytes at data). HMAC_Final() places the message authentication code in md, which must have space for the hash function output. The Scope of the Application: My application generates an authenticated (HMAC, which is also used as a nonce), CBC-BF encrypted protocol buffer string. The code will be interfaced with various web servers and frameworks: Windows and Linux as operating systems; nginx, Apache and IIS as web servers; and Python, .NET and C++ web-server filters. The description above should clarify that the library needs to be thread-safe and potentially have resumable processing state, i.e. lightweight threads sharing an OS thread (which might leave thread-local memory out of the picture). The Question: How do I get rid of the 40% overhead on each invocation in a way that is (1) thread-safe and (2) resumable? (2) is optional, since I have all of the source data present in one go and can make sure a digest is created in place without relinquishing control of the thread mid-digest-creation. So (1) can probably be done using thread-local memory, but how do I reuse the CTXs? Does the HMAC_Final() call make the CTX reusable? For (2), optional: in this case I would have to create a pool of CTXs. (3) How does the HMAC function do this? Does it create a CTX in the scope of the function call and destroy it? Pseudocode and commentary will be useful.

    Read the article

  • What to Return? Error String, Bool with Error String Out, or Void with Exception

    - by Ranger Pretzel
    I spend most of my time in C# and am trying to figure out which is the best practice for handling an exception and cleanly return an error message from a called method back to the calling method. For example, here is some ActiveDirectory authentication code. Please imagine this Method as part of a Class (and not just a standalone function.) bool IsUserAuthenticated(string domain, string user, string pass, out errStr) { bool authentic = false; try { // Instantiate Directory Entry object DirectoryEntry entry = new DirectoryEntry("LDAP://" + domain, user, pass); // Force connection over network to authenticate object nativeObject = entry.NativeObject; // No exception thrown? We must be good, then. authentic = true; } catch (Exception e) { errStr = e.Message().ToString(); } return authentic; } The advantages of doing it this way are a clear YES or NO that you can embed right in your If-Then-Else statement. The downside is that it also requires the person using the method to supply a string to get the Error back (if any.) I guess I could overload this method with the same parameters minus the "out errStr", but ignoring the error seems like a bad idea since there can be many reasons for such a failure... Alternatively, I could write a method that returns an Error String (instead of using "out errStr") in which a returned empty string means that the user authenticated fine. string AuthenticateUser(string domain, string user, string pass) { string errStr = ""; try { // Instantiate Directory Entry object DirectoryEntry entry = new DirectoryEntry("LDAP://" + domain, user, pass); // Force connection over network to authenticate object nativeObject = entry.NativeObject; } catch (Exception e) { errStr = e.Message().ToString(); } return errStr; } But this seems like a "weak" way of doing things. Or should I just make my method "void" and just not handle the exception so that it gets passed back to the calling function? void AuthenticateUser(string domain, string user, string pass) { // Instantiate Directory Entry object DirectoryEntry entry = new DirectoryEntry("LDAP://" + domain, user, pass); // Force connection over network to authenticate object nativeObject = entry.NativeObject; } This seems the most sane to me (for some reason). Yet at the same time, the only real advantage of wrapping those 2 lines over just typing those 2 lines everywhere I need to authenticate is that I don't need to include the "LDAP://" string. The downside with this way of doing it is that the user has to put this method in a try-catch block. Thoughts? Is there another way of doing this that I'm not thinking of?
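
    A variant the question circles around, sketched below (the exception type name is made up for illustration): keep the bool for the expected yes/no outcome, and reserve exceptions, wrapped in a domain-specific type with the original as InnerException, for failures the caller cannot reasonably anticipate, such as the domain being unreachable. Callers then write if (!IsUserAuthenticated(...)) for the normal path and only add try/catch where an outage actually needs handling.

    ```csharp
    using System;
    using System.DirectoryServices;
    using System.Runtime.InteropServices;

    // Sketch of the "typed exception" option: a clean bool for the normal outcome,
    // and a domain-specific exception (with the original as InnerException) otherwise.
    public class DirectoryAuthenticationException : Exception
    {
        public DirectoryAuthenticationException(string message, Exception inner)
            : base(message, inner) { }
    }

    public static bool IsUserAuthenticated(string domain, string user, string pass)
    {
        try
        {
            using (DirectoryEntry entry = new DirectoryEntry("LDAP://" + domain, user, pass))
            {
                object native = entry.NativeObject;   // forces the bind
                return true;
            }
        }
        catch (COMException)
        {
            // ADSI reports bad credentials (and some connectivity errors) as COMException;
            // inspect the error code here if the two cases need to be distinguished.
            return false;
        }
        catch (Exception ex)
        {
            throw new DirectoryAuthenticationException(
                "Authentication against '" + domain + "' failed unexpectedly.", ex);
        }
    }
    ```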

    Read the article

  • AD-DirectoryServices: .NET2.0 - Speaking architecture, approach and best practices... Suggestions?

    - by Will Marcouiller
    I've been mandated to write an application to migrate the Active Directory access models to another environment. Here's the context: I'm stuck with VB.NET 2005 and .NET Framework 2.0; The application must use the Windows authenticated user to manage AD; The objects I have to handle are Groups, Users and OrganizationalUnits; I intend to use the Façade design pattern to provider ease of use and a fully reusable code; I plan to write a factory for each of the objects managed (group, ou, user); The use of Attributes should be useful here, I guess; As everything is about the DirectoryEntry class when accessing the AD, it seems a good candidate for generic types. Obligatory features: User creates new OUs manually; User creates new group manually; User creates new user (these users are services accounts) manually; Application reads an XML file which contains the OUs, groups and users to create; Application informs the user about the OUs, groups and users that shall be created; User specifies the domain environment where to migrate the XML input file designated objects; User makes changes if needed, and launches the task operations; Application performs required by the XML input file operations against the underlying AD as specified by the user; Application informs the user upon completion. Linear features: User fetches OUs, groups, users; User changes OUs, groups, users; User deletes OUs, groups, users; The application logs AD entries and operations performed, plus errors and exceptions; Nice-to-have features: Application rollbacks operations on error or exception. I've been working for weeks now to get acquainted with the AD and the System.DirectoryServices assembly. But I don't seem to find a way to be fully satisfied with what I'm doing and always looking for better. I have studied Bret de Smet's Linq to AD on CodePlex, but then again, I can't use it as I'm stuck with .NET 2.0, so no Linq! But I've learned about Attributes, and seen that he's working with generic types as he codes a DirectorySource class to perform the operations for OUs, groups and users. I have been able to add groups to the AD; I have been able to add users to the AD; The created user is automatically disabled? I seem to get confused with the use of a LDAP path to add objects. For instance, one needs two instances of a System.DirectoryServices.DirectoryEntry class to add a group, for instance. Why this? Any suggestions? Thanks for any help, code sample, ideas, architural solution, everything!

    Read the article

  • Zaypay alternatives for payments using call or sms

    - by JohannesH
    We are currently trying to implement a payment provider in zaypay for paying for services using sms or by calling a number. We already have google checkout and paypal working for regular payments but zaypay is rather inflexible, poorly documented and a pain to setup when you have hundreds of products with varying prices. So my question is, do you know of any other european payment providers that take sms and call payments? As a response to Roberts answer/question Hi Robert, I must say that the Zaypay solution is the best and only I've seen thus far regarding phoned payments. However, since its now 2 months ago I finished the implementation of our custom Zaypay UI I can't remember much of the the details of the problems we were having. I'll try to give a brief of them anyways the best I can. First of all I would like to see a redirection type scenario for payalogues. From what I remember you guys are using the JS framework "Prototype" which doesn't play nice with jQuery which we are using so we weren't able to use the popup-type scenario supported by payalogues. Furthermore when implementing our custom interface I remember a lot of missing translations, like words that were codes instead of a word or a phrase. This meant we ended up writing/translating all the messages we needed ourselves. Also, another point of annoyance was the setup of prices and items. I wish we could just send in the order items/prices as a part of the interface like you can in Google Checkout or PayPal (not that they're flawless either), instead of having to define ALL the items you will ever sell through your admin interface beforehand. As far as I can remember it is virtually impossible to use Zaypay for a multi-item order in its current form. Finally there are, as far as I can tell, some security issues that you have to think about when you implement a custom solution... especially a ajax driven one. As I said in my original post you do mention this in the documentation but I believe the documentation wasn't that comprehensive regarding security issues. Again I wish I could give more details but the code & client is long since gone, so I can't look up the comments I wrote. Sorry! Oh yeah, the general API documentation weren't exactly comprehensive and 100% correct either. Again, I don't want to advice people against using Zaypay, I just want to advice that they should try it out first on a realistic prototype and think about their implementation before releasing to production. Maybe its just me who misunderstood a lot of things but I generally had a difficult time using your framework and I was left with a feeling that the API was very new and not thought through from the beginning.

    Read the article

  • Job queue manager with RPC interface

    - by admr
    I need a job queue manager that I can control over the Internet. It should be able to execute and stop processes, check on their status (ideally notice and execute some code when a process exits), respond to commands and also be able to report back to a server. Background: I have a GWT application that allows to create jobs to execute on a cloud instance (currently EC2). I want to push a "job packet" (data for a process to operate on etc) to S3, start a Linux EC2 instance (or use one that's already running), and tell a job manager on the instance to execute that job (possibly parallel to other jobs). It should then pull the "job packet" from S3, run a process that operates on that data and report back to the server that is running the server part of my GWT application with some information (e.g. exit code, stdout, stderr). If I have to write e.g. stdour/err to a file from the process and read that file, that's OK too. I would really like the manager to be "close" to the processes it runs, meaning I want to avoid using something like Runtime.exec from the JDK. It seems like I would have to do that if I used Quartz for example. I'm fine with the calls in both directions being asynchronous. I'm fine with any reasonable technology for the calls as long as I can easily build an interface for that in my GWT server side (e.g. HTTP requests to a servlet over SSL would be nice and trivial). The job manager does not need to have a very sophisticated queueing system. Running several processes either sequentially or in parallel should be fine. Determining how much compute time a process received during its lifetime would be nice (AFAIK, this might be challenging). I did not yet find any existing software that does this, including http://java-source.net/open-source/job-schedulers. I suspect I might have to build an RPC interface (with authentication etc, of course) around a job manager; maybe use something like Apache Commons Exec. In that case, I would prefer Java or Python for the job manager part. I would be happy to hear suggestions for either the former or latter scenario!

    Read the article

  • JPA Entity Manager resource handling

    - by chiragshahkapadia
    Every time I call a JPA method it is binding the entities and named queries all over again. My persistence properties are: <property name="hibernate.dialect" value="org.hibernate.dialect.Oracle10gDialect"/> <property name="hibernate.cache.provider_class" value="net.sf.ehcache.hibernate.SingletonEhCacheProvider"/> <property name="hibernate.cache.use_second_level_cache" value="true"/> <property name="hibernate.cache.use_query_cache" value="true"/> And I am creating the entity manager the way shown below: emf = Persistence.createEntityManagerFactory("pu"); em = emf.createEntityManager(); em = Persistence.createEntityManagerFactory("pu").createEntityManager(); Is there any nice way to manage the entity manager resource instead of creating a new one every time, or any property that can be set in persistence.xml? Remember, it's JPA. See the binding log below, which appears every time: 15:35:15,527 INFO [AnnotationBinder] Binding entity from annotated class: * 15:35:15,527 INFO [QueryBinder] Binding Named query: * = * 15:35:15,527 INFO [QueryBinder] Binding Named query: * = * 15:35:15,527 INFO [QueryBinder] Binding Named query: 15:35:15,527 INFO [QueryBinder] Binding Named query: 15:35:15,527 INFO [QueryBinder] Binding Named query: 15:35:15,527 INFO [QueryBinder] Binding Named query: 15:35:15,527 INFO [QueryBinder] Binding Named query: 15:35:15,527 INFO [QueryBinder] Binding Named query: 15:35:15,527 INFO [QueryBinder] Binding Named query: 15:35:15,527 INFO [EntityBinder] Bind entity com.* on table * 15:35:15,542 INFO [HibernateSearchEventListenerRegister] Unable to find org.hibernate.search.event.FullTextIndexEventListener on the classpath. Hibernate Search is not enabled. 15:35:15,542 INFO [NamingHelper] JNDI InitialContext properties:{} 15:35:15,542 INFO [DatasourceConnectionProvider] Using datasource: 15:35:15,542 INFO [SettingsFactory] RDBMS: and Real Application Testing options 15:35:15,542 INFO [SettingsFactory] JDBC driver: Oracle JDBC driver, version: 9.2.0.1.0 15:35:15,542 INFO [Dialect] Using dialect: org.hibernate.dialect.Oracle10gDialect 15:35:15,542 INFO [TransactionFactoryFactory] Transaction strategy: org.hibernate.transaction.JDBCTransactionFactory 15:35:15,542 INFO [TransactionManagerLookupFactory] No TransactionManagerLookup configured (in JTA environment, use of read-write or transactional second-level cache is not recommended) 15:35:15,542 INFO [SettingsFactory] Automatic flush during beforeCompletion(): disabled 15:35:15,542 INFO [SettingsFactory] Automatic session close at end of transaction: disabled 15:35:15,542 INFO [SettingsFactory] JDBC batch size: 15 15:35:15,542 INFO [SettingsFactory] JDBC batch updates for versioned data: disabled 15:35:15,542 INFO [SettingsFactory] Scrollable result sets: enabled 15:35:15,542 INFO [SettingsFactory] JDBC3 getGeneratedKeys(): disabled 15:35:15,542 INFO [SettingsFactory] Connection release mode: auto 15:35:15,542 INFO [SettingsFactory] Default batch fetch size: 1 15:35:15,542 INFO [SettingsFactory] Generate SQL with comments: disabled 15:35:15,542 INFO [SettingsFactory] Order SQL updates by primary key: disabled 15:35:15,542 INFO [SettingsFactory] Order SQL inserts for batching: disabled 15:35:15,542 INFO [SettingsFactory] Query translator: org.hibernate.hql.ast.ASTQueryTranslatorFactory 15:35:15,542 INFO [ASTQueryTranslatorFactory] Using ASTQueryTranslatorFactory 15:35:15,542 INFO [SettingsFactory] Query language substitutions: {} 15:35:15,542 INFO [SettingsFactory] JPA-QL strict compliance: enabled 15:35:15,542 INFO [SettingsFactory] Second-level cache: enabled 15:35:15,542 INFO [SettingsFactory] 
Query cache: enabled 15:35:15,542 INFO [SettingsFactory] Cache region factory : org.hibernate.cache.impl.bridge.RegionFactoryCacheProviderBridge 15:35:15,542 INFO [RegionFactoryCacheProviderBridge] Cache provider: net.sf.ehcache.hibernate.SingletonEhCacheProvider 15:35:15,542 INFO [SettingsFactory] Optimize cache for minimal puts: disabled 15:35:15,542 INFO [SettingsFactory] Structured second-level cache entries: disabled 15:35:15,542 INFO [SettingsFactory] Query cache factory: org.hibernate.cache.StandardQueryCacheFactory 15:35:15,542 INFO [SettingsFactory] Statistics: disabled 15:35:15,542 INFO [SettingsFactory] Deleted entity synthetic identifier rollback: disabled 15:35:15,542 INFO [SettingsFactory] Default entity-mode: pojo 15:35:15,542 INFO [SettingsFactory] Named query checking : enabled 15:35:15,542 INFO [SessionFactoryImpl] building session factory 15:35:15,542 INFO [SessionFactoryObjectFactory] Not binding factory to JNDI, no JNDI name configured 15:35:15,542 INFO [UpdateTimestampsCache] starting update timestamps cache at region: org.hibernate.cache.UpdateTimestampsCache 15:35:15,542 INFO [StandardQueryCache] starting query cache at region: org.hibernate.cache.StandardQueryCache
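    For context, the expensive step here is Persistence.createEntityManagerFactory: that call is what triggers all of the annotation and query binding shown in the log, whereas EntityManager instances are cheap to create and throw away. Below is a minimal sketch of the usual pattern, assuming the persistence unit is named "pu" as in the snippet above and that no container-managed injection is available; the class name JpaUtil is invented for the example.

        import javax.persistence.EntityManager;
        import javax.persistence.EntityManagerFactory;
        import javax.persistence.Persistence;

        // Build the expensive factory (and its binding work) exactly once per application.
        public final class JpaUtil {

            private static final EntityManagerFactory EMF =
                    Persistence.createEntityManagerFactory("pu");

            private JpaUtil() {
            }

            /** Cheap to create; obtain one per unit of work and close it afterwards. */
            public static EntityManager newEntityManager() {
                return EMF.createEntityManager();
            }

            /** Call once on application shutdown. */
            public static void close() {
                EMF.close();
            }
        }

    A caller would then obtain an EntityManager per unit of work (EntityManager em = JpaUtil.newEntityManager();), use it inside a try/finally that closes it, and never rebuild the factory on each call.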

    Read the article

  • Problem on jboss lookup entitymanager

    - by Stefano
    I have my EAR project deployed in JBoss 5.1 GA. From the webapp I don't have a problem; the lookup of my EJB3 works fine, e.g.: ShoppingCart sc= (ShoppingCart) (new InitialContext()).lookup("idelivery-ear-1.0/ShoppingCartBean/remote"); The injection of my EntityManager also works fine: @PersistenceContext private EntityManager manager; From the test environment (I use Eclipse) the lookup of the same EJB3 works fine, but the lookup of the EntityManager or PersistenceContext doesn't work! My good test case: public void testClient() { Properties properties = new Properties(); properties.put("java.naming.factory.initial","org.jnp.interfaces.NamingContextFactory"); properties.put("java.naming.factory.url.pkgs","org.jboss.naming:org.jnp.interfaces"); properties.put("java.naming.provider.url","localhost"); Context context; try{ context = new InitialContext(properties); ShoppingCart cart = (ShoppingCart) context.lookup("idelivery-ear-1.0/ShoppingCartBean/remote"); // WORK FINE } catch (Exception e) { e.printStackTrace(); } } My bad test: EntityManagerFactory emf = Persistence.createEntityManagerFactory("idelivery"); EntityManager em = emf.createEntityManager(); //test1 EntityManager em6 = (EntityManager) new InitialContext().lookup("java:comp/env/persistence/idelivery"); //test2 PersistenceContext em3 = (PersistenceContext)(new InitialContext()).lookup("idelivery/remote"); //test3 My persistence.xml: <persistence-unit name="idelivery" transaction-type="JTA"> <jta-data-source>java:ideliveryDS</jta-data-source> <properties> <property name="hibernate.hbm2ddl.auto" value="create-drop" /><!--validate | update | create | create-drop--> <property name="hibernate.dialect" value="org.hibernate.dialect.MySQL5InnoDBDialect" /> <property name="hibernate.show_sql" value="true" /> <property name="hibernate.format_sql" value="true" /> </properties> </persistence-unit> My datasource: <datasources> <local-tx-datasource> <jndi-name>ideliveryDS</jndi-name> ... </local-tx-datasource> </datasources> I need the EntityManager and PersistenceContext to test my queries before building the EJBs... Where is my mistake?
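    One way this is commonly worked around in a standalone test (outside the container) is to skip the JNDI lookup entirely and bootstrap a RESOURCE_LOCAL EntityManagerFactory, passing the JDBC settings as properties, since the java:ideliveryDS datasource and JTA only exist inside JBoss. The sketch below is only an illustration under those assumptions: the JDBC URL and credentials are placeholders, and depending on the Hibernate version it may be simpler to keep a second, test-only persistence unit with transaction-type="RESOURCE_LOCAL" on the test classpath instead of overriding properties at runtime.

        import java.util.HashMap;
        import java.util.Map;
        import javax.persistence.EntityManager;
        import javax.persistence.EntityManagerFactory;
        import javax.persistence.Persistence;

        public class StandaloneJpaTest {

            public static void main(String[] args) {
                // Override the container-only settings for a plain JSE run.
                Map<String, String> props = new HashMap<String, String>();
                props.put("javax.persistence.transactionType", "RESOURCE_LOCAL");
                props.put("hibernate.connection.driver_class", "com.mysql.jdbc.Driver");
                props.put("hibernate.connection.url", "jdbc:mysql://localhost:3306/idelivery_test"); // placeholder
                props.put("hibernate.connection.username", "test"); // placeholder
                props.put("hibernate.connection.password", "test"); // placeholder

                EntityManagerFactory emf =
                        Persistence.createEntityManagerFactory("idelivery", props);
                EntityManager em = emf.createEntityManager();
                try {
                    em.getTransaction().begin();
                    // run the queries under test here
                    em.getTransaction().commit();
                } finally {
                    em.close();
                    emf.close();
                }
            }
        }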

    Read the article

  • Finding the underlying cause of Windows 7 account corruption.

    - by Carl Jokl
    I have been having trouble with my sister's computer, which I built. It is running Windows 7 Ultimate x64. The problem is that the user accounts keep becoming corrupted. The problem first manifests itself as Windows saying the profile failed to load properly and logging in with a temporary profile. Eventually the account will not allow login at all, with an error message along the lines of the authentication service failing the login. I have found information about this problem and how to fix it: something has corrupted the account profile, and backing up and recreating the accounts fixes it. I have been able to fix things and get logins working again, but over a period of usually about a week it happens again. Bit by bit the accounts corrupt and then it is back to square one. I am frustrated because I don't know what the underlying cause of the problem is, i.e. what is causing the accounts to be corrupted in the first place. At the moment I am just treating the symptoms. I was hoping someone with more experience of this problem might be able to help me find the root cause. Some articles suggest that Norton Internet Security, which is installed, is a big culprit of this problem. I could try uninstalling Norton and see if it helps. The one thing which is different about this computer compared to any other I have built is that it has a solid state drive. Actually it has both a hard drive and a solid state drive. The documents and settings, i.e. the Users directory, are stored on the hard drive. This was done following an article I found on the Internet about moving the user account data onto a separate drive on Windows 7. Moving the user accounts is more of a pain under Windows 7, and this solution involved creating a low-level file system link to the folder, pointing from the boot drive (solid state) to the hard drive. The idea is that the computer behaves just as if it is accessing the Users folder from the boot drive, but the data is actually stored on the hard drive. This may have nothing to do with the cause of the problem, but since the problem is user account corruption it is a possibility I have not been able to rule out. Any help would be appreciated, as I would be glad to see the back of this problem.

    Read the article
