Search Results

Search found 10433 results on 418 pages for 'session replication'.

  • FluentNHibernate Unit Of Work / Repository Design Pattern Questions

    - by Echiban
    Hi all, I think I am at a impasse here. I have an application I built from scratch using FluentNHibernate (ORM) / SQLite (file db). I have decided to implement the Unit of Work and Repository Design pattern. I am at a point where I need to think about the end game, which will start as a WPF windows app (using MVVM) and eventually implement web services / ASP.Net as UI. Now I already created domain objects (entities) for ORM. And now I don't know how should I use it outside of ORM. Questions about it include: Should I use ORM entity objects directly as models in MVVM? If yes, do I put business logic (such as certain values must be positive and be greater than another Property) in those entity objects? It is certainly the simpler approach, and one I am leaning right now. However, will there be gotchas that would trash this plan? If the answer above is no, do I then create a new set of classes to implement business logic and use those as Models in MVVM? How would I deal with the transition between model objects and entity objects? I guess a type converter implementation would work well here. Now I followed this well written article to implement the Unit Of Work pattern. However, due to the fact that I am using FluentNHibernate instead of NHibernate, I had to bastardize the implementation of UnitOfWorkFactory. Here's my implementation: using System; using FluentNHibernate.Cfg; using FluentNHibernate.Cfg.Db; using NHibernate; using NHibernate.Cfg; using NHibernate.Tool.hbm2ddl; namespace ELau.BlindsManagement.Business { public class UnitOfWorkFactory : IUnitOfWorkFactory { private static readonly string DbFilename; private static Configuration _configuration; private static ISession _currentSession; private ISessionFactory _sessionFactory; static UnitOfWorkFactory() { // arbitrary default filename DbFilename = "defaultBlindsDb.db3"; } internal UnitOfWorkFactory() { } #region IUnitOfWorkFactory Members public ISession CurrentSession { get { if (_currentSession == null) { throw new InvalidOperationException(ExceptionStringTable.Generic_NotInUnitOfWork); } return _currentSession; } set { _currentSession = value; } } public ISessionFactory SessionFactory { get { if (_sessionFactory == null) { _sessionFactory = BuildSessionFactory(); } return _sessionFactory; } } public Configuration Configuration { get { if (_configuration == null) { Fluently.Configure().ExposeConfiguration(c => _configuration = c); } return _configuration; } } public IUnitOfWork Create() { ISession session = CreateSession(); session.FlushMode = FlushMode.Commit; _currentSession = session; return new UnitOfWorkImplementor(this, session); } public void DisposeUnitOfWork(UnitOfWorkImplementor adapter) { CurrentSession = null; UnitOfWork.DisposeUnitOfWork(adapter); } #endregion public ISession CreateSession() { return SessionFactory.OpenSession(); } public IStatelessSession CreateStatelessSession() { return SessionFactory.OpenStatelessSession(); } private static ISessionFactory BuildSessionFactory() { ISessionFactory result = Fluently.Configure() .Database( SQLiteConfiguration.Standard .UsingFile(DbFilename) ) .Mappings(m => m.FluentMappings.AddFromAssemblyOf<UnitOfWorkFactory>()) .ExposeConfiguration(BuildSchema) .BuildSessionFactory(); return result; } private static void BuildSchema(Configuration config) { // this NHibernate tool takes a configuration (with mapping info in) // and exports a database schema from it _configuration = config; new SchemaExport(_configuration).Create(false, true); } } } I know that this 
implementation is flawed, because a few tests pass when run individually but fail when all tests are run together, for some unknown reason. Whoever wants to help me out with this one, given its complexity, please contact me by private message. I am willing to send some $$$ by PayPal to someone who can address the issue and provide a solid explanation. I am new to ORM, so any assistance is appreciated.

    Read the article

  • Tomcat 6, JPA and Data sources

    - by asrijaal
    Hi there, I'm trying to get a data source working in my jsf app. I defined the data source in my web-apps context.xml webapp/META-INF/context.xml <?xml version="1.0" encoding="UTF-8"?> <Context antiJARLocking="true" path="/Sale"> <Resource auth="Container" driverClassName="com.mysql.jdbc.Driver" maxActive="20" maxIdle="10" maxWait="-1" name="Sale" password="admin" type="javax.sql.DataSource" url="jdbc:mysql://localhost/sale" username="admin"/> </Context> web.xml <?xml version="1.0" encoding="UTF-8"?> <web-app version="2.5" xmlns="http://java.sun.com/xml/ns/javaee" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-app_2_5.xsd"> <filter> <display-name>RichFaces Filter</display-name> <filter-name>richfaces</filter-name> <filter-class>org.ajax4jsf.Filter</filter-class> </filter> <filter-mapping> <filter-name>richfaces</filter-name> <servlet-name>Faces Servlet</servlet-name> <dispatcher>REQUEST</dispatcher> <dispatcher>FORWARD</dispatcher> <dispatcher>INCLUDE</dispatcher> </filter-mapping> <servlet> <servlet-name>Faces Servlet</servlet-name> <servlet-class>javax.faces.webapp.FacesServlet</servlet-class> <load-on-startup>1</load-on-startup> </servlet> <servlet-mapping> <servlet-name>Faces Servlet</servlet-name> <url-pattern>/faces/*</url-pattern> </servlet-mapping> <session-config> <session-timeout> 30 </session-timeout> </session-config> <welcome-file-list> <welcome-file>faces/welcomeJSF.jsp</welcome-file> </welcome-file-list> <context-param> <param-name>org.richfaces.SKIN</param-name> <param-value>ruby</param-value> </context-param> </web-app> and my persistence.xml <?xml version="1.0" encoding="UTF-8"?> <persistence version="1.0" xmlns="http://java.sun.com/xml/ns/persistence" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://java.sun.com/xml/ns/persistence http://java.sun.com/xml/ns/persistence/persistence_1_0.xsd"> <persistence-unit name="SalePU" transaction-type="RESOURCE_LOCAL"> <provider>oracle.toplink.essentials.PersistenceProvider</provider> <non-jta-data-source>Sale</non-jta-data-source> <class>org.comp.sale.AnfrageAnhang</class> <class>org.comp.sale.Beschaffung</class> <class>org.comp.sale.Konto</class> <class>org.comp.sale.Anfrage</class> <exclude-unlisted-classes>false</exclude-unlisted-classes> </persistence-unit> </persistence> But no data source seems to be created by Tomcat, I only get this exception Exception [TOPLINK-7060] (Oracle TopLink Essentials - 2.0.1 (Build b09d-fcs (12/06/2007))): oracle.toplink.essentials.exceptions.ValidationException Exception Description: Cannot acquire data source [Sale]. Internal Exception: javax.naming.NameNotFoundException: Name Sale is not bound in this Context The needed jars for the MySQL driver are included into the WEB-INF/lib dir. What I'm doing wrong?
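
    A likely culprit here, assuming the <Resource> in META-INF/context.xml is being picked up at all, is the JNDI name: Tomcat binds container-managed resources under java:comp/env/, so the persistence unit usually has to reference java:comp/env/Sale in <non-jta-data-source> (with a matching <resource-ref> named Sale declared in web.xml) rather than the bare name Sale. A quick way to see what is actually bound is to run the lookup yourself from inside the web application; the sketch below is only a diagnostic and takes the resource name "Sale" from the question.

        import java.sql.Connection;
        import java.sql.SQLException;

        import javax.naming.InitialContext;
        import javax.naming.NamingException;
        import javax.sql.DataSource;

        public class DataSourceProbe {

            // Must run inside the web app (e.g. from a servlet); outside the
            // container there is no java:comp/env namespace to look into.
            public static void probe() throws NamingException, SQLException {
                // Tomcat exposes <Resource name="Sale" .../> from META-INF/context.xml
                // under java:comp/env, so the full JNDI name is java:comp/env/Sale.
                DataSource ds = (DataSource) new InitialContext().lookup("java:comp/env/Sale");

                // If this lookup succeeds, the same name should work for the
                // JPA provider's <non-jta-data-source> setting.
                Connection con = ds.getConnection();
                System.out.println("Connected to: " + con.getMetaData().getURL());
                con.close();
            }
        }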

    Read the article

  • Spring MVC Project's .war file not able to deploy using JRE in jetty server

    - by PDKumar
    IDE: STS(Eclipse) Server: Jetty-distribution-8.1.15.v20140411 I have created a SpringsMVC Project using Template available in STS tool(New-Springs Project- Springs MVC Project). I generated a war file(SpringsMVC.war) and placed it in /webapps folder of Jetty server. Now I started Jetty using JRE's 'java' , D:\jetty-distribution-8.1.15.v20140411"C:\Program Files (x86)\Java\jre7\bin\java" -jar start.jar Now when I tried to access my application in browser, it shows the below error; HTTP ERROR 500 Problem accessing /SpringMVCProject1/. Reason: Server Error Caused by: org.apache.jasper.JasperException: PWC6345: There is an error in invoking javac. A full JDK (not just JRE) is required at org.apache.jasper.compiler.DefaultErrorHandler.jspError(DefaultErrorHandler.java:92) at org.apache.jasper.compiler.ErrorDispatcher.dispatch(ErrorDispatcher.java:378) at org.apache.jasper.compiler.ErrorDispatcher.jspError(ErrorDispatcher.java:119) at org.apache.jasper.compiler.Jsr199JavaCompiler.compile(Jsr199JavaCompiler.java:208) at org.apache.jasper.compiler.Compiler.generateClass(Compiler.java:384) at org.apache.jasper.compiler.Compiler.compile(Compiler.java:453) at org.apache.jasper.JspCompilationContext.compile(JspCompilationContext.java:625) at org.apache.jasper.servlet.JspServletWrapper.service(JspServletWrapper.java:374) at org.apache.jasper.servlet.JspServlet.serviceJspFile(JspServlet.java:492) at org.apache.jasper.servlet.JspServlet.service(JspServlet.java:378) at javax.servlet.http.HttpServlet.service(HttpServlet.java:848) at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:684) at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:503) at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137) at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:575) at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:231) at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1086) at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:429) at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193) at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1020) at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135) at org.eclipse.jetty.server.Dispatcher.forward(Dispatcher.java:276) at org.eclipse.jetty.server.Dispatcher.forward(Dispatcher.java:103) at org.springframework.web.servlet.view.InternalResourceView.renderMergedOutputModel(InternalResourceView.java:238) at org.springframework.web.servlet.view.AbstractView.render(AbstractView.java:262) at org.springframework.web.servlet.DispatcherServlet.render(DispatcherServlet.java:1180) at org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:950) at org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:852) at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:882) at org.springframework.web.servlet.FrameworkServlet.doGet(FrameworkServlet.java:778) at javax.servlet.http.HttpServlet.service(HttpServlet.java:735) at javax.servlet.http.HttpServlet.service(HttpServlet.java:848) at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:684) at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:503) at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137) at 
org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:533) at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:231) at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1086) at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:429) at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193) at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1020) at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135) at org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:255) at org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:154) at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116) at org.eclipse.jetty.server.Server.handle(Server.java:370) at org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:494) at org.eclipse.jetty.server.AbstractHttpConnection.headerComplete(AbstractHttpConnection.java:971) at org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.headerComplete(AbstractHttpConnection.java:1033) at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:644) at org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:235) at org.eclipse.jetty.server.AsyncHttpConnection.handle(AsyncHttpConnection.java:82) at org.eclipse.jetty.io.nio.SelectChannelEndPoint.handle(SelectChannelEndPoint.java:696) at org.eclipse.jetty.io.nio.SelectChannelEndPoint$1.run(SelectChannelEndPoint.java:53) at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:608) at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:543) at java.lang.Thread.run(Unknown Source) But if I use JDK's java, war file gets deployed and output displayed perfectly. D:\jetty-distribution-8.1.15.v20140411"C:\Program Files (x86)\Java\jdk1.7.0_55\bin\java" -jar start.jar Hello world! The time on the server is August 20, 2014 3:42:53 PM IST. Please tell is it not possible to use JRE to execute a "SpringsMVCProject"?
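
    The stack trace already names the cause: Jasper compiles JSP pages to Java source at runtime and needs a real compiler, which a plain JRE does not ship, so only the JDK-started instance can serve the Spring MVC views. A small diagnostic, assuming nothing about Jetty itself, is to ask the javax.tools API for the system compiler; under a JRE the call simply returns null. The practical options are the one already shown (start Jetty with a JDK) or precompiling the JSPs at build time.

        import javax.tools.JavaCompiler;
        import javax.tools.ToolProvider;

        public class CompilerCheck {

            public static void main(String[] args) {
                // Jasper needs this compiler to turn the generated .java files into classes.
                JavaCompiler compiler = ToolProvider.getSystemJavaCompiler();
                if (compiler == null) {
                    System.out.println("No system compiler: running on a plain JRE, "
                            + "so JSP compilation (PWC6345) will fail.");
                } else {
                    System.out.println("JDK detected: JSPs can be compiled at runtime.");
                }
                System.out.println("java.home = " + System.getProperty("java.home"));
            }
        }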

    Read the article

  • FolderClosed Exception in Javamail

    - by SikhWarrior
    Im trying to create a simple mail client in android, and I have the android version of javamail compiling and running in my app. However, whenever I try to connect and receive mail, I get a Folder Closed exception seen below. 10-23 12:12:13.484: W/System.err(6660): javax.mail.FolderClosedException 10-23 12:12:13.484: W/System.err(6660): at com.sun.mail.imap.IMAPMessage.getProtocol(IMAPMessage.java:149) 10-23 12:12:13.484: W/System.err(6660): at com.sun.mail.imap.IMAPMessage.loadBODYSTRUCTURE(IMAPMessage.java:1262) 10-23 12:12:13.484: W/System.err(6660): at com.sun.mail.imap.IMAPMessage.getDataHandler(IMAPMessage.java:616) 10-23 12:12:13.484: W/System.err(6660): at javax.mail.internet.MimeMessage.getContent(MimeMessage.java:1398) 10-23 12:12:13.484: W/System.err(6660): at com.teamzeta.sfu.Util.MailHelper.getMessageHTML(MailHelper.java:60) 10-23 12:12:13.484: W/System.err(6660): at com.teamzeta.sfu.GetAsyncEmails.onPostExecute(EmailActivity.java:31) 10-23 12:12:13.484: W/System.err(6660): at com.teamzeta.sfu.GetAsyncEmails.onPostExecute(EmailActivity.java:1) 10-23 12:12:13.484: W/System.err(6660): at android.os.AsyncTask.finish(AsyncTask.java:631) 10-23 12:12:13.484: W/System.err(6660): at android.os.AsyncTask.access$600(AsyncTask.java:177) 10-23 12:12:13.484: W/System.err(6660): at android.os.AsyncTask$InternalHandler.handleMessage(AsyncTask.java:644) 10-23 12:12:13.484: W/System.err(6660): at android.os.Handler.dispatchMessage(Handler.java:99) 10-23 12:12:13.484: W/System.err(6660): at android.os.Looper.loop(Looper.java:137) 10-23 12:12:13.484: W/System.err(6660): at android.app.ActivityThread.main(ActivityThread.java:5227) 10-23 12:12:13.484: W/System.err(6660): at java.lang.reflect.Method.invokeNative(Native Method) 10-23 12:12:13.484: W/System.err(6660): at java.lang.reflect.Method.invoke(Method.java:511) 10-23 12:12:13.484: W/System.err(6660): at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:795) 10-23 12:12:13.484: W/System.err(6660): at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:562) 10-23 12:12:13.494: W/System.err(6660): at dalvik.system.NativeStart.main(Native Method) My code is as follows: public static Message[] getAllMail(String user, String pwd){ String host = "imap.sfu.ca"; final Message[] NO_MESSAGES = {}; Properties properties = System.getProperties(); properties.setProperty("mail.imap.socketFactory.class", "javax.net.ssl.SSLSocketFactory"); properties.setProperty("mail.imap.socketFactory.port", "993"); Session session = Session.getDefaultInstance(properties); try { Store store = session.getStore("imap"); store.connect(host, user, pwd); Folder folder = store.getFolder("inbox"); folder.open(Folder.READ_ONLY); Message[] messages = folder.getMessages(); folder.close(true); store.close(); Log.d("####TEAM ZETA DEBUG####", "Content: " + messages.length); return messages; } catch (NoSuchProviderException e) { // TODO Auto-generated catch block e.printStackTrace(); } catch (MessagingException e) { // TODO Auto-generated catch block e.printStackTrace(); } Log.d("####TEAM ZETA DEBUG####", "Returning NO_MESSAGES"); return NO_MESSAGES; } public static String getMessageHTML(Message message){ Object msgContent; try { msgContent = message.getContent(); if (msgContent instanceof Multipart) { Multipart mp = (Multipart) msgContent; for (int i = 0; i < mp.getCount(); i++) { BodyPart bp = mp.getBodyPart(i); if (Pattern .compile(Pattern.quote("text/html"), Pattern.CASE_INSENSITIVE) .matcher(bp.getContentType()).find()) { // found html part return 
(String) bp.getContent(); } else { // some other bodypart... } } } catch (IOException e) { // TODO Auto-generated catch block e.printStackTrace(); } catch (MessagingException e) { // TODO Auto-generated catch block e.printStackTrace(); } return "Something went wrong"; } I couldn't find anything helpful on the web; does anyone have any idea why this is happening? This is called in: class GetAsyncEmails extends AsyncTask<String, Integer, Message[]>{ @Override protected Message[] doInBackground(String... args) { // TODO Auto-generated method stub Message[] messages = MailHelper.getAllMail(args[0], args[1]); return messages; } protected void onPostExecute(Message[] result){ if(result.length > 1){ Message message = result[0]; String content = MailHelper.getMessageHTML(message); System.out.println("####TEAM ZETA DEBUG####" + content); } } }
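
    The exception fits how IMAP messages are fetched: getAllMail closes the folder and the store before anything reads a message body, but an IMAPMessage only loads its BODYSTRUCTURE and content lazily, on the first getContent() call, which here happens later in onPostExecute when the folder is already closed. A minimal sketch of the usual workaround, assuming the host and folder names from the question, is to read the content while the folder is still open and return plain strings instead of Message objects:

        import java.util.ArrayList;
        import java.util.List;
        import java.util.Properties;

        import javax.mail.Folder;
        import javax.mail.Message;
        import javax.mail.Session;
        import javax.mail.Store;

        public class MailHelperSketch {

            /** Reads message content while the folder is still open. */
            public static List<String> getAllMailContent(String user, String pwd) throws Exception {
                Properties props = System.getProperties();
                props.setProperty("mail.imap.socketFactory.class", "javax.net.ssl.SSLSocketFactory");
                props.setProperty("mail.imap.socketFactory.port", "993");

                Session session = Session.getInstance(props);
                Store store = session.getStore("imap");
                store.connect("imap.sfu.ca", user, pwd);

                Folder inbox = store.getFolder("inbox");
                inbox.open(Folder.READ_ONLY);

                List<String> bodies = new ArrayList<String>();
                try {
                    for (Message message : inbox.getMessages()) {
                        // getContent() talks to the server, so it has to run here,
                        // before the folder and its IMAP connection are closed.
                        Object content = message.getContent();
                        // Multipart mail would be walked here as in getMessageHTML;
                        // the key point is that it happens before close().
                        bodies.add(String.valueOf(content));
                    }
                } finally {
                    inbox.close(false); // false: do not expunge
                    store.close();
                }
                return bodies;
            }
        }

    Alternatively, the folder and store could simply be kept open for the lifetime of the AsyncTask and closed only after onPostExecute has rendered the content.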

    Read the article

  • Saving child collections with NHibernate

    - by Ben
    Hi, I am in the process or learning NHibernate so bare with me. I have an Order class and a Transaction class. Order has a one to many association with transaction. The transaction table in my database has a not null constraint on the OrderId foreign key. Order class: public class Order { public virtual Guid Id { get; set; } public virtual DateTime CreatedOn { get; set; } public virtual decimal Total { get; set; } public virtual ICollection<Transaction> Transactions { get; set; } public Order() { Transactions = new HashSet<Transaction>(); } } Order Mapping: <class name="Order" table="Orders"> <cache usage="read-write"/> <id name="Id"> <generator class="guid"/> </id> <property name="CreatedOn" type="datetime"/> <property name="Total" type="decimal"/> <set name="Transactions" table="Transactions" lazy="false" inverse="true"> <key column="OrderId"/> <one-to-many class="Transaction"/> </set> Transaction Class: public class Transaction { public virtual Guid Id { get; set; } public virtual DateTime ExecutedOn { get; set; } public virtual bool Success { get; set; } public virtual Order Order { get; set; } } Transaction Mapping: <class name="Transaction" table="Transactions"> <cache usage="read-write"/> <id name="Id" column="Id" type="Guid"> <generator class="guid"/> </id> <property name="ExecutedOn" type="datetime"/> <property name="Success" type="bool"/> <many-to-one name="Order" class="Order" column="OrderId" not-null="true"/> Really I don't want a bidirectional association. There is no need for my transaction objects to reference their order object directly (I just need to access the transactions of an order). However, I had to add this so that Order.Transactions is persisted to the database: Repository: public void Update(Order entity) { using (ISession session = NHibernateHelper.OpenSession()) { using (ITransaction transaction = session.BeginTransaction()) { session.Update(entity); foreach (var tx in entity.Transactions) { tx.Order = entity; session.SaveOrUpdate(tx); } transaction.Commit(); } } } My problem is that this will then issue an update for every transaction on the order collection (regardless of whether it has changed or not). What I was trying to get around was having to explicitly save the transaction before saving the order and instead just add the transactions to the order and then save the order: public void Can_add_transaction_to_existing_order() { var orderRepo = new OrderRepository(); var order = orderRepo.GetById(new Guid("aa3b5d04-c5c8-4ad9-9b3e-9ce73e488a9f")); Transaction tx = new Transaction(); tx.ExecutedOn = DateTime.Now; tx.Success = true; order.Transactions.Add(tx); orderRepo.Update(order); } Although I have found quite a few articles covering the set up of a one-to-many association, most of these discuss retrieving of data and not persisting back. Many thanks, Ben

    Read the article

  • (PHP) User is being forced to RE-LOGIN after trying to do something on an admin page

    - by hatorade
    I have created an admin panel for a client in PHP, which requires a login. Here is the code at the top of the admin page requiring the user to be logged in: admin.php <?php session_start(); require("_lib/session_functions.php"); require("_lib/db.php"); db_connect(); //if the user has not logged in if(!isLoggedIn()) { header('Location: login_form.php'); die(); } ?> Obviously, the if statement is what catches them and forces them to log in. Here is the code on the resulting login page: login_form.php <form name="login" action="login.php" method="post"> Username: <input type="text" name="username" /> Password: <input type="password" name="password" /> <input type="submit" value="Login" /> </form> Which posts info to this controller page: login.php <?php session_start(); //must call session_start before using any $_SESSION variables include '_lib/session_functions.php'; $username = $_POST['username']; $password = $_POST['password']; include '_lib/db.php'; db_connect(); // Connect to the DB $username = mysql_real_escape_string($username); $query = "SELECT password, salt FROM users WHERE username = '$username';"; $result = mysql_query($query); if(mysql_num_rows($result) < 1) //no such user exists { header('Location: login_form.php?login=fail'); die(); } $userData = mysql_fetch_array($result, MYSQL_ASSOC); db_disconnect(); $hash = hash('sha256', $password . $userData['salt']); if($hash != $userData['password']) //incorrect password { header('Location: login_form.php?login=fail'); die(); } else { validateUser(); //sets the session data for this user } header('Location: admin.php'); ?> and the session functions page that provides login functions contains this: session_functions.php <?php function validateUser() { session_regenerate_id (); //this is a security measure $_SESSION['valid'] = 1; $_SESSION['userid'] = $username; } function isLoggedIn() { if($_SESSION['valid']) return true; return false; } function logout() { $_SESSION = array(); //destroy all of the session variables if (ini_get("session.use_cookies")) { $params = session_get_cookie_params(); setcookie(session_name(), '', time() - 42000, $params["path"], $params["domain"], $params["secure"], $params["httponly"] ); } session_destroy(); } ?> I grabbed the sessions_functions.php code of an online tutorial, so it could be suspicious. Any ideas why the user logs in to the admin panel, tries to do something, is forced to re-login, and THEN is allowed to do stuff like normal in the admin panel?

    Read the article

  • Is this a right way to use NHibernate?

    - by Venemo
    I spent the rest of the evening reading StackOverflow questions and also some blog entries and links about the subject. All of them turned out to be very helpful, but I still feel that they don't really answer my question. So, I'm developing a simple web application. I'd like to create a reusable data access layer which I can later reuse in other solutions. 99% of these will be web applications. This seems to be a good excuse for me to learn NHibernate and some of the patterns around it. My goals are the following: I don't want the business logic layer to know ANYTHING about the inner workings of the database, nor NHibernate itself. I want the business logic layer to have the least possible number of assumptions about the data access layer. I want the data access layer as simplistic and easy-to-use as possible. This is going to be a simple project, so I don't want to overcomplicate anything. I want the data access layer to be as non-intrusive as possible. Will all this in mind, I decided to use the popular repository pattern. I read about this subject on this site and on various dev blogs, and I heard some stuff about the unit of work pattern. I also looked around and checked out various implementations. (Including FubuMVC contrib, and SharpArchitecture, and stuff on some blogs.) I found out that most of these operate with the same principle: They create a "unit of work" which is instantiated when a repository is instantiated, they start a transaction, do stuff, and commit, and then start all over again. So, only one ISession per Repository and that's it. Then the client code needs to instantiate a repository, do stuff with it, and then dispose. This usage pattern doesn't meet my need of being as simplistic as possible, so I began thinking about something else. I found out that NHibernate already has something which makes custom "unit of work" implementations unnecessary, and that is the CurrentSessionContext class. If I configure the session context correctly, and do the clean up when necessary, I'm good to go. So, I came up with this: I have a static class called NHibernateHelper. Firstly, it has a static property called CurrentSessionFactory, which upon first call, instantiates a session factory and stores it in a static field. (One ISessionFactory per one AppDomain is good enough.) Then, more importantly, it has a CurrentSession static property, which checks if there is an ISession bound to the current session context, and if not, creates one, and binds it, and it returns with the ISession bound to the current session context. Because it will be used mostly with WebSessionContext (so, one ISession per HttpRequest, although for the unit tests, I configured ThreadStaticSessionContext), it should work seamlessly. And after creating and binding an ISession, it hooks an event handler to the HttpContext.Current.ApplicationInstance.EndRequest event, which takes care of cleaning up the ISession after the request ends. (Of course, it only does this if it is really running in a web environment.) So, with all this set up, the NHibernateHelper will always be able to return a valid ISession, so there is no need to instantiate a Repository instance for the "unit of work" to operate properly. Instead, the Repository is a static class which operates with the ISession from the NHibernateHelper.CurrentSession property, and exposes some functionality through that. I'm curious, what do you think about this? Is it a valid way of thinking, or am I completely off track here?

    Read the article

  • php, mySQL & AJAX: Unable to use sessions across the scripts in the same domain

    - by Devner
    Hi all, I have the following pages: page1.php, page2.php and page3.php. Code in each of them is as below CODE: page1.php <script type="text/javascript"> $(function(){ $('#imgID').upload({ submit_to_url: "page2.php", file_name: 'myfile1', description : "Image", limit : 1, file_types : "*.jpg", }) }); </script> <body> <form action="page3.php" method="post" enctype="multipart/form-data" name="frm1" id="frm1"> //Some other text fields <input type="submit" name="submit" id="submit" value="Submit" /> </form> </body> page2.php <?php session_start(); $a = $_SESSION['a']; $b = $_SESSION['b']; $c = $_SESSION['c']; $res = mysql_query("SELECT col FROM table WHERE col1 = $a AND col2 = $b AND col3 = $c LIMIT 1"); $num_rows = mysql_num_rows($res); echo $num_rows; //echos 0 when in fact it should have been 1 because the data in the Session exists. //Ok let's proceed further //... Do some stuff... //Store some more values and create new session variables (and assume that page1.php is going to be able to use it) $_SESSION['d'] = 'd'; $_SESSION['e'] = 'e'; $_SESSION['f'] = 'f'; if (move_uploaded_file($_FILES['file']['tmp_name'], $file)) { echo "success"; } else { echo "error ".$_FILES['file']['error']; } ?> page3.php <?php session_start(); if( isset($_POST['submit']) ) { //These sessions are non-existent although the AJAX request //to page2.php may have created them when called via AJAX from within page1.php echo $_SESSION['d'].$_SESSION['e'].$_SESSION['f']; ?> } ?> As the code says it I am posting some info via AJAX call from page1.php to page2.php. page2.php is supposed to be able to use the session values from page1.php i.e. $_SESSION['a'], $_SESSION['b'] and $_SESSION['c'] but it does not. Why? How can I fix this? page2.php is creating some more sessions after some processing is done and a response is sent back to page1.php. The submit button of the form on page1.php is hit and the page gets POST'ed to page3.php. But when the SESSION info that gets created in page2.php is echoed, it's blank signifying that SESSIONS from page2.php are not used. How can I fix this? I looked over a lot of information and have spent about 50 hours trying to do different things with my scripts before arriving at the above conclusions. My app. is custom made using function (not OOPS) and does not use any PHP frameworks & I am not even about to use any as my knowledge of OOP concepts is limited any many frameworks are object oriented. I came across race conditions, but the solutions provided don't help too much. One more solution of using DB to hold sessions and seek and retrieve from DB is the last thing on my mind and I really want to avoid creating table, coding and maintaining code for a task as simple as just keeping sessions across pages in the same domain. So my request is: Is there a way that I can solve the above problem(s) via simple coding in present conditions? Any help is appreciated. Thank you.

    Read the article

  • Creating a second login page that automatically logs in the user

    - by nsilva
    I have a login page as follows: <form action="?" method="post" id="frm-useracc-login" name="frm-useracc-login" > <div id="login-username-wrap" > <div class="login-input-item left"> <div class="div-search-label left"> <div id="div-leftheader-wrap"> <p class="a-topheader-infotext left"><strong>Username: </strong></p> </div> </div> <div class="login-input-content left div-subrow-style ui-corner-all"> <input type="text" tabindex="1" name="txt-username" id="txt-username" class="input-txt-med required addr-search-input txt-username left"> </div> </div> </div> <div id="login-password-wrap" > <div class="login-input-item left"> <div class="div-search-label left"> <div id="div-leftheader-wrap"> <p class="a-topheader-infotext left"><strong>Password: </strong></p> </div> </div> <div class="login-input-content left div-subrow-style ui-corner-all"> <input type="password" tabindex="1" name="txt-password" id="txt-password" class="input-txt-med required addr-search-input txt-password left"> </div> </div> </div> <div id="login-btn-bottom" class="centre-div"> <div id="login-btn-right"> <button name="btn-login" id="btn-login" class="btn-med ui-button ui-state-default ui-button-text-only ui-corner-all btn-hover-anim btn-row-wrapper left">Login</button> <button name="btn-cancel" id="btn-cancel" class="btn-med ui-button ui-state-default ui-button-text-only ui-corner-all btn-hover-anim btn-row-wrapper left">Cancel</button><br /><br /> </div> </div> </form> And here my session.controller.php file: Click Here Basically, what I want to do is create a second login page that automatically passes the value to the session controller and logs in. For example, if I go to login-guest.php, I would put the default values for username and password and then have a jquery click event that automatically logs them in using $("#btn-login").trigger('click'); The problem is that the session controller automatically goes back to login.php if the session has timed out and I'm not sure how I could go about achieving this. Any help would be much appreciated!

    Read the article

  • MVC Persist Collection ViewModel (Update, Delete, Insert)

    - by Riccardo Bassilichi
    In order to create a more elegant solution I'm curios to know your suggestion about a solution to persist a collection. I've a collection stored on DB. This collection go to a webpage in a viewmodel. When the go back from the webpage to the controller I need to persist the modified collection to the same DB. The simple solution is to delete the stored collection and recreate all rows. I need a more elegant solution to mix the collections and delete not present record, update similar records ad insert new rows. this is my Models and ViewModels. public class CustomerModel { public virtual string Id { get; set; } public virtual string Name { get; set; } public virtual IList<PreferredAirportModel> PreferedAirports { get; set; } } public class AirportModel { public virtual string Id { get; set; } public virtual string AirportName { get; set; } } public class PreferredAirportModel { public virtual AirportModel Airport { get; set; } public virtual int CheckInMinutes { get; set; } } // ViewModels public class CustomerViewModel { [Required] public virtual string Id { get; set; } public virtual string Name { get; set; } public virtual IList<PreferredAirporViewtModel> PreferedAirports { get; set; } } public class PreferredAirporViewtModel { [Required] public virtual string AirportId { get; set; } [Required] public virtual int CheckInMinutes { get; set; } } And this is the controller with not elegant solution. public class CustomerController { public ActionResult Save(string id, CustomerViewModel viewModel) { var session = SessionFactory.CurrentSession; var customer = session.Query<CustomerModel>().SingleOrDefault(el => el.Id == id); customer.Name = viewModel.Name; // How cai I Merge collections handling delete, update and inserts ? var modifiedPreferedAirports = new List<PreferredAirportModel>(); var modifiedPreferedAirportsVm = new List<PreferredAirporViewtModel>(); // Update every common Airport foreach (var airport in viewModel.PreferedAirports) { foreach (var custPa in customer.PreferedAirports) { if (custPa.Airport.Id == airport.AirportId) { modifiedPreferedAirports.Add(custPa); modifiedPreferedAirportsVm.Add(airport); custPa.CheckInMinutes = airport.CheckInMinutes; } } } // Remove common airports from ViewModel modifiedPreferedAirportsVm.ForEach(el => viewModel.PreferedAirports.Remove(el)); // Remove deleted airports from model var toDelete = customer.PreferedAirports.Except(modifiedPreferedAirports); toDelete.ForEach(el => customer.PreferedAirports.Remove(el)); // Add new Airports var toAdd = viewModel.PreferedAirports.Select(el => new PreferredAirportModel { Airport = session.Query<AirportModel>(). SingleOrDefault(a => a.Id == el.AirportId), CheckInMinutes = el.CheckInMinutes }); toAdd.ForEach(el => customer.PreferedAirports.Add(el)); session.Save(customer); return View(); } } My environment is ASP.NET MVC 4, nHibernate, Automapper, SQL Server. Thank You!!

    Read the article

  • About global.asax and the events there

    - by eski
    So what i'm trying to understand is the whole global.asax events. I doing a simple counter that records website visits. I am using MSSQL. Basicly i have two ints. totalNumberOfUsers - The total visist from begining. currentNumberOfUsers - Total of users viewing the site at the moment. So the way i understand global.asax events is that every time someone comes to the site "Session_Start" is fired once. So once per user. "Application_Start" is fired only once the first time someone comes to the site. Going with this i have my global.asax file here. <script runat="server"> string connectionstring = ConfigurationManager.ConnectionStrings["ConnectionString1"].ConnectionString; void Application_Start(object sender, EventArgs e) { // Code that runs on application startup Application.Lock(); Application["currentNumberOfUsers"] = 0; Application.UnLock(); string sql = "Select c_hit from v_counter where (id=1)"; SqlConnection connect = new SqlConnection(connectionstring); SqlCommand cmd = new SqlCommand(sql, connect); cmd.Connection.Open(); cmd.ExecuteNonQuery(); SqlDataReader reader = cmd.ExecuteReader(); while (reader.Read()) { Application.Lock(); Application["totalNumberOfUsers"] = reader.GetInt32(0); Application.UnLock(); } reader.Close(); cmd.Connection.Close(); } void Application_End(object sender, EventArgs e) { // Code that runs on application shutdown } void Application_Error(object sender, EventArgs e) { // Code that runs when an unhandled error occurs } void Session_Start(object sender, EventArgs e) { // Code that runs when a new session is started Application.Lock(); Application["totalNumberOfUsers"] = (int)Application["totalNumberOfUsers"] + 1; Application["currentNumberOfUsers"] = (int)Application["currentNumberOfUsers"] + 1; Application.UnLock(); string sql = "UPDATE v_counter SET c_hit = @hit WHERE c_type = 'totalNumberOfUsers'"; SqlConnection connect = new SqlConnection(connectionstring); SqlCommand cmd = new SqlCommand(sql, connect); SqlParameter hit = new SqlParameter("@hit", SqlDbType.Int); hit.Value = Application["totalNumberOfUsers"]; cmd.Parameters.Add(hit); cmd.Connection.Open(); cmd.ExecuteNonQuery(); cmd.Connection.Close(); } void Session_End(object sender, EventArgs e) { // Code that runs when a session ends. // Note: The Session_End event is raised only when the sessionstate mode // is set to InProc in the Web.config file. If session mode is set to StateServer // or SQLServer, the event is not raised. Application.Lock(); Application["currentNumberOfUsers"] = (int)Application["currentNumberOfUsers"] - 1; Application.UnLock(); } </script> In the page_load i have this protected void Page_Load(object sender, EventArgs e) { l_current.Text = Application["currentNumberOfUsers"].ToString(); l_total.Text = Application["totalNumberOfUsers"].ToString(); } So if i understand this right, every time someone comes to the site both the currentNumberOfUsers and totalNumberOfUsers are incremented with 1. But when the session is over the currentNumberOfUsers is decremented with 1. If i go to the site with 3 types of browsers with the same computer i should have 3 in hits on both counters. Doing this again after hours i should have 3 in current and 6 in total, right ? The way its working right now is the current goes up to 2 and the total is incremented on every postback on IE and Chrome but not on firefox. And one last thing, is this the same thing ? Application["value"] = 0; value = Application["value"] //OR Application.Set("Value", 0); Value = Application.Get("Value");

    Read the article

  • Hibernate - strange order of native SQL parameters

    - by Xorty
    Hello, I am trying to use native MySQL's MD5 crypto func, so I defined custom insert in my mapping file. <hibernate-mapping package="tutorial"> <class name="com.xorty.mailclient.client.domain.User" table="user"> <id name="login" type="string" column="login"></id> <property name="password"> <column name="password" /> </property> <sql-insert>INSERT INTO user (login,password) VALUES ( ?, MD5(?) )</sql-insert> </class> </hibernate-mapping> Then I create User (pretty simple POJO with just 2 Strings - login and password) and try to persist it. session.beginTransaction(); // we have no such user in here yet User junitUser = (User) session.load(User.class, "junit_user"); assert (null == junitUser); // insert new user junitUser = new User(); junitUser.setLogin("junit_user"); junitUser.setPassword("junitpass"); session.save(junitUser); session.getTransaction().commit(); What actually happens? User is created, but with reversed parameters order. He has login "junitpass" and "junit_user" is MD5 encrypted and stored as password. What did I wrong? Thanks EDIT: ADDING POJO class package com.xorty.mailclient.client.domain; import java.io.Serializable; /** * POJO class representing user. * @author MisoV * @version 0.1 */ public class User implements Serializable { /** * Generated UID */ private static final long serialVersionUID = -969127095912324468L; private String login; private String password; /** * @return login */ public String getLogin() { return login; } /** * @return password */ public String getPassword() { return password; } /** * @param login the login to set */ public void setLogin(String login) { this.login = login; } /** * @param password the password to set */ public void setPassword(String password) { this.password = password; } /** * @see java.lang.Object#toString() * @return login */ @Override public String toString() { return login; } /** * Creates new User. * @param login User's login. * @param password User's password. */ public User(String login, String password) { setLogin(login); setPassword(password); } /** * Default constructor */ public User() { } /** * @return hashCode * @see java.lang.Object#hashCode() */ @Override public int hashCode() { final int prime = 31; int result = 1; result = prime * result + ((null == login) ? 0 : login.hashCode()); result = prime * result + ((null == password) ? 0 : password.hashCode()); return result; } /** * @param obj Compared object * @return True, if objects are same. Else false. * @see java.lang.Object#equals(java.lang.Object) */ @Override public boolean equals(Object obj) { if (this == obj) { return true; } if (obj == null) { return false; } if (!(obj instanceof User)) { return false; } User other = (User) obj; if (login == null) { if (other.login != null) { return false; } } else if (!login.equals(other.login)) { return false; } if (password == null) { if (other.password != null) { return false; } } else if (!password.equals(other.password)) { return false; } return true; } }
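
    The observed swap (the password value landing in the first placeholder and the login in the second) is consistent with how Hibernate binds parameters for custom <sql-insert> statements: it uses its own expected order — mapped properties first, the identifier last — rather than the column order written in the SQL, so this mapping would need INSERT INTO user (password, login) VALUES (MD5(?), ?). An alternative that avoids depending on that ordering at all is to drop the custom SQL and hash the password in Java before it is persisted; note this swaps the technique (MD5 computed in the JVM instead of by MySQL), and the helper below is only a sketch.

        import java.math.BigInteger;
        import java.security.MessageDigest;

        /** Hypothetical helper: hash the password in Java instead of in the INSERT statement. */
        public final class Md5 {

            private Md5() {
            }

            public static String hash(String clearText) {
                try {
                    MessageDigest md = MessageDigest.getInstance("MD5");
                    byte[] digest = md.digest(clearText.getBytes("UTF-8"));
                    // Zero-pad to 32 hex characters so it matches MySQL's MD5() output.
                    return String.format("%032x", new BigInteger(1, digest));
                } catch (Exception e) {
                    throw new IllegalStateException("MD5 hashing failed", e);
                }
            }
        }

        // Usage with the default Hibernate-generated INSERT (no <sql-insert> needed):
        // User junitUser = new User("junit_user", Md5.hash("junitpass"));
        // session.save(junitUser);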

    Read the article

  • C#: Parallel forms, multithreading and "applications in application"

    - by Harry
    First, what I need is - n WebBrowser-s, each in its own window doing its own job. The user should be able to see them all, or just one of them (or none), and to execute commands on each one. There is a main form, without a browser, this one contains control panel for my application. The key feautre is, each browser logs on to secured web page and it needs to stay logged in as long as possible. Well, I've done it, but I'm afraid something is wrong with my approach. The question is: Is code below valid, or rather a nasty hack which can cause problems: internal class SessionList : List<Session> { public SessionList(Server main) { MyRecords.ForEach(record => { var st = new System.Threading.Thread((data) => { var s = new Session(main, data as MyRecord); this.Add(s); Application.Run(s); Application.ExitThread(); }); st.SetApartmentState(System.Threading.ApartmentState.STA); st.Start(record); }); } // some other uninteresting methods here... } What's going on here? Session inherits from Form, so it creates a form, puts WebBrowser into it, and has methods to operate on websites. WebBrowser requires to be run in STA thread, so we provide one for each browser. The most interesting part of it is Application.Run(s). It makes the newly created forms alive and interactive. The next Application.ExitThread() is called after browser window is closed and its controls disposed. Main application stays alive to perform the rest of the cleanup job. When user select "Exit" or "Shutdown" option - first the browser threads are ended, so Application.ExitThread() is called. It all works, but everywhere I can read about "main GUI thread" - and here - I've created many GUI threads. I handle communication between main form and my new forms (sessions) with thread-safe methods using Invoke(). It all works, so is it right or is it wrong? Is everything right with using Application.Run() more than once in one application? :) An ugly hack or a normal practice? This code dies if I start a WebBrowser from the session form thread. It beats me why. It works however if I start WebBrowser (by changing its Url property) from any other thread. I'd like to know more what is really happening in such application. But most of all - I'd like to know if my idea of "applications in application" is OK. I'm not sure what exactly does Application.Run() do. Without it forms created in new threads were dead unresponsive. How is it possible I can call Application.Run() many times? It seems to do exactly what it should, but it seems a little undocumented feature to me. I'm almost sure, that the crashes are caused by WebBrowser component itself (since it's not completely "managed" and "native"). But maybe it's something else.

    Read the article

  • method is not called from xhtml

    - by Amlan Karmakar
    Whenever I am clicking the h:commandButton,the method associated with the action is not called.action="${statusBean.update}" is not working, the update is not being called. 1) Here is my xhtml page <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml" xmlns:ui="http://java.sun.com/jsf/facelets" xmlns:h="http://java.sun.com/jsf/html" xmlns:f="http://java.sun.com/jsf/core" xmlns:p="http://primefaces.org/ui"> <h:head></h:head> <h:body> <h:form > <p:dataList value="#{statusBean.statusList}" var="p"> <h:outputText value="#{p.statusId}-#{p.statusmsg}"/><br/> <p:inputText value="#{statusBean.comment.comment}"/> <h:commandButton value="comment" action="${statusBean.update}"></h:commandButton> </p:dataList> </h:form> </h:body> </html> 2)Here is my statusBean package com.bean; import java.util.List; import javax.faces.context.FacesContext; import javax.persistence.EntityManager; import javax.persistence.EntityManagerFactory; import javax.persistence.Persistence; import javax.persistence.Query; import javax.servlet.http.HttpSession; import com.entity.Album; import com.entity.Comment; import com.entity.Status; import com.entity.User; public class StatusBean { Comment comment; Status status; private EntityManager em; public Comment getComment() { return comment; } public void setComment(Comment comment) { this.comment = comment; } public Status getStatus() { return status; } public void setStatus(Status status) { this.status = status; } public StatusBean(){ comment = new Comment(); status=new Status(); EntityManagerFactory emf=Persistence.createEntityManagerFactory("FreeBird"); em =emf.createEntityManager(); } public String save(){ FacesContext context = FacesContext.getCurrentInstance(); HttpSession session = (HttpSession) context.getExternalContext().getSession(true); User user = (User) session.getAttribute("userdet"); status.setEmail(user.getEmail()); System.out.println("status save called"); em.getTransaction().begin(); em.persist(status); em.getTransaction().commit(); return "success"; } public List<Status> getStatusList(){ FacesContext context = FacesContext.getCurrentInstance(); HttpSession session = (HttpSession) context.getExternalContext().getSession(true); User user=(User) session.getAttribute("userdet"); Query query = em.createQuery("SELECT s FROM Status s WHERE s.email='"+user.getEmail()+"'", Status.class); List<Status> results =query.getResultList(); return results; } public String update(){ System.out.println("Update Called..."); //comment.setStatusId(Integer.parseInt(statusId)); em.getTransaction().begin(); em.persist(comment); em.getTransaction().commit(); return "success"; } }

    Read the article

  • JPA : optimize EJB-QL query involving large many-to-many join table

    - by Fabien
    Hi all. I'm using Hibernate Entity Manager 3.4.0.GA with Spring 2.5.6 and MySql 5.1. I have a use case where an entity called Artifact has a reflexive many-to-many relation with itself, and the join table is quite large (1 million lines). As a result, the HQL query performed by one of the methods in my DAO takes a long time. Any advice on how to optimize this and still use HQL ? Or do I have no choice but to switch to a native SQL query that would perform a join between the table ARTIFACT and the join table ARTIFACT_DEPENDENCIES ? Here is the problematic query performed in the DAO : @SuppressWarnings("unchecked") public List<Artifact> findDependentArtifacts(Artifact artifact) { Query query = em.createQuery("select a from Artifact a where :artifact in elements(a.dependencies)"); query.setParameter("artifact", artifact); List<Artifact> list = query.getResultList(); return list; } And the code for the Artifact entity : package com.acme.dependencytool.persistence.model; import java.util.ArrayList; import java.util.List; import javax.persistence.CascadeType; import javax.persistence.Column; import javax.persistence.Entity; import javax.persistence.FetchType; import javax.persistence.GeneratedValue; import javax.persistence.Id; import javax.persistence.JoinColumn; import javax.persistence.JoinTable; import javax.persistence.ManyToMany; import javax.persistence.Table; import javax.persistence.UniqueConstraint; @Entity @Table(name = "ARTIFACT", uniqueConstraints={@UniqueConstraint(columnNames={"GROUP_ID", "ARTIFACT_ID", "VERSION"})}) public class Artifact { @Id @GeneratedValue @Column(name = "ID") private Long id = null; @Column(name = "GROUP_ID", length = 255, nullable = false) private String groupId; @Column(name = "ARTIFACT_ID", length = 255, nullable = false) private String artifactId; @Column(name = "VERSION", length = 255, nullable = false) private String version; @ManyToMany(cascade=CascadeType.ALL, fetch=FetchType.EAGER) @JoinTable( name="ARTIFACT_DEPENDENCIES", joinColumns = @JoinColumn(name="ARTIFACT_ID", referencedColumnName="ID"), inverseJoinColumns = @JoinColumn(name="DEPENDENCY_ID", referencedColumnName="ID") ) private List<Artifact> dependencies = new ArrayList<Artifact>(); public Long getId() { return id; } public void setId(Long id) { this.id = id; } public String getGroupId() { return groupId; } public void setGroupId(String groupId) { this.groupId = groupId; } public String getArtifactId() { return artifactId; } public void setArtifactId(String artifactId) { this.artifactId = artifactId; } public String getVersion() { return version; } public void setVersion(String version) { this.version = version; } public List<Artifact> getDependencies() { return dependencies; } public void setDependencies(List<Artifact> dependencies) { this.dependencies = dependencies; } } Thanks in advance. EDIT 1 : The DDLs are generated automatically by Hibernate EntityMananger based on the JPA annotations in the Artifact entity. I have no explicit control on the automaticaly-generated join table, and the JPA annotations don't let me explicitly set an index on a column of a table that does not correspond to an actual Entity (in the JPA sense). So I guess the indexing of table ARTIFACT_DEPENDENCIES is left to the DB, MySQL in my case, which apparently uses a composite index based on both clumns but doesn't index the column that is most relevant in my query (DEPENDENCY_ID). 
mysql describe ARTIFACT_DEPENDENCIES; +---------------+------------+------+-----+---------+-------+ | Field | Type | Null | Key | Default | Extra | +---------------+------------+------+-----+---------+-------+ | ARTIFACT_ID | bigint(20) | NO | MUL | NULL | | | DEPENDENCY_ID | bigint(20) | NO | MUL | NULL | | +---------------+------------+------+-----+---------+-------+ EDIT 2 : When turning on showSql in the Hibernate session, I see many occurences of the same type of SQL query, as below : select dependenci0_.ARTIFACT_ID as ARTIFACT1_1_, dependenci0_.DEPENDENCY_ID as DEPENDENCY2_1_, artifact1_.ID as ID1_0_, artifact1_.ARTIFACT_ID as ARTIFACT2_1_0_, artifact1_.GROUP_ID as GROUP3_1_0_, artifact1_.VERSION as VERSION1_0_ from ARTIFACT_DEPENDENCIES dependenci0_ left outer join ARTIFACT artifact1_ on dependenci0_.DEPENDENCY_ID=artifact1_.ID where dependenci0_.ARTIFACT_ID=? Here's what EXPLAIN in MySql says about this type of query : mysql explain select dependenci0_.ARTIFACT_ID as ARTIFACT1_1_, dependenci0_.DEPENDENCY_ID as DEPENDENCY2_1_, artifact1_.ID as ID1_0_, artifact1_.ARTIFACT_ID as ARTIFACT2_1_0_, artifact1_.GROUP_ID as GROUP3_1_0_, artifact1_.VERSION as VERSION1_0_ from ARTIFACT_DEPENDENCIES dependenci0_ left outer join ARTIFACT artifact1_ on dependenci0_.DEPENDENCY_ID=artifact1_.ID where dependenci0_.ARTIFACT_ID=1; +----+-------------+--------------+--------+-------------------+-------------------+---------+---------------------------------------------+------+-------+ | id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra | +----+-------------+--------------+--------+-------------------+-------------------+---------+---------------------------------------------+------+-------+ | 1 | SIMPLE | dependenci0_ | ref | FKEA2DE763364D466 | FKEA2DE763364D466 | 8 | const | 159 | | | 1 | SIMPLE | artifact1_ | eq_ref | PRIMARY | PRIMARY | 8 | dependencytooldb.dependenci0_.DEPENDENCY_ID | 1 | | +----+-------------+--------------+--------+-------------------+-------------------+---------+---------------------------------------------+------+-------+ EDIT 3 : I tried setting the FetchType to LAZY in the JoinTable annotation, but I then get the following exception : Hibernate: select artifact0_.ID as ID1_, artifact0_.ARTIFACT_ID as ARTIFACT2_1_, artifact0_.GROUP_ID as GROUP3_1_, artifact0_.VERSION as VERSION1_ from ARTIFACT artifact0_ where artifact0_.GROUP_ID=? and artifact0_.ARTIFACT_ID=? 
51545 [btpool0-2] ERROR org.hibernate.LazyInitializationException - failed to lazily initialize a collection of role: com.acme.dependencytool.persistence.model.Artifact.dependencies, no session or session was closed org.hibernate.LazyInitializationException: failed to lazily initialize a collection of role: com.acme.dependencytool.persistence.model.Artifact.dependencies, no session or session was closed at org.hibernate.collection.AbstractPersistentCollection.throwLazyInitializationException(AbstractPersistentCollection.java:380) at org.hibernate.collection.AbstractPersistentCollection.throwLazyInitializationExceptionIfNotConnected(AbstractPersistentCollection.java:372) at org.hibernate.collection.AbstractPersistentCollection.readSize(AbstractPersistentCollection.java:119) at org.hibernate.collection.PersistentBag.size(PersistentBag.java:248) at com.acme.dependencytool.server.DependencyToolServiceImpl.createArtifactViewBean(DependencyToolServiceImpl.java:93) at com.acme.dependencytool.server.DependencyToolServiceImpl.createArtifactViewBean(DependencyToolServiceImpl.java:109) at com.acme.dependencytool.server.DependencyToolServiceImpl.search(DependencyToolServiceImpl.java:48) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at com.google.gwt.user.server.rpc.RPC.invokeAndEncodeResponse(RPC.java:527) at com.google.gwt.user.server.rpc.RemoteServiceServlet.processCall(RemoteServiceServlet.java:166) at com.google.gwt.user.server.rpc.RemoteServiceServlet.doPost(RemoteServiceServlet.java:86) at javax.servlet.http.HttpServlet.service(HttpServlet.java:637) at javax.servlet.http.HttpServlet.service(HttpServlet.java:717) at org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:487) at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:362) at org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216) at org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:181) at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:729) at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:405) at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152) at org.mortbay.jetty.handler.RequestLogHandler.handle(RequestLogHandler.java:49) at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152) at org.mortbay.jetty.Server.handle(Server.java:324) at org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:505) at org.mortbay.jetty.HttpConnection$RequestHandler.content(HttpConnection.java:843) at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:647) at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:205) at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:380) at org.mortbay.io.nio.SelectChannelEndPoint.run(SelectChannelEndPoint.java:395) at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:488)
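
    One change that often helps without leaving JPQL is to replace the "in elements(...)" construct, which Hibernate tends to translate into a correlated subquery over ARTIFACT_DEPENDENCIES, with an explicit join over the collection, so the database can drive the lookup from the join table (it is also worth confirming that DEPENDENCY_ID really is indexed, as EDIT 1 discusses). The DAO method below is a sketch of that rewrite against the mapping shown above, not a benchmarked drop-in replacement; it assumes the same em field and imports as the original DAO.

        @SuppressWarnings("unchecked")
        public List<Artifact> findDependentArtifacts(Artifact artifact) {
            // Join over the dependencies collection instead of "in elements(...)",
            // so the query reads ARTIFACT_DEPENDENCIES from the DEPENDENCY_ID side.
            Query query = em.createQuery(
                    "select a from Artifact a join a.dependencies d where d = :artifact");
            query.setParameter("artifact", artifact);
            return query.getResultList();
        }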

    Read the article

  • how to version minder for web application data

    - by dankyy1
    Hi all. I'm developing a web application that renders data from a DB and also updates that data through editor UI pages. I want to implement a versioning mechanism so that the render pages reload data from the DB only when the data has actually been changed by the editor pages. I decided to use Session objects to hold the version information the client most recently took, and an Application object to hold the latest DB version of the objects, using each data object's GUID as the key. The ItemRunnigVersionInformation class holds the current item's GUID and its last load time from the DB. The client-side version holder class is below:

public class ClientVersionManager
{
    public static List<ItemRunnigVersionInformation> SessionItemRunnigVersionInformation
    {
        get
        {
            if (HttpContext.Current.Session["SessionItemRunnigVersionInformation"] == null)
                HttpContext.Current.Session["SessionItemRunnigVersionInformation"] = new List<ItemRunnigVersionInformation>();
            return (List<ItemRunnigVersionInformation>)HttpContext.Current.Session["SessionItemRunnigVersionInformation"];
        }
        set { HttpContext.Current.Session["SessionItemRunnigVersionInformation"] = value; }
    }

    /// <summary>
    /// this will be updated when editor pages
    /// </summary>
    /// <param name="itemRunnigVersionInformation"></param>
    public static void UpdateItemRunnigSessionVersion(string itemGuid)
    {
        ItemRunnigVersionInformation itemRunnigVersionAtAppDomain = PlayListVersionManager.GetItemRunnigVersionInformationByID(itemGuid);
        ItemRunnigVersionInformation itemRunnigVersionInformationAtSession = SessionItemRunnigVersionInformation.FirstOrDefault(t => t.ItemGuid == itemGuid);
        if ((itemRunnigVersionInformationAtSession == null) && (itemRunnigVersionAtAppDomain != null))
        {
            ExtensionMethodsForClientVersionManager.ExtensionMethodsForClientVersionManager.Add(SessionItemRunnigVersionInformation, itemRunnigVersionAtAppDomain);
        }
        else if (itemRunnigVersionAtAppDomain != null)
        {
            ExtensionMethodsForClientVersionManager.ExtensionMethodsForClientVersionManager.Remove(SessionItemRunnigVersionInformation, itemRunnigVersionInformationAtSession);
            ExtensionMethodsForClientVersionManager.ExtensionMethodsForClientVersionManager.Add(SessionItemRunnigVersionInformation, itemRunnigVersionAtAppDomain);
        }
    }

    /// <summary>
    /// by given parameters check versions over PlayListVersionManager versions and
    /// adds versions to clientversion manager if any item version on
    /// playlist not found it will also added to PlaylistManager list
    /// </summary>
    /// <param name="playList"></param>
    /// <param name="userGuid"></param>
    /// <param name="ownerGuid"></param>
    public static void UpdateCurrentSessionVersion(PlayList playList, string userGuid, string ownerGuid)
    {
        ItemRunnigVersionInformation tmpItemRunnigVersionInformation;
        List<ItemRunnigVersionInformation> currentItemRunnigVersionInformationList = new List<ItemRunnigVersionInformation>();
        if (!string.IsNullOrEmpty(userGuid))
        {
            tmpItemRunnigVersionInformation = PlayListVersionManager.GetItemRunnigVersionInformationByID(userGuid);
            if (tmpItemRunnigVersionInformation == null)
            {
                tmpItemRunnigVersionInformation = new ItemRunnigVersionInformation(userGuid, DateTime.Now.ToUniversalTime());
                PlayListVersionManager.UpdateItemRunnigAppDomainVersion(tmpItemRunnigVersionInformation);
            }
            ExtensionMethodsForClientVersionManager.ExtensionMethodsForClientVersionManager.Add(currentItemRunnigVersionInformationList, tmpItemRunnigVersionInformation);
        }
        if (!string.IsNullOrEmpty(ownerGuid))
        {
            tmpItemRunnigVersionInformation = PlayListVersionManager.GetItemRunnigVersionInformationByID(ownerGuid);
            if (tmpItemRunnigVersionInformation == null)
            {
                tmpItemRunnigVersionInformation = new ItemRunnigVersionInformation(ownerGuid, DateTime.Now.ToUniversalTime());
                PlayListVersionManager.UpdateItemRunnigAppDomainVersion(tmpItemRunnigVersionInformation);
            }
            ExtensionMethodsForClientVersionManager.ExtensionMethodsForClientVersionManager.Add(currentItemRunnigVersionInformationList, tmpItemRunnigVersionInformation);
        }
        if ((playList != null) && (playList.PlayListItemCollection != null))
        {
            tmpItemRunnigVersionInformation = PlayListVersionManager.GetItemRunnigVersionInformationByID(playList.GUID);
            if (tmpItemRunnigVersionInformation == null)
            {
                tmpItemRunnigVersionInformation = new ItemRunnigVersionInformation(playList.GUID, DateTime.Now.ToUniversalTime());
                PlayListVersionManager.UpdateItemRunnigAppDomainVersion(tmpItemRunnigVersionInformation);
            }
            currentItemRunnigVersionInformationList.Add(tmpItemRunnigVersionInformation);
            foreach (PlayListItem playListItem in playList.PlayListItemCollection)
            {
                tmpItemRunnigVersionInformation = PlayListVersionManager.GetItemRunnigVersionInformationByID(playListItem.GUID);
                if (tmpItemRunnigVersionInformation == null)
                {
                    tmpItemRunnigVersionInformation = new ItemRunnigVersionInformation(playListItem.GUID, DateTime.Now.ToUniversalTime());
                    PlayListVersionManager.UpdateItemRunnigAppDomainVersion(tmpItemRunnigVersionInformation);
                }
                currentItemRunnigVersionInformationList.Add(tmpItemRunnigVersionInformation);
                foreach (SoftKey softKey in playListItem.PlayListSoftKeys)
                {
                    tmpItemRunnigVersionInformation = PlayListVersionManager.GetItemRunnigVersionInformationByID(softKey.GUID);
                    if (tmpItemRunnigVersionInformation == null)
                    {
                        tmpItemRunnigVersionInformation = new ItemRunnigVersionInformation(softKey.GUID, DateTime.Now.ToUniversalTime());
                        PlayListVersionManager.UpdateItemRunnigAppDomainVersion(tmpItemRunnigVersionInformation);
                    }
                    ExtensionMethodsForClientVersionManager.ExtensionMethodsForClientVersionManager.Add(currentItemRunnigVersionInformationList, tmpItemRunnigVersionInformation);
                }
                foreach (MenuItem menuItem in playListItem.MenuItems)
                {
                    tmpItemRunnigVersionInformation = PlayListVersionManager.GetItemRunnigVersionInformationByID(menuItem.Guid);
                    if (tmpItemRunnigVersionInformation == null)
                    {
                        tmpItemRunnigVersionInformation = new ItemRunnigVersionInformation(menuItem.Guid, DateTime.Now.ToUniversalTime());
                        PlayListVersionManager.UpdateItemRunnigAppDomainVersion(tmpItemRunnigVersionInformation);
                    }
                    ExtensionMethodsForClientVersionManager.ExtensionMethodsForClientVersionManager.Add(currentItemRunnigVersionInformationList, tmpItemRunnigVersionInformation);
                }
            }
        }
        SessionItemRunnigVersionInformation = currentItemRunnigVersionInformationList;
    }

    public static ItemRunnigVersionInformation GetItemRunnigVersionInformationById(string itemGuid)
    {
        return SessionItemRunnigVersionInformation.FirstOrDefault(t => t.ItemGuid == itemGuid);
    }

    public static void DeleteItemRunnigAppDomain(string itemGuid)
    {
        ExtensionMethodsForClientVersionManager.ExtensionMethodsForClientVersionManager.Remove(SessionItemRunnigVersionInformation, NG.IPTOffice.Paloma.Helper.ExtensionMethodsFoPlayListVersionManager.ExtensionMethodsFoPlayListVersionManager.GetMatchingItemRunnigVersionInformation(SessionItemRunnigVersionInformation, itemGuid));
    }
}

And this is the server-side one:

public class PlayListVersionManager
{
    public static List<ItemRunnigVersionInformation> AppDomainItemRunnigVersionInformation
    {
        get
        {
            if (HttpContext.Current.Application["AppDomainItemRunnigVersionInformation"] == null)
                HttpContext.Current.Application["AppDomainItemRunnigVersionInformation"] = new List<ItemRunnigVersionInformation>();
            return (List<ItemRunnigVersionInformation>)HttpContext.Current.Application["AppDomainItemRunnigVersionInformation"];
        }
        set { HttpContext.Current.Application["AppDomainItemRunnigVersionInformation"] = value; }
    }

    public static ItemRunnigVersionInformation GetItemRunnigVersionInformationByID(string itemGuid)
    {
        return ExtensionMethodsFoPlayListVersionManager.ExtensionMethodsFoPlayListVersionManager.GetMatchingItemRunnigVersionInformation(AppDomainItemRunnigVersionInformation, itemGuid);
    }

    /// <summary>
    /// this will be updated when editor pages
    /// if any record at playlistversion is found it will be addedd
    /// </summary>
    /// <param name="itemRunnigVersionInformation"></param>
    public static void UpdateItemRunnigAppDomainVersion(ItemRunnigVersionInformation itemRunnigVersionInformation)
    {
        ItemRunnigVersionInformation itemRunnigVersionInformationAtAppDomain = NG.IPTOffice.Paloma.Helper.ExtensionMethodsFoPlayListVersionManager.ExtensionMethodsFoPlayListVersionManager.GetMatchingItemRunnigVersionInformation(AppDomainItemRunnigVersionInformation, itemRunnigVersionInformation.ItemGuid);
        if (itemRunnigVersionInformationAtAppDomain == null)
        {
            ExtensionMethodsFoPlayListVersionManager.ExtensionMethodsFoPlayListVersionManager.Add(AppDomainItemRunnigVersionInformation, itemRunnigVersionInformation);
        }
        else
        {
            ExtensionMethodsFoPlayListVersionManager.ExtensionMethodsFoPlayListVersionManager.Remove(AppDomainItemRunnigVersionInformation, itemRunnigVersionInformationAtAppDomain);
            ExtensionMethodsFoPlayListVersionManager.ExtensionMethodsFoPlayListVersionManager.Add(AppDomainItemRunnigVersionInformation, itemRunnigVersionInformation);
        }
    }

    // this will be checked each time if needed to update item over DB
    public static bool IsRunnigItemLastVersion(ItemRunnigVersionInformation itemRunnigVersionInformation, bool ignoreNullEntry, out bool itemNotExistsAtAppDomain)
    {
        itemNotExistsAtAppDomain = false;
        if (itemRunnigVersionInformation != null)
        {
            ItemRunnigVersionInformation itemRunnigVersionInformationAtAppDomain = AppDomainItemRunnigVersionInformation.FirstOrDefault(t => t.ItemGuid == itemRunnigVersionInformation.ItemGuid);
            itemNotExistsAtAppDomain = (itemRunnigVersionInformationAtAppDomain == null);
            if (itemNotExistsAtAppDomain && (ignoreNullEntry))
            {
                ExtensionMethodsFoPlayListVersionManager.ExtensionMethodsFoPlayListVersionManager.Add(AppDomainItemRunnigVersionInformation, itemRunnigVersionInformation);
                return true;
            }
            else if (!itemNotExistsAtAppDomain && (itemRunnigVersionInformationAtAppDomain.LastLoadTime <= itemRunnigVersionInformation.LastLoadTime))
                return true;
            else
                return false;
        }
        else
            return ignoreNullEntry;
    }

    public static void DeleteItemRunnigAppDomain(string itemGuid)
    {
        ExtensionMethodsFoPlayListVersionManager.ExtensionMethodsFoPlayListVersionManager.Remove(AppDomainItemRunnigVersionInformation, NG.IPTOffice.Paloma.Helper.ExtensionMethodsFoPlayListVersionManager.ExtensionMethodsFoPlayListVersionManager.GetMatchingItemRunnigVersionInformation(AppDomainItemRunnigVersionInformation, itemGuid));
    }
}

When more than one client requests the page, I get "Collection was modified; enumeration operation may not execute.", as below:

Exception: System.Web.HttpUnhandledException: Exception of type 'System.Web.HttpUnhandledException' was thrown. ---> System.InvalidOperationException: Collection was modified; enumeration operation may not execute.
   at System.ThrowHelper.ThrowInvalidOperationException(ExceptionResource resource)
   at System.Collections.Generic.List`1.Enumerator.MoveNextRare()
   at System.Collections.Generic.List`1.Enumerator.MoveNext()
   at System.Linq.Enumerable.FirstOrDefault[TSource](IEnumerable`1 source, Func`2 predicate)
   at NG.IPTOffice.Paloma.Helper.PlayListVersionManager.UpdateItemRunnigAppDomainVersion(ItemRunnigVersionInformation itemRunnigVersionInformation) in
   at System.Web.UI.Control.LoadRecursive()
   at System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint)
   --- End of inner exception stack trace ---
   at System.Web.UI.Page.HandleError(Exception e)
   at System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint)
   at System.Web.UI.Page.ProcessRequest(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint)
   at System.Web.UI.Page.ProcessRequest()
   at System.Web.UI.Page.ProcessRequestWithNoAssert(HttpContext context)
   at System.Web.UI.Page.ProcessRequest(HttpContext context)
   at ASP.playlistwebform_aspx.ProcessRequest(HttpContext context) in c:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\Temporary ASP.NET Files\ipservicestest\8921e5c8\5d09c94d\App_Web_n4qdnfcq.2.cs:line 0
   at System.Web.HttpApplication.CallHandlerExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute()
   at System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously)

How should I implement version management for a scenario like this, and how can I avoid this exception? Thanks.
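
The "Collection was modified" error is the classic symptom of several requests touching the same List<T> at once: the list stored in Application state is shared by every request, and ASP.NET serves those requests on different threads, so one thread can be enumerating it (the FirstOrDefault call) while another is adding or removing entries. One way to avoid it is to funnel every read and write through a single lock. Below is a minimal C# sketch of that idea; the VersionStore class, its member names, and the dictionary-of-timestamps layout are illustrative assumptions, not the poster's actual types.

using System;
using System.Collections.Generic;

// Hypothetical thread-safe store for per-item version timestamps.
// A sketch of the locking approach only; adapt the names to the real model.
public static class VersionStore
{
    private static readonly object SyncRoot = new object();
    private static readonly Dictionary<string, DateTime> LastLoadTimes =
        new Dictionary<string, DateTime>();

    // Record (or overwrite) the latest load time for an item.
    public static void SetVersion(string itemGuid, DateTime lastLoadTimeUtc)
    {
        lock (SyncRoot)
        {
            LastLoadTimes[itemGuid] = lastLoadTimeUtc;
        }
    }

    // Returns true when the caller's copy is at least as new as the shared one.
    public static bool IsCurrent(string itemGuid, DateTime clientLoadTimeUtc)
    {
        lock (SyncRoot)
        {
            DateTime serverTime;
            if (!LastLoadTimes.TryGetValue(itemGuid, out serverTime))
                return false; // unknown item: force a reload
            return serverTime <= clientLoadTimeUtc;
        }
    }
}

Because every read and write goes through the same lock, one request can no longer enumerate the collection while another mutates it. On .NET 4.0 or later, a ConcurrentDictionary<string, DateTime> would give the same safety without an explicit lock; the temporary-files path in the stack trace points at the CLR 2.0 runtime, which is why the plain lock is shown here.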

    Read the article

  • Spring's EntityManager not persisting

    - by Fernando Camargo
    Well, my project was using EJB and JPA (with Hibernate), but I had to switch to Spring. Everything was working well before that. The EJB used to inject the EntityManager, controled the transaction, etc. Ok, when I switched to Spring, I had a lot of problems because I'm new on Spring. But after everything is running, I have the problem: the data is never saved on database. I configured my Spring to control the transactions, I have spring beans used in JSF, that has spring services that do the hard work. This services have a EntityManager injected and use @Transactional REQUIRED. This services pass the EntityManager to a DAO that call entityManager.persist(bean). The selects appears to work well, the JTA transaction appears to work well to (I saw in log), but the entity is not saved! Here is the log: INFO: [Pronatec] - 04/04/2012 11:30:20 - [DEBUG] org.springframework.orm.jpa.support.OpenEntityManagerInViewFilter: doFilterInternal() (linha 136): Opening JPA EntityManager in OpenEntityManagerInViewFilter INFO: [Pronatec] - 04/04/2012 11:30:20 - [DEBUG] org.springframework.beans.factory.support.DefaultListableBeanFactory: doGetBean() (linha 245): Returning cached instance of singleton bean 'transactionManager' INFO: [Pronatec] - 04/04/2012 11:30:20 - [DEBUG] org.springframework.orm.hibernate3.HibernateTransactionManager: getTransaction() (linha 365): Creating new transaction with name [br.org.cni.pronatec.controller.service.MontanteServiceImpl.adicionarValor]: PROPAGATION_REQUIRED,ISOLATION_DEFAULT; '' INFO: [Pronatec] - 04/04/2012 11:30:20 - [DEBUG] org.springframework.orm.hibernate3.HibernateTransactionManager: doBegin() (linha 493): Opened new Session [org.hibernate.impl.SessionImpl@2b2fe2f0] for Hibernate transaction INFO: [Pronatec] - 04/04/2012 11:30:20 - [DEBUG] org.springframework.orm.hibernate3.HibernateTransactionManager: doBegin() (linha 504): Preparing JDBC Connection of Hibernate Session [org.hibernate.impl.SessionImpl@2b2fe2f0] INFO: [Pronatec] - 04/04/2012 11:30:20 - [DEBUG] org.springframework.orm.hibernate3.HibernateTransactionManager: doBegin() (linha 569): Exposing Hibernate transaction as JDBC transaction [com.sun.gjc.spi.jdbc40.ConnectionHolder40@3bcd4840] INFO: [Pronatec] - 04/04/2012 11:30:20 - [DEBUG] org.springframework.orm.jpa.ExtendedEntityManagerCreator$ExtendedEntityManagerInvocationHandler: doJoinTransaction() (linha 383): Joined JTA transaction INFO: Hibernate: select hibernate_sequence.nextval from dual INFO: [Pronatec] - 04/04/2012 11:30:20 - [DEBUG] org.springframework.orm.hibernate3.HibernateTransactionManager: processCommit() (linha 752): Initiating transaction commit INFO: [Pronatec] - 04/04/2012 11:30:20 - [DEBUG] org.springframework.orm.hibernate3.HibernateTransactionManager: doCommit() (linha 652): Committing Hibernate transaction on Session [org.hibernate.impl.SessionImpl@2b2fe2f0] INFO: [Pronatec] - 04/04/2012 11:30:20 - [DEBUG] org.springframework.orm.hibernate3.HibernateTransactionManager: doCleanupAfterCompletion() (linha 734): Closing Hibernate Session [org.hibernate.impl.SessionImpl@2b2fe2f0] after transaction INFO: [Pronatec] - 04/04/2012 11:30:20 - [DEBUG] org.springframework.orm.hibernate3.SessionFactoryUtils: closeSession() (linha 800): Closing Hibernate Session INFO: [Pronatec] - 04/04/2012 11:30:20 - [DEBUG] org.springframework.orm.jpa.support.OpenEntityManagerInViewFilter: doFilterInternal() (linha 154): Closing JPA EntityManager in OpenEntityManagerInViewFilter INFO: [Pronatec] - 04/04/2012 11:30:20 - [DEBUG] 
org.springframework.orm.jpa.EntityManagerFactoryUtils: closeEntityManager() (linha 343): Closing JPA EntityManager In the log, I see it commiting the transaction, but I don't see the insert query (the Hibernate is printing any query). I also see that the Hibernate lookup to get the next value of the sequence ID. But after that, it never really inserts. Here is the spring context configuration: <bean id="entityManagerFactory" class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean"> <property name="persistenceUnitName" value="PronatecPU" /> <property name="persistenceXmlLocation" value="classpath:META-INF/persistence.xml" /> <property name="loadTimeWeaver"> <bean class="org.springframework.instrument.classloading.InstrumentationLoadTimeWeaver"/> </property> <property name="jpaProperties"> <props> <prop key="hibernate.transaction.factory_class">org.hibernate.transaction.JTATransactionFactory</prop> </props> </property> </bean> <bean id="transactionManager" class="org.springframework.orm.jpa.JpaTransactionManager" > <property name="transactionManagerName" value="java:/TransactionManager" /> <property name="userTransactionName" value="UserTransaction" /> <property name="entityManagerFactory" ref="entityManagerFactory" /> </bean> <bean class="org.springframework.orm.jpa.support.PersistenceAnnotationBeanPostProcessor" /> <tx:annotation-driven transaction-manager="transactionManager" /> Here is my persistence.xml: <?xml version="1.0" encoding="UTF-8"?> <persistence version="1.0" xmlns="http://java.sun.com/xml/ns/persistence" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://java.sun.com/xml/ns/persistence http://java.sun.com/xml/ns/persistence/persistence_1_0.xsd"> <persistence-unit name="PronatecPU" transaction-type="JTA"> <provider>org.hibernate.ejb.HibernatePersistence</provider> <jta-data-source>jdbc/pronatec</jta-data-source> <class>br.org.cni.pronatec.model.bean.AgendamentoBuscaSistec</class> <class>br.org.cni.pronatec.model.bean.AgendamentoExportacaoZeus</class> <class>br.org.cni.pronatec.model.bean.AgendamentoImportacaoZeus</class> <class>br.org.cni.pronatec.model.bean.Aluno</class> <class>br.org.cni.pronatec.model.bean.Curso</class> <class>br.org.cni.pronatec.model.bean.DepartamentoRegional</class> <class>br.org.cni.pronatec.model.bean.Dof</class> <class>br.org.cni.pronatec.model.bean.Escola</class> <class>br.org.cni.pronatec.model.bean.Inconsistencia</class> <class>br.org.cni.pronatec.model.bean.Matricula</class> <class>br.org.cni.pronatec.model.bean.Montante</class> <class>br.org.cni.pronatec.model.bean.ParametrosVingentes</class> <class>br.org.cni.pronatec.model.bean.TipoCurso</class> <class>br.org.cni.pronatec.model.bean.Turma</class> <class>br.org.cni.pronatec.model.bean.UnidadeFederativa</class> <class>br.org.cni.pronatec.model.bean.ValorAssistenciaEstudantil</class> <class>br.org.cni.pronatec.model.bean.ValorHora</class> <exclude-unlisted-classes>true</exclude-unlisted-classes> <properties> <property name="current_session_context_class" value="thread"/> <property name="hibernate.show_sql" value="true"/> <property name="hibernate.format_sql" value="true"/> <property name="hibernate.dialect" value="org.hibernate.dialect.OracleDialect"/> <property name="hibernate.transaction.manager_lookup_class" value="org.hibernate.transaction.SunONETransactionManagerLookup"/> <property name="hibernate.hbm2ddl.auto" value="update"/> </properties> </persistence-unit> </persistence> Here is my service that is injected in the managed bean: @Service 
@Scope("prototype") @Transactional(propagation= Propagation.REQUIRED) public class MontanteServiceImpl { // more code @PersistenceContext(unitName="PronatecPU", type= PersistenceContextType.EXTENDED) private EntityManager entityManager; // more code // The method that is called by another public method that do something before private void salvarMontante(Montante montante) { montante.setDataTransacao(new Date()); MontanteDao montanteDao = new MontanteDao(entityManager); montanteDao.salvar(montante); } // more code } My MontanteDao inherits from a base DAO, like this: public class MontanteDao extends BaseDao<Montante> { public MontanteDao(EntityManager entityManager) { super(entityManager); } } And the method that is called in BaseDao is this: public void salvar(T bean) { entityManager.persist(bean); } Like you can see, it just pick the injected entityManager and call the persist() method. The transaction is being controlled by the Spring, like is printed in the log, but the insert query is never printed in log and it is never saved. I'm sorry about my bad english. Thanks in advance for who helps.

    Read the article

  • SQL Pre-Con…at the Beach

    - by Argenis
      Building upon the success of SQL Rally 2012 (where we packed a room full of DBAs), my friend Robert Davis [Twitter|Blog] and yours truly will be again delivering our day-long Pre-Conference “Demystifying Database Administration Best Practices” this Friday (6/8/2012) – right before SQLSaturday #132 in Pensacola, FL. If you are in the vicinity of Pensacola, come join us! We had tons of fun at Rally. Robert and I love sharing tips and stories that will help you on your day to day duties as a DBA. Some of the topics that we’ll touch on (this is by no means a comprehensive list) Active Directory configuration for SQL Server Deployments Windows Server Deployments Storage and I/O High Availability / Disaster Recovery / Business Continuity Replication Day-To-Day Operations Maintenance TempDB Code Reviews Other Database and Server Settings   Follow this link to sign up for the Pre-Con at Pensacola: http://demystifyingdba.eventbrite.com/ Here’s a blog post that Robert made on the subject of Best Practices.  Hope to see you there!

    Read the article

  • Windows Azure: Backup Services Release, Hyper-V Recovery Manager, VM Enhancements, Enhanced Enterprise Management Support

    - by ScottGu
    This morning we released a huge set of updates to Windows Azure.  These new capabilities include: Backup Services: General Availability of Windows Azure Backup Services Hyper-V Recovery Manager: Public preview of Windows Azure Hyper-V Recovery Manager Virtual Machines: Delete Attached Disks, Availability Set Warnings, SQL AlwaysOn Configuration Active Directory: Securely manage hundreds of SaaS applications Enterprise Management: Use Active Directory to Better Manage Windows Azure Windows Azure SDK 2.2: A massive update of our SDK + Visual Studio tooling support All of these improvements are now available to use immediately.  Below are more details about them. Backup Service: General Availability Release of Windows Azure Backup Today we are releasing Windows Azure Backup Service as a general availability service.  This release is now live in production, backed by an enterprise SLA, supported by Microsoft Support, and is ready to use for production scenarios. Windows Azure Backup is a cloud based backup solution for Windows Server which allows files and folders to be backed up and recovered from the cloud, and provides off-site protection against data loss. The service provides IT administrators and developers with the option to back up and protect critical data in an easily recoverable way from any location with no upfront hardware cost. Windows Azure Backup is built on the Windows Azure platform and uses Windows Azure blob storage for storing customer data. Windows Server uses the downloadable Windows Azure Backup Agent to transfer file and folder data securely and efficiently to the Windows Azure Backup Service. Along with providing cloud backup for Windows Server, Windows Azure Backup Service also provides capability to backup data from System Center Data Protection Manager and Windows Server Essentials, to the cloud. All data is encrypted onsite before it is sent to the cloud, and customers retain and manage the encryption key (meaning the data is stored entirely secured and can’t be decrypted by anyone but yourself). Getting Started To get started with the Windows Azure Backup Service, create a new Backup Vault within the Windows Azure Management Portal.  Click New->Data Services->Recovery Services->Backup Vault to do this: Once the backup vault is created you’ll be presented with a simple tutorial that will help guide you on how to register your Windows Servers with it: Once the servers you want to backup are registered, you can use the appropriate local management interface (such as the Microsoft Management Console snap-in, System Center Data Protection Manager Console, or Windows Server Essentials Dashboard) to configure the scheduled backups and to optionally initiate recoveries. You can follow these tutorials to learn more about how to do this: Tutorial: Schedule Backups Using the Windows Azure Backup Agent This tutorial helps you with setting up a backup schedule for your registered Windows Servers. Additionally, it also explains how to use Windows PowerShell cmdlets to set up a custom backup schedule. Tutorial: Recover Files and Folders Using the Windows Azure Backup Agent This tutorial helps you with recovering data from a backup. Additionally, it also explains how to use Windows PowerShell cmdlets to do the same tasks. Below are some of the key benefits the Windows Azure Backup Service provides: Simple configuration and management. 
Windows Azure Backup Service integrates with the familiar Windows Server Backup utility in Windows Server, the Data Protection Manager component in System Center and Windows Server Essentials, in order to provide a seamless backup and recovery experience to a local disk, or to the cloud. Block level incremental backups. The Windows Azure Backup Agent performs incremental backups by tracking file and block level changes and only transferring the changed blocks, hence reducing the storage and bandwidth utilization. Different point-in-time versions of the backups use storage efficiently by only storing the changes blocks between these versions. Data compression, encryption and throttling. The Windows Azure Backup Agent ensures that data is compressed and encrypted on the server before being sent to the Windows Azure Backup Service over the network. As a result, the Windows Azure Backup Service only stores encrypted data in the cloud storage. The encryption key is not available to the Windows Azure Backup Service, and as a result the data is never decrypted in the service. Also, users can setup throttling and configure how the Windows Azure Backup service utilizes the network bandwidth when backing up or restoring information. Data integrity is verified in the cloud. In addition to the secure backups, the backed up data is also automatically checked for integrity once the backup is done. As a result, any corruptions which may arise due to data transfer can be easily identified and are fixed automatically. Configurable retention policies for storing data in the cloud. The Windows Azure Backup Service accepts and implements retention policies to recycle backups that exceed the desired retention range, thereby meeting business policies and managing backup costs. Hyper-V Recovery Manager: Now Available in Public Preview I’m excited to also announce the public preview of a new Windows Azure Service – the Windows Azure Hyper-V Recovery Manager (HRM). Windows Azure Hyper-V Recovery Manager helps protect your business critical services by coordinating the replication and recovery of System Center Virtual Machine Manager 2012 SP1 and System Center Virtual Machine Manager 2012 R2 private clouds at a secondary location. With automated protection, asynchronous ongoing replication, and orderly recovery, the Hyper-V Recovery Manager service can help you implement Disaster Recovery and restore important services accurately, consistently, and with minimal downtime. Application data in an Hyper-V Recovery Manager scenarios always travels on your on-premise replication channel. Only metadata (such as names of logical clouds, virtual machines, networks etc.) that is needed for orchestration is sent to Azure. All traffic sent to/from Azure is encrypted. You can begin using Windows Azure Hyper-V Recovery today by clicking New->Data Services->Recovery Services->Hyper-V Recovery Manager within the Windows Azure Management Portal.  You can read more about Windows Azure Hyper-V Recovery Manager in Brad Anderson’s 9-part series, Transform the datacenter. To learn more about setting up Hyper-V Recovery Manager follow our detailed step-by-step guide. Virtual Machines: Delete Attached Disks, Availability Set Warnings, SQL AlwaysOn Today’s Windows Azure release includes a number of nice updates to Windows Azure Virtual Machines.  
These improvements include: Ability to Delete both VM Instances + Attached Disks in One Operation Prior to today’s release, when you deleted VMs within Windows Azure we would delete the VM instance – but not delete the drives attached to the VM.  You had to manually delete these yourself from the storage account.  With today’s update we’ve added a convenience option that now allows you to either retain or delete the attached disks when you delete the VM:   We’ve also added the ability to delete a cloud service, its deployments, and its role instances with a single action. This can either be a cloud service that has production and staging deployments with web and worker roles, or a cloud service that contains virtual machines.  To do this, simply select the Cloud Service within the Windows Azure Management Portal and click the “Delete” button: Warnings on Availability Sets with Only One Virtual Machine In Them One of the nice features that Windows Azure Virtual Machines supports is the concept of “Availability Sets”.  An “availability set” allows you to define a tier/role (e.g. webfrontends, databaseservers, etc) that you can map Virtual Machines into – and when you do this Windows Azure separates them across fault domains and ensures that at least one of them is always available during servicing operations.  This enables you to deploy applications in a high availability way. One issue we’ve seen some customers run into is where they define an availability set, but then forget to map more than one VM into it (which defeats the purpose of having an availability set).  With today’s release we now display a warning in the Windows Azure Management Portal if you have only one virtual machine deployed in an availability set to help highlight this: You can learn more about configuring the availability of your virtual machines here. Configuring SQL Server Always On SQL Server Always On is a great feature that you can use with Windows Azure to enable high availability and DR scenarios with SQL Server. Today’s Windows Azure release makes it even easier to configure SQL Server Always On by enabling “Direct Server Return” endpoints to be configured and managed within the Windows Azure Management Portal.  Previously, setting this up required using PowerShell to complete the endpoint configuration.  Starting today you can enable this simply by checking the “Direct Server Return” checkbox: You can learn more about how to use direct server return for SQL Server AlwaysOn availability groups here. Active Directory: Application Access Enhancements This summer we released our initial preview of our Application Access Enhancements for Windows Azure Active Directory.  This service enables you to securely implement single-sign-on (SSO) support against SaaS applications (including Office 365, SalesForce, Workday, Box, Google Apps, GitHub, etc) as well as LOB based applications (including ones built with the new Windows Azure AD support we shipped last week with ASP.NET and VS 2013). Since the initial preview we’ve enhanced our SAML federation capabilities, integrated our new password vaulting system, and shipped multi-factor authentication support. We've also turned on our outbound identity provisioning system and have it working with hundreds of additional SaaS Applications: Earlier this month we published an update on dates and pricing for when the service will be released in general availability form.  
In this blog post we announced our intention to release the service in general availability form by the end of the year.  We also announced that the below features would be available in a free tier with it: SSO to every SaaS app we integrate with – Users can Single Sign On to any app we are integrated with at no charge. This includes all the top SAAS Apps and every app in our application gallery whether they use federation or password vaulting. Application access assignment and removal – IT Admins can assign access privileges to web applications to the users in their active directory assuring that every employee has access to the SAAS Apps they need. And when a user leaves the company or changes jobs, the admin can just as easily remove their access privileges assuring data security and minimizing IP loss User provisioning (and de-provisioning) – IT admins will be able to automatically provision users in 3rd party SaaS applications like Box, Salesforce.com, GoToMeeting, DropBox and others. We are working with key partners in the ecosystem to establish these connections, meaning you no longer have to continually update user records in multiple systems. Security and auditing reports – Security is a key priority for us. With the free version of these enhancements you'll get access to our standard set of access reports giving you visibility into which users are using which applications, when they were using them and where they are using them from. In addition, we'll alert you to un-usual usage patterns for instance when a user logs in from multiple locations at the same time. Our Application Access Panel – Users are logging in from every type of devices including Windows, iOS, & Android. Not all of these devices handle authentication in the same manner but the user doesn't care. They need to access their apps from the devices they love. Our Application Access Panel will support the ability for users to access access and launch their apps from any device and anywhere. You can learn more about our plans for application management with Windows Azure Active Directory here.  Try out the preview and start using it today. Enterprise Management: Use Active Directory to Better Manage Windows Azure Windows Azure Active Directory provides the ability to manage your organization in a directory which is hosted entirely in the cloud, or alternatively kept in sync with an on-premises Windows Server Active Directory solution (allowing you to seamlessly integrate with the directory you already have).  With today’s Windows Azure release we are integrating Windows Azure Active Directory even more within the core Windows Azure management experience, and enabling an even richer enterprise security offering.  Specifically: 1) All Windows Azure accounts now have a default Windows Azure Active Directory created for them.  You can create and map any users you want into this directory, and grant administrative rights to manage resources in Windows Azure to these users. 2) You can keep this directory entirely hosted in the cloud – or optionally sync it with your on-premises Windows Server Active Directory.  Both options are free.  The later approach is ideal for companies that wish to use their corporate user identities to sign-in and manage Windows Azure resources.  It also ensures that if an employee leaves an organization, his or her access control rights to the company’s Windows Azure resources are immediately revoked. 
3) The Windows Azure Service Management APIs have been updated to support using Windows Azure Active Directory credentials to sign-in and perform management operations.  Prior to today’s release customers had to download and use management certificates (which were not scoped to individual users) to perform management operations.  We still support this management certificate approach (don’t worry – nothing will stop working).  But we think the new Windows Azure Active Directory authentication support enables an even easier and more secure way for customers to manage resources going forward.  4) The Windows Azure SDK 2.2 release (which is also shipping today) includes built-in support for the new Service Management APIs that authenticate with Windows Azure Active Directory, and now allow you to create and manage Windows Azure applications and resources directly within Visual Studio using your Active Directory credentials.  This, combined with updated PowerShell scripts that also support Active Directory, enables an end-to-end enterprise authentication story with Windows Azure. Below are some details on how all of this works: Subscriptions within a Directory As part of today’s update, we have associated all existing Window Azure accounts with a Windows Azure Active Directory (and created one for you if you don’t already have one). When you login to the Windows Azure Management Portal you’ll now see the directory name in the URI of the browser.  For example, in the screen-shot below you can see that I have a “scottgu” directory that my subscriptions are hosted within: Note that you can continue to use Microsoft Accounts (formerly known as Microsoft Live IDs) to sign-into Windows Azure.  These map just fine to a Windows Azure Active Directory – so there is no need to create new usernames that are specific to a directory if you don’t want to.  In the scenario above I’m actually logged in using my @hotmail.com based Microsoft ID which is now mapped to a “scottgu” active directory that was created for me.  By default everything will continue to work just like you used to before. Manage your Directory You can manage an Active Directory (including the one we now create for you by default) by clicking the “Active Directory” tab in the left-hand side of the portal.  This will list all of the directories in your account.  Clicking one the first time will display a getting started page that provides documentation and links to perform common tasks with it: You can use the built-in directory management support within the Windows Azure Management Portal to add/remove/manage users within the directory, enable multi-factor authentication, associate a custom domain (e.g. mycompanyname.com) with the directory, and/or rename the directory to whatever friendly name you want (just click the configure tab to do this).  You can also setup the directory to automatically sync with an on-premises Active Directory using the “Directory Integration” tab. Note that users within a directory by default do not have admin rights to login or manage Windows Azure based resources.  You still need to explicitly grant them co-admin permissions on a subscription for them to login or manage resources in Windows Azure.  You can do this by clicking the Settings tab on the left-hand side of the portal and then by clicking the administrators tab within it. 
Sign-In Integration within Visual Studio If you install the new Windows Azure SDK 2.2 release, you can now connect to Windows Azure from directly inside Visual Studio without having to download any management certificates.  You can now just right-click on the “Windows Azure” icon within the Server Explorer and choose the “Connect to Windows Azure” context menu option to do so: Doing this will prompt you to enter the email address of the username you wish to sign-in with (make sure this account is a user in your directory with co-admin rights on a subscription): You can use either a Microsoft Account (e.g. Windows Live ID) or an Active Directory based Organizational account as the email.  The dialog will update with an appropriate login prompt depending on which type of email address you enter: Once you sign-in you’ll see the Windows Azure resources that you have permissions to manage show up automatically within the Visual Studio server explorer and be available to start using: No downloading of management certificates required.  All of the authentication was handled using your Windows Azure Active Directory! Manage Subscriptions across Multiple Directories If you have already have multiple directories and multiple subscriptions within your Windows Azure account, we have done our best to create a good default mapping of your subscriptions->directories as part of today’s update.  If you don’t like the default subscription-to-directory mapping we have done you can click the Settings tab in the left-hand navigation of the Windows Azure Management Portal and browse to the Subscriptions tab within it: If you want to map a subscription under a different directory in your account, simply select the subscription from the list, and then click the “Edit Directory” button to choose which directory to map it to.  Mapping a subscription to a different directory takes only seconds and will not cause any of the resources within the subscription to recycle or stop working.  We’ve made the directory->subscription mapping process self-service so that you always have complete control and can map things however you want. Filtering By Directory and Subscription Within the Windows Azure Management Portal you can filter resources in the portal by subscription (allowing you to show/hide different subscriptions).  If you have subscriptions mapped to multiple directory tenants, we also now have a filter drop-down that allows you to filter the subscription list by directory tenant.  This filter is only available if you have multiple subscriptions mapped to multiple directories within your Windows Azure Account:   Windows Azure SDK 2.2 Today we are also releasing a major update of our Windows Azure SDK.  The Windows Azure SDK 2.2 release adds some great new features including: Visual Studio 2013 Support Integrated Windows Azure Sign-In support within Visual Studio Remote Debugging Cloud Services with Visual Studio Firewall Management support within Visual Studio for SQL Databases Visual Studio 2013 RTM VM Images for MSDN Subscribers Windows Azure Management Libraries for .NET Updated Windows Azure PowerShell Cmdlets and ScriptCenter I’ll post a follow-up blog shortly with more details about all of the above. 
Additional Updates In addition to the above enhancements, today’s release also includes a number of additional improvements: AutoScale: Richer time and date based scheduling support (set different rules on different dates) AutoScale: Ability to Scale to Zero Virtual Machines (very useful for Dev/Test scenarios) AutoScale: Support for time-based scheduling of Mobile Service AutoScale rules Operation Logs: Auditing support for Service Bus management operations Today we also shipped a major update to the Windows Azure SDK – Windows Azure SDK 2.2.  It has so much goodness in it that I have a whole second blog post coming shortly on it! :-) Summary Today’s Windows Azure release enables a bunch of great new scenarios, and enables a much richer enterprise authentication offering. If you don’t already have a Windows Azure account, you can sign-up for a free trial and start using all of the above features today.  Then visit the Windows Azure Developer Center to learn more about how to build apps with it. Hope this helps, Scott P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • MySQL 5.5 is GA!

    - by rob.young(at)oracle.com
    It is my pleasure to announce that MySQL 5.5 is now GA and ready for production deployment.  You can read Oracle's official press release here. I am excited about 5.5 because of the performance and scalability gains, new replication enhancements and overall improved technical efficiencies.  Congratulations and a sincere "Thanks!" go out to the entire MySQL Community and product engineering teams for making 5.5 the best release of MySQL to date.Please join us for today's MySQL Technology Update webcast where Tomas Ulin and I will cover what's new in MySQL 5.5 and provide an update on the other technologies we are working on. You can download MySQL 5.5 here.  All of the documentation and what's new information is here.  There is also a great article on MySQL 5.5 and the MySQL community here.Thanks for reading, and as always, THANKS for your support of MySQL!

    Read the article

  • ODI SDK: Retrieving Information From the Logs

    - by Christophe Dupupet
    It is fairly common to want to retrieve data from the ODI logs: statistics, execution status, even the generated code can be retrieved from the logs. The ODI SDK provides a robust set of APIs to parse the repository and retreve such information. To locate the information you are looking for, you have to keep in mind the structure of the logs: sessions contain steps; steps containt tasks. The session is the execution unit: basically, each time you execute something (interface, package, procedure, scenario) you create a new session. The steps are the individual entries found in a session: these will be the icons in your package for instance. Or if you are running an interface, you will have one single step: the interface itself. The tasks will represent the more atomic elements of the steps: the individual DDL, DML, scripts and so forth that are generated by ODI, along with all the detailed statistics for that task. All these details can be retrieved with the SDK. Because I had a question recently on the API ODIStepReport, I focus explicitly in this code on Scenario logs, but a lot more can be done with these APIs. Here is the code sample (you can just cut and paste that code in your ODI 11.1.1.6 Groovy console). Just save, adapt the code to your environment (in particular to connect to your repository) and hit "run" //Created by ODI Studioimport oracle.odi.core.OdiInstanceimport oracle.odi.core.config.OdiInstanceConfigimport oracle.odi.core.config.MasterRepositoryDbInfo import oracle.odi.core.config.WorkRepositoryDbInfo import oracle.odi.core.security.Authentication  import oracle.odi.core.config.PoolingAttributes import oracle.odi.domain.runtime.scenario.finder.IOdiScenarioFinder import oracle.odi.domain.runtime.scenario.OdiScenario import java.util.Collection import java.io.* /* ----------------------------------------------------------------------------------------- Simple sample code to list all executions of the last version of a scenario,along with detailed steps information----------------------------------------------------------------------------------------- */ /* update the following parameters to match your environment => */def url = "jdbc:oracle:thin:@myserver:1521:orcl"def driver = "oracle.jdbc.OracleDriver"def schema = "ODIM1116"def schemapwd = "ODIM1116PWD"def workrep = "WORKREP1116"def odiuser= "SUPERVISOR"def odiuserpwd = "SUNOPSIS" // Rather than hardcoding the project code and folder name, // a great improvement here would be to parse the entire repository def scenario_name = "LOAD_DWH" /*Scenario Name*/ /* <=End of the update section */ //--------------------------------------//Connection to the repository// Note for ODI 11.1.1.6: you could use predefined odiInstance variable if you are // running the script from a Studio that is already connected to the repository def masterInfo = new MasterRepositoryDbInfo(url, driver, schema, schemapwd.toCharArray(), new PoolingAttributes())def workInfo = new WorkRepositoryDbInfo(workrep, new PoolingAttributes())def odiInstance = OdiInstance.createInstance(new OdiInstanceConfig(masterInfo, workInfo)) //--------------------------------------// In all cases, we need to make sure we have authorized access to the repositorydef auth = odiInstance.getSecurityManager().createAuthentication(odiuser, odiuserpwd.toCharArray())odiInstance.getSecurityManager().setCurrentThreadAuthentication(auth) //--------------------------------------// Retrieve the scenario we are looking fordef odiScenario = 
((IOdiScenarioFinder)odiInstance.getTransactionalEntityManager().getFinder(OdiScenario.class)).findLatestByName(scenario_name) if (odiScenario == null){    println("Error: cannot find scenario "+scenario_name);    return} //--------------------------------------// Retrieve all reports for the scenario def OdiScenarioReportsList = odiScenario.getScenarioReports() println("*** Listing all reports for Scenario \""+scenario_name+"\" ") //--------------------------------------// For each report, print the folowing:// - start time// - duration// - status// - step reports: selection of details for (s in OdiScenarioReportsList){        println("\tStart time: " + s.getSessionStartTime())        println("\tDuration: " + s.getSessionDuration())        println("\tStatus: " + s.getSessionStatus())                def OdiScenarioStepReportsList = s.getStepReports()        for (st in OdiScenarioStepReportsList){            println("\t\tStep Name: " + st.getStepName())            println("\t\tStep Resource Name: " + st.getStepResourceName())            println("\t\tStep Start time: " + st.getStepStartTime())            println("\t\tStep Duration: " + st.getStepDuration())            println("\t\tStep Status: " + st.getStepStatus())            println("\t\tStep # of inserts: " + st.getStepInsertCount())            println("\t\tStep # of updates: " + st.getStepUpdateCount()+'\n')      }      println("\t")}

    Read the article

  • Winners of the Oracle Excellence Award—Eco-Enterprise Innovation

    - by Evelyn Neumayr
    Did you get a chance to attend Oracle OpenWorld in San Francisco? With 60,000 attendees and hundreds of sessions to choose from—there was a lot going on. One of my favorite sessions was the Eco-Enterprise Awards and Sustainability Executive Panel Discussion. During this session, Jeff Henley, Oracle Chairman of the Board, announced the winners of the 2013 Oracle Excellence Award—Eco-Enterprise Innovation. It was an enlightening session as we heard several of the winning customers discuss the importance of sustainability to their company and how they’re using various Oracle products to help with their sustainability initiatives. The winning customers include: Centennial Coal, Indaver nv, Korea Enterprise Data, National Guard Health Affairs, Schneider National, SThree, Telstra International Group, Trex Company, University of Salzburg, Walmart, and Yeoncheon County Office. Stay tuned for additional blogs where you’ll learn more about these winning companies’ environmental best practices and why they won this award. Several partners were also recognized for helping these customers with their sustainability initiatives. Those partners include: CSS International, Daesang Information Technology, i4BI, Infosys, Knowledge Global, Solutions for Retails Brands Limited, and SysGen. During this same session, Jeff Henley also awarded Robert Kaplan, Director of Sustainability at Walmart, with Oracle’s Chief Sustainability Officer of the Year award. Robert was honored for helping improve Walmart’s supply chain efficiency with their Sustainability Hub. The Sustainability Hub, powered by Oracle Service Cloud, is a central location for Walmart suppliers, associates and business partners to learn, connect, inspire and drive sustainability through collaboration. While at Oracle OpenWorld, I also got a chance to hear Robert Kaplan discuss their Sustainability Hub during an Oracle OpenWorld Live taping.

    Read the article

  • Simultaneously calling multiple methods on a WCF service from silverlight

    - by ola karlsson
    A while back I had to debug some performance issues in an existing Silverlight app, as the problem / solution was a bit obscure and finding info about it was quite tricky, I thought I’d share, maybe it can help the next person with this problem. The App On start, the app would do a number of calls to different methods on a WCF service, this to populate the UI with the necessary data. Recently one of those services had been changed and was now taking quite a bit longer than it used to. This was resulting in quite a long loading time for the whole UI, which was set up so it wouldn’t let the user interact with anything, until all the service calls had finished. First I broke out the longer running service call from the others, then removed the constraint that it had to be loaded for the UI in general to become responsive. I also added a loading indicator just on that area of the UI, thinking that the main UI would load while this particular section could keep loading independently. The Problem However this is where things started to get a bit strange. I found that even after these changes, the main UI wouldn’t activate until the long running call returned. So now, I did what I should have done to start with, I got Fiddler out and had a look at what was really happening. What I found was that, once the call to the long running service method was placed, all subsequent call were waiting for that one to return before executing. Not having really worked with WCF previously or knowing much about it in general, I was stumped… I knew of the issues where Silverlight is restricted by the browsers networking features in regards to number of simultaneous connections etc. However that just didn’t seem to be the issue here, you can clearly see in Fiddler that there’s numerous calls, but they’re just not returning. I thought of the problem maybe being in the WCF service, but the calls were really not that complicated and surely the service should be able to handle a lot more than what I was throwing at it! So I did what every developer does in this type of scenario, I hit the search engines. I did a whole bunch of searching on things like “multiple simultaneous WCF calls from Silverlight” and “Calling long running WCF services from Silverlight” etc. etc. This however, pretty much got me nowhere, I found a whole heap of resources on how to do WCF calls from Silverlight but most of them were very basic and of no use what so ever. The fog is clearing It wasn’t until I came across the term “ WCF blocking calls” and started incorporating that in my searches I started to get somewhere. Those searches quite quickly brought me to the following thread in the Silverlight forum “Long-running WCF call blocking subsequent calls” which discussed the exact problem I was facing and the best part, one of the guys there had the solution! The short answer is in the forum post and the guys answering, has also done a more extensive blog post about it called “Silverlight, WCF, and ASP.Net Configuration Gotchas” which covers it very well.  So come on what’s the solution?! I heard you ask, unless you’ve already gone to the links and looked it up ;) The Solution Well, it turns out that the issue is founded in a mix of Silverlight, Asp.Net and WCF, basically if you’re doing multiple calls to a single WCF web-service and you have Asp.Net session state enabled, the calls will be executed sequentially by the service, hence any long running calls will block subsequent ones. 
So why is Asp.Net session state affecting us when we're working in Silverlight? Well, as mentioned earlier, by default Silverlight uses the browser's networking stack when making service calls, so to the WCF service the call looks like it might as well be coming from a normal Asp.Net page. To get around this, we look to a feature introduced in Silverlight 3, namely the Client HTTP Stack. The Client HTTP Stack to the rescue By using the following syntax (for example in our App.xaml.cs, Application_Startup method) WebRequest.RegisterPrefix("http://", WebRequestCreator.ClientHttp); we can set our Silverlight application to use the Client HTTP Stack, which incidentally solves our problem! By using Silverlight's own networking stack, rather than that of the browser, we get around the Asp.Net - WCF session state issue. The code above specifies that all calls to addresses starting with "http://" should go through the client stack; this can actually be set more granularly, and you can specify that it only be used for certain domains, as shown in the sketch below. Summary The actual solution is well covered in the forum and blog posts I link to above. This post is more about sharing my experience, hopefully helping to spread the word about this and making it a bit easier for the next poor soul with this issue to find the solution. Until next time, Ola
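
As a footnote to the more granular registration mentioned above: the prefix handed to WebRequest.RegisterPrefix can be a full address prefix rather than just the "http://" scheme, so only calls to a particular service travel over the client stack. The sketch below assumes the standard Silverlight App.xaml.cs startup handler, and the service address is a made-up placeholder:

using System.Net;
using System.Net.Browser;
using System.Windows;

public partial class App : Application
{
    private void Application_Startup(object sender, StartupEventArgs e)
    {
        // Route only calls to this one (hypothetical) WCF service through the
        // Silverlight client HTTP stack; all other requests keep using the
        // browser's networking stack.
        WebRequest.RegisterPrefix("http://services.example.com/LongRunningService.svc",
                                  WebRequestCreator.ClientHttp);

        RootVisual = new MainPage(); // standard project-template code
    }
}

Requests that don't match the registered prefix are unaffected, so the rest of the application keeps the browser stack's cookie and authentication behaviour.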

    Read the article

  • WebLogic Server JMS WLST Script – Who is Connected To My Server

    - by james.bayer
    Ever want to know who was connected to your WebLogic Server instance for troubleshooting?  An email exchange about this topic and JMS came up this week, and I’ve heard it come up once or twice before too.  Sometimes it’s interesting or helpful to know the list of JMS clients (IP Addresses, JMS Destinations, message counts) that are connected to a particular JMS server.  This can be helpful for troubleshooting.  Tom Barnes from the WebLogic Server JMS team provided some helpful advice: The JMS connection runtime mbean has “getHostAddress”, which returns the host address of the connecting client JVM as a string.  A connection runtime can contain session runtimes, which in turn can contain consumer runtimes.  The consumer runtime, in turn has a “getDestinationName” and “getMemberDestinationName”.  I think that this means you could write a WLST script, for example, to dump all consumers, their destinations, plus their parent session’s parent connection’s host addresses.    Note that the client runtime mbeans (connection, session, and consumer) won’t necessarily be hosted on the same JVM as a destination that’s in the same cluster (client messages route from their connection host to their ultimate destination in the same cluster). Writing the Script So armed with this information, I decided to take the challenge and see if I could write a WLST script to do this.  It’s always helpful to have the WebLogic Server MBean Reference handy for activities like this.  This one is focused on JMS Consumers and I only took a subset of the information available, but it could be modified easily to do Producers.  I haven’t tried this on a more complex environment, but it works in my simple sandbox case, so it should give you the general idea. # Better to use Secure Config File approach for login as shown here http://buttso.blogspot.com/2011/02/using-secure-config-files-with-weblogic.html connect('weblogic','welcome1','t3://localhost:7001')   # Navigate to the Server Runtime and get the Server Name serverRuntime() serverName = cmo.getName()   # Multiple JMS Servers could be hosted by a single WLS server cd('JMSRuntime/' + serverName + '.jms' ) jmsServers=cmo.getJMSServers()   # Find the list of all JMSServers for this server namesOfJMSServers = '' for jmsServer in jmsServers: namesOfJMSServers = jmsServer.getName() + ' '   # Count the number of connections jmsConnections=cmo.getConnections() print str(len(jmsConnections)) + ' JMS Connections found for ' + serverName + ' with JMSServers ' + namesOfJMSServers   # Recurse the MBean tree for each connection and pull out some information about consumers for jmsConnection in jmsConnections: try: print 'JMS Connection:' print ' Host Address = ' + jmsConnection.getHostAddress() print ' ClientID = ' + str( jmsConnection.getClientID() ) print ' Sessions Current = ' + str( jmsConnection.getSessionsCurrentCount() ) jmsSessions = jmsConnection.getSessions() for jmsSession in jmsSessions: jmsConsumers = jmsSession.getConsumers() for jmsConsumer in jmsConsumers: print ' Consumer:' print ' Name = ' + jmsConsumer.getName() print ' Messages Received = ' + str(jmsConsumer.getMessagesReceivedCount()) print ' Member Destination Name = ' + jmsConsumer.getMemberDestinationName() except: print 'Error retrieving JMS Consumer Information' dumpStack() # Cleanup disconnect() exit() Example Output I expect the output to look something like this and loop through all the connections, this is just the first one: 1 JMS Connections found for AdminServer with JMSServers myJMSServer JMS 
Connection:   Host Address = 127.0.0.1   ClientID = None   Sessions Current = 16    Consumer:      Name = consumer40      Messages Received = 1      Member Destination Name = myJMSModule!myQueue Notice that it has the IP Address of the client.  There are 16 Sessions open because I’m using an MDB, which defaults to 16 connections, so this matches what I expect.  Let’s see what the full output actually looks like: D:\Oracle\fmw11gr1ps3\user_projects\domains\offline_domain>java weblogic.WLST d:\temp\jms.py   Initializing WebLogic Scripting Tool (WLST) ...   Welcome to WebLogic Server Administration Scripting Shell   Type help() for help on available commands   Connecting to t3://localhost:7001 with userid weblogic ... Successfully connected to Admin Server 'AdminServer' that belongs to domain 'offline_domain'.   Warning: An insecure protocol was used to connect to the server. To ensure on-the-wire security, the SSL port or Admin port should be used instead.   Location changed to serverRuntime tree. This is a read-only tree with ServerRuntimeMBean as the root. For more help, use help(serverRuntime)   1 JMS Connections found for AdminServer with JMSServers myJMSServer JMS Connection: Host Address = 127.0.0.1 ClientID = None Sessions Current = 16 Consumer: Name = consumer40 Messages Received = 2 Member Destination Name = myJMSModule!myQueue Consumer: Name = consumer34 Messages Received = 2 Member Destination Name = myJMSModule!myQueue Consumer: Name = consumer37 Messages Received = 2 Member Destination Name = myJMSModule!myQueue Consumer: Name = consumer16 Messages Received = 2 Member Destination Name = myJMSModule!myQueue Consumer: Name = consumer46 Messages Received = 2 Member Destination Name = myJMSModule!myQueue Consumer: Name = consumer49 Messages Received = 2 Member Destination Name = myJMSModule!myQueue Consumer: Name = consumer43 Messages Received = 1 Member Destination Name = myJMSModule!myQueue Consumer: Name = consumer55 Messages Received = 1 Member Destination Name = myJMSModule!myQueue Consumer: Name = consumer25 Messages Received = 1 Member Destination Name = myJMSModule!myQueue Consumer: Name = consumer22 Messages Received = 1 Member Destination Name = myJMSModule!myQueue Consumer: Name = consumer19 Messages Received = 1 Member Destination Name = myJMSModule!myQueue Consumer: Name = consumer52 Messages Received = 1 Member Destination Name = myJMSModule!myQueue Consumer: Name = consumer31 Messages Received = 1 Member Destination Name = myJMSModule!myQueue Consumer: Name = consumer58 Messages Received = 1 Member Destination Name = myJMSModule!myQueue Consumer: Name = consumer28 Messages Received = 1 Member Destination Name = myJMSModule!myQueue Consumer: Name = consumer61 Messages Received = 1 Member Destination Name = myJMSModule!myQueue Disconnected from weblogic server: AdminServer     Exiting WebLogic Scripting Tool. Thanks to Tom Barnes for the hints and the inspiration to write this up. Image of telephone switchboard courtesy of http://www.JoeTourist.net/ JoeTourist InfoSystems

    Read the article

< Previous Page | 181 182 183 184 185 186 187 188 189 190 191 192  | Next Page >