Search Results

Search found 9517 results on 381 pages for 'session evaluations'.

  • Announcing Oracle Solaris 11.1

    - by Larry Wake
    This morning, we announced Oracle Solaris 11.1, the first update to Oracle Solaris 11. This builds on all the things we've done to make Oracle Solaris the best operating system for enterprise cloud computing, so no surprises on what we've focused on: enhancements for cloud infrastructure, extreme availability for enterprise applications, and continued payoff from our co-engineering work with the rest of the Oracle software portfolio. You can learn more by visiting oracle.com/solaris and our Oracle Solaris 11.1 page on the Oracle Technology Network. If you're at Oracle OpenWorld, be sure to attend Solaris engineering VP Markus Flierl's general session at 10:15 today in Moscone South 103, where he'll be going into detail on Oracle Solaris 11.1. And be sure to sign up for our online launch event on November 7th, featuring Markus, fellow engineering VP Bill Nesheim, and a deep bench of Solaris engineers. It's hard to believe that it's been 20 years since Solaris 2.0 first shipped -- stay tuned for the next 20!

    Read the article

  • They Got What They Came For

    - by Oracle OpenWorld Blog Team
    At this week's Oracle OpenWorld Latin America, over 11,000 attendees got what they came for. Interesting and absorbing keynotes. Educational technical sessions. Innovative demos and exhibits. Inspiring hands-on labs. And many, many opportunities to network with and build their community of peers. If you weren't able to attend the conference, or if you did attend but want to review some of the sessions or share them with colleagues, session PDFs will be posted soon, so be sure to check back. 

    Read the article

  • Where to find GUI code

    - by muffinz
    I've been rummaging through Unity's source code (Shell Interface) and I'm curious about something: where in the code are you supposed to find positional code? I'll clarify a bit with some examples. How do you find in the code what tells the Launcher to sit on the left side of the screen? Where in the code does it tell the "Session" button on the panel (top) to sit at the very right of the screen? I guess my real question is: how do I find this out for myself? I've looked through a big portion of the source code and can't find anything related to the actual position of these items, only their sub-items, like text-align. Any guidance on this would be much appreciated.

    Read the article

  • How does one get Bluetile on 12.04?

    - by JKN
    I've been working on getting Bluetile working on 12.04 so I can start down the path of tiling window managers, but I have had limited success. I have read the (only 7!) other posts on Bluetile and tried the website, but unfortunately nothing seems to address Precise thoroughly enough for me to get it working. I have tried getting GNOME running on my machine via apt-get and following basic instructions from there, but without success. The apt-get install of bluetile also fails (on selecting the gnome-bluetile session option, Unity ends up opening anyway in a buggy, unstable way). I also messed around with specifying a custom xsession for LightDM to look at and setting the window manager to bluetile, but again without success -- somehow I ended up in Unity again. Apologies if this question is too vague -- I am new to really using Linux systems, so sometimes I don't know what to ask or look for. Thanks!

    Read the article

  • 12.04 Login gone wrong

    - by Mark H
    I seem to have a fault in the login procedure somewhere. When I booted up I found that if I selected Guest I could use the computer. If I selected my administrator account I was taken straight to the terminal. So I opened a guest session, went to user settings, unlocked my admin account (it accepted the password!), and amended it to show "password None" and automatic login. I now find that when I boot up I am taken straight to the terminal -- if I exit the terminal I get the login screen -- if I select the administrator login I go back to the terminal -- if I select Guest I can use the PC, and if I select my normal user account I can use the PC. So I cannot log in as an administrator, and admin functions such as update are no longer accessible. Sorry to be so long-winded, but I am stuck. Can anyone suggest anything? I am a beginner with this.

    Read the article

  • Terminal echo issue

    - by user107602
    I've been using Ubuntu 10.04 LTS for a while and am still quite new to the terminal. I made a script to open a project of mine, containing multiple files, with gedit: the script runs gedit [filename1] [filename2] ..., the terminal executes it successfully, gedit opens the passed files, and the terminal is ready for another line. Well, today I came across a strange issue: after executing the above-mentioned script, gedit starts successfully, but the terminal refuses to execute commands and echoes all keyboard events, even specific Ctrl+... combinations -- all until gedit is closed. I can't figure out what caused this, as my recent activity was focused around a C project and didn't involve the terminal in any way. I recall being able to execute another line after launching gedit, e.g. open gedit and compile a project within a single tab and session of a terminal window. Any help would be appreciated! Regards!

    Read the article

  • I need to get past my permissions to recover data

    - by adsmz
    Due to some mishaps, I am unable to boot into Kubuntu at all. However, my data is still on the hard drive. I managed to get one of the other two computers to which I have access to read the disk by booting into a live CD session of Kubuntu. The only storage medium to which I have access is a 30 GB data stick. Here's where the trouble starts: in music alone, I have about 60 GB to back up. Obviously this is going to have to be split into chunks and moved over to the second spare PC until I can reinstall Kubuntu on my laptop. All of the data that needs to be backed up is behind a permissions wall, so while I can view it, I can't interact with it directly. I know copying and moving through the terminal can get around this with sudo cp or sudo mv, but is there a way to first compress multiple folders into a single archive, then move it? (While we're on the subject, what compression method would be best for large volumes of music in MP3, WAV, and OGG format?)

    Read the article

  • Reload Gtk+ 3.0 theme

    - by eagleoneraptor
    I'm trying to customize my Gtk+ 3.0 theme. When I make a change to my theme, I switch between two themes (with MyUnity) to force applications to reload the theme so I can test mine. But when I do that, the theme is not refreshed and I can't see my changes -- it's still an old version, so apparently Gtk+ is caching the theme information somewhere. When I close and reopen my session, I can see my theme changes, but doing this for every change is very annoying. Is there a way to refresh my theme and see my changes (a command-line program or a Gtk+ API call, for example)?

    Read the article

  • C++ class member functions instantiated by traits

    - by Jive Dadson
    I am reluctant to say I can't figure this out, but I can't figure this out. I've googled and searched Stack Overflow, and come up empty. The abstract, and possibly overly vague form of the question is, how can I use the traits-pattern to instantiate non-virtual member functions? The question came up while modernizing a set of multivariate function optimizers that I wrote more than 10 years ago. The optimizers all operate by selecting a straight-line path through the parameter space away from the current best point (the "update"), then finding a better point on that line (the "line search"), then testing for the "done" condition, and if not done, iterating. There are different methods for doing the update, the line-search, and conceivably for the done test, and other things. Mix and match. Different update formulae require different state-variable data. For example, the LMQN update requires a vector, and the BFGS update requires a matrix. If evaluating gradients is cheap, the line-search should do so. If not, it should use function evaluations only. Some methods require more accurate line-searches than others. Those are just some examples. The original version instantiates several of the combinations by means of virtual functions. Some traits are selected by setting mode bits that are tested at runtime. Yuck. It would be trivial to define the traits with #define's and the member functions with #ifdef's and macros. But that's so twenty years ago. It bugs me that I cannot figure out a whiz-bang modern way. If there were only one trait that varied, I could use the curiously recurring template pattern. But I see no way to extend that to arbitrary combinations of traits. I tried doing it using boost::enable_if, etc.. The specialized state information was easy. I managed to get the functions done, but only by resorting to non-friend external functions that have the this-pointer as a parameter. I never even figured out how to make the functions friends, much less member functions. The compiler (VC++ 2008) always complained that things didn't match. I would yell, "SFINAE, you moron!" but the moron is probably me. Perhaps tag-dispatch is the key. I haven't gotten very deeply into that. Surely it's possible, right? If so, what is best practice?

    Read the article

  • Visiting the Emtel Data Centre

    Back in February, at the first event of the Emtel Knowledge Series (EKS), I spoke to various people at Emtel about their data centre here on the island. I was trying to see whether it would be possible to arrange a meeting over there for a selected group of our community members. Well, let's say it like this... My first approach wasn't that promising and was far from successful, but during the following months there were more and more occasions to get in touch with the "right" contact persons at Emtel to make it happen... Setting up an appointment and pre-requisites The major improvement came during a Boot Camp for Windows Phone 8.1 App development organised by Microsoft Indian Ocean Islands in cooperation with Emtel at the Emtel World, Ebene. Apart from learning bits and pieces regarding Universal Apps, I took the opportunity to get in touch with Arvin Lockee, Sales Executive - Data, during our lunch break. And this really kicked off the whole procedure. Prior to getting access to the Emtel data centre, you are requested to provide the full name and national ID of everyone going to visit. Also, it should be noted that there was only a limited number of seats available. Anyway, armed with this information, I posted through the usual social media channels. Responses came in very quickly, and on a first-come, first-served (FCFS) basis I noted down the details and forwarded them to Emtel in order to fix a date and time for the visit. In preparation on our side, all attendees exchanged contact details and we organised transport options to get to the data centre in Arsenal. The day before and on the day of our meeting, Arvin sent me a reminder to check whether everything was still confirmed and ready to go... Of course, it was! Arriving at the Emtel Data Centre As I was coming from Flic en Flac towards the north, we agreed that I would pick up a couple of young fellows near the old post office in Port Louis. All went well, except that Sean apparently lives in another time zone compared to the rest of us. Anyway, after an extended stop we were complete and arrived just in time in Arsenal to meet and greet Ish and Veer. Again, Emtel takes access procedures to their data centre very seriously, and the gate stayed closed until all our IDs had been noted and compared to the list of registered attendees. Despite having a good laugh at the mixture of old and new ID cards, it was straightforward processing. The guard was very helpful and guided us to the waiting area at the entrance section of the building. Shortly after, we were welcomed by Kamlesh Bokhoree, the Data Centre Officer. He gave us a brief introduction to the rules and regulations during our visit: no photography allowed, no touching the buttons, and following his instructions throughout the whole visit. Of course! Inside the data centre Next, he explained to us the multi-factor authentication system, which uses a combination of biometric data, like a fingerprint reader, and a "classic" PIN panel. The Emtel data centre provides multiple services, and next to co-location for your own hardware they also offer storage options for your backup and archive data in their massive, fire-resistant vault. It was very impressive to learn about the considerations that went into choosing the right location and how the whole premises were set up. It should also be noted that there is 24/7 CCTV surveillance inside and outside the buildings. Strengths of the Emtel TIER 3 Data Centre, Mauritius Finally, we were guided into the first server room.
And wow, the whole setup is cleverly planned and laid out in the architecture: from the false floors and ceilings that provide optimum air flow, to the separation of cold and hot aisles between the full-size server racks, and of course the monitored air conditions used to analyse and watch changes in temperature, smoke detection and other parameters. And, not surprisingly, everything has been implemented in two independent circuits. There is a standardised classification for the construction and operation of data centres world-wide, and Emtel's has been designed to be a TIER 4 building, but due to the lack of an alternative power supplier on the island it is officially registered as a TIER 3 compliant data centre. Maybe in the long run there might be a second supplier of energy next to CEB... time will tell. Luckily, the data centre is integrated into the National Fibre Optic Gigabit Ring, and Emtel already connects internationally through diverse undersea cable routes like SAFE & LION/LION2 out of Mauritius, and through several other providers for onward connectivity. Meanwhile, Arvin managed to join our little group of geeks, and he supported Kamlesh in answering our technical questions regarding the capacities and general operation of the data centre. Visiting the NOC and its dedicated team of IT professionals was surely one of the visual highlights. Seeing their wall of screens monitoring any kind of activity on the data lines, the managed servers and the activity in and around the building was great. Even though I've been using a multi-head setup for years, I can't keep up with that setup... ;-) But I got a couple of ideas on how to improve my work spaces here at the office. Clear advantages of hosting your e-commerce and mobile backends locally After the completely isolated NOC area, we continued our Q&A session with Kamlesh and Arvin in the second server room, which is dedicated to shared environments. At first glance, it should be noted that there is plenty of space for full-sized racks and therefore for co-location of your own hardware. Given the feedback that there are upcoming changes in prices, the facilities at the Emtel data centre are getting more and more competitive and interesting for local companies, especially small and medium enterprises. After seeing this world-class infrastructure available on the island, I'm already considering moving one of my root servers hosted abroad to be co-located here on the island. This would provide an improved user experience in terms of site performance and latency, especially for upcoming e-commerce solutions for two of my local clients. Later on, we started a conversation about additional services that could be a catalyst for the local market and attract more small and medium companies to take the data centre into their evaluations regarding online activities. As of today, Emtel does not provide virtualised server environments, but there might be plans in the future to cover this field as well. Emtel is first and foremost a mobile operator and internet connectivity provider; entering the market of managed and virtualised server infrastructure, including capacities in terms of cloud storage and computing, is rather new, and there is a continuous learning curve at Emtel, too. You cannot just jump into a new market and see how it works out...
And I appreciate Emtel's approach of building a solid foundation first and then building new services on top of that: Emtel as a future one-stop-shop service provider for all your internet and telecommunications needs. Emtel's promotional video about their TIER 3 data centre in Arsenal, Mauritius More details are thoroughly described in Emtel's brochure about their data centre. Check out their PDF document here. Thanks for this opportunity Visiting and walking through the Emtel data centre for more than 2 hours was a great experience. As a representative of the Mauritius Software Craftsmanship Community (MSCC), I would like to thank everyone at Emtel involved in the process of making it happen, and especially Arvin Lockee and Kamlesh Bokhoree for their time and patience in explaining the infrastructure and answering all the endless questions from our members. Thank You!

    Read the article

  • The SPARC SuperCluster

    - by Karoly Vegh
    Oracle has been providing a lead in the Engineered Systems business for quite a while now, in accordance with the motto "Hardware and Software Engineered to Work Together." Indeed it is hard to find a better definition of these systems. Allow me to summarize the idea. It is: Build a compute platform optimized to run your technologies. Develop application-aware, intelligently caching storage components. Take an impressively fast network technology interconnecting it with the compute nodes. Tune the application to scale with the nodes to yet unseen performance. Reduce the amount of data moving via compression. Provide this all in a pre-integrated single product with a single-pane management interface. All these ideas have been around in IT for quite some time now. The real Oracle advantage is adding the last one to put these all together. Oracle has built quite a portfolio of Engineered Systems to run its technologies - and run those like they never ran before. In this post I'll focus on one of them that serves as a consolidation demigod, a multi-purpose engineered system. As you probably have guessed, I am talking about the SPARC SuperCluster. It has many great features inherited from its predecessors, and it adds several new ones. Allow me to pick out and elaborate on some of the most interesting ones from a technological point of view. I. It is the SPARC SuperCluster T4-4. That is, as compute nodes, it includes SPARC T4-4 servers that we learned to appreciate and respect for their features. The SPARC T4 CPUs: each CPU has 8 cores, and each core runs 8 threads. The SPARC T4-4 servers have 4 sockets. That is, a single compute node can simultaneously execute 256 threads in parallel. Now, a full-rack SPARC SuperCluster has 4 of these servers on board. Remember the keyword demigod. While retaining the forerunner SPARC T3's exceptional throughput, the SPARC T4 CPUs raise the bar on single-threaded performance too - a humble 5x better than their ancestors. Actually, the SPARC T4 CPU cores run in both single-threaded and multi-threaded mode, and switch between the two on the fly, fulfilling not only single-threaded OR multi-threaded applications' needs, but even mixed requirements (like in database workloads!). Data security, anyone? Every SPARC T4 CPU core has a built-in encryption engine, that is, encryption algorithms cast into silicon. There is also a PCI controller right on the chip for customers who need I/O performance. Built-in, no-cost virtualization: Oracle VM for SPARC (the former LDoms or Logical Domains) is not a server-emulation virtualization technology but rather a server-partitioning one; the hypervisor runs in the server firmware, and all the VMs' HW resources (I/O, CPU, memory) are accessed natively, without performance overhead. This enables customers to run a number of Solaris 10 and Solaris 11 VMs separated and independent of each other within a physical server. II. For database performance, it includes Exadata Storage Cells - one of the main reasons why the Exadata Database Machine performs at diabolic speed. What makes them important? They provide DB backend storage for your Oracle Databases running on the SPARC SuperCluster; that is what they are built and tuned for: DB performance. These storage cells are SQL-aware.
That is, if a SPARC T4 database compute node executes a query, it doesn't simply request tons of raw data blocks from the storage, filter the received data, and throw away most of it where the statement doesn't apply; it provides the SQL query to the storage node too. The storage cell software speaks SQL, that is, it is able to pre-filter and thereby transfer only the relevant data. With this, the traffic between database nodes and storage cells is reduced immensely. Less I/O is a good thing - as they say, all the CPUs of the world do one thing just as fast as any other, and that is waiting for I/O. They don't only pre-filter, but also provide data preprocessing features - e.g. if a DB node requests an aggregate of data, they can calculate it and hand over only the results, not the whole set. Again, less data to transfer. They support the magical HCC (Hybrid Columnar Compression). That is, data can be stored in a precompressed form on the storage. Less data to transfer. Of course one can't simply rely on disks for performance, so there is Flash Storage included for caching. III. The low-latency, high-speed backbone network: InfiniBand, which interconnects all the members with: Real high speed: 40 Gbit/s. Full duplex, of course. Oh, and a really low latency. RDMA - Remote Direct Memory Access. This technology allows the DB nodes to do exactly that: remotely, directly placing SQL commands into the memory of the storage cells, dodging all the network-stack bottlenecks, avoiding overhead, placing requests directly into the process queue. You can also run IP over InfiniBand if you please - that's the way the compute nodes can communicate with each other. IV. It includes a general-purpose storage too: the ZFSSA, which is a unified storage providing both NAS and SAN access, with the following features: NFS over RDMA over InfiniBand (nothing is faster network-filesystem-wise); all the ZFS features onboard - hybrid storage pools, compression, deduplication, snapshots, replication, NFS and CIFS shares; storage heads in an HA-cluster configuration providing availability of the data; and DTrace Live Analytics in a web-based administration UI. It serves as a general-purpose application data store for your non-database applications running on the SPARC SuperCluster over whichever protocol they prefer, easily replicating, snapshotting and cloning data for them. There's a lot of great technology included in Oracle's SPARC SuperCluster; we have talked through its interior. As for external scalability: you can start with a half- or full-rack SPARC SuperCluster and scale out to several racks - that is, not stacking separate full-rack SPARC SuperClusters, but always extending one large instance to the size of several full racks. Yes, over the InfiniBand network. Add racks as you grow. What technologies shall run on it? The SPARC SuperCluster is a general-purpose scale-out consolidation/cloud environment. You can run Oracle Databases with RAC scaling, or Oracle WebLogic (and enjoy the SPARC T4's advantages for running Java). Remember, Oracle technologies have been integrated with the Oracle Engineered Systems - this is the Oracle-on-Oracle advantage. But you can run other software environments such as SAP if you please, too. Run any application that runs on Oracle Solaris 10 or Solaris 11. Separate them in virtual machines, or even Oracle Solaris Zones, and monitor and manage those from a central UI.
Here are the key takeaways once again. The SPARC SuperCluster: is a pre-integrated Engineered System; contains SPARC T4-4 servers with built-in virtualization, cryptography and dynamic threading; contains the Exadata storage cells that intelligently offload the burden of the DB nodes; contains a highly available ZFS Storage Appliance that provides SAN/NAS storage in a unified way; combines all these elements over a high-speed, low-latency backbone network implemented with InfiniBand; can grow from a single half rack to several full racks; and supports the consolidation of hundreds of applications. To summarize: all these technologies are great by themselves, but the real value is, like in every other Oracle Engineered System, integration. All these technologies are tuned to perform together. Together they are way more than the sum of their parts - and a careful and actually very time-consuming integration process is necessary to orchestrate all of them for performance. The SPARC SuperCluster's goal is to enable infrastructure operations and offer a pre-integrated solution that can be architected and delivered in hours instead of months of evaluations and tests. The tedious and, most importantly, time- and resource-consuming part of the work - testing and evaluating - has been done. Now go, provide services. -- charlie

    Read the article

  • What is the role and purpose of a bootstrapper?

    - by ForeverDebugging
    I'm working on an application which uses a bootstrapper object to perform some operations when the application first starts: e.g. it registers IoC objects, puts certain variables into the ASP.NET Application session object, does some security checks, etc. I had a look around the internet and couldn't find a reference to a bootstrapper pattern, or any article about the subject. Is this a known pattern under a different name?
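    For what it's worth, the idea usually does go by "bootstrapper" (often discussed alongside the Composition Root of an IoC-based application). The pattern itself is language-agnostic; the following is only a minimal sketch, written here in Java, with purely illustrative class and task names rather than anything from the question's ASP.NET code:

      import java.util.ArrayList;
      import java.util.List;

      // A bootstrapper is just an object that runs one-off startup tasks in a defined
      // order before the application starts serving requests (IoC registrations,
      // seeding application-wide state, security checks, ...).
      public class Bootstrapper {

          // Each startup concern is its own small task, so they stay testable and ordered.
          interface StartupTask {
              void execute();
          }

          private final List<StartupTask> tasks = new ArrayList<StartupTask>();

          public Bootstrapper register(StartupTask task) {
              tasks.add(task);
              return this; // fluent registration
          }

          public void run() {
              for (StartupTask task : tasks) {
                  task.execute();
              }
          }

          public static void main(String[] args) {
              new Bootstrapper()
                      .register(new StartupTask() { public void execute() { /* register IoC components */ } })
                      .register(new StartupTask() { public void execute() { /* seed application state */ } })
                      .register(new StartupTask() { public void execute() { /* run security checks */ } })
                      .run();
          }
      }

    Keeping each concern in its own task (rather than one monolithic startup method) is the main design point; the bootstrapper itself only owns the ordering.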

    Read the article

  • Twitter bootstrap + asp.net masterpages, how to set navbar item as active when user selects it?

    - by ase69s
    We are in the same situation as the question "Make Twitter Bootstrap navbar link active", but in our case we are using ASP.NET and master pages... The thing is, the navbar is defined in the master page, and when you click a menu item you are redirected to the corresponding child page. So how would you change the navbar's active item accordingly, without replicating the logic in each child page? (Preferably without session variables, and with JavaScript only at the master page.)

    Read the article

  • Spring Security - Interactive login attempt was unsuccessful

    - by Taylor L
    I've been trying to track down why Spring Security isn't creating the SPRING_SECURITY_REMEMBER_ME_COOKIE so I turned on logging for org.springframework.security.web.authentication.rememberme. At first glance, the logs make it seem like the login is failing but the login is actually successful in the sense that if I navigate to a page that requires authentication I am not redirected back to the login page. However, the logs appear to be saying the login credentials are invalid. Any ideas as to what is going on? Mar 16, 2010 10:05:56 AM org.springframework.security.web.authentication.rememberme.PersistentTokenBasedRememberMeServices onLoginSuccess FINE: Creating new persistent login for user [email protected] Mar 16, 2010 10:10:07 AM org.springframework.security.web.authentication.rememberme.AbstractRememberMeServices loginFail FINE: Interactive login attempt was unsuccessful. Mar 16, 2010 10:10:07 AM org.springframework.security.web.authentication.rememberme.AbstractRememberMeServices cancelCookie FINE: Cancelling cookie <http auto-config="false"> <intercept-url pattern="/css/**" filters="none" /> <intercept-url pattern="/img/**" filters="none" /> <intercept-url pattern="/js/**" filters="none" /> <intercept-url pattern="/app/admin/**" filters="none" /> <intercept-url pattern="/app/login/**" filters="none" /> <intercept-url pattern="/app/register/**" filters="none" /> <intercept-url pattern="/app/error/**" filters="none" /> <intercept-url pattern="/" filters="none" /> <intercept-url pattern="/**" access="ROLE_USER" /> <logout logout-success-url="/" /> <form-login login-page="/app/login" default-target-url="/" authentication-failure-url="/app/login?login_error=1" /> <session-management invalid-session-url="/app/login" /> <remember-me services-ref="rememberMeServices" key="myKey" /> </http> <authentication-manager alias="authenticationManager"> <authentication-provider user-service-ref="userDetailsService"> <password-encoder hash="sha-256" base64="true"> <salt-source user-property="username" /> </password-encoder> </authentication-provider> </authentication-manager> <beans:bean id="userDetailsService" class="com.my.service.auth.UserDetailsServiceImpl" /> <beans:bean id="rememberMeServices" class="org.springframework.security.web.authentication.rememberme.PersistentTokenBasedRememberMeServices"> <beans:property name="userDetailsService" ref="userDetailsService" /> <beans:property name="tokenRepository" ref="persistentTokenRepository" /> <beans:property name="key" value="myKey" /> </beans:bean> <beans:bean id="persistentTokenRepository" class="com.my.service.auth.PersistentTokenRepositoryImpl" />

    Read the article

  • Convert JSON to HashMap using Gson in Java

    - by mridang
    Hi, I'm requesting data from a server which returns data in the JSON format. Converting a HashMap into JSON when making the request wasn't hard at all, but the other way around seems to be a little tricky. The JSON response looks like this: { "header" : { "alerts" : [ { "AlertID" : "2", "TSExpires" : null, "Target" : "1", "Text" : "woot", "Type" : "1" }, { "AlertID" : "3", "TSExpires" : null, "Target" : "1", "Text" : "woot", "Type" : "1" } ], "session" : "0bc8d0835f93ac3ebbf11560b2c5be9a" }, "result" : "4be26bc400d3c" } What would be the easiest way to access this data? I'm using the GSON module. Cheers.
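    A minimal sketch of one common way to do this with Gson's TypeToken, deserializing into a Map so no custom classes are needed (the class name JsonToMap and the shortened JSON literal are illustrative; the Gson/TypeToken calls are the standard Gson API):

      import com.google.gson.Gson;
      import com.google.gson.reflect.TypeToken;
      import java.lang.reflect.Type;
      import java.util.Map;

      public class JsonToMap {
          public static void main(String[] args) {
              String json = "{\"header\":{\"session\":\"0bc8d0835f93ac3ebbf11560b2c5be9a\"},\"result\":\"4be26bc400d3c\"}";
              Gson gson = new Gson();
              // Tell Gson the generic target type; a plain Map.class would lose the type parameters.
              Type type = new TypeToken<Map<String, Object>>() {}.getType();
              Map<String, Object> data = gson.fromJson(json, type);
              // Nested JSON objects come back as Maps, JSON arrays as Lists.
              @SuppressWarnings("unchecked")
              Map<String, Object> header = (Map<String, Object>) data.get("header");
              System.out.println(header.get("session")); // 0bc8d0835f93ac3ebbf11560b2c5be9a
              System.out.println(data.get("result"));    // 4be26bc400d3c
          }
      }

    If the response structure is stable, mapping it onto small POJOs (Header, Alert, ...) and calling gson.fromJson(json, Response.class) is usually the cleaner alternative.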

    Read the article

  • How do you code against CSRF malicious requests?

    - by user355950
    How do you decline malicious requests? The scanner finding reads: Cross-Site Request Forgery. Severity: Medium. Test Type: Application. Remediation Tasks: Decline malicious requests. Reasoning: The same request was sent twice in different sessions and the same response was received. This shows that none of the parameters are dynamic (session identifiers are sent only in cookies) and therefore that the application is vulnerable to this issue.
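    The usual remediation is the synchronizer-token pattern: generate an unguessable per-session token, embed it in every form, and reject state-changing requests that don't carry it back. Below is a hedged sketch as a plain Java servlet filter; the attribute/parameter name "csrfToken" and the class name are illustrative, and a real deployment should also cover AJAX requests and token rotation:

      import java.io.IOException;
      import java.math.BigInteger;
      import java.security.SecureRandom;

      import javax.servlet.Filter;
      import javax.servlet.FilterChain;
      import javax.servlet.FilterConfig;
      import javax.servlet.ServletException;
      import javax.servlet.ServletRequest;
      import javax.servlet.ServletResponse;
      import javax.servlet.http.HttpServletRequest;
      import javax.servlet.http.HttpServletResponse;
      import javax.servlet.http.HttpSession;

      // Synchronizer-token filter: a per-session secret is generated server-side,
      // embedded in every form as a hidden field, and checked on each state-changing request.
      public class CsrfTokenFilter implements Filter {

          private static final String TOKEN_ATTR = "csrfToken";  // session attribute (illustrative name)
          private static final String TOKEN_PARAM = "csrfToken"; // hidden form field / request parameter
          private static final SecureRandom RANDOM = new SecureRandom();

          public void init(FilterConfig config) { }

          public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
                  throws IOException, ServletException {
              HttpServletRequest request = (HttpServletRequest) req;
              HttpServletResponse response = (HttpServletResponse) res;
              HttpSession session = request.getSession();

              String sessionToken = (String) session.getAttribute(TOKEN_ATTR);
              if (sessionToken == null) {
                  // 160-bit random value, unguessable and unique per session
                  sessionToken = new BigInteger(160, RANDOM).toString(32);
                  session.setAttribute(TOKEN_ATTR, sessionToken);
              }

              // Only state-changing requests need the check; GETs should be side-effect free anyway.
              if (!"GET".equalsIgnoreCase(request.getMethod())) {
                  String requestToken = request.getParameter(TOKEN_PARAM);
                  if (!sessionToken.equals(requestToken)) {
                      response.sendError(HttpServletResponse.SC_FORBIDDEN, "Missing or invalid CSRF token");
                      return;
                  }
              }
              chain.doFilter(req, res);
          }

          public void destroy() { }
      }

    Each form then includes the session token as a hidden input named csrfToken, which is exactly the "dynamic parameter" the scanner is looking for.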

    Read the article

  • How to set up default schema name in JPA configuration?

    - by Roman
    I found that in the Hibernate config file we can set the hibernate.default_schema parameter: <hibernate-configuration> <session-factory> ... <property name="hibernate.default_schema">myschema</property> ... </session-factory> </hibernate-configuration> Now I'm using JPA and I want to do the same. Otherwise I have to add the schema parameter to each @Table annotation, like: @Entity @Table (name = "projectcategory", schema = "SCHEMANAME") public class Category implements Serializable { ... } As I understand it, this parameter should be somewhere in this part of the configuration: <bean id="domainEntityManagerFactory" class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean"> <property name="persistenceUnitName" value="JiraManager"/> <property name="dataSource" ref="domainDataSource"/> <property name="jpaVendorAdapter"> <bean class="org.springframework.orm.jpa.vendor.HibernateJpaVendorAdapter"> <property name="generateDdl" value="false"/> <property name="showSql" value="false"/> <property name="databasePlatform" value="${hibernate.dialect}"/> </bean> </property> </bean> <bean id="domainDataSource" class="com.mchange.v2.c3p0.ComboPooledDataSource" destroy-method="close"> <property name="driverClass" value="${db.driver}" /> <property name="jdbcUrl" value="${datasource.url}" /> <property name="user" value="${datasource.username}" /> <property name="password" value="${datasource.password}" /> <property name="initialPoolSize" value="5"/> <property name="minPoolSize" value="5"/> <property name="maxPoolSize" value="15"/> <property name="checkoutTimeout" value="10000"/> <property name="maxStatements" value="150"/> <property name="testConnectionOnCheckin" value="true"/> <property name="idleConnectionTestPeriod" value="50"/> </bean> ... but I can't find its name on Google. Any ideas?
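    Assuming Hibernate is the JPA provider (as the HibernateJpaVendorAdapter suggests), the hibernate.default_schema property can be passed through the entity manager factory's jpaProperties, either as a <property name="jpaProperties"> block in the XML or in persistence.xml. A minimal Java-config sketch of the same idea is shown below; the bean method and the "SCHEMANAME" value are illustrative:

      import java.util.Properties;
      import javax.sql.DataSource;

      import org.springframework.context.annotation.Bean;
      import org.springframework.context.annotation.Configuration;
      import org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean;
      import org.springframework.orm.jpa.vendor.HibernateJpaVendorAdapter;

      // Java-config counterpart of the XML bean above; the key line is the
      // hibernate.default_schema entry passed as a vendor-specific JPA property.
      @Configuration
      public class JpaConfig {

          @Bean
          public LocalContainerEntityManagerFactoryBean entityManagerFactory(DataSource dataSource) {
              LocalContainerEntityManagerFactoryBean emf = new LocalContainerEntityManagerFactoryBean();
              emf.setPersistenceUnitName("JiraManager");
              emf.setDataSource(dataSource);
              emf.setJpaVendorAdapter(new HibernateJpaVendorAdapter());

              Properties jpaProperties = new Properties();
              jpaProperties.setProperty("hibernate.default_schema", "SCHEMANAME"); // illustrative schema name
              emf.setJpaProperties(jpaProperties);
              return emf;
          }
      }

    With the default schema set this way, the schema attribute can be dropped from the individual @Table annotations.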

    Read the article

  • NHibernate.MappingException: No persister for:

    - by Sara Chipps
    Now, before you say it I DID google and my hbm.xml file IS an Embedded Resource. Here is the code I am calling: ISession session = GetCurrentSession(); var returnObject = session.Get<T>(Id); Here is my mapping file for the class: <?xml version="1.0" encoding="utf-8" ?> <hibernate-mapping xmlns="urn:nhibernate-mapping-2.2"> <class name="HQData.Objects.SubCategory, HQData" table="SubCategory" lazy="true"> <id name="ID" column="ID" unsaved-value="0"> <generator class="identity" /> </id> <property name="Name" column="Name" /> <property name="NumberOfBuckets" column="NumberOfBuckets" /> <property name="SearchCriteriaOne" column="SearchCriteriaOne" /> <bag name="_Businesses" cascade="all"> <key column="SubCategoryId"/> <one-to-many class="HQData.Objects.Business, HQData"/> </bag> <bag name="_Buckets" cascade="all"> <key column="SubCategoryId"/> <one-to-many class="HQData.Objects.Bucket, HQData"/> </bag> </class> </hibernate-mapping> Has anyone run to this issue before? I swore that was it after I read it, but no dice. Here is the rest of the error and thanks for your help. MappingException: No persister for: HQData.Objects.SubCategory]NHibernate.Impl.SessionFactoryImpl.GetEntityPersister(String entityName, Boolean throwIfNotFound) in c:\CSharp\NH2.0.0\nhibernate\src\NHibernate\Impl\SessionFactoryImpl.cs:766 NHibernate.Impl.SessionFactoryImpl.GetEntityPersister(String entityName) in c:\CSharp\NH2.0.0\nhibernate\src\NHibernate\Impl\SessionFactoryImpl.cs:752 NHibernate.Event.Default.DefaultLoadEventListener.OnLoad(LoadEvent event, LoadType loadType) in c:\CSharp\NH2.0.0\nhibernate\src\NHibernate\Event\Default\DefaultLoadEventListener.cs:37 NHibernate.Impl.SessionImpl.FireLoad(LoadEvent event, LoadType loadType) in c:\CSharp\NH2.0.0\nhibernate\src\NHibernate\Impl\SessionImpl.cs:2054 NHibernate.Impl.SessionImpl.Get(String entityName, Object id) in c:\CSharp\NH2.0.0\nhibernate\src\NHibernate\Impl\SessionImpl.cs:1029 NHibernate.Impl.SessionImpl.Get(Type entityClass, Object id) in c:\CSharp\NH2.0.0\nhibernate\src\NHibernate\Impl\SessionImpl.cs:1020 NHibernate.Impl.SessionImpl.Get(Object id) in c:\CSharp\NH2.0.0\nhibernate\src\NHibernate\Impl\SessionImpl.cs:985 HQData.DataAccessUtils.NHibernateObjectHelper.LoadDataObject(Int32 Id) in C:\Development\HQChannelRepo\HQ Channel Application\HQChannel\HQData\DataAccessUtils\NHibernateObjectHelper.cs:42 HQWebsite.LocalSearch.get_subCategory() in C:\Development\HQChannelRepo\HQ Channel Application\HQChannel\HQWebsite\LocalSearch.aspx.cs:17 HQWebsite.LocalSearch.Page_Load(Object sender, EventArgs e) in C:\Development\HQChannelRepo\HQ Channel Application\HQChannel\HQWebsite\LocalSearch.aspx.cs:27 System.Web.Util.CalliHelper.EventArgFunctionCaller(IntPtr fp, Object o, Object t, EventArgs e) +15 System.Web.Util.CalliEventHandlerDelegateProxy.Callback(Object sender, EventArgs e) +33 System.Web.UI.Control.OnLoad(EventArgs e) +99 System.Web.UI.Control.LoadRecursive() +47 System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) +1436 I had changed some code and I wasn't adding the Assembly to the config file during runtime. Thanks for your help This has been fixed, I am not F-ing with my NHibernate setup ever again!

    Read the article

  • Linux e1000e (Intel networking driver) problems galore, where do I start?

    - by Evan Carroll
    I'm currently having a major problem with e1000e (not working at all) in Ubuntu Maverick (1.0.2-k4), after resume I'm getting a lot of stuff in dmesg: [ 9085.820197] e1000e 0000:02:00.0: PCI INT A disabled [ 9089.907756] e1000e: Intel(R) PRO/1000 Network Driver - 1.0.2-k4 [ 9089.907762] e1000e: Copyright (c) 1999 - 2009 Intel Corporation. [ 9089.907797] e1000e 0000:02:00.0: Disabling ASPM L1 [ 9089.907827] e1000e 0000:02:00.0: PCI INT A -> GSI 16 (level, low) -> IRQ 16 [ 9089.907857] e1000e 0000:02:00.0: setting latency timer to 64 [ 9089.908529] e1000e 0000:02:00.0: irq 44 for MSI/MSI-X [ 9089.908922] e1000e 0000:02:00.0: Disabling ASPM L0s [ 9089.908954] e1000e 0000:02:00.0: (unregistered net_device): PHY reset is blocked due to SOL/IDER session. [ 9090.024625] e1000e 0000:02:00.0: eth0: (PCI Express:2.5GB/s:Width x1) 00:0a:e4:3e:ce:74 [ 9090.024630] e1000e 0000:02:00.0: eth0: Intel(R) PRO/1000 Network Connection [ 9090.024712] e1000e 0000:02:00.0: eth0: MAC: 2, PHY: 2, PBA No: 005302-003 [ 9090.109492] e1000e 0000:02:00.0: irq 44 for MSI/MSI-X [ 9090.164219] e1000e 0000:02:00.0: irq 44 for MSI/MSI-X and, a bunch of [ 2128.005447] e1000e 0000:02:00.0: eth0: Detected Hardware Unit Hang: [ 2128.005452] TDH <89> [ 2128.005454] TDT <27> [ 2128.005456] next_to_use <27> [ 2128.005458] next_to_clean <88> [ 2128.005460] buffer_info[next_to_clean]: [ 2128.005463] time_stamp <6e608> [ 2128.005465] next_to_watch <8a> [ 2128.005467] jiffies <6f929> [ 2128.005469] next_to_watch.status <0> [ 2128.005471] MAC Status <80080703> [ 2128.005473] PHY Status <796d> [ 2128.005475] PHY 1000BASE-T Status <4000> [ 2128.005477] PHY Extended Status <3000> [ 2128.005480] PCI Status <10> I decided to compile the latest stable e1000e to 1.2.17, now I'm getting: [ 9895.678050] e1000e: Intel(R) PRO/1000 Network Driver - 1.2.17-NAPI [ 9895.678055] e1000e: Copyright(c) 1999 - 2010 Intel Corporation. [ 9895.678098] e1000e 0000:02:00.0: Disabling ASPM L1 [ 9895.678129] e1000e 0000:02:00.0: PCI INT A -> GSI 16 (level, low) -> IRQ 16 [ 9895.678162] e1000e 0000:02:00.0: setting latency timer to 64 [ 9895.679136] e1000e 0000:02:00.0: irq 44 for MSI/MSI-X [ 9895.679160] e1000e 0000:02:00.0: Disabling ASPM L0s [ 9895.679192] e1000e 0000:02:00.0: (unregistered net_device): PHY reset is blocked due to SOL/IDER session. [ 9895.791758] e1000e 0000:02:00.0: eth0: (PCI Express:2.5GB/s:Width x1) 00:0a:e4:3e:ce:74 [ 9895.791766] e1000e 0000:02:00.0: eth0: Intel(R) PRO/1000 Network Connection [ 9895.791850] e1000e 0000:02:00.0: eth0: MAC: 3, PHY: 2, PBA No: 005302-003 [ 9895.892464] e1000e 0000:02:00.0: irq 44 for MSI/MSI-X [ 9895.948175] e1000e 0000:02:00.0: irq 44 for MSI/MSI-X [ 9895.949111] ADDRCONF(NETDEV_UP): eth0: link is not ready [ 9895.954694] e1000e: eth0 NIC Link is Up 10 Mbps Full Duplex, Flow Control: RX/TX [ 9895.954703] e1000e 0000:02:00.0: eth0: 10/100 speed: disabling TSO [ 9895.955157] ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready [ 9906.832056] eth0: no IPv6 routers present With 1.2.20 I get: [ 9711.525465] e1000e: Intel(R) PRO/1000 Network Driver - 1.2.20-NAPI [ 9711.525472] e1000e: Copyright(c) 1999 - 2010 Intel Corporation. 
[ 9711.525521] e1000e 0000:02:00.0: Disabling ASPM L1 [ 9711.525554] e1000e 0000:02:00.0: PCI INT A -> GSI 16 (level, low) -> IRQ 16 [ 9711.525586] e1000e 0000:02:00.0: setting latency timer to 64 [ 9711.526460] e1000e 0000:02:00.0: irq 45 for MSI/MSI-X [ 9711.526487] e1000e 0000:02:00.0: Disabling ASPM L0s [ 9711.526523] e1000e 0000:02:00.0: (unregistered net_device): PHY reset is blocked due to SOL/IDER session. [ 9711.639763] e1000e 0000:02:00.0: eth0: (PCI Express:2.5GB/s:Width x1) 00:0a:e4:3e:ce:74 [ 9711.639771] e1000e 0000:02:00.0: eth0: Intel(R) PRO/1000 Network Connection [ 9711.639854] e1000e 0000:02:00.0: eth0: MAC: 3, PHY: 2, PBA No: 005302-003 [ 9712.060770] e1000e 0000:02:00.0: irq 45 for MSI/MSI-X [ 9712.116195] e1000e 0000:02:00.0: irq 45 for MSI/MSI-X [ 9712.117098] ADDRCONF(NETDEV_UP): eth0: link is not ready [ 9712.122684] e1000e: eth0 NIC Link is Up 100 Mbps Full Duplex, Flow Control: RX/TX [ 9712.122693] e1000e 0000:02:00.0: eth0: 10/100 speed: disabling TSO [ 9712.123142] ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready [ 9722.920014] eth0: no IPv6 routers present But, I'm still getting these [ 9982.992851] PCI Status <10> [ 9984.993602] e1000e 0000:02:00.0: eth0: Detected Hardware Unit Hang: [ 9984.993606] TDH <5d> [ 9984.993608] TDT <6b> [ 9984.993611] next_to_use <6b> [ 9984.993613] next_to_clean <5b> [ 9984.993615] buffer_info[next_to_clean]: [ 9984.993617] time_stamp <24da80> [ 9984.993619] next_to_watch <5d> [ 9984.993621] jiffies <24f200> [ 9984.993624] next_to_watch.status <0> [ 9984.993626] MAC Status <80080703> [ 9984.993628] PHY Status <796d> [ 9984.993630] PHY 1000BASE-T Status <4000> [ 9984.993632] PHY Extended Status <3000> [ 9984.993635] PCI Status <10> [ 9986.001047] e1000e 0000:02:00.0: eth0: Reset adapter [ 9986.176202] e1000e: eth0 NIC Link is Up 10 Mbps Full Duplex, Flow Control: RX/TX [ 9986.176211] e1000e 0000:02:00.0: eth0: 10/100 speed: disabling TSO I'm not sure where to start troubleshooting this. Any ideas? 
Here is the result of ethtool -d eth0 MAC Registers ------------- 0x00000: CTRL (Device control register) 0x18100248 Endian mode (buffers): little Link reset: reset Set link up: 1 Invert Loss-Of-Signal: no Receive flow control: enabled Transmit flow control: enabled VLAN mode: disabled Auto speed detect: disabled Speed select: 1000Mb/s Force speed: no Force duplex: no 0x00008: STATUS (Device status register) 0x80080703 Duplex: full Link up: link config TBI mode: disabled Link speed: 10Mb/s Bus type: PCI Express Port number: 0 0x00100: RCTL (Receive control register) 0x04048002 Receiver: enabled Store bad packets: disabled Unicast promiscuous: disabled Multicast promiscuous: disabled Long packet: disabled Descriptor minimum threshold size: 1/2 Broadcast accept mode: accept VLAN filter: enabled Canonical form indicator: disabled Discard pause frames: filtered Pass MAC control frames: don't pass Receive buffer size: 2048 0x02808: RDLEN (Receive desc length) 0x00001000 0x02810: RDH (Receive desc head) 0x00000001 0x02818: RDT (Receive desc tail) 0x000000F0 0x02820: RDTR (Receive delay timer) 0x00000000 0x00400: TCTL (Transmit ctrl register) 0x3103F0FA Transmitter: enabled Pad short packets: enabled Software XOFF Transmission: disabled Re-transmit on late collision: enabled 0x03808: TDLEN (Transmit desc length) 0x00001000 0x03810: TDH (Transmit desc head) 0x00000000 0x03818: TDT (Transmit desc tail) 0x00000000 0x03820: TIDV (Transmit delay timer) 0x00000008 PHY type: IGP2 and ethtool -c eth0 Coalesce parameters for eth0: Adaptive RX: off TX: off stats-block-usecs: 0 sample-interval: 0 pkt-rate-low: 0 pkt-rate-high: 0 rx-usecs: 3 rx-frames: 0 rx-usecs-irq: 0 rx-frames-irq: 0 tx-usecs: 0 tx-frames: 0 tx-usecs-irq: 0 tx-frames-irq: 0 rx-usecs-low: 0 rx-frame-low: 0 tx-usecs-low: 0 tx-frame-low: 0 rx-usecs-high: 0 rx-frame-high: 0 tx-usecs-high: 0 tx-frame-high: 0 Here is also the lspci -vvv for this controller 02:00.0 Ethernet controller: Intel Corporation 82573L Gigabit Ethernet Controller Subsystem: Lenovo ThinkPad X60s Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR+ FastB2B- DisINTx+ Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx- Latency: 0, Cache Line Size: 64 bytes Interrupt: pin A routed to IRQ 45 Region 0: Memory at ee000000 (32-bit, non-prefetchable) [size=128K] Region 2: I/O ports at 2000 [size=32] Capabilities: [c8] Power Management version 2 Flags: PMEClk- DSI+ D1- D2- AuxCurrent=0mA PME(D0+,D1-,D2-,D3hot+,D3cold+) Status: D0 NoSoftRst- PME-Enable- DSel=0 DScale=1 PME- Capabilities: [d0] MSI: Enable+ Count=1/1 Maskable- 64bit+ Address: 00000000fee0300c Data: 415a Capabilities: [e0] Express (v1) Endpoint, MSI 00 DevCap: MaxPayload 256 bytes, PhantFunc 0, Latency L0s <512ns, L1 <64us ExtTag- AttnBtn- AttnInd- PwrInd- RBE- FLReset- DevCtl: Report errors: Correctable+ Non-Fatal+ Fatal+ Unsupported+ RlxdOrd+ ExtTag- PhantFunc- AuxPwr- NoSnoop+ MaxPayload 128 bytes, MaxReadReq 512 bytes DevSta: CorrErr- UncorrErr- FatalErr- UnsuppReq- AuxPwr+ TransPend- LnkCap: Port #0, Speed 2.5GT/s, Width x1, ASPM L0s L1, Latency L0 <128ns, L1 <64us ClockPM+ Surprise- LLActRep- BwNot- LnkCtl: ASPM Disabled; RCB 64 bytes Disabled- Retrain- CommClk+ ExtSynch- ClockPM+ AutWidDis- BWInt- AutBWInt- LnkSta: Speed 2.5GT/s, Width x1, TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt- Capabilities: [100 v1] Advanced Error Reporting UESta: DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq+ ACSViol- 
UEMsk: DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol- UESvrt: DLP+ SDES- TLP- FCP+ CmpltTO- CmpltAbrt- UnxCmplt- RxOF+ MalfTLP+ ECRC- UnsupReq- ACSViol- CESta: RxErr- BadTLP- BadDLLP- Rollover- Timeout- NonFatalErr- CEMsk: RxErr- BadTLP- BadDLLP- Rollover- Timeout- NonFatalErr- AERCap: First Error Pointer: 14, GenCap- CGenEn- ChkCap- ChkEn- Capabilities: [140 v1] Device Serial Number 00-0a-e4-ff-ff-3e-ce-74 Kernel driver in use: e1000e Kernel modules: e1000e I filed a bug on this upstream, still no idea how to get more useful information. Here is a the result of the running that script EEPROM FIX UPDATE $ sudo bash fixeep-82573-dspd.sh eth0 eth0: is a "82573L Gigabit Ethernet Controller" This fixup is applicable to your hardware Your eeprom is up to date, no changes were made Do I still need to do anything? Also here is my EEPROM dump $ sudo ethtool -e eth0 Offset Values ------ ------ 0x0000 00 0a e4 3e ce 74 30 0b b2 ff 51 00 ff ff ff ff 0x0010 53 00 03 02 6b 02 7e 20 aa 17 9a 10 86 80 df 80 0x0020 00 00 00 20 54 7e 00 00 14 00 da 00 04 00 00 27 0x0030 c9 6c 50 31 3e 07 0b 04 8b 29 00 00 00 f0 02 0f 0x0040 08 10 00 00 04 0f ff 7f 01 4d ff ff ff ff ff ff 0x0050 14 00 1d 00 14 00 1d 00 af aa 1e 00 00 00 1d 00 0x0060 00 01 00 40 1f 12 07 40 ff ff ff ff ff ff ff ff 0x0070 ff ff ff ff ff ff ff ff ff ff ff ff ff ff 4a e0 I'd also like to note that I used eth0 every day for years and until recently never had an issue.

    Read the article

  • Spring Security - Persistent Remember Me Issue

    - by Taylor L
    I've been trying to track down why Spring Security isn't creating the Spring Security remember me cookie (SPRING_SECURITY_REMEMBER_ME_COOKIE). At first glance, the logs make it seem like the login is failing, but the login is actually successful in the sense that if I navigate to a page that requires authentication I am not redirected back to the login page. However, the logs appear to be saying the login credentials are invalid. I'm using Spring 3.0.1, Spring Security 3.0.1, and Google App Engine 1.3.1. Any ideas as to what is going on? Mar 16, 2010 10:05:56 AM org.springframework.security.web.authentication.rememberme.PersistentTokenBasedRememberMeServices onLoginSuccess FINE: Creating new persistent login for user [email protected] Mar 16, 2010 10:10:07 AM org.springframework.security.web.authentication.rememberme.AbstractRememberMeServices loginFail FINE: Interactive login attempt was unsuccessful. Mar 16, 2010 10:10:07 AM org.springframework.security.web.authentication.rememberme.AbstractRememberMeServices cancelCookie FINE: Cancelling cookie Below is the relevant portion of the applicationContext-security.xml. <http auto-config="false"> <intercept-url pattern="/css/**" filters="none" /> <intercept-url pattern="/img/**" filters="none" /> <intercept-url pattern="/js/**" filters="none" /> <intercept-url pattern="/app/admin/**" filters="none" /> <intercept-url pattern="/app/login/**" filters="none" /> <intercept-url pattern="/app/register/**" filters="none" /> <intercept-url pattern="/app/error/**" filters="none" /> <intercept-url pattern="/" filters="none" /> <intercept-url pattern="/**" access="ROLE_USER" /> <logout logout-success-url="/" /> <form-login login-page="/app/login" default-target-url="/" authentication-failure-url="/app/login?login_error=1" /> <session-management invalid-session-url="/app/login" /> <remember-me services-ref="rememberMeServices" key="myKey" /> </http> <authentication-manager alias="authenticationManager"> <authentication-provider user-service-ref="userDetailsService"> <password-encoder hash="sha-256" base64="true"> <salt-source user-property="username" /> </password-encoder> </authentication-provider> </authentication-manager> <beans:bean id="userDetailsService" class="com.my.service.auth.UserDetailsServiceImpl" /> <beans:bean id="rememberMeServices" class="org.springframework.security.web.authentication.rememberme.PersistentTokenBasedRememberMeServices"> <beans:property name="userDetailsService" ref="userDetailsService" /> <beans:property name="tokenRepository" ref="persistentTokenRepository" /> <beans:property name="key" value="myKey" /> </beans:bean> <beans:bean id="persistentTokenRepository" class="com.my.service.auth.PersistentTokenRepositoryImpl" />
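    Since the configuration wires in a custom com.my.service.auth.PersistentTokenRepositoryImpl, the behaviour of that class is worth checking: if the persisted token for a cookie's series cannot be found on the next request, remember-me authentication fails and the cookie is cancelled. As a point of reference only, here is a minimal, in-memory sketch of what the PersistentTokenRepository contract has to do (the class name and Map-based storage are illustrative; a real implementation on App Engine would have to persist tokens in the datastore so they survive across requests and instances):

      import java.util.Date;
      import java.util.Map;
      import java.util.concurrent.ConcurrentHashMap;

      import org.springframework.security.web.authentication.rememberme.PersistentRememberMeToken;
      import org.springframework.security.web.authentication.rememberme.PersistentTokenRepository;

      // Illustrative in-memory token store implementing Spring Security's contract.
      public class InMemoryTokenRepository implements PersistentTokenRepository {

          private final Map<String, PersistentRememberMeToken> tokensBySeries =
                  new ConcurrentHashMap<String, PersistentRememberMeToken>();

          public void createNewToken(PersistentRememberMeToken token) {
              tokensBySeries.put(token.getSeries(), token);
          }

          public void updateToken(String series, String tokenValue, Date lastUsed) {
              PersistentRememberMeToken current = tokensBySeries.get(series);
              if (current != null) {
                  tokensBySeries.put(series, new PersistentRememberMeToken(
                          current.getUsername(), series, tokenValue, lastUsed));
              }
          }

          public PersistentRememberMeToken getTokenForSeries(String seriesId) {
              // If this returns null (token was never stored or was lost), the
              // remember-me check fails and the services cancel the cookie.
              return tokensBySeries.get(seriesId);
          }

          public void removeUserTokens(String username) {
              for (Map.Entry<String, PersistentRememberMeToken> e : tokensBySeries.entrySet()) {
                  if (e.getValue().getUsername().equals(username)) {
                      tokensBySeries.remove(e.getKey());
                  }
              }
          }
      }

    Verifying that createNewToken actually commits a record and that getTokenForSeries finds it again on the following request is a reasonable first debugging step for the missing cookie.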

    Read the article

  • Jboss Seam: Enabling Debug page on WebLogic 10.3.2 (11g)

    - by Markos Fragkakis
    Hi all (skip to UPDATE 3 for the current state of the problem). I want to enable the Seam debug page on WebLogic 10.3.2 (11g). So far I have done the following: I have the jboss-seam and jboss-seam-debug jars as dependencies in both my ejb and web Maven projects (both are modules of my superproject), and I put this context parameter in my web.xml:

      <context-param>
        <param-name>org.jboss.seam.core.init.debug</param-name>
        <param-value>true</param-value>
      </context-param>

    Now, when I hit the URL of my application, I get the debug page with this exception (full stacktrace at the end of the post):

      Caused by java.lang.IllegalStateException with message:
      "No phase id bound to current thread (make sure you do not have two SeamPhaseListener instances installed)"

    From the posts I have read, this seems to be related to two copies of jboss-seam or jboss-seam-debug being on the classpath. I opened my ear file and only one of each is present (in the ear), whereas the war itself has no libraries in WEB-INF/lib. I have also read of another way to initialize the debug page, through components.xml. I tried to include the following components.xml in WEB-INF, but it didn't work either:

      <?xml version="1.0" encoding="UTF-8"?>
      <components xmlns="http://jboss.com/products/seam/components"
                  xmlns:core="http://jboss.com/products/seam/core"
                  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                  xsi:schemaLocation="
                      http://jboss.com/products/seam/core http://jboss.com/products/seam/core-2.2.xsd
                      http://jboss.com/products/seam/components http://jboss.com/products/seam/components-2.2.xsd">
        <core:init debug="true"/>
      </components>

    Any suggestions on what to do to enable the debug page correctly? Cheers!

    Full stacktrace:

      org.jboss.seam.contexts.PageContext.getPhaseId(PageContext.java:163)
      org.jboss.seam.contexts.PageContext.isBeforeInvokeApplicationPhase(PageContext.java:175)
      org.jboss.seam.contexts.PageContext.getCurrentWritableMap(PageContext.java:91)
      org.jboss.seam.contexts.PageContext.remove(PageContext.java:105)
      org.jboss.seam.Component.newInstance(Component.java:2141)
      org.jboss.seam.Component.getInstance(Component.java:2021)
      org.jboss.seam.Component.getInstance(Component.java:2000)
      org.jboss.seam.Component.getInstance(Component.java:1994)
      org.jboss.seam.Component.getInstance(Component.java:1967)
      org.jboss.seam.Component.getInstance(Component.java:1962)
      org.jboss.seam.faces.FacesPage.instance(FacesPage.java:92)
      org.jboss.seam.core.ConversationPropagation.restorePageContextConversationId(ConversationPropagation.java:84)
      org.jboss.seam.core.ConversationPropagation.restoreConversationId(ConversationPropagation.java:57)
      org.jboss.seam.jsf.SeamPhaseListener.afterRestoreView(SeamPhaseListener.java:391)
      org.jboss.seam.jsf.SeamPhaseListener.afterServletPhase(SeamPhaseListener.java:230)
      org.jboss.seam.jsf.SeamPhaseListener.afterPhase(SeamPhaseListener.java:196)
      com.sun.faces.lifecycle.Phase.handleAfterPhase(Phase.java:175)
      com.sun.faces.lifecycle.Phase.doPhase(Phase.java:114)
      com.sun.faces.lifecycle.RestoreViewPhase.doPhase(RestoreViewPhase.java:104)
      com.sun.faces.lifecycle.LifecycleImpl.execute(LifecycleImpl.java:118)
      javax.faces.webapp.FacesServlet.service(FacesServlet.java:265)
      weblogic.servlet.internal.StubSecurityHelper$ServletServiceAction.run(StubSecurityHelper.java:227)
      weblogic.servlet.internal.StubSecurityHelper.invokeServlet(StubSecurityHelper.java:125)
      weblogic.servlet.internal.ServletStubImpl.execute(ServletStubImpl.java:292)
      weblogic.servlet.internal.TailFilter.doFilter(TailFilter.java:26)
      weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:56)
      org.ajax4jsf.webapp.BaseXMLFilter.doXmlFilter(BaseXMLFilter.java:178)
      org.ajax4jsf.webapp.BaseFilter.handleRequest(BaseFilter.java:290)
      org.ajax4jsf.webapp.BaseFilter.processUploadsAndHandleRequest(BaseFilter.java:388)
      org.ajax4jsf.webapp.BaseFilter.doFilter(BaseFilter.java:515)
      weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:56)
      org.jboss.seam.servlet.SeamFilter$FilterChainImpl.doFilter(SeamFilter.java:83)
      org.jboss.seam.web.LoggingFilter.doFilter(LoggingFilter.java:60)
      org.jboss.seam.servlet.SeamFilter$FilterChainImpl.doFilter(SeamFilter.java:69)
      org.jboss.seam.web.IdentityFilter.doFilter(IdentityFilter.java:40)
      org.jboss.seam.servlet.SeamFilter$FilterChainImpl.doFilter(SeamFilter.java:69)
      org.jboss.seam.web.MultipartFilter.doFilter(MultipartFilter.java:90)
      org.jboss.seam.servlet.SeamFilter$FilterChainImpl.doFilter(SeamFilter.java:69)
      org.jboss.seam.web.ExceptionFilter.doFilter(ExceptionFilter.java:64)
      org.jboss.seam.servlet.SeamFilter$FilterChainImpl.doFilter(SeamFilter.java:69)
      org.jboss.seam.web.RedirectFilter.doFilter(RedirectFilter.java:45)
      org.jboss.seam.servlet.SeamFilter$FilterChainImpl.doFilter(SeamFilter.java:69)
      org.jboss.seam.web.HotDeployFilter.doFilter(HotDeployFilter.java:53)
      org.jboss.seam.servlet.SeamFilter$FilterChainImpl.doFilter(SeamFilter.java:69)
      org.jboss.seam.servlet.SeamFilter.doFilter(SeamFilter.java:158)
      weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:56)
      weblogic.servlet.internal.RequestEventsFilter.doFilter(RequestEventsFilter.java:27)
      weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:56)
      weblogic.servlet.internal.WebAppServletContext$ServletInvocationAction.run(WebAppServletContext.java:3592)
      weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:321)
      weblogic.security.service.SecurityManager.runAs(SecurityManager.java:121)
      weblogic.servlet.internal.WebAppServletContext.securedExecute(WebAppServletContext.java:2202)
      weblogic.servlet.internal.WebAppServletContext.execute(WebAppServletContext.java:2108)
      weblogic.servlet.internal.ServletRequestImpl.run(ServletRequestImpl.java:1432)
      weblogic.work.ExecuteThread.execute(ExecuteThread.java:201)
      weblogic.work.ExecuteThread.run(ExecuteThread.java:173)

    UPDATE 1: Now the debug page does not appear at all. When I request http://localhost/myapp/debug.xhtml I get a "myapp/debug.xhtml" page, the same as for any page that does not exist. I opened the .ear and the following JBoss jars are in it:

      jboss-seam-debug-2.2.0.GA.jar
      jboss-el-1.0_02.CR4.jar
      jboss-seam-2.2.0.GA.jar

    My current configuration:

    web.xml:

      <?xml version="1.0" encoding="UTF-8"?>
      <web-app xmlns="http://java.sun.com/xml/ns/javaee"
               xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
               xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-app_2_5.xsd"
               version="2.5">
        <display-name>PRS 6.0</display-name>
        <session-config>
          <session-timeout>30</session-timeout>
        </session-config>
        <!-- The default behavior of JSF is to map the incoming request for a JSF view identifier (view ID for short)
             to a JSP file with the file extension .jsp. To get JSF to look for a Facelets template instead, we must
             register the .xhtml extension as the default suffix for JSF views -->
        <context-param><param-name>javax.faces.DEFAULT_SUFFIX</param-name><param-value>.xhtml</param-value></context-param>
        <context-param><param-name>javax.faces.STATE_SAVING_METHOD</param-name><param-value>server</param-value></context-param>
        <context-param><param-name>javax.faces.CONFIG_FILES</param-name><param-value>/WEB-INF/faces-config/application.xml</param-value></context-param>
        <context-param><param-name>facelets.REFRESH_PERIOD</param-name><param-value>2</param-value></context-param>
        <context-param><param-name>facelets.DEVELOPMENT</param-name><param-value>true</param-value></context-param>
        <context-param><param-name>facelets.SKIP_COMMENTS</param-name><param-value>true</param-value></context-param>
        <context-param><param-name>com.sun.faces.verifyObjects</param-name><param-value>false</param-value></context-param>
        <context-param><param-name>org.ajax4jsf.VIEW_HANDLERS</param-name><param-value>com.sun.facelets.FaceletViewHandler</param-value></context-param>
        <context-param><param-name>org.ajax4jsf.COMPRESS_SCRIPT</param-name><param-value>true</param-value></context-param>
        <context-param><param-name>org.ajax4jsf.COMPRESS_STYLE</param-name><param-value>true</param-value></context-param>
        <context-param><param-name>org.ajax4jsf.xmlparser.ORDER</param-name><param-value>NONE</param-value></context-param>
        <context-param><param-name>org.richfaces.SKIN</param-name><param-value>blueSky</param-value></context-param>
        <context-param><param-name>org.richfaces.CONTROL_SKINNING</param-name><param-value>enable</param-value></context-param>
        <context-param><param-name>org.richfaces.LoadStyleStrategy</param-name><param-value>ALL</param-value></context-param>
        <context-param><param-name>org.richfaces.LoadScriptStrategy</param-name><param-value>ALL</param-value></context-param>
        <!-- Seam Filter (MUST BE FIRST) -->
        <filter>
          <filter-name>Seam Filter</filter-name>
          <filter-class>org.jboss.seam.servlet.SeamFilter</filter-class>
        </filter>
        <filter-mapping>
          <filter-name>Seam Filter</filter-name>
          <url-pattern>/*</url-pattern>
        </filter-mapping>
        <!-- RichFaces filter -->
        <filter>
          <display-name>RichFaces Filter</display-name>
          <filter-name>richfaces</filter-name>
          <filter-class>org.ajax4jsf.Filter</filter-class>
          <init-param>
            <description>Set the size limit for uploaded files as attachments in bytes (max 5MB)</description>
            <param-name>maxRequestSize</param-name>
            <param-value>5242880</param-value>
          </init-param>
        </filter>
        <filter-mapping>
          <filter-name>richfaces</filter-name>
          <servlet-name>Faces Servlet</servlet-name>
          <dispatcher>REQUEST</dispatcher>
          <dispatcher>FORWARD</dispatcher>
          <dispatcher>INCLUDE</dispatcher>
        </filter-mapping>
        <listener>
          <listener-class>XX.XXXX.XXX.prs.web.listeners.ResourceInitializationListener</listener-class>
        </listener>
        <listener>
          <listener-class>com.sun.faces.config.ConfigureListener</listener-class>
        </listener>
        <listener>
          <listener-class>XX.XXXX.XXX.prs.web.listeners.EJBInjectionListener</listener-class>
        </listener>
        <!-- Seam Listener -->
        <listener>
          <listener-class>org.jboss.seam.servlet.SeamListener</listener-class>
        </listener>
        <!-- Faces Servlet -->
        <servlet>
          <servlet-name>Faces Servlet</servlet-name>
          <servlet-class>javax.faces.webapp.FacesServlet</servlet-class>
          <load-on-startup>1</load-on-startup>
        </servlet>
        <servlet-mapping>
          <servlet-name>Faces Servlet</servlet-name>
          <url-pattern>*.xhtml</url-pattern>
        </servlet-mapping>
        <!-- Seam Resource Servlet (currently commented out):
        <servlet>
          <servlet-name>Seam Resource Servlet</servlet-name>
          <servlet-class>org.jboss.seam.servlet.SeamResourceServlet</servlet-class>
        </servlet>
        <servlet-mapping>
          <servlet-name>Seam Resource Servlet</servlet-name>
          <url-pattern>/seam/resource/*</url-pattern>
        </servlet-mapping>
        -->
        <welcome-file-list>
          <welcome-file>index.xhtml</welcome-file>
        </welcome-file-list>
      </web-app>

    faces-config:

      <?xml version="1.0" encoding="UTF-8"?>
      <faces-config xmlns="http://java.sun.com/xml/ns/j2ee"
                    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                    xsi:schemaLocation="http://java.sun.com/xml/ns/j2ee http://java.sun.com/xml/ns/j2ee/web-facesconfig_1_2.xsd"
                    version="1.2">
        <application>
          <locale-config>
            <default-locale>en</default-locale>
            <supported-locale>en</supported-locale>
          </locale-config>
          <view-handler>com.sun.facelets.FaceletViewHandler</view-handler>
          <el-resolver>org.jboss.seam.el.SeamELResolver</el-resolver>
          <resource-bundle>
            <base-name>XX.XXXX.XXX.prs.web.messages.messages</base-name>
            <var>msgs</var>
          </resource-bundle>
          <resource-bundle>
            <base-name>XX.XXXX.XXX.prs.web.messages.validation</base-name>
            <var>val</var>
          </resource-bundle>
        </application>
        <lifecycle>
          <phase-listener>XX.XXXX.XXX.prs.web.listeners.SetFocusListener</phase-listener>
        </lifecycle>
        <!-- <lifecycle> -->
        <!-- <phase-listener>XX.XXXX.XXX.prs.web.listeners.DebugPhaseListener</phase-listener> -->
        <converter>
          <converter-for-class>XX.XXXX.XXX.prs.model.Applicant</converter-for-class>
          <converter-class>XX.XXXX.XXX.prs.web.common.converters.ApplicantConverter</converter-class>
        </converter>
        <validator>
          <validator-id>EmailValidator</validator-id>
          <validator-class>XX.XXXX.XXX.prs.web.common.validators.EmailValidator</validator-class>
        </validator>
      </faces-config>

    components.xml:

      <?xml version="1.0" encoding="UTF-8"?>
      <components xmlns="http://jboss.com/products/seam/components"
                  xmlns:core="http://jboss.com/products/seam/core"
                  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                  xsi:schemaLocation="
                      http://jboss.com/products/seam/core http://jboss.com/products/seam/core-2.2.xsd
                      http://jboss.com/products/seam/components http://jboss.com/products/seam/components-2.2.xsd">
        <core:init debug="true" />
        <core:manager concurrent-request-timeout="500"
                      conversation-timeout="1200000"
                      conversation-id-parameter="cid"
                      parent-conversation-id-parameter="pid" />
      </components>

    UPDATE 2: These guys here have the same problem: an outer EAR project containing an inner WAR project. They discuss that it may be related to how the jars end up in the projects. I use Maven, and I had set it to create "skinny WARs", that is, excluding all jar dependencies from the inner WAR project so that it stays small in size; all its dependencies are contained in the EAR and are used by all other modules. I changed the settings of the maven-war-plugin to leave the web-specific jars inside the war (the ones mentioned in the link, like RichFaces, jboss-seam-debug, Facelets, etc.). However, the problem has reverted to its previous form: I now get the debug page, whatever link I press, with the initial exception:

      Caused by java.lang.IllegalStateException with message:
      "No phase id bound to current thread (make sure you do not have two SeamPhaseListener instances installed)"

    UPDATE 3: The structure of the application .ear is the following:

      application.ear
      |--> APP-INF
      |    |--> classes
      |--> lib (all jar dependencies go here, including the WAR dependencies and the EJB module dependencies)
      |--> META-INF
      |    |--> application.xml
      |    |--> data-sources.xml
      |    |--> MANIFEST.MF
      |    |--> weblogic.xml
      |    |--> weblogic-application.xml
      |--> jboss-seam-2.2.0.GA.jar
      |--> myEjbModule1.jar
      |--> myEjbModule2.jar
      |--> myEjbModule3.jar
      |--> myEjbModule4.jar
      |--> myWar.war (NO libraries in WEB-INF/lib; it finds everything in EAR/lib)

    When deploying the .ear WITHOUT debug enabled (jboss-seam-debug.jar not included in the EAR), the application loads correctly. When deploying WITH jboss-seam-debug.jar in the EAR (EAR/lib directory), the application does not appear; ONLY the debug page appears, with the following exception (stacktrace at the end):

      Exception during request processing:
      Caused by java.lang.IllegalStateException with message:
      "No phase id bound to current thread (make sure you do not have two SeamPhaseListener instances installed)"

    When "JBoss-izing" the same EAR (removing the Hibernate jars, which JBoss provides, and moving all libraries from EAR/lib to the EAR root), the application loads correctly: BOTH the application AND the debug page appear correctly.
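    One classloading knob that is often checked for exactly this "two SeamPhaseListener instances" symptom on WebLogic is prefer-web-inf-classes in the WAR's WEB-INF/weblogic.xml, which makes the web module prefer its own copies of the Seam and JSF classes over those visible through the parent (EAR) classloader. This is only a hedged sketch, not something confirmed by the post, and the namespace should be matched to the server's schema version:

      <!-- WEB-INF/weblogic.xml (illustrative sketch only) -->
      <weblogic-web-app xmlns="http://www.bea.com/ns/weblogic/weblogic-web-app">
        <container-descriptor>
          <prefer-web-inf-classes>true</prefer-web-inf-classes>
        </container-descriptor>
      </weblogic-web-app>

    The full stack trace for the EAR/lib deployment follows.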
    Full stacktrace:

      org.jboss.seam.contexts.PageContext.getPhaseId(PageContext.java:163)
      org.jboss.seam.contexts.PageContext.isBeforeInvokeApplicationPhase(PageContext.java:175)
      org.jboss.seam.contexts.PageContext.getCurrentWritableMap(PageContext.java:91)
      org.jboss.seam.contexts.PageContext.remove(PageContext.java:105)
      org.jboss.seam.Component.newInstance(Component.java:2141)
      org.jboss.seam.Component.getInstance(Component.java:2021)
      org.jboss.seam.Component.getInstance(Component.java:2000)
      org.jboss.seam.Component.getInstance(Component.java:1994)
      org.jboss.seam.Component.getInstance(Component.java:1967)
      org.jboss.seam.Component.getInstance(Component.java:1962)
      org.jboss.seam.faces.FacesPage.instance(FacesPage.java:92)
      org.jboss.seam.core.ConversationPropagation.restorePageContextConversationId(ConversationPropagation.java:84)
      org.jboss.seam.core.ConversationPropagation.restoreConversationId(ConversationPropagation.java:57)
      org.jboss.seam.jsf.SeamPhaseListener.afterRestoreView(SeamPhaseListener.java:391)
      org.jboss.seam.jsf.SeamPhaseListener.afterServletPhase(SeamPhaseListener.java:230)
      org.jboss.seam.jsf.SeamPhaseListener.afterPhase(SeamPhaseListener.java:196)
      com.sun.faces.lifecycle.Phase.handleAfterPhase(Phase.java:175)
      com.sun.faces.lifecycle.Phase.doPhase(Phase.java:114)
      com.sun.faces.lifecycle.RestoreViewPhase.doPhase(RestoreViewPhase.java:104)
      com.sun.faces.lifecycle.LifecycleImpl.execute(LifecycleImpl.java:118)
      javax.faces.webapp.FacesServlet.service(FacesServlet.java:265)
      weblogic.servlet.internal.StubSecurityHelper$ServletServiceAction.run(StubSecurityHelper.java:227)
      weblogic.servlet.internal.StubSecurityHelper.invokeServlet(StubSecurityHelper.java:125)
      weblogic.servlet.internal.ServletStubImpl.execute(ServletStubImpl.java:292)
      weblogic.servlet.internal.TailFilter.doFilter(TailFilter.java:26)
      weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:56)
      org.ajax4jsf.webapp.BaseFilter.doFilter(BaseFilter.java:530)
      weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:56)
      org.jboss.seam.servlet.SeamFilter$FilterChainImpl.doFilter(SeamFilter.java:83)
      org.jboss.seam.web.IdentityFilter.doFilter(IdentityFilter.java:40)
      org.jboss.seam.servlet.SeamFilter$FilterChainImpl.doFilter(SeamFilter.java:69)
      org.jboss.seam.web.MultipartFilter.doFilter(MultipartFilter.java:90)
      org.jboss.seam.servlet.SeamFilter$FilterChainImpl.doFilter(SeamFilter.java:69)
      org.jboss.seam.web.ExceptionFilter.doFilter(ExceptionFilter.java:64)
      org.jboss.seam.servlet.SeamFilter$FilterChainImpl.doFilter(SeamFilter.java:69)
      org.jboss.seam.web.RedirectFilter.doFilter(RedirectFilter.java:45)
      org.jboss.seam.servlet.SeamFilter$FilterChainImpl.doFilter(SeamFilter.java:69)
      org.ajax4jsf.webapp.BaseXMLFilter.doXmlFilter(BaseXMLFilter.java:178)
      org.ajax4jsf.webapp.BaseFilter.handleRequest(BaseFilter.java:290)
      org.ajax4jsf.webapp.BaseFilter.processUploadsAndHandleRequest(BaseFilter.java:388)
      org.ajax4jsf.webapp.BaseFilter.doFilter(BaseFilter.java:515)
      org.jboss.seam.web.Ajax4jsfFilter.doFilter(Ajax4jsfFilter.java:56)
      org.jboss.seam.servlet.SeamFilter$FilterChainImpl.doFilter(SeamFilter.java:69)
      org.jboss.seam.web.LoggingFilter.doFilter(LoggingFilter.java:60)
      org.jboss.seam.servlet.SeamFilter$FilterChainImpl.doFilter(SeamFilter.java:69)
      org.jboss.seam.web.HotDeployFilter.doFilter(HotDeployFilter.java:53)
      org.jboss.seam.servlet.SeamFilter$FilterChainImpl.doFilter(SeamFilter.java:69)
      org.jboss.seam.servlet.SeamFilter.doFilter(SeamFilter.java:158)
      weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:56)
      weblogic.servlet.internal.RequestEventsFilter.doFilter(RequestEventsFilter.java:27)
      weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:56)
      weblogic.servlet.internal.WebAppServletContext$ServletInvocationAction.run(WebAppServletContext.java:3592)
      weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:321)
      weblogic.security.service.SecurityManager.runAs(SecurityManager.java:121)
      weblogic.servlet.internal.WebAppServletContext.securedExecute(WebAppServletContext.java:2202)
      weblogic.servlet.internal.WebAppServletContext.execute(WebAppServletContext.java:2108)
      weblogic.servlet.internal.ServletRequestImpl.run(ServletRequestImpl.java:1432)
      weblogic.work.ExecuteThread.execute(ExecuteThread.java:201)
      weblogic.work.ExecuteThread.run(ExecuteThread.java:173)
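    For reference, the "skinny WAR" arrangement described in UPDATE 2 is normally produced with a maven-war-plugin configuration along these lines; this is an illustrative sketch, not the poster's actual POM:

      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-war-plugin</artifactId>
        <configuration>
          <!-- keep every jar out of WEB-INF/lib; the EAR carries the dependencies instead -->
          <packagingExcludes>WEB-INF/lib/*.jar</packagingExcludes>
        </configuration>
      </plugin>

    Narrowing that exclusion pattern is what puts selected web-tier jars such as jboss-seam-debug back into WEB-INF/lib (the change described in UPDATE 2); whichever way it is done, the same jar should not end up visible from both WEB-INF/lib and EAR/lib at once.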

    Read the article

  • Problem with NHibernate and iSeries DB2

    - by chrisjlong
    OK, so I have an AS400/iSeries running V5R4, and an application that used classic NHibernate to connect to it and do some basic CRUD. I have now pulled that app (which sat in TFS for two years) onto a new PC and cannot seem to get it running. Here is my Hibernate config:

      <hibernate-configuration xmlns="urn:nhibernate-configuration-2.2">
        <session-factory>
          <property name="connection.provider">NHibernate.Connection.DriverConnectionProvider</property>
          <property name="dialect">NHibernate.Dialect.DB2400Dialect</property>
          <property name="proxyfactory.factory_class">NHibernate.ByteCode.LinFu.ProxyFactoryFactory, NHibernate.ByteCode.LinFu</property>
          <property name="connection.connection_string">
            DataSource=207.206.106.19; Database=AS400; userID=XXXXXX; Password=XXXXXXX;
            LibraryList=FMSFILTST,BEFFILT,HRDBFT,HRCSTFT,J20##X2DEV,GLCUSTDEV,OSL@@F3DEV;
            Naming=System; Initial Catalog=*SYSBAS;
          </property>
          <property name="use_outer_join">true</property>
          <property name="query.substitutions">true 1, false 0, yes 'Y', no 'N'</property>
          <property name="show_sql">false</property>
          <mapping assembly="BusinessLogic" />
        </session-factory>
      </hibernate-configuration>

    I have all the proper DLLs included (NHibernate, Castle, Iesi, Antlr3, log4, etc.). I also have this in my web.config:

      <runtime>
        <assemblyBinding>
          <qualifyAssembly partialName="IBM.Data.DB2.iSeries"
                           fullName="IBM.Data.DB2.iSeries, Version=10.0.0.0, PublicKeyToken=9CDB2EBFB1F93A26, Culture=neutral"/>
        </assemblyBinding>
      </runtime>

    Yet I still get the following error as soon as I call new NHibernate.Cfg.Configuration().Configure().BuildSessionFactory().OpenSession():

      Unable to cast object of type 'IBM.Data.DB2.iSeries.iDB2Connection' to type 'System.Data.Common.DbCommand'

    I am dying to get some help with this. Any assistance is appreciated. Thanks!
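    One detail worth noting about the configuration above: it sets a connection.provider and the DB2400 dialect but no connection.driver_class, which DriverConnectionProvider normally uses to pick the ADO.NET driver it wraps. As a hedged sketch only (the exact class name should be checked against the NHibernate version actually referenced), the stock distribution ships a DB2/400 driver that would be declared like this inside the session-factory element:

      <property name="connection.driver_class">NHibernate.Driver.DB2400Driver</property>

    A provider/driver mismatch is one way an iDB2Connection can be handed to code expecting a different ADO.NET type, which is at least consistent with the cast error reported above.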

    Read the article

  • WIX installer with Custom Actions: "built by a runtime newer than the currently loaded runtime and cannot be loaded."

    - by Rimer
    I have a WIX installer that executes custom actions over the course of the install. When the installer encounters its first custom action, it fails, and I receive the following error in the MSI log:

      Action start 12:03:53: LoadBCAConfigDefaults.
      SFXCA: Extracting custom action to temporary directory: C:\DOCUME~1\ELOY06~1\LOCALS~1\Temp\MSI10C.tmp-\
      SFXCA: Binding to CLR version v2.0.50727
      Calling custom action WIXCustomActions!WIXCustomActions.CustomActions.LoadBCAConfigDefaults
      Error: could not load custom action class WIXCustomActions.CustomActions from assembly: WIXCustomActions
      System.BadImageFormatException: Could not load file or assembly 'WIXCustomActions' or one of its dependencies.
      This assembly is built by a runtime newer than the currently loaded runtime and cannot be loaded.
      File name: 'WIXCustomActions'
        at System.Reflection.Assembly._nLoad(AssemblyName fileName, String codeBase, Evidence assemblySecurity, Assembly locationHint, StackCrawlMark& stackMark, Boolean throwOnFileNotFound, Boolean forIntrospection)
        at System.Reflection.Assembly.nLoad(AssemblyName fileName, String codeBase, Evidence assemblySecurity, Assembly locationHint, StackCrawlMark& stackMark, Boolean throwOnFileNotFound, Boolean forIntrospection)
        at System.Reflection.Assembly.InternalLoad(AssemblyName assemblyRef, Evidence assemblySecurity, StackCrawlMark& stackMark, Boolean forIntrospection)
        at System.Reflection.Assembly.InternalLoad(String assemblyString, Evidence assemblySecurity, StackCrawlMark& stackMark, Boolean forIntrospection)
        at System.AppDomain.Load(String assemblyString)
        at Microsoft.Deployment.WindowsInstaller.CustomActionProxy.GetCustomActionMethod(Session session, String assemblyName, String className, String methodName)
      ...

    The specific problem is the System.BadImageFormatException: "Could not load file or assembly 'WIXCustomActions' or one of its dependencies. This assembly is built by a runtime newer than the currently loaded runtime and cannot be loaded." The wording of that error suggests an incorrectly referenced .NET Framework version (I am targeting 3.5 in both my custom actions and their dependencies), but I can't figure out where to make the change that addresses this problem. Any ideas?

    Not sure if this will help, but this is the batch file I run to create the .dll package containing the custom action functions:

      call "C:\Program Files\Microsoft Visual Studio 10.0\VC\vcvarsall.bat"
      @echo on
      cd "C:\development\trunk\PortalsDev\csharp\production\Installers\WIX\customactions\PAServicesWIXCustomActions"
      csc /target:library /r:"C:\program files\windows installer xml v3.6\sdk\microsoft.deployment.windowsinstaller.dll" /r:"C:\development\trunk\PortalsDev\csharp\production\Installers\WIX\customactions\PAServicesWIXCustomActions\bin\Debug\eLoyalty.PortalLib.dll" /out:"C:\development\trunk\PortalsDev\csharp\production\Installers\WIX\customactions\PAServicesWIXCustomActions\bin\Debug\WIXCustomActions.dll" CustomActions.cs
      cd "C:\Program Files\Windows Installer XML v3.6\SDK"
      makesfxca "C:\development\trunk\PortalsDev\csharp\production\Installers\WIX\customactions\PAServicesWIXCustomActions\bin\Debug\BatchCustomerAnalysisWIXCustomActionsPackage.dll" "c:\program files\windows installer xml v3.6\sdk\x86\sfxca.dll" "C:\development\trunk\PortalsDev\csharp\production\Installers\WIX\customactions\PAServicesWIXCustomActions\bin\Debug\WIXCustomActions.dll" customaction.config Microsoft.Deployment.WindowsInstaller.dll
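    A reading of the log consistent with that batch file, offered as a hedged suggestion rather than a confirmed diagnosis: SFXCA reports "Binding to CLR version v2.0.50727", while vcvarsall.bat from Visual Studio 2010 puts the .NET 4.0 csc.exe on the PATH, so the custom action DLL is most likely being compiled against CLR 4, which the 2.0 runtime then refuses to load. One documented remedy in the WiX DTF tooling is to let the embedded CustomAction.config advertise the newer runtime; the file below is a sketch, to be reconciled with the customaction.config already passed to makesfxca:

      <?xml version="1.0" encoding="utf-8"?>
      <configuration>
        <startup useLegacyV2RuntimeActivationPolicy="true">
          <supportedRuntime version="v4.0" />
          <supportedRuntime version="v2.0.50727" />
        </startup>
      </configuration>

    The other direction is to keep the assembly on CLR 2 by compiling with the .NET 3.5 toolchain (for example %WINDIR%\Microsoft.NET\Framework\v3.5\csc.exe) instead of the csc.exe that the VS 2010 environment script selects, so that SFXCA's default binding matches the assembly.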

    Read the article
