Search Results

Search found 10046 results on 402 pages for 'repository pattern'.


  • Rebuilding CoasterBuzz, Part III: The architecture using the "Web stack of love"

    - by Jeff
    This is the third post in a series about rebuilding one of my Web sites, which has been around for 12 years. I hope to relaunch in the next month or two. More: Part I: Evolution, and death to WCF; Part II: Hot data objects. I finally hit a point in the re-do of CoasterBuzz where I feel like the major pieces are in place... rewritten, ported and what not... so that I can now focus on front-end design and more interesting creative problems.

    I've been asked on more than one occasion (OK, just twice) what's going on under the covers, so I figure this might be a good time to explain the overall architecture. As it turns out, I'm using a whole lot of the "Web stack of love," as Scott Hanselman likes to refer to it. Oh that Hanselman. First off, at the center of it all, is BizTalk. Just kidding. That's "enterprise architecture" humor, where every discussion starts with how they'll use BizTalk.

    Here are the bigger moving parts. It's fairly straightforward: a common library lives in a number of Web apps, all of which are (or will be) powered by ASP.NET MVC 4, and they all talk to the same database. There is the main Web site, which also has the endpoint for the Silverlight-based Feed app. The cstr.bz site handles redirects, which are generated when news items are published and sent to Twitter. Facebook publishing is handled via the RSS Graffiti Facebook app. The API site handles requests from the Windows Phone app.

    The main site depends very heavily on POP Forums, the open source, MVC-based forum I maintain. It serves a number of functions, primarily handling users. These user objects serve in non-forum roles to handle things like news and database contributions, maintaining track records (coaster nerd for "list of rides I've been on") and, perhaps most importantly, paid club memberships.

    Before I get into more specifics, note that the "glue" for everything is Ninject, the dependency injection framework. I actually prefer StructureMap these days, but I started with Ninject in POP Forums a long time ago. POP Forums has a static class, PopForumsActivation, that news up an instance of the container, and you can call it from wherever. The downside is that the forums require Ninject in your MVC app as the default dependency resolver. At some point I'll decouple it, but for now it's not in the way.

    In the general sense, the entire set of apps follows a repository-service-controller-view pattern. Repos just do data access, service classes do business logic, controllers compose and route, views view. (A minimal sketch of this layering appears below.)

    The forum also provides Scoring Game functionality. The Scoring Game is a reasonably abstract framework to award users points based on certain actions, and then award achievements when a certain number of point events happen. For example, the forum already awards a point when someone plus-ones a post you made. You can set up an achievement that says, "Give the user an award when they've had 100 posts plus'd." It also does zero-point entries into the ledger, so if you make a post, you could award an achievement based on 100 posts made. Wiring the scoring game into CoasterBuzz functionality is just a matter of going to the Ninject container, getting an instance of the event publisher, and passing it events.

    Forum adapters were introduced into POP Forums a few versions ago; they can intercept the model generated for forum topic lists and threads and designate an alternate view. These are used to make the "Day in Pictures" forum, where users can upload photos as frame-by-frame photo threads.
    Another adapter adds an association UI, so users can associate specific amusement parks with their trip report posts. The Silverlight-based Feed app talks to a simple JSON endpoint in the main app. This uses an underlying library I wrote ages ago, simply called Feeds, that aggregates event information. You inherit from a base class that creates instances of a publisher interface, and then use that class to send it an event type and any number of data fields. Feeds has two publishers: one publishes to the database, and that's used for the endpoint that talks to the Silverlight app; the second publishes to Twitter, if the event is of the type "news." The wiring is a little strange, because for the new posts and topics events, I'm actually pulling the forum repository classes out of the Ninject container and replacing them with overridden methods that publish. I should probably be doing this at the service class level, but whatever. It's my mess.

    cstr.bz doesn't do anything interesting. It looks up the path, and if it has a match, does a 301 redirect to the long URL. The API site just serves up JSON for the Windows Phone app. The Windows Phone app is Silverlight, of course, and there isn't much to it. It does use the control toolkit, but beyond that, it relies on a simple class that creates a WebClient and calls the server for JSON to deserialize. The same class is now used by the Feed app, which used to use WCF. Simple is better.

    Data access in POP Forums is all straight SQL, because a lot of it was ported from the ASP.NET version. Most CoasterBuzz data access is handled by the Entity Framework, using the code-first model. The context class in this case does a lot of work to make sure that the table and key mapping works, since much of it breaks from the normal conventions of EF. One of the more powerful things you can do with EF, once you understand the little gotchas, is split tables by row into different entities. For example, a roller coaster photo has everything in the same row, including the metadata, the thumbnail bytes and the image itself. Obviously, if you want to get a list of photos to iterate over in a view, you don't want to get the image data. The use of navigation properties makes it easier to get just what you want. (A second sketch below shows the idea.)

    The front end includes Razor views in MVC, and jQuery is used for client-side goodness. I'm also using jQuery UI in a few places, for tabs, a dialog box and autocomplete, and I'm tentatively using jQuery Mobile. I've already ported most forum views to Mobile, but they need some work as v1.1 isn't finished yet. I'm not sure if I'll ship CoasterBuzz with mobile views or not. It's on the radar, but not in my delivery criteria.

    That covers all of the big frameworks in play. Next time I hope to talk more about the front-end experience, which to me is where most of the fun is these days. Hoping to launch in the next month or two. Getting tired of looking at the old site!
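    To make the repository-service-controller-view layering concrete, here is a minimal C# sketch; the type and member names are illustrative only, not CoasterBuzz's actual code:

        using System.Collections.Generic;
        using System.Linq;
        using System.Web.Mvc;

        // names are illustrative, not CoasterBuzz's actual code
        public class Coaster
        {
            public string Name { get; set; }
            public bool IsOperating { get; set; }
        }

        // Repos just do data access
        public interface ICoasterRepository
        {
            IEnumerable<Coaster> GetAll();
        }

        // Service classes do business logic
        public class CoasterService
        {
            private readonly ICoasterRepository _repo;
            public CoasterService(ICoasterRepository repo) { _repo = repo; }

            public IEnumerable<Coaster> GetOperatingCoasters()
            {
                return _repo.GetAll().Where(c => c.IsOperating);
            }
        }

        // Controllers compose and route; views view
        public class CoasterController : Controller
        {
            private readonly CoasterService _service;
            public CoasterController(CoasterService service) { _service = service; }

            public ActionResult Index()
            {
                return View(_service.GetOperatingCoasters());
            }
        }

    With Ninject as the default dependency resolver, binding ICoasterRepository to a concrete implementation lets MVC build the whole chain when it instantiates the controller.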
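    And here is a rough sketch of the EF code-first row-splitting idea, assuming the EF 4.1+ fluent API; the Photo/PhotoImage entities are hypothetical simplifications, not the actual CoasterBuzz model:

        using System.Data.Entity;

        // hypothetical entities, not the actual CoasterBuzz model
        public class Photo
        {
            public int PhotoId { get; set; }
            public string Title { get; set; }
            public byte[] Thumbnail { get; set; }
            public virtual PhotoImage Image { get; set; }  // navigation to the heavy half
        }

        public class PhotoImage
        {
            public int PhotoId { get; set; }  // shared primary key
            public byte[] ImageData { get; set; }
        }

        public class CbContext : DbContext
        {
            public DbSet<Photo> Photos { get; set; }

            protected override void OnModelCreating(DbModelBuilder modelBuilder)
            {
                // Both entities map to the same table and share a key, so a query
                // over Photos loads only the metadata columns; ImageData is only
                // loaded when the Image navigation property is actually used.
                modelBuilder.Entity<Photo>().HasKey(p => p.PhotoId);
                modelBuilder.Entity<PhotoImage>().HasKey(p => p.PhotoId);
                modelBuilder.Entity<Photo>()
                    .HasRequired(p => p.Image)
                    .WithRequiredPrincipal();
                modelBuilder.Entity<Photo>().ToTable("Photo");
                modelBuilder.Entity<PhotoImage>().ToTable("Photo");
            }
        }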

    Read the article

  • Agile Testing Days 2012 – Day 3 – Agile or agile?

    - by Chris George
    Another early start for my last Lean Coffee of the conference, and again it was not wasted. We had some really interesting discussions around how to determine what test automation is useful; around "if agile is not faster, why do it?"; and a rather existential discussion on whether unicorns exist!

    First keynote of the day was entitled "Fast Feedback Teams" by Ola Ellnestam. Again this relates nicely to the releasing-faster talk on day 2, and to something that we are looking at and some teams are actively trying. Introducing the notion of feedback, Ola described a game he wrote for his eldest child. It was a simple game where every time he clicked a button, it displayed "You've Won!". He then changed it to a Win-Lose-Win-Lose pattern and watched the feedback from his son, who twigged the pattern and got his younger brother to play, alternating turns... genius! (Must do that with my children.) The idea behind this was that you need that feedback loop to learn and progress. If you are not getting the feedback, you need to close that loop. An interesting point Ola made was to solve problems BEFORE writing software. It may be that you don't have to write anything at all; perhaps it's a communication/training issue, or perhaps the problem can be solved another way. Writing software, although it's the business we are in, is expensive, and this should be taken into account. He again mentioned frequent releases, and how they should be made as soon as stuff is ready to be released; don't leave stuff on the shelf, because it's not earning you anything, money or data. I totally agree with this and it's something that we will be aiming for moving forwards.

    "Exceptions, Assumptions and Ambiguity: Finding the truth behind the story" by David Evans started off very promisingly by making references to 'Grim up North', referring to the north of England. Not sure it was appreciated by most of the audience, but it made me laugh! David explained how there are always risks associated with exceptions, giving the example of a one-way road near where he lives, with an exception sign giving coaches the right to go the wrong way. Therefore you could merrily swing around the corner of the one-way road straight into a coach! David showed the danger in making assumptions with lyrical quotes from Lola by The Kinks ("I'm glad I'm a man, and so is Lola") and with a picture of a toilet flush that needed instructions to operate the full and half flush. With this particular flush, you pulled the handle all the way down for half flush, and halfway down for full flush! Hmmm, a bit of a crappy user experience methinks! Then, through a clever use of a passage from the Jabberwocky, David went on to show how mis-translation and ambiguity can completely distort the original meaning of something, and this is a real enemy of software development. This was all helping to demonstrate that the term Story is often heavily overloaded in the Agile world, and should really be stripped back to what it is for: stating a business problem and offering a technical solution. Therefore a story could be worded as "In order to {make some improvement}, we will {do something}". The first "in order to" statement is stakeholder neutral, and states the problem by requesting an improvement to the software/process etc. The second part of the story is the verb, the doing bit. So to achieve the 'improvement', which is not currently true, we will do something to make this true in the future.

    My PM is very interested in this, and he's observed some of the problems of overloading stories, so I'm hoping between us we can use some of David's suggestions to help clarify our stories better.

    The second keynote of the day (and our last) proved to be the most entertaining and exhausting of the conference for me: "The ongoing evolution of testing in agile development" by Scott Barber. I've never had the pleasure of seeing Scott before... OMG, I would love to have even half of the energy he has! What struck me during this presentation was Scott's explanation of how testing has become the role/job that it is (largely) today, and how this has led to the need for 'methodologies' to make dev and test work! The argument that we should be trying to converge the roles again is a very valid one, and one that a couple of the teams at work are actively pursuing with great results. Making developers as responsible for quality as testers is something that has been lost over the years, but something that we are now striving to achieve. The idea that we (testers) should be testing experts/specialists, not testing 'union members', supports this, so the entire team works on all aspects of a feature/product, with the 'specialists' taking the lead and advising/coaching the others. This leads to better propagation of information around the team, a greater holistic understanding of the project, and it allows the team to continue functioning if some of its members are off sick, for example.

    Feeling somewhat drained from Scott's keynote (but at the same time excited that a lot of the points he raised supported actions we are taking at work), I headed into my last presentation for Agile Testing Days 2012 before having to make my way to Tegel to catch the flight home. "Thinking and working agile in an unbending world" with Pete Walen was a talk I was not going to miss! Having spoken to Pete several times during the past few days, I was looking forward to hearing what he was going to say, and I was not disappointed. Pete started off by trying to separate the definitions of 'Agile' as in the methodology and 'agile' as in the adjective, by pronouncing them the 'English' and 'American' ways: so Agile pronounced "Ajyle", and agile pronounced "ajul". There was much confusion around what on earth he was talking about, although I thought it was quite clear: Agile, the software development methodology; agile, marked by ready ability to move with quick easy grace, having a quick, resourceful and adaptable character. Anyway, that aside (although it provided a few laughs during the presentation), the point was that many teams claim to be 'Agile' but are not, in fact, 'agile' by nature. Implementing 'Agile' methodologies that are so prescriptive actually goes against the very nature of Agile development, where a team should anticipate, adapt and explore. Pete made a valid point that very few companies intentionally put up roadblocks to impede work, so if work is being blocked/delayed, why? This is where being agile as a team pays off, because the team can inspect what's going on, explore options and adapt their processes. It is through experimentation (and that means trying and failing as well as trying and succeeding) that a team will improve and grow, leading to focussing on what really needs to be done to achieve X. So, that was it, the last talk of our conference.

    I was gutted that we had to miss the closing keynote from Matt Heusser, as Matt was another person I had spoken to a few times during the conference, but the flight would not wait, and just as well we left when we did because the traffic was a nightmare!

    My Takeaway Triple from Day 3:
    - Release often and release small: don't leave stuff on the shelf.
    - Keep the meaning of the word 'agile' in mind when working in 'Agile'.
    - Look at testing as more of a skill than a role.

    Read the article

  • Spacewalk 2.0 provided to manage Oracle Linux systems

    - by wcoekaer
    Oracle Linux customers have a few options to manage and provision their servers. We provide a license to use Oracle Enterprise Manager's Linux OS management, monitoring and provisioning features without additional cost for every server that has an Oracle Linux support subscription. So there is no additional pack to license and no additional per-server cost; it's all included in our Basic, Premier and Systems support subscriptions. The nice thing with Oracle Enterprise Manager is that you end up with a single management product that can manage all aspects of your software stack. You have complete insight into the applications running, you have roles and responsibilities, you have third-party connectors for storage or other products, and it makes it very easy and convenient to correlate data and events when something happens. If you use Oracle VM as well, you end up with a complete cloud portal with self-service, chargeback, etc.

    Another, much simpler, option is just using yum. It is very easy to take a server, create directories and expose these through Apache as repositories. You can have a simple yum config on each server pointing to a few specific repositories. It requires some manual effort in terms of creating directories, downloading packages and creating local repo files, but it's easy to do and for many people a preferred solution. There are also a good number of customers that just connect their servers directly to ULN or to our free update server, public-yum. Just to reiterate: our public-yum servers have all the errata and updates available for free.

    Now we have added another option. Many of our customers have switched from a competing Linux vendor, and they had familiarity with their management tools. Switching to Oracle for support is very easy since we don't require changes to the installed servers, but we also want to make sure there is a very easy and almost transparent switch for the management tools as well. While Oracle Enterprise Manager is our preferred way of managing systems, we are now offering Spacewalk 2.0 to our customers. The community project can be found here. We have made a few changes to ensure easy and complete support for Oracle Linux, tested it with public-yum, etc. You can find the RPMs in our public-yum repos at http://public-yum.oracle.com/repo/OracleLinux/OL6/. There are repositories for the spacewalk server, and then for each version (OL5, OL6) and architecture (x86 and x86-64) we have the client repositories as well. Spacewalk itself is only made available for OL6 x86-64. Documentation can be found here. I set it up myself, and here are some quick steps on how you can get going in just a matter of minutes:

    Spacewalk Server Installation

    1) Installing an Oracle Database

    Use an existing Oracle Database or install a new Oracle Database (Standard or Enterprise Edition). [At this time use 11g; we will add support for 12c in the near future.] This database can be installed on the spacewalk server or on a separate remote server. While Oracle XE might work to create a small sample POC, we do not support the use of Oracle XE; spacewalk repositories can become large and create a significant database workload. Customers can use their existing database licenses, they can download the database with a trial license from http://edelivery.oracle.com, or Oracle Linux subscribers (customers) will be allowed to use the Oracle Database as a spacewalk repository as part of their Oracle Linux subscription at no additional cost.
    NOTE: spacewalk requires the database to be configured with the UTF8 character set; installation will fail if your database does not use UTF8. To verify that your database is configured correctly, run the following command in sqlplus:

        select value from nls_database_parameters where parameter='NLS_CHARACTERSET';

    This should return 'AL32UTF8'.

    2) Configure the database schema for spacewalk

    Ideally, create a tablespace in the database to hold the spacewalk schema tables/data:

        create tablespace spacewalk datafile '/u01/app/oracle/oradata/orcl/spacewalk.dbf' size 10G autoextend on;

    Create the database user spacewalk (or use some other schema name) in sqlplus. Example:

        create user spacewalk identified by spacewalk;
        grant connect, resource to spacewalk;
        grant create table, create trigger, create synonym, create view, alter session to spacewalk;
        grant unlimited tablespace to spacewalk;
        alter user spacewalk default tablespace spacewalk;

    4) Spacewalk installation and configuration

    Spacewalk server requires an Oracle Linux 6 x86-64 system. Clients can be Oracle Linux 5 or 6, both 32- and 64-bit; the server is only supported on OL6/64-bit. The easiest way to get started is to do a 'Minimal' install of Oracle Linux on a server and configure the yum repository to include the spacewalk repo from public-yum. Once you have a system with a minimal install, modify your yum repo to include the spacewalk repo. Example: edit /etc/yum.repos.d/public-yum-ol.repo and add the following lines at the end of the file:

        [spacewalk]
        name=spacewalk
        baseurl=http://public-yum.oracle.com/repo/OracleLinux/OL6/spacewalk20/server/$basearch/
        gpgkey=http://public-yum.oracle.com/RPM-GPG-KEY-oracle-ol6
        gpgcheck=1
        enabled=1

    Install the following prerequisite packages on your spacewalk server:

        oracle-instantclient11.2-basic-11.2.0.3.0-1.x86_64
        oracle-instantclient11.2-sqlplus-11.2.0.3.0-1.x86_64

        rpm -ivh oracle-instantclient11.2-basic-11.2.0.3.0-1.x86_64
        rpm -ivh oracle-instantclient11.2-sqlplus-11.2.0.3.0-1.x86_64

    The above RPMs can be found on the Oracle Technology Network website: http://www.oracle.com/technetwork/topics/linuxx86-64soft-092277.html

    As the root user, configure the library path to include the Oracle Instant Client libraries:

        cd /etc/ld.so.conf.d
        echo /usr/lib/oracle/11.2/client64/lib > oracle-instantclient11.2.conf
        ldconfig

    Install spacewalk:

        # yum install spacewalk-oracle

    The above yum command should download and install all required packages to run spacewalk on your local server.

    NOTE: if you did a full, desktop or workstation installation, you have to remove the JTA package BEFORE installing spacewalk-oracle (rpm -e --nodeps jta).

    Once the installation completes, simply run the spacewalk configuration tool and you are all set. (Make sure to run the command with the 2 arguments.)

        spacewalk-setup --disconnected --external-db

    Answer the questions during the setup; ensure you provide the correct database user (example: spacewalk), password (example: spacewalk) and database server hostname (the standard hostname of the server on which you have deployed the Oracle database). At the end of the setup script, your spacewalk server should be fully configured and you can log into the web portal. Use your favorite browser to connect to the website: http://[spacewalkserverhostname]

    The very first action will be to create the main admin account.

    Read the article

  • Exalogic 2.0.1 Tea Break Snippets - Creating a ModifyJeOS VirtualBox

    - by The Old Toxophilist
    Following on from my previous blog entry, "Modifying the Base Template", I decided to put together a quick blog to show how to create a small VirtualBox guest that can be used to execute ModifyJeOS and hence edit your templates. One of the main advantages of this is that templates can be created away from the Exalogic environment. For the guest OS I chose OEL 6u3 and decided to create it as a basic server, because I did not require a graphical interface, but it's a simple change to create it with a GUI.

    Required Software:
    - VirtualBox
    - Oracle Enterprise Linux

    Creating the VM

    I'll assume that the reader is experienced with VirtualBox and installing OEL, and hence will make this section brief.

    Create VirtualBox Guest

    Create a new VirtualBox guest and select Oracle Linux 64-bit. Follow through the create process and select Dynamic Disk Size and the default 12GB disk size. The actual image will be a lot smaller than this, but the OEL install will fail with insufficient disk space if you attempt a smaller size. Once the guest has been created, attach the previously downloaded OEL 6u3 iso to the cd drive and start the guest.

    Install OEL

    On starting the guest, the system will boot off the associated OEL 6u3 iso and take you through the standard installation process. Select all the appropriate information, but when you reach the installation type, select Basic Server because we do not need the additional packages and only need access through the command line interface. Complete the installation and reboot the guest. At this point we now have a basic OEL server running.

    Installing Guest Add-ons

    Before we can easily access the guest we will need to add the VirtualBox guest add-ons. These will provide better keyboard and mouse integration and allow access to the shared folders on the host machine. Before we can do this we will need to do the following:
    - Enable networking.
    - Install additional RPMs.

    To enable the networking (eth0), which appears to be disabled by default, we can execute:

        ifup eth0

    This will start the eth0 connection, but once the guest is rebooted the network will be down again. To resolve this you will need to edit the /etc/sysconfig/network-scripts/ifcfg-eth0 file and change the ONBOOT parameter to "yes". Now that we have enabled the network, we will need to install a number of additional RPMs.
    First we will need to configure the yum repository as follows:

        [ol6_latest]
        name=Oracle Linux $releasever Latest ($basearch)
        baseurl=http://public-yum.oracle.com/repo/OracleLinux/OL6/latest/$basearch/
        gpgkey=http://public-yum.oracle.com/RPM-GPG-KEY-oracle-ol6
        gpgcheck=1
        enabled=1

        [ol6_ga_base]
        name=Oracle Linux $releasever GA installation media copy ($basearch)
        baseurl=http://public-yum.oracle.com/repo/OracleLinux/OL6/0/base/$basearch/
        gpgkey=http://public-yum.oracle.com/RPM-GPG-KEY-oracle-ol6
        gpgcheck=1
        enabled=0

        [ol6_u1_base]
        name=Oracle Linux $releasever Update 1 installation media copy ($basearch)
        baseurl=http://public-yum.oracle.com/repo/OracleLinux/OL6/1/base/$basearch/
        gpgkey=http://public-yum.oracle.com/RPM-GPG-KEY-oracle-ol6
        gpgcheck=1
        enabled=0

        [ol6_u2_base]
        name=Oracle Linux $releasever Update 2 installation media copy ($basearch)
        baseurl=http://public-yum.oracle.com/repo/OracleLinux/OL6/2/base/$basearch/
        gpgkey=http://public-yum.oracle.com/RPM-GPG-KEY-oracle-ol6
        gpgcheck=1
        enabled=0

        [ol6_u3_base]
        name=Oracle Linux $releasever Update 3 installation media copy ($basearch)
        baseurl=http://public-yum.oracle.com/repo/OracleLinux/OL6/3/base/$basearch/
        gpgkey=http://public-yum.oracle.com/RPM-GPG-KEY-oracle-ol6
        gpgcheck=1
        enabled=0

        [ol6_UEK_latest]
        name=Latest Unbreakable Enterprise Kernel for Oracle Linux $releasever ($basearch)
        baseurl=http://public-yum.oracle.com/repo/OracleLinux/OL6/UEK/latest/$basearch/
        gpgkey=http://public-yum.oracle.com/RPM-GPG-KEY-oracle-ol6
        gpgcheck=1
        enabled=1

        [ol6_UEK_base]
        name=Unbreakable Enterprise Kernel for Oracle Linux $releasever ($basearch)
        baseurl=http://public-yum.oracle.com/repo/OracleLinux/OL6/UEK/base/$basearch/
        gpgkey=http://public-yum.oracle.com/RPM-GPG-KEY-oracle-ol6
        gpgcheck=1
        enabled=0

    Once the repository has been edited, we will need to execute the following yum commands:

        yum update
        yum install gcc
        yum install kernel-uek-devel
        yum install kernel-devel
        yum install createrepo

    At this point we now have all the additional packages required to install the VirtualBox guest add-ons. So select Devices -> Install Guest Additions on your running guest. This will simply place the VirtualBoxGuestAdditions.iso in the virtual cd, and we will need to execute the following before we can run the additions:

        mkdir /media/cdrom
        mount -t iso9660 -o ro /dev/cdrom /media/cdrom
        cd /media/cdrom/
        ls
        ./VBoxLinuxAdditions.run

    This will initiate the install and kernel rebuild. What you will notice is that during the installation a "Failed" is displayed, but this is simply because we have no graphical components. The installation will also have added the vboxsf group to the system; to access any shared folders we create, our user will need to be a member of this group, and so the next stage is to add the root user to this group as follows:

        usermod -G vboxsf root
        cat /etc/group
        cat /etc/passwd
        init 0

    Now simply shut down the guest and add the shared folder within your guest's settings.

    Install ModifyJeOS

    Once the shared folder has been added, restart the guest and change directory into the shared folder (/media/sf_<folder name>). For the next step I am assuming the ModifyJeOS RPMs are located in the shared folder.
    We can simply execute:

        rpm -ivh ovm-modify-jeos-1.1.0-17.el5.noarch.rpm
        # Test with modifyjeos

    Using ModifyJeOS

    I have a modified MountSystemImg.sh script that should be copied into the /root/bin directory (you may need to create this), and from here it can be executed from any location:

        MountSystemImg.sh
        #!/bin/sh
        # The script assumes it's being run from the directory containing the System.img
        # Export for later i.e. during unmount
        export LOOP=`losetup -f`
        export SYSTEMIMG=/mnt/elsystem
        export TEMPLATEDIR=`pwd`
        # Make Temp Mount Directory
        mkdir -p $SYSTEMIMG
        # Create Loop for the System Image
        losetup $LOOP System.img
        kpartx -a $LOOP
        mount /dev/mapper/`basename $LOOP`p2 $SYSTEMIMG
        # Change Dir into mounted Image
        cd $SYSTEMIMG
        echo "######################################################################"
        echo "###                                                                ###"
        echo "### Starting Bash shell for editing. When completed log out to    ###"
        echo "### unmount the System.img file.                                   ###"
        echo "###                                                                ###"
        echo "######################################################################"
        echo
        bash
        cd ~
        cd $TEMPLATEDIR
        umount $SYSTEMIMG
        kpartx -d $LOOP
        losetup -d $LOOP
        rm -rf $SYSTEMIMG

    This script will simply create a mount directory, mount the System.img and then start a new shell in the mounted directory. On exiting the shell it will unmount the System.img. It only requires that you execute the script in the directory containing the System.img. These can be created under the mounted shared directory. In the example below I have extracted the Base template within the shared folder and then renamed it OEL_40GB_ROOT before changing into that directory and executing the script.

    Read the article

  • MapReduce in DryadLINQ and PLINQ

    - by JoshReuben
    MapReduce

    See http://en.wikipedia.org/wiki/Mapreduce. The MapReduce pattern aims to handle large-scale computations across a cluster of servers, often involving massive amounts of data. "The computation takes a set of input key/value pairs, and produces a set of output key/value pairs. The developer expresses the computation as two Func delegates: Map and Reduce. Map takes a single input pair and produces a set of intermediate key/value pairs. The MapReduce function groups results by key and passes them to the Reduce function. Reduce accepts an intermediate key I and a set of values for that key. It merges together these values to form a possibly smaller set of values. Typically just zero or one output value is produced per Reduce invocation. The intermediate values are supplied to the user's Reduce function via an iterator." The canonical MapReduce example: counting word frequency in a text file.

    MapReduce using DryadLINQ

    See http://research.microsoft.com/en-us/projects/dryadlinq/ and http://connect.microsoft.com/Dryad. DryadLINQ provides a simple and straightforward way to implement MapReduce operations. The implementation has two primary components: a Pair structure, which serves as a data container, and a MapReduce method, which counts word frequency and returns the top k words.

    The Pair structure has two properties: Word is a string that holds a word or key; Count is an int that holds the word count. The structure also overrides ToString to simplify printing the results. The following example shows the Pair implementation.

        public struct Pair
        {
            private string word;
            private int count;

            public Pair(string w, int c)
            {
                word = w;
                count = c;
            }

            public int Count { get { return count; } }
            public string Word { get { return word; } }

            public override string ToString()
            {
                return word + ":" + count.ToString();
            }
        }

    The MapReduce function gets the results; the input data could be partitioned and distributed across the cluster. It:

    1. Creates a DryadTable<LineRecord> object, inputTable, to represent the lines of input text. For partitioned data, use GetPartitionedTable<T> instead of GetTable<T> and pass the method a metadata file.
    2. Applies the SelectMany operator to inputTable to transform the collection of lines into a collection of words. The String.Split method converts each line into a collection of words. SelectMany concatenates the collections created by Split into a single IQueryable<string> collection named words, which represents all the words in the file.
    3. Performs the Map part of the operation by applying GroupBy to the words object. The GroupBy operation groups elements with the same key, which is defined by the selector delegate. This creates a higher-order collection whose elements are groups. In this case, the delegate is an identity function, so the key is the word itself and the operation creates a groups collection that consists of groups of identical words.
    4. Performs the Reduce part of the operation by applying Select to groups. This operation reduces the groups of words from Step 3 to an IQueryable<Pair> collection named counts that represents the unique words in the file and how many instances there are of each word. Each key value in groups represents a unique word, so Select creates one Pair object for each unique word. IGrouping.Count returns the number of items in the group, so each Pair object's Count member is set to the number of instances of the word.
    5. Applies OrderByDescending to counts. This operation sorts the input collection in descending order of frequency and creates an ordered collection named ordered.
    6. Applies Take to ordered to create an IQueryable<Pair> collection named top, which contains the k most common words in the input file and their frequency. The test below then uses the Pair object's ToString implementation to print the top 100 words and their frequency.

        public static IQueryable<Pair> MapReduce(string directory, string fileName, int k)
        {
            DryadDataContext ddc = new DryadDataContext("file://" + directory);
            DryadTable<LineRecord> inputTable = ddc.GetTable<LineRecord>(fileName);
            IQueryable<string> words = inputTable.SelectMany(x => x.line.Split(' '));
            IQueryable<IGrouping<string, string>> groups = words.GroupBy(x => x);
            IQueryable<Pair> counts = groups.Select(x => new Pair(x.Key, x.Count()));
            IQueryable<Pair> ordered = counts.OrderByDescending(x => x.Count);
            IQueryable<Pair> top = ordered.Take(k);
            return top;
        }

    To test:

        IQueryable<Pair> results = MapReduce(@"c:\DryadData\input", "TestFile.txt", 100);
        foreach (Pair words in results)
            Debug.Print(words.ToString());

    Note: DryadLINQ applications can use a more compact way to represent the query:

        return inputTable
            .SelectMany(x => x.line.Split(' '))
            .GroupBy(x => x)
            .Select(x => new Pair(x.Key, x.Count()))
            .OrderByDescending(x => x.Count)
            .Take(k);

    MapReduce using PLINQ

    The pattern is relevant even for a single multi-core machine, however, and we can write our own PLINQ MapReduce in a few lines. The Map function takes a single input value and returns a set of mapped values → LINQ's SelectMany operator. These are then grouped according to an intermediate key → LINQ's GroupBy operator. The Reduce function takes each intermediate key and a set of values for that key, and produces any number of outputs per key → LINQ's SelectMany again. We can put all of this together to implement a MapReduce in PLINQ that returns a ParallelQuery<T>:

        public static ParallelQuery<TResult> MapReduce<TSource, TMapped, TKey, TResult>(
            this ParallelQuery<TSource> source,
            Func<TSource, IEnumerable<TMapped>> map,
            Func<TMapped, TKey> keySelector,
            Func<IGrouping<TKey, TMapped>, IEnumerable<TResult>> reduce)
        {
            return source
                .SelectMany(map)
                .GroupBy(keySelector)
                .SelectMany(reduce);
        }

    Here the map function takes in an input document and outputs all of the words in that document. The grouping phase groups all of the identical words together, such that the reduce phase can then count the words in each group and output a word/count pair for each grouping:

        // dirPath and delimiters are assumed inputs for this example
        string dirPath = @"c:\DryadData\input";
        char[] delimiters = { ' ', ',', '.', ';', ':' };

        var files = Directory.EnumerateFiles(dirPath, "*.txt").AsParallel();
        var counts = files.MapReduce(
            path => File.ReadLines(path).SelectMany(line => line.Split(delimiters)),
            word => word,
            group => new[] { new KeyValuePair<string, int>(group.Key, group.Count()) });

    Read the article

  • Why Is Vertical Monitor Resolution so Often a Multiple of 360?

    - by Jason Fitzpatrick
    Stare at a list of monitor resolutions long enough and you might notice a pattern: many of the vertical resolutions, especially those of gaming or multimedia displays, are multiples of 360 (720, 1080, 1440, etc.). But why exactly is this the case? Is it arbitrary, or is there something more at work? Today's Question & Answer session comes to us courtesy of SuperUser, a subdivision of Stack Exchange, a community-driven grouping of Q&A web sites.

    The Question

    SuperUser reader Trojandestroy recently noticed something about his display interface and needs answers: YouTube recently added 1440p functionality, and for the first time I realized that all (most?) vertical resolutions are multiples of 360. Is this just because the smallest common resolution is 480×360, and it's convenient to use multiples? (Not doubting that multiples are convenient.) And/or was that the first viewable/conveniently sized resolution, so hardware (TVs, monitors, etc.) grew with 360 in mind? Taking it further, why not have a square resolution? Or something else unusual? (Assuming it's usual enough that it's viewable.) Is it merely a pleasing-the-eye situation? So why have the display be a multiple of 360?

    The Answer

    SuperUser contributor User26129 offers us not just an answer as to why the numerical pattern exists, but a history of screen design in the process:

    Alright, there are a couple of questions and a lot of factors here. Resolutions are a really interesting field of psychooptics meeting marketing.

    First of all, why are the vertical resolutions on YouTube multiples of 360? This is of course just arbitrary; there is no real reason this is the case. The reason is that resolution here is not the limiting factor for YouTube videos – bandwidth is. YouTube has to re-encode every video that is uploaded a couple of times, and tries to use as few re-encoding formats/bitrates/resolutions as possible to cover all the different use cases. For low-res mobile devices they have 360×240, for higher-res mobile there's 480p, and for the computer crowd there is 360p for 2xISDN/multiuser landlines, 720p for DSL and 1080p for higher-speed internet. For a while there were some other codecs than h.264, but these are slowly being phased out, with h.264 having essentially 'won' the format war and all computers being outfitted with hardware codecs for this.

    Now, there is some interesting psychooptics going on as well. As I said: resolution isn't everything. 720p with really strong compression can and will look worse than 240p at a very high bitrate. But on the other side of the spectrum: throwing more bits at a certain resolution doesn't magically make it better beyond some point. There is an optimum here, which of course depends on both resolution and codec. In general: the optimal bitrate is actually proportional to the resolution.

    So the next question is: what kind of resolution steps make sense? Apparently, people need about a 2x increase in resolution to really see (and prefer) a marked difference. Anything less than that and many people will simply not bother with the higher bitrates; they'd rather use their bandwidth for other stuff. This has been researched quite a long time ago and is the big reason why we went from 720×576 (415 kpix) to 1280×720 (922 kpix), and then again from 1280×720 to 1920×1080 (2 MP). Stuff in between is not a viable optimization target. And again, 1440p is about 3.7 MP, another ~2x increase over HD. You will see a difference there. 4K is the next step after that.
    Next up is that magical number of 360 vertical pixels. Actually, the magic number is 120 or 128. All resolutions are some kind of multiple of 120 pixels nowadays, back in the day they used to be multiples of 128. This is something that just grew out of LCD panel industry. LCD panels use what are called line drivers, little chips that sit on the sides of your LCD screen that control how bright each subpixel is. Because historically, for reasons I don't really know for sure, probably memory constraints, these multiple-of-128 or multiple-of-120 resolutions already existed, the industry standard line drivers became drivers with 360 line outputs (1 per subpixel). If you would tear down your 1920×1080 screen, I would be putting money on there being 16 line drivers on the top/bottom and 9 on one of the sides. Oh hey, that's 16:9. Guess how obvious that resolution choice was back when 16:9 was 'invented'.

    Then there's the issue of aspect ratio. This is really a completely different field of psychology, but it boils down to: historically, people have believed and measured that we have a sort of wide-screen view of the world. Naturally, people believed that the most natural representation of data on a screen would be in a wide-screen view, and this is where the great anamorphic revolution of the '60s came from when films were shot in ever wider aspect ratios. Since then, this kind of knowledge has been refined and mostly debunked. Yes, we do have a wide-angle view, but the area where we can actually see sharply – the center of our vision – is fairly round. Slightly elliptical and squashed, but not really more than about 4:3 or 3:2. So for detailed viewing, for instance for reading text on a screen, you can utilize most of your detail vision by employing an almost-square screen, a bit like the screens up to the mid-2000s.

    However, again this is not how marketing took it. Computers in ye olden days were used mostly for productivity and detailed work, but as they commoditized and as the computer as media consumption device evolved, people didn't necessarily use their computer for work most of the time. They used it to watch media content: movies, television series and photos. And for that kind of viewing, you get the most 'immersion factor' if the screen fills as much of your vision (including your peripheral vision) as possible. Which means widescreen.

    But there's more marketing still. When detail work was still an important factor, people cared about resolution. As many pixels as possible on the screen. SGI was selling almost-4K CRTs! The most optimal way to get the maximum amount of pixels out of a glass substrate is to cut it as square as possible. 1:1 or 4:3 screens have the most pixels per diagonal inch. But with displays becoming more consumery, inch-size became more important, not amount of pixels. And this is a completely different optimization target. To get the most diagonal inches out of a substrate, you want to make the screen as wide as possible. First we got 16:10, then 16:9 and there have been moderately successful panel manufacturers making 22:9 and 2:1 screens (like Philips). Even though pixel density and absolute resolution went down for a couple of years, inch-sizes went up and that's what sold. Why buy a 19″ 1280×1024 when you can buy a 21″ 1366×768? Eh… I think that about covers all the major aspects here.
    There's more of course; bandwidth limits of HDMI, DVI, DP and of course VGA played a role, and if you go back to the pre-2000s, graphics memory, in-computer bandwidth and simply the limits of commercially available RAMDACs played an important role. But for today's considerations, this is about all you need to know.

    Have something to add to the explanation? Sound off in the comments. Want to read more answers from other tech-savvy Stack Exchange users? Check out the full discussion thread here.

    Read the article

  • Why do we (really) program to interfaces?

    - by Kyle Burns
    One of the earliest lessons I was taught in enterprise development was "always program against an interface".  This was back in the VB6 days, and I quickly learned that no code would be allowed to move to the QA server unless my business objects and data access objects were each defined as an interface with a matching implementation class.  Why?  "It's more reusable" was one answer.  "It doesn't tie you to a specific implementation" was a slightly more knowing answer.  And let's not forget the discussion-ending "it's a standard".  The problem with these responses was that senior people didn't really understand the reason we were doing the things we were doing, and because of that, we were entirely unable to realize the intent behind the practice - we simply used interfaces and had a bunch of extra code to maintain to show for it.

    It wasn't until a few years later that I finally heard the term "Inversion of Control".  Simply put, "Inversion of Control" takes the creation of objects that used to be within the control (and therefore a responsibility) of your component and moves it to some outside force.  For example, consider the following code, which follows the old "always program against an interface" rule in the manner of many corporate development shops:

        ICatalog catalog = new Catalog();
        Category[] categories = catalog.GetCategories();

    In this example, I met the requirement of the rule by declaring the variable as ICatalog, but I didn't hit "it doesn't tie you to a specific implementation" because I explicitly created an instance of the concrete Catalog object.  If I want to test the functionality of the code I just wrote, I have to have an environment in which Catalog can be created along with any of the resources upon which it depends (e.g. configuration files, database connections, etc.) in order to test my functionality.  That's a lot of setup work, and one of the things that I think ultimately discourages real buy-in of unit testing in many development shops.

    So how do I test my code without needing Catalog to work?  A very primitive approach I've seen is to change the line that instantiates catalog to read:

        ICatalog catalog = new FakeCatalog();

    Once the test is run and passes, the code is switched back to the real thing.  This obviously poses a huge risk for introducing test code into production and in my opinion is worse than just keeping the dependency and its associated setup work.  Another popular approach is to make use of factory methods, which use an object whose "job" is to know how to obtain a valid instance of the object.  Using this approach, the code may look something like this:

        ICatalog catalog = CatalogFactory.GetCatalog();

    The code inside the factory is responsible for deciding "what kind" of catalog is needed.  This is a far better approach than the previous one, but it does make projects grow considerably, because now in addition to the interface, the real implementation, and the fake implementation(s) for testing, you have added a minimum of one factory (or at least a factory method) for each of your interfaces.  Once again, developers say "that's too complicated and has me writing a bunch of useless code" and quietly slip back into just creating a new Catalog and chalking any test failures up to "it will probably work on the server".
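    To make the factory's "deciding" concrete, it might look something like the sketch below; this is only an illustration, and the appSettings key is hypothetical:

        using System.Configuration;

        public static class CatalogFactory
        {
            public static ICatalog GetCatalog()
            {
                // decide "what kind" of catalog is needed; here, via a
                // hypothetical appSettings switch set in the test environment
                if (ConfigurationManager.AppSettings["UseFakeCatalog"] == "true")
                {
                    return new FakeCatalog();
                }
                return new Catalog();
            }
        }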
    This is where software intended specifically to facilitate Inversion of Control comes into play.  There are many libraries that take on the Inversion of Control responsibilities in .Net, and most of them have their own pros and cons.  From this point forward I'll discuss concepts from the standpoint of the Unity framework produced by Microsoft's Patterns and Practices team.  I'm primarily focusing on this library because questions about it inspired this posting.

    At Unity's core, and that of most any IoC framework, is a catalog or registry of components.  This registry can be configured either through code or using the application's configuration file, and in the most simple terms says "interface X maps to concrete implementation Y".  It can get much more complicated, but I want to keep things at the "what does it do" level instead of "how does it do it".  The object that exposes most of the Unity functionality is the UnityContainer.  This object exposes methods to configure the catalog as well as the Resolve<T> method, which is used to obtain an instance of the type represented by T.  When using the Resolve<T> method, Unity does not necessarily have to just "new up" the requested object; it can also track dependencies of that object and ensure that the entire dependency chain is satisfied.

    There are three basic ways that I have seen Unity used within projects: through classes directly using the Unity container, classes requiring injection of dependencies, and classes making use of the Service Locator pattern.

    The first usage of Unity is when classes are aware of the Unity container and directly call its Resolve method whenever they need the services advertised by an interface.  The up side of this approach is that IoC is utilized, but the down side is that every class has to be aware that Unity is being used and is tied directly to that implementation.

    Many developers don't like the idea of as close a tie to a specific IoC implementation as is represented by using Unity within all of your classes, and for the most part I agree that this isn't a good idea.  As an alternative, classes can be designed for Dependency Injection.  Dependency Injection is where a force outside the class itself manipulates the object to provide implementations of the interfaces that the class needs to interact with the outside world.  This is typically done either through constructor injection, where the object has a constructor that accepts an instance of each interface it requires, or through property setters accepting the service providers.  When using dependency injection, I lean toward the use of constructor injection because I view the constructor as being a much better way to "discover" what is required for the instance to be ready for use.  During resolution, Unity looks for an injection constructor and will attempt to resolve instances of each interface required by the constructor, throwing an exception if unable to meet the advertised needs of the class.  The up side of this approach is that the needs of the class are very clearly advertised and the class is unaware of which IoC container (if any) is being used.  The down side of this approach is that you're required to maintain the objects passed to the constructor as instance variables throughout the life of your object, and that objects which coordinate with many external services require a lot of additional constructor arguments (this gets ugly and may indicate a need for refactoring).
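    As a rough sketch of how registration, resolution, and constructor injection fit together (assuming the Microsoft.Practices.Unity container; CatalogService is a hypothetical consumer, not code from a real project):

        using System;
        using Microsoft.Practices.Unity;

        public class Category { public string Name { get; set; } }

        public interface ICatalog
        {
            Category[] GetCategories();
        }

        public class Catalog : ICatalog
        {
            // a real implementation would read configuration and hit the database
            public Category[] GetCategories()
            {
                return new[] { new Category { Name = "Sample" } };
            }
        }

        // hypothetical consumer: it advertises its needs through its injection constructor
        public class CatalogService
        {
            private readonly ICatalog _catalog;

            public CatalogService(ICatalog catalog)
            {
                _catalog = catalog;
            }

            public int CountCategories() { return _catalog.GetCategories().Length; }
        }

        public static class Program
        {
            public static void Main()
            {
                // Composition root: map the interface to a concrete type once.
                var container = new UnityContainer();
                container.RegisterType<ICatalog, Catalog>();

                // Unity finds CatalogService's constructor, resolves ICatalog to
                // Catalog, and hands back a fully wired instance.
                var service = container.Resolve<CatalogService>();
                Console.WriteLine(service.CountCategories());
            }
        }

    In a test, the same composition root could instead call container.RegisterInstance<ICatalog>(new FakeCatalog()), and the class under test would never know the difference.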
    The final way that I've seen and used Unity is to make use of the ServiceLocator pattern, of which the Patterns and Practices team has also provided a Unity-compatible implementation.  When using the ServiceLocator, your class calls ServiceLocator.Retrieve in places where it would have called Resolve on the Unity container.  Like using Unity directly, it does tie you directly to the ServiceLocator implementation and makes your code aware that dependency injection is taking place, but it does have the up side of giving you the freedom to swap out the underlying IoC container if necessary.  I'm not hugely concerned with hiding IoC entirely from the class (I view this as a "nice to have"), so the single biggest problem that I see with the ServiceLocator approach is that it provides no way to proactively advertise needs in the way that constructor injection does, allowing more opportunity for difficult-to-track runtime errors.

    This blog entry has not been intended in any way to be a definitive work on IoC, but rather as something to spur thought about why we program to interfaces and some ways to reach the intended value of the practice instead of having it just complicate your code.  I hope that it helps somebody begin or continue a journey away from being a "Cargo Cult Programmer".

    Read the article

  • Translating Your Customizations

    - by Richard Bingham
    This blog post explains the basics of translating the customizations you can make to Fusion Applications products, covering both composer-based customizations and the generic design-time customizations done via JDeveloper.

    Introduction

    Like most Oracle Applications, Fusion Applications installs on-premise with a US-English base language that is, in Release 7, supported by the option to add up to a total of 22 additional language packs (in Oracle Cloud production environments, languages are pre-installed already). As such, many organizations offer their users the option of working with their local language, and logically that should also apply to any customizations as well.

    Composer-based UI Customizations

    Customizations made in Page Composer take into consideration the session LOCALE, as set in the user preferences screen, during all customization work, and store the customization in the MDS repository accordingly. As such, the actual new or changed values used will only apply for the same language under which the customization was made, and text for any other languages requires a separate upload. See the Resource Bundles section below, which incidentally also applies to custom UI changes done in JDeveloper. You may have noticed this when you select the "Select Text Resource" menu option when editing the text on a page. Using this ensures that the resource bundles are used, whereas if you define a static value in Expression Builder it will never be available for translation. Notice in the screenshot below that the "What's New" custom value I have already defined using the 'Select Text Resource' feature is internally using the adfBundle groovy function to pull the custom value for my key (RT_S_1) from the ComposerOverrideBundle.

    Figure 1 – Page Composer showing the override bundle being used.

    Business Objects

    Customizing the business objects available in the Applications Composer tool for the CRM products, such as adding additional fields, also operates using the session language. Translating the values of these additional fields into other installed languages requires loading additional resource bundles, again as described below.

    Reports and Analytics

    Most customizations to Reports and BI Analytics are essentially just reorganizations and visualizations of existing number and text data from the system, and as such will use the appropriate values based on the user's session language. Where a translated value or string exists for that session language, it will be used without the need for additional work. Extending through the addition of brand new reports and analytics requires another method of loading the translated strings, as part of what is known as 'localizing' the BI Catalog and Metadata. This time it is via an export/import of XML data through the BI Administrators console, and is described in the OBIEE Admin Guide. Fusion Applications reports based on BI Publisher are already defined in template-per-locale, and in addition provide an extra process for getting the data for translation and reloading. This again uses the standard resource bundle format. Loading a custom report is illustrated in this video from our YouTube channel, which shows the screen for both setting the template locale and running an export for translation.
    Fusion Applications Menus

    Whilst the seeded Navigator and Global Menu values are fully translated when the additional language is installed, if they are customized then the change or new menu item will apply universally, not currently per language. This is set to change in a future release with the new UI Text Editor feature described below.

    More on Resource Bundles

    As mentioned above, to provide translations for most of your customizations you need to add values to a resource bundle. This is an industry open standard (OASIS) format XML file with the extension .xliff, and it stores translated values for the strings used by ADF at run-time. The general process is that these values are exported from the MDS repository, manually edited, and then imported back in again. This needs to be done by an administrator, via either WLST commands or through Enterprise Manager, as per the screenshot below. This is detailed out in the Fusion Applications Extensibility Guide. For SaaS environments the Cloud Operations team can assist.

    Figure 2 – Enterprise Manager's MDS export used for getting resource bundles for manual translation, then re-imported on the same screen.

    All customized strings are stored in an override bundle (xliff file) for each locale, suffixed with the language initials, with English ones being saved to the default. As such, each language bundle can be easily identified and updated. Similarly, if you used JDeveloper to create your own applications as extensions to Fusion Applications, you would use the native support for resource bundles, and add them into the faces-config.xml file for inclusion in your application. An example is this ADF customization video from our YouTube channel. JDeveloper also supports automatic synchronization between your underlying resource bundles and any translatable strings you add – very handy. For more information see the chapters on "Using Automatic Resource Bundle Integration in JDeveloper" and "Manually Defining Resource Bundles and Locales" in the Oracle Fusion Middleware Web User Interface Developer's Guide for Oracle Application Development Framework.

    FND Messages and Look-ups

    FND Messages, as defined here, are not used for UI labels (those are known as 'strings'), but are the responses back to users as a result of an action, such as from a page submit. Each message is defined and stored in the related database table (FND_MESSAGES_B), with another (FND_MESSAGES_TL) holding any language-specific values. These come seeded with the additional language installs; however, if you customize the messages via the "Manage Messages" task in Functional Setup Manager, or add new ones, then currently (in Release 7) you'll need to repeat it for each language.

    Figure 3 – An FND Message defined in an English user session.

    Similarly, Look-ups are stored in a translation table (FND_LOOKUP_VALUES_TL) where appropriate, and can be customized by setting the user's session language and making the change in the Setup and Maintenance task entitled "Manage [Standard|Common] Look-ups".

    Online Help

    Yes, in fact all the seeded help is applied as part of each language pack install during the post-install provisioning process. If you are editing or adding custom online help, then the Create Help screen provides a drop-down of which language your help customization will apply to. This is shown in the video below from our YouTube channel, and obviously you'll need to do it for each language in use.

    What is Coming for Translations?
Currently planned for Release 8 is something called the User Interface (UI) Text Editor. This tool will allow the editing of all the text shown on the pages and forms of Fusion Application. This will provide a search based on a particular term or word, say “Worker”, and will allow it to be adjusted, say to “Employee”, which then updates all the Resource Bundles that contain it. In the case of multi-language environments, it will use the users session language (locale) to know which Resource Bundles to apply the change to. This capability will also support customization sandboxes, to help ensure changes can be tested and approved.  It is also interesting to note that the design currently allows any page-specific customizations done using Page Composer or Application Composer to over-write the global changes done via the UI Text Editor, allowing for special context-sensitive values to still be used. Further Reading and Resources The following short list provides the mains resources for digging into more detail on translation support for both Composer and JDeveloper customization projects. There is a dedicated chapter entitled “Translating Custom Text” in the Fusion Applications Extensibility Guide. This has good examples and steps for many tasks, especially administering resource bundles. Using localization formatting (numbers, dates etc) for design-time changes is well documented in the Fusion Applications Developer Guide. For more guidelines on general design-time globalization, see either the ‘Internationalizing and Localizing Pages’ chapter in the Oracle Fusion Middleware Web User Interface Developer’s Guide for Oracle Application Development Framework (Oracle Fusion Applications Edition) or the general Oracle Database Globalization Support Guide. The Oracle Architecture ‘A-Team’ provided a recent post on customizing the user session timeout popup, using design-time changes to resource bundles. It has detailed step-by-step examples which can be a useful illustration.
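To make the per-locale override-bundle lookup concrete, here is a minimal, hypothetical Java sketch of the same fallback pattern using java.util.ResourceBundle. The bundle base name (ComposerOverrideBundle) and key (RT_S_1) mirror the example above, but plain .properties files stand in for the .xliff bundles that MDS actually stores, so treat this as an illustration of the suffix-and-fallback behaviour rather than ADF's real API.

import java.util.Locale;
import java.util.ResourceBundle;

public class OverrideBundleDemo {
    public static void main(String[] args) {
        // Hypothetical bundle files on the classpath:
        //   ComposerOverrideBundle.properties       (default, English)
        //   ComposerOverrideBundle_fr.properties    (French override)
        // Each holds entries such as: RT_S_1=What's New
        for (Locale locale : new Locale[] { Locale.ENGLISH, Locale.FRENCH }) {
            ResourceBundle bundle =
                ResourceBundle.getBundle("ComposerOverrideBundle", locale);
            // Falls back to the default bundle when no locale-specific
            // file (or key) exists, mirroring the override-bundle idea.
            System.out.println(locale + " -> " + bundle.getString("RT_S_1"));
        }
    }
}

This fallback-to-default behaviour is also why English customizations land in the suffix-less bundle while each translated language gets its own suffixed file.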

    Read the article

  • Oracle Certification and Virtualization Solutions

    - by scoter
As stated in the official MOS (My Oracle Support) document 249212.1, support for Oracle products on non-Oracle VM platforms follows exactly the same stance as support for VMware, so the only x86 virtualization software solution certified for any Oracle product is Oracle VM. Based on the facts that:

- Oracle VM is totally free (you have the option to buy Oracle support)
- certified is quite different from supported (Oracle VM is certified; others could merely be supported)
- with Oracle VM you may not be required to reproduce your issue(s) on a physical server
- Oracle VM is the only x86 software solution that allows hard partitioning (see these Oracle public links for details: http://www.oracle.com/technetwork/server-storage/vm/ovm-hardpart-168217.pdf and http://www.oracle.com/us/corporate/pricing/partitioning-070609.pdf)

people started asking to migrate from third-party virtualization software (e.g. RH KVM, VMware) to Oracle VM.

Migrating a RH KVM guest to Oracle VM

Oracle VM has a built-in P2V utility (see the official documentation), but in some cases we can't use it, due to:

- network inaccessibility between the hypervisors (KVM and OVM)
- network slowness between the hypervisors (KVM and OVM)
- the size of the guest virtual disks

Here you'll find a step-by-step guide to "manually" migrate a guest machine from KVM to OVM.

1. Verify the source guest characteristics.

Using the KVM web console you can verify the characteristics of the guest you need to migrate, such as: CPU core details, defined memory (RAM), the name of your guest, the guest operating system, disk details (number and size), and network details (number of NICs and network configuration).

2. Export your guest in OVF/OVA format.

The export from Red Hat KVM (Kernel-based Virtual Machine) will create a structured export of your guest:

[root@ovmserver1 mnt]# ll
total 12
drwxrwx--- 5 36 36 4096 Oct 19 2012 b8296fca-13c4-4841-a50f-773b5139fcee

b8296fca-13c4-4841-a50f-773b5139fcee is the ID of the guest exported from RH KVM.

[root@ovmserver1 mnt]# cd b8296fca-13c4-4841-a50f-773b5139fcee/
[root@ovmserver1 b8296fca-13c4-4841-a50f-773b5139fcee]# ls -ltr
total 12
drwxr-x--- 4 36 36 4096 Oct 19  2012 master
drwxrwx--- 2 36 36 4096 Oct 29  2012 dom_md
drwxrwx--- 4 36 36 4096 Oct 31  2012 images

images contains the exported virtual disks.

[root@ovmserver1 b8296fca-13c4-4841-a50f-773b5139fcee]# cd images/
[root@ovmserver1 images]# ls -ltra
total 16
drwxrwx--- 5 36 36 4096 Oct 19  2012 ..
drwxrwx--- 2 36 36 4096 Oct 31  2012 d4ef928d-6dc6-4743-b20d-568b424728a5
drwxrwx--- 2 36 36 4096 Oct 31  2012 4b241ea0-43aa-4f3b-ab7d-2fc633b491a1
drwxrwx--- 4 36 36 4096 Oct 31  2012 .
[root@ovmserver1 images]# cd d4ef928d-6dc6-4743-b20d-568b424728a5/
[root@ovmserver1 d4ef928d-6dc6-4743-b20d-568b424728a5]# ls -l
total 5169092
-rwxr----- 1 36 36 187904819200 Oct 31  2012 4c03b1cf-67cc-4af0-ad1e-529fd665dac1
-rw-rw---- 1 36 36          341 Oct 31  2012 4c03b1cf-67cc-4af0-ad1e-529fd665dac1.meta
[root@ovmserver1 d4ef928d-6dc6-4743-b20d-568b424728a5]# file 4c03b1cf-67cc-4af0-ad1e-529fd665dac1
4c03b1cf-67cc-4af0-ad1e-529fd665dac1: LVM2 (Linux Logical Volume Manager) , UUID: sZL1Ttpy0vNqykaPahEo3hK3lGhwspv

4c03b1cf-67cc-4af0-ad1e-529fd665dac1 is the first exported disk (a physical volume).

[root@ovmserver1 d4ef928d-6dc6-4743-b20d-568b424728a5]# cd ../4b241ea0-43aa-4f3b-ab7d-2fc633b491a1/
[root@ovmserver1 4b241ea0-43aa-4f3b-ab7d-2fc633b491a1]# ls -l
total 5568076
-rwxr----- 1 36 36 107374182400 Oct 31  2012 9020f2e1-7b8a-4641-8f80-749768cc237a
-rw-rw---- 1 36 36          341 Oct 31  2012 9020f2e1-7b8a-4641-8f80-749768cc237a.meta
[root@ovmserver1 4b241ea0-43aa-4f3b-ab7d-2fc633b491a1]# file 9020f2e1-7b8a-4641-8f80-749768cc237a
9020f2e1-7b8a-4641-8f80-749768cc237a: x86 boot sector; partition 1: ID=0x83, active, starthead 1, startsector 63, 401562 sectors; partition 2: ID=0x82, starthead 0, startsector 401625, 65529135 sectors; partition 3: ID=0x83, starthead 254, startsector 65930760, 8385930 sectors; partition 4: ID=0x5, starthead 254, startsector 74316690, 135395820 sectors, code offset 0x48

9020f2e1-7b8a-4641-8f80-749768cc237a is the second exported disk, with partition 1 bootable.

3. Prepare the new guest on Oracle VM.

In OVM Manager we can prepare the guest to which we will move the exported virtual disks. Under the "Servers and VMs" tab, create your guest with the parameters collected before (point 1):

- add NICs on different networks
- add virtual disks; in this case we add two disks of 1.0 GB each; we will extend each virtual disk by copying the source KVM virtual disk over it (see the next steps)
- verify the virtual disks created (under the Repositories tab)

4. Verify the OVM virtual-disk names.

[root@ovmserver1 VirtualMachines]# grep -r hyptest_rdbms *
0004fb0000060000a906b423f44da98e/vm.cfg:OVM_simple_name = 'hyptest_rdbms'
[root@ovmserver1 VirtualMachines]# cd 0004fb0000060000a906b423f44da98e
[root@ovmserver1 0004fb0000060000a906b423f44da98e]# more vm.cfg
vif = ['mac=00:21:f6:0f:3f:85,bridge=0004fb001089128', 'mac=00:21:f6:0f:3f:8e,bridge=0004fb00101971d']
OVM_simple_name = 'hyptest_rdbms'
vnclisten = '127.0.0.1'
disk = ['file:/OVS/Repositories/0004fb00000300004f17b7368139eb41/VirtualDisks/0004fb000012000097c1bfea9834b17d.img,xvda,w', 'file:/OVS/Repositories/0004fb00000300004f17b7368139eb41/VirtualDisks/0004fb0000120000cde6a11c3cb1d0be.img,xvdb,w']
vncunused = '1'
uuid = '0004fb00-0006-0000-a906-b423f44da98e'
on_reboot = 'restart'
cpu_weight = 27500
memory = 32768
cpu_cap = 0
maxvcpus = 8
OVM_high_availability = True
maxmem = 32768
vnc = '1'
OVM_description = ''
on_poweroff = 'destroy'
on_crash = 'restart'
name = '0004fb0000060000a906b423f44da98e'
guest_os_type = 'linux'
builder = 'hvm'
vcpus = 8
keymap = 'en-us'
OVM_os_type = 'Oracle Linux 5'
OVM_cpu_compat_group = ''
OVM_domain_type = 'xen_hvm'

Summarizing:

disk1 source ==> /mnt/b8296fca-13c4-4841-a50f-773b5139fcee/images/4b241ea0-43aa-4f3b-ab7d-2fc633b491a1/9020f2e1-7b8a-4641-8f80-749768cc237a
disk1 dest ==> /OVS/Repositories/0004fb00000300004f17b7368139eb41/VirtualDisks/0004fb000012000097c1bfea9834b17d.img
disk2 source ==> /mnt/b8296fca-13c4-4841-a50f-773b5139fcee/images/d4ef928d-6dc6-4743-b20d-568b424728a5/4c03b1cf-67cc-4af0-ad1e-529fd665dac1
disk2 dest ==> /OVS/Repositories/0004fb00000300004f17b7368139eb41/VirtualDisks/0004fb0000120000cde6a11c3cb1d0be.img

5. Copy the KVM exported virtual disks over the OVM virtual disks.

With your Oracle VM guest stopped, copy the KVM exported virtual disks over the OVM virtual disks. All I did was locally mount the filesystem containing the exported virtual disks (via a USB device) on my OVS; the copy automatically resizes the OVM virtual disks (previously created with a size of 1 GB):

nohup cp /mnt/b8296fca-13c4-4841-a50f-773b5139fcee/images/4b241ea0-43aa-4f3b-ab7d-2fc633b491a1/9020f2e1-7b8a-4641-8f80-749768cc237a /OVS/Repositories/0004fb00000300004f17b7368139eb41/VirtualDisks/0004fb000012000097c1bfea9834b17d.img &
nohup cp /mnt/b8296fca-13c4-4841-a50f-773b5139fcee/images/d4ef928d-6dc6-4743-b20d-568b424728a5/4c03b1cf-67cc-4af0-ad1e-529fd665dac1 /OVS/Repositories/0004fb00000300004f17b7368139eb41/VirtualDisks/0004fb0000120000cde6a11c3cb1d0be.img &

6. When the copy has completed, refresh the repository to acknowledge the new disk sizes.

7. After the repository refresh has completed, start the guest machine from Oracle VM Manager.

After the first start of your guest:

- verify that you can see all disks and partitions
- verify that your guest is reachable over the network (the MAC addresses of your NICs changed)

Eventually you can also evaluate converting your guest to a PVM (paravirtualized virtual machine) following the official Oracle documentation.

Ciao
Simon COTER

PS: next time I'd like to post an article describing how to manually migrate Virtual Iron guests to Oracle VM. Comments and corrections are welcome.
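Because these disk images run to hundreds of gigabytes, it can be worth confirming the copies completed intact before the first boot. Here is a minimal, hypothetical Java sketch (not part of the procedure above) that compares SHA-256 checksums of a source and destination image; the two file paths are passed as arguments.

import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.security.MessageDigest;

public class CopyVerifier {
    // Streams the file through a SHA-256 digest in 8 MB chunks so that
    // multi-hundred-GB images never have to fit in memory.
    static String sha256(Path file) throws Exception {
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        try (InputStream in = Files.newInputStream(file)) {
            byte[] buf = new byte[8 * 1024 * 1024];
            int n;
            while ((n = in.read(buf)) > 0) {
                md.update(buf, 0, n);
            }
        }
        StringBuilder hex = new StringBuilder();
        for (byte b : md.digest()) {
            hex.append(String.format("%02x", b));
        }
        return hex.toString();
    }

    public static void main(String[] args) throws Exception {
        Path source = Paths.get(args[0]); // e.g. the exported KVM disk
        Path dest = Paths.get(args[1]);   // e.g. the OVM .img file
        System.out.println(sha256(source).equals(sha256(dest))
                ? "Checksums match" : "Checksums DIFFER - recopy the disk");
    }
}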

    Read the article

  • Entity Framework 6: Alpha2 Now Available

    - by ScottGu
The Entity Framework team recently announced the 2nd alpha release of EF6. The alpha 2 package is available for download from NuGet. Since this is a pre-release package, make sure to select "Include Prereleases" in the NuGet package manager, or execute the following from the package manager console to install it:

PM> Install-Package EntityFramework -Pre

This week's alpha release includes a bunch of great improvements in the following areas:

- Async language support is now available for queries and updates when running on .NET 4.5.
- Custom conventions now provide the ability to override the default conventions that Code First uses for mapping types, properties, etc. to your database.
- Multi-tenant migrations allow the same database to be used by multiple contexts, with full Code First Migrations support for independently evolving the model backing each context.
- Using Enumerable.Contains in a LINQ query is now handled much more efficiently by EF and the SQL Server provider, resulting in greatly improved performance.
- All features of EF6 (except async) are available on both .NET 4 and .NET 4.5. This includes support for enums and spatial types and the performance improvements that were previously only available when using .NET 4.5.
- Start-up time for many large models has been dramatically improved thanks to improved view generation performance.

Below are some additional details about a few of the improvements above:

Async Support

.NET 4.5 introduced the Task-Based Asynchronous Pattern that uses the async and await keywords to help make writing asynchronous code easier. EF 6 now supports this pattern. This is great for ASP.NET applications, as database calls made through EF can now be processed asynchronously, avoiding any blocking of worker threads. This can increase scalability on the server by allowing more requests to be processed while waiting for the database to respond. The following code shows an MVC controller that is querying a database for a list of location entities:

public class HomeController : Controller
{
    LocationContext db = new LocationContext();

    public async Task<ActionResult> Index()
    {
        var locations = await db.Locations.ToListAsync();
        return View(locations);
    }
}

Notice above the call to the new ToListAsync method with the await keyword. When the web server reaches this code it initiates the database request, but rather than blocking while waiting for the results to come back, the thread that is processing the request returns to the thread pool, allowing ASP.NET to process another incoming request with the same thread. In other words, a thread is only consumed when there is actual processing work to do, allowing the web server to handle more concurrent requests with the same resources. A more detailed walkthrough covering async in EF is available with additional information and examples. Also a walkthrough is available showing how to use async in an ASP.NET MVC application.

Custom Conventions

When working with EF Code First, the default behavior is to map .NET classes to tables using a set of conventions baked into EF. For example, Code First will detect properties that end with "ID" and configure them automatically as primary keys. However, sometimes you cannot or do not want to follow those conventions and would rather provide your own. For example, maybe your primary key properties all end in "Key" instead of "Id". Custom conventions allow the default conventions to be overridden, or new conventions to be added, so that Code First can map by convention using whatever rules make sense for your project. The following code demonstrates using custom conventions to set the precision of all decimals to 5. As with other Code First configuration, this code is placed in the OnModelCreating method, which is overridden on your derived DbContext class:

protected override void OnModelCreating(DbModelBuilder modelBuilder)
{
    modelBuilder.Properties<decimal>()
        .Configure(x => x.HasPrecision(5));
}

But what if there are a couple of places where a decimal property should have a different precision? Just as with all the existing Code First conventions, this new convention can be overridden for a particular property simply by explicitly configuring that property using either the fluent API or a data annotation. A more detailed description of custom Code First conventions is available here.

Community Involvement

I blogged a while ago about EF being released under an open source license. Since then a number of community members have made contributions, and these are included in EF6 alpha 2. Two examples of community contributions are:

- AlirezaHaghshenas contributed a change that increases the startup performance of EF for larger models by improving the performance of view generation. The change means that it is less often necessary to use pre-generated views.
- UnaiZorrilla contributed the first community feature to EF: the ability to load all Code First configuration classes in an assembly with a single method call like the following:

protected override void OnModelCreating(DbModelBuilder modelBuilder)
{
    modelBuilder.Configurations
        .AddFromAssembly(typeof(LocationContext).Assembly);
}

This code will find and load all the classes that inherit from EntityTypeConfiguration<T> or ComplexTypeConfiguration<T> in the assembly where LocationContext is defined. This reduces the amount of coupling between the context and the Code First configuration classes, and is also a very convenient shortcut for large models.

Other Features Coming in EF6

Lots of information about the development of EF6 can be found on the EF CodePlex site, including a roadmap showing the other features that are planned for EF6. One of the nice upcoming features is connection resiliency, which will automate the process of retrying database operations on transient failures common in cloud environments and with databases such as the Windows Azure SQL Database. Another often requested feature that will be included in EF6 is the ability to map stored procedures to query and update operations on entities when using Code First.

Summary

EF6 is the first open source release of Entity Framework being developed in CodePlex. The alpha 2 preview release of EF6 is now available on NuGet, and contains some really great features for you to try. The EF team are always looking for feedback from developers, especially on the new features such as custom Code First conventions and async support. To provide feedback you can post a comment on the EF6 alpha 2 announcement post, start a discussion, or file a bug on the CodePlex site. Hope this helps, Scott

P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • JPA 2, EJB 3.1, and JSF 2 in Action: Developing Java EE 6 Applications with WebLogic Server 12c | WebLogic Channel | Oracle

    - by ???02
(Translated from Japanese.) This is the final part of a three-part walkthrough of developing Java EE 6 applications (JPA 2.0, EJB 3.1, JSF 2.0) with WebLogic Server 12c and Oracle Enterprise Pack for Eclipse (OEPE) 12c. It covers building the presentation layer with JSF 2.0 and then publishing a RESTful web service with JAX-RS.

Building the view layer with JSF 2.0

Among the Java EE 6 technologies, JavaServer Faces (JSF) 2.0 is the standard presentation framework, and it removes much of the configuration burden carried by older frameworks such as Struts and JSP-based approaches. Here we build a simple employee-search screen: a managed bean calls the EmpLogic session bean created earlier to query the EMPLOYEES table, and an XHTML (Facelets) page displays the results.

First, enable JSF in the project. In Eclipse (OEPE), right-click the OOW project, open Properties, select Project Facets, check JavaServer Faces, then click Apply and OK. Next create the managed bean: right-click the OOW project, choose New > Class, and in the New Java Class dialog set Package to "managed" and Name to "IndexBean", then click Finish. Give IndexBean the following content:

IndexBean (IndexBean.java):

package managed;

import java.util.ArrayList;
import java.util.List;
import javax.ejb.EJB;
import javax.faces.bean.ManagedBean;
import ejb.EmpLogic;
import model.Employee;

@ManagedBean
public class IndexBean {
    @EJB
    private EmpLogic empLogic;

    private String keyword;
    private List<Employee> results = new ArrayList<Employee>();

    public String getKeyword() {
        return keyword;
    }

    public void setKeyword(String keyword) {
        this.keyword = keyword;
    }

    public List getResults() {
        return results;
    }

    public void actionSearch() {
        results.clear();
        results.addAll(empLogic.getEmp(keyword));
    }
}

The bean holds two properties, keyword and results. The EmpLogic session bean is injected via the @EJB annotation, and actionSearch() delegates the search to it and stores the outcome in results.

Next create the page. Right-click WebContent in the OOW project and create index.xhtml with the following content:

index.xhtml:

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml"
  xmlns:ui="http://java.sun.com/jsf/facelets"
  xmlns:h="http://java.sun.com/jsf/html"
  xmlns:f="http://java.sun.com/jsf/core">
<h:head>
  <title>Employee Search</title>
</h:head>
<h:body>
  <h:form>
    <h:inputText value="#{indexBean.keyword}" />
    <h:commandButton action="#{indexBean.actionSearch}" value="Search" />
    <h:dataTable value="#{indexBean.results}" var="emp" border="1">
      <h:column>
        <f:facet name="header"><h:outputText value="employeeId" /></f:facet>
        <h:outputText value="#{emp.employeeId}" />
      </h:column>
      <h:column>
        <f:facet name="header"><h:outputText value="firstName" /></f:facet>
        <h:outputText value="#{emp.firstName}" />
      </h:column>
      <h:column>
        <f:facet name="header"><h:outputText value="lastName" /></f:facet>
        <h:outputText value="#{emp.lastName}" />
      </h:column>
      <h:column>
        <f:facet name="header"><h:outputText value="salary" /></f:facet>
        <h:outputText value="#{emp.salary}" />
      </h:column>
    </h:dataTable>
  </h:form>
</h:body>
</html>

index.xhtml references the managed bean by its default name, indexBean; the h:commandButton invokes actionSearch() when clicked, and the h:dataTable renders the results.

Then register the Faces servlet in web.xml (under WebContent/WEB-INF), including a welcome-file entry:

web.xml:

<?xml version="1.0" encoding="UTF-8"?>
<web-app xmlns:javaee="http://java.sun.com/xml/ns/javaee" xmlns:web="http://java.sun.com/xml/ns/javaee/web-app_2_5.xsd" version="3.0">
  <javaee:display-name>OOW</javaee:display-name>
  <servlet>
    <servlet-name>Faces Servlet</servlet-name>
    <servlet-class>javax.faces.webapp.FacesServlet</servlet-class>
    <load-on-startup>1</load-on-startup>
  </servlet>
  <servlet-mapping>
    <servlet-name>Faces Servlet</servlet-name>
    <url-pattern>/faces/*</url-pattern>
  </servlet-mapping>
  <welcome-file-list>
    <welcome-file>/faces/index.xhtml</welcome-file>
  </welcome-file-list>
</web-app>

That completes the JSF configuration. To run the Java EE 6 application (JPA 2.0, EJB 3.1, JSF 2.0), right-click the OOW project and choose Run As > Run on Server, select Oracle WebLogic Server 12c (12.1.1), and click Next. Set the Domain Directory (via Browse) to, for example, C:\Oracle\Middleware\user_projects\domains\base_domain, and click Finish. In OEPE's Servers view you can also right-click Oracle WebLogic Server 12c, open Properties, and under WebLogic > Publishing select "Publish as an exploded archive", then click OK. Redeploy with Run As > Run on Server and Finish. Searching by firstName in the deployed page confirms that the stack works end to end. Next, we publish a RESTful web service with JAX-RS.

Publishing a RESTful web service with JAX-RS

Whereas Java EE 5 offered SOAP web services through JAX-WS, Java EE 6 adds JAX-RS, which lets a session bean be exposed as a RESTful web service simply by adding annotations.

To enable JAX-RS in the project, right-click the OOW project, open Properties, select Project Facets, and check JAX-RS (Rest Web Services); the "Further configuration required" link opens the Modify Faceted Project dialog, where the JAX-RS implementation library is configured. Click Manage libraries next to JAX-RS Implementation Library, then in the Preferences (Filtered) dialog click New, name the new user library "JAX-RS", and click OK. Back in the Preferences (Filtered) dialog, click Add JARs and select C:\Oracle\Middleware\modules\com.sun.jersey.core_1.1.0.0_1-9.jar, then click OK. In the Modify Faceted Project dialog, select the JAX-RS library as the JAX-RS Implementation Library, set the JAX-RS servlet class name to com.sun.jersey.spi.container.servlet.ServletContainer, and click OK; back in Project Facets, click OK again.

Then annotate the existing EmpLogic session bean so it is exposed as a RESTful web service:

EmpLogic (EmpLogic.java):

package ejb;

import java.util.List;
import javax.ejb.LocalBean;
import javax.ejb.Stateless;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import model.Employee;

@Stateless
@LocalBean
@Path("/emprest")
public class EmpLogic {
    @PersistenceContext(unitName = "OOW")
    private EntityManager em;

    public EmpLogic() {
    }

    @GET
    @Path("/getname/{empno}")
    @Produces("text/plain")
    public String getEmpName(@PathParam("empno") long empno) {
        Employee e = em.find(Employee.class, empno);
        if (e == null) {
            return "no data.";
        } else {
            return e.getFirstName();
        }
    }
}

The @Path("/emprest") annotation maps HTTP requests to the RESTful web service, and @Produces declares the media type of the response: text/plain here, whereas application/xml would return XML and application/json would return JSON.

Finally, run the web application again: right-click the OOW project, choose Run As > Run on Server, and click Finish. Opening http://localhost:7001/OOW/jaxrs/emprest/getname/186 in a browser returns the firstName of the employee whose employeeId is given at the end of the URL (186).

Across these three installments we have seen how Java EE 6 applications combining JPA 2.0, EJB 3.1, and JSF 2.0 can be developed with WebLogic Server 12c and OEPE, and how the standards-based programming model reduces the amount of configuration and boilerplate that Java EE developers previously had to write.
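To exercise the REST endpoint outside a browser, a plain HTTP client is enough. Here is a minimal, hypothetical Java sketch using java.net.HttpURLConnection against the URL above; the host, port, and employeeId are the ones assumed by the walkthrough.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class EmpRestClient {
    public static void main(String[] args) throws Exception {
        // URL as deployed above; 186 is the employeeId to look up.
        URL url = new URL("http://localhost:7001/OOW/jaxrs/emprest/getname/186");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("GET");
        conn.setRequestProperty("Accept", "text/plain");
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream(), "UTF-8"))) {
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line); // expected: the employee's firstName
            }
        } finally {
            conn.disconnect();
        }
    }
}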

    Read the article

  • JBoss5: Cannot deploy due to java.util.zip.ZipException: error in opening zip file

    - by Andreas
I have a web client and an EJB project, which I created with Eclipse 3.4. When I want to deploy it on JBoss 5.0.1, I receive the error below. I searched a lot but I wasn't able to find a solution to this.

18:21:21,899 INFO [ServerImpl] Starting JBoss (Microcontainer)...
18:21:21,900 INFO [ServerImpl] Release ID: JBoss [Morpheus] 5.0.1.GA (build: SVNTag=JBoss_5_0_1_GA date=200902231221)
18:21:21,900 INFO [ServerImpl] Bootstrap URL: null
18:21:21,900 INFO [ServerImpl] Home Dir: /Applications/jboss-5.0.1.GA
18:21:21,900 INFO [ServerImpl] Home URL: file:/Applications/jboss-5.0.1.GA/
18:21:21,901 INFO [ServerImpl] Library URL: file:/Applications/jboss-5.0.1.GA/lib/
18:21:21,901 INFO [ServerImpl] Patch URL: null
18:21:21,901 INFO [ServerImpl] Common Base URL: file:/Applications/jboss-5.0.1.GA/common/
18:21:21,902 INFO [ServerImpl] Common Library URL: file:/Applications/jboss-5.0.1.GA/common/lib/
18:21:21,902 INFO [ServerImpl] Server Name: default
18:21:21,902 INFO [ServerImpl] Server Base Dir: /Applications/jboss-5.0.1.GA/server
18:21:21,902 INFO [ServerImpl] Server Base URL: file:/Applications/jboss-5.0.1.GA/server/
18:21:21,902 INFO [ServerImpl] Server Config URL: file:/Applications/jboss-5.0.1.GA/server/default/conf/
18:21:21,902 INFO [ServerImpl] Server Home Dir: /Applications/jboss-5.0.1.GA/server/default
18:21:21,902 INFO [ServerImpl] Server Home URL: file:/Applications/jboss-5.0.1.GA/server/default/
18:21:21,903 INFO [ServerImpl] Server Data Dir: /Applications/jboss-5.0.1.GA/server/default/data
18:21:21,903 INFO [ServerImpl] Server Library URL: file:/Applications/jboss-5.0.1.GA/server/default/lib/
18:21:21,903 INFO [ServerImpl] Server Log Dir: /Applications/jboss-5.0.1.GA/server/default/log
18:21:21,903 INFO [ServerImpl] Server Native Dir: /Applications/jboss-5.0.1.GA/server/default/tmp/native
18:21:21,903 INFO [ServerImpl] Server Temp Dir: /Applications/jboss-5.0.1.GA/server/default/tmp
18:21:21,903 INFO [ServerImpl] Server Temp Deploy Dir: /Applications/jboss-5.0.1.GA/server/default/tmp/deploy
18:21:22,669 INFO [ServerImpl] Starting Microcontainer, bootstrapURL=file:/Applications/jboss-5.0.1.GA/server/default/conf/bootstrap.xml
18:21:23,535 INFO [VFSCacheFactory] Initializing VFSCache [org.jboss.virtual.plugins.cache.CombinedVFSCache]
18:21:23,541 INFO [VFSCacheFactory] Using VFSCache [CombinedVFSCache[real-cache: null]]
18:21:23,942 INFO [CopyMechanism] VFS temp dir: /Applications/jboss-5.0.1.GA/server/default/tmp
18:21:23,943 INFO [ZipEntryContext] VFS force nested jars copy-mode is enabled.
18:21:26,263 INFO [ServerInfo] Java version: 1.5.0_16,Apple Inc.
18:21:26,264 INFO [ServerInfo] Java Runtime: Java(TM) 2 Runtime Environment, Standard Edition (build 1.5.0_16-b06-284)
18:21:26,264 INFO [ServerInfo] Java VM: Java HotSpot(TM) Server VM 1.5.0_16-133,Apple Inc.
18:21:26,264 INFO [ServerInfo] OS-System: Mac OS X 10.5.6,i386
18:21:26,336 INFO [JMXKernel] Legacy JMX core initialized
18:21:30,432 INFO [ProfileServiceImpl] Loading profile: default from: org.jboss.system.server.profileservice.repository.SerializableDeploymentRepository@e1d5d9(root=/Applications/jboss-5.0.1.GA/server, key=org.jboss.profileservice.spi.ProfileKey@143b82c3[domain=default,server=default,name=default])
18:21:30,436 INFO [ProfileImpl] Using repository:org.jboss.system.server.profileservice.repository.SerializableDeploymentRepository@e1d5d9(root=/Applications/jboss-5.0.1.GA/server, key=org.jboss.profileservice.spi.ProfileKey@143b82c3[domain=default,server=default,name=default])
18:21:30,436 INFO [ProfileServiceImpl] Loaded profile: ProfileImpl@ae002e{key=org.jboss.profileservice.spi.ProfileKey@143b82c3[domain=default,server=default,name=default]}
18:21:32,935 INFO [WebService] Using RMI server codebase: http://localhost:8083/
18:21:42,572 INFO [NativeServerConfig] JBoss Web Services - Stack Native Core
18:21:42,573 INFO [NativeServerConfig] 3.0.5.GA
18:21:52,836 ERROR [AbstractKernelController] Error installing to ClassLoader: name=vfsfile:/Applications/jboss-5.0.1.GA/server/default/deploy/TwitterEAR.ear/ state=Describe mode=Manual requiredState=ClassLoader
org.jboss.deployers.spi.DeploymentException: Error creating classloader for vfsfile:/Applications/jboss-5.0.1.GA/server/default/deploy/TwitterEAR.ear/
at org.jboss.deployers.spi.DeploymentException.rethrowAsDeploymentException(DeploymentException.java:49)
at org.jboss.deployers.structure.spi.helpers.AbstractDeploymentContext.createClassLoader(AbstractDeploymentContext.java:576)
at org.jboss.deployers.structure.spi.helpers.AbstractDeploymentUnit.createClassLoader(AbstractDeploymentUnit.java:159)
at org.jboss.deployers.spi.deployer.helpers.AbstractClassLoaderDeployer.deploy(AbstractClassLoaderDeployer.java:53)
at org.jboss.deployers.plugins.deployers.DeployerWrapper.deploy(DeployerWrapper.java:171)
at org.jboss.deployers.plugins.deployers.DeployersImpl.doDeploy(DeployersImpl.java:1439)
at org.jboss.deployers.plugins.deployers.DeployersImpl.doInstallParentFirst(DeployersImpl.java:1157)
at org.jboss.deployers.plugins.deployers.DeployersImpl.install(DeployersImpl.java:1098)
at org.jboss.dependency.plugins.AbstractControllerContext.install(AbstractControllerContext.java:348)
at org.jboss.dependency.plugins.AbstractController.install(AbstractController.java:1598)
at org.jboss.dependency.plugins.AbstractController.incrementState(AbstractController.java:934)
at org.jboss.dependency.plugins.AbstractController.resolveContexts(AbstractController.java:1062)
at org.jboss.dependency.plugins.AbstractController.resolveContexts(AbstractController.java:984)
at org.jboss.dependency.plugins.AbstractController.change(AbstractController.java:822)
at org.jboss.dependency.plugins.AbstractController.change(AbstractController.java:553)
at org.jboss.deployers.plugins.deployers.DeployersImpl.process(DeployersImpl.java:781)
at org.jboss.deployers.plugins.main.MainDeployerImpl.process(MainDeployerImpl.java:698)
at org.jboss.system.server.profileservice.ProfileServiceBootstrap.loadProfile(ProfileServiceBootstrap.java:304)
at org.jboss.system.server.profileservice.ProfileServiceBootstrap.start(ProfileServiceBootstrap.java:205)
at org.jboss.bootstrap.AbstractServerImpl.start(AbstractServerImpl.java:405)
at org.jboss.Main.boot(Main.java:209)
at org.jboss.Main$1.run(Main.java:547)
at java.lang.Thread.run(Thread.java:613)
Caused by: java.lang.Error: Error visiting FileHandler@5567366[path=TwitterEAR.ear/TwitterPoCEJB.jar context=file:/Applications/jboss-5.0.1.GA/server/default/deploy/ real=file:/Applications/jboss-5.0.1.GA/server/default/deploy/TwitterEAR.ear/TwitterPoCEJB.jar/]
at org.jboss.classloading.plugins.vfs.PackageVisitor.determineAllPackages(PackageVisitor.java:98)
at org.jboss.deployers.vfs.plugins.classloader.VFSDeploymentClassLoaderPolicyModule.determineCapabilities(VFSDeploymentClassLoaderPolicyModule.java:108)
at org.jboss.classloading.spi.dependency.Module.getCapabilities(Module.java:654)
at org.jboss.classloading.spi.dependency.Module.determinePackageNames(Module.java:713)
at org.jboss.classloading.spi.dependency.Module.getPackageNames(Module.java:698)
at org.jboss.deployers.vfs.plugins.classloader.VFSDeploymentClassLoaderPolicyModule.determinePolicy(VFSDeploymentClassLoaderPolicyModule.java:129)
at org.jboss.deployers.vfs.plugins.classloader.VFSDeploymentClassLoaderPolicyModule.determinePolicy(VFSDeploymentClassLoaderPolicyModule.java:48)
at org.jboss.classloading.spi.dependency.policy.ClassLoaderPolicyModule.getPolicy(ClassLoaderPolicyModule.java:195)
at org.jboss.deployers.vfs.plugins.classloader.VFSDeploymentClassLoaderPolicyModule.getPolicy(VFSDeploymentClassLoaderPolicyModule.java:122)
at org.jboss.deployers.vfs.plugins.classloader.VFSDeploymentClassLoaderPolicyModule.getPolicy(VFSDeploymentClassLoaderPolicyModule.java:48)
at org.jboss.classloading.spi.dependency.policy.ClassLoaderPolicyModule.registerClassLoaderPolicy(ClassLoaderPolicyModule.java:131)
at org.jboss.deployers.plugins.classloading.AbstractLevelClassLoaderSystemDeployer.createClassLoader(AbstractLevelClassLoaderSystemDeployer.java:120)
at org.jboss.deployers.structure.spi.helpers.AbstractDeploymentContext.createClassLoader(AbstractDeploymentContext.java:562)
... 21 more
Caused by: java.lang.RuntimeException: java.util.zip.ZipException: error in opening zip file
at org.jboss.virtual.plugins.context.AbstractExceptionHandler.handleZipEntriesInitException(AbstractExceptionHandler.java:39)
at org.jboss.virtual.plugins.context.helpers.NamesExceptionHandler.handleZipEntriesInitException(NamesExceptionHandler.java:63)
at org.jboss.virtual.plugins.context.zip.ZipEntryContext.ensureEntries(ZipEntryContext.java:610)
at org.jboss.virtual.plugins.context.zip.ZipEntryContext.checkIfModified(ZipEntryContext.java:757)
at org.jboss.virtual.plugins.context.zip.ZipEntryContext.getChildren(ZipEntryContext.java:829)
at org.jboss.virtual.plugins.context.zip.ZipEntryHandler.getChildren(ZipEntryHandler.java:159)
at org.jboss.virtual.plugins.context.DelegatingHandler.getChildren(DelegatingHandler.java:121)
at org.jboss.virtual.plugins.context.AbstractVFSContext.getChildren(AbstractVFSContext.java:211)
at org.jboss.virtual.plugins.context.AbstractVFSContext.visit(AbstractVFSContext.java:328)
at org.jboss.virtual.plugins.context.AbstractVFSContext.visit(AbstractVFSContext.java:298)
at org.jboss.virtual.VFS.visit(VFS.java:433)
at org.jboss.virtual.VirtualFile.visit(VirtualFile.java:437)
at org.jboss.virtual.VirtualFile.getChildren(VirtualFile.java:386)
at org.jboss.virtual.VirtualFile.getChildren(VirtualFile.java:367)
at org.jboss.classloading.plugins.vfs.PackageVisitor.visit(PackageVisitor.java:200)
at org.jboss.virtual.plugins.vfs.helpers.WrappingVirtualFileHandlerVisitor.visit(WrappingVirtualFileHandlerVisitor.java:62)
at org.jboss.virtual.plugins.context.AbstractVFSContext.visit(AbstractVFSContext.java:353)
at org.jboss.virtual.plugins.context.AbstractVFSContext.visit(AbstractVFSContext.java:298)
at org.jboss.virtual.VFS.visit(VFS.java:433)
at org.jboss.virtual.VirtualFile.visit(VirtualFile.java:437)
at org.jboss.classloading.plugins.vfs.PackageVisitor.determineAllPackages(PackageVisitor.java:94)
... 33 more
Caused by: java.util.zip.ZipException: error in opening zip file
at java.util.zip.ZipFile.open(Native Method)
at java.util.zip.ZipFile.<init>(ZipFile.java:203)
at java.util.zip.ZipFile.<init>(ZipFile.java:234)
at org.jboss.virtual.plugins.context.zip.ZipFileWrapper.ensureZipFile(ZipFileWrapper.java:175)
at org.jboss.virtual.plugins.context.zip.ZipFileWrapper.acquire(ZipFileWrapper.java:245)
at org.jboss.virtual.plugins.context.zip.ZipEntryContext.initEntries(ZipEntryContext.java:470)
at org.jboss.virtual.plugins.context.zip.ZipEntryContext.ensureEntries(ZipEntryContext.java:603)
... 51 more
18:21:56,772 INFO [JMXConnectorServerService] JMX Connector server: service:jmx:rmi://localhost/jndi/rmi://localhost:1090/jmxconnector
18:21:56,959 INFO [MailService] Mail Service bound to java:/Mail
18:21:59,450 WARN [JBossASSecurityMetadataStore] WARNING! POTENTIAL SECURITY RISK. It has been detected that the MessageSucker component which sucks messages from one node to another has not had its password changed from the installation default. Please see the JBoss Messaging user guide for instructions on how to do this.
18:21:59,489 WARN [AnnotationCreator] No ClassLoader provided, using TCCL: org.jboss.managed.api.annotation.ManagementComponent
18:21:59,789 INFO [TransactionManagerService] JBossTS Transaction Service (JTA version) - JBoss Inc.
18:21:59,789 INFO [TransactionManagerService] Setting up property manager MBean and JMX layer
18:22:00,040 INFO [TransactionManagerService] Initializing recovery manager
18:22:00,160 INFO [TransactionManagerService] Recovery manager configured
18:22:00,160 INFO [TransactionManagerService] Binding TransactionManager JNDI Reference
18:22:00,184 INFO [TransactionManagerService] Starting transaction recovery manager
18:22:01,243 INFO [Http11Protocol] Initializing Coyote HTTP/1.1 on http-localhost%2F127.0.0.1-8080
18:22:01,244 INFO [AjpProtocol] Initializing Coyote AJP/1.3 on ajp-localhost%2F127.0.0.1-8009
18:22:01,244 INFO [StandardService] Starting service jboss.web
18:22:01,247 INFO [StandardEngine] Starting Servlet Engine: JBoss Web/2.1.2.GA
18:22:01,336 INFO [Catalina] Server startup in 161 ms
18:22:01,360 INFO [TomcatDeployment] deploy, ctxPath=/invoker
18:22:02,014 INFO [TomcatDeployment] deploy, ctxPath=/web-console
18:22:02,459 INFO [TomcatDeployment] deploy, ctxPath=/jbossws
18:22:02,570 INFO [RARDeployment] Required license terms exist, view vfszip:/Applications/jboss-5.0.1.GA/server/default/deploy/jboss-local-jdbc.rar/META-INF/ra.xml
18:22:02,586 INFO [RARDeployment] Required license terms exist, view vfszip:/Applications/jboss-5.0.1.GA/server/default/deploy/jboss-xa-jdbc.rar/META-INF/ra.xml
18:22:02,645 INFO [RARDeployment] Required license terms exist, view vfszip:/Applications/jboss-5.0.1.GA/server/default/deploy/jms-ra.rar/META-INF/ra.xml
18:22:02,663 INFO [RARDeployment] Required license terms exist, view vfszip:/Applications/jboss-5.0.1.GA/server/default/deploy/mail-ra.rar/META-INF/ra.xml
18:22:02,705 INFO [RARDeployment] Required license terms exist, view vfszip:/Applications/jboss-5.0.1.GA/server/default/deploy/quartz-ra.rar/META-INF/ra.xml
18:22:02,801 INFO [SimpleThreadPool] Job execution threads will use class loader of thread: main
18:22:02,850 INFO [QuartzScheduler] Quartz Scheduler v.1.5.2 created.
18:22:02,857 INFO [RAMJobStore] RAMJobStore initialized.
18:22:02,858 INFO [StdSchedulerFactory] Quartz scheduler 'DefaultQuartzScheduler' initialized from default resource file in Quartz package: 'quartz.properties'
18:22:02,858 INFO [StdSchedulerFactory] Quartz scheduler version: 1.5.2
18:22:02,859 INFO [QuartzScheduler] Scheduler DefaultQuartzScheduler_$_NON_CLUSTERED started.
18:22:03,888 INFO [ConnectionFactoryBindingService] Bound ConnectionManager 'jboss.jca:service=DataSourceBinding,name=DefaultDS' to JNDI name 'java:DefaultDS'
18:22:04,530 INFO [ServerPeer] JBoss Messaging 1.4.1.GA server [0] started
18:22:04,624 INFO [QueueService] Queue[/queue/DLQ] started, fullSize=200000, pageSize=2000, downCacheSize=2000
18:22:04,632 WARN [ConnectionFactoryJNDIMapper] supportsFailover attribute is true on connection factory: jboss.messaging.connectionfactory:service=ClusteredConnectionFactory but post office is non clustered. So connection factory will *not* support failover
18:22:04,632 WARN [ConnectionFactoryJNDIMapper] supportsLoadBalancing attribute is true on connection factory: jboss.messaging.connectionfactory:service=ClusteredConnectionFactory but post office is non clustered. So connection factory will *not* support load balancing
18:22:04,742 INFO [ConnectionFactory] Connector bisocket://localhost:4457 has leasing enabled, lease period 10000 milliseconds
18:22:04,742 INFO [ConnectionFactory] org.jboss.jms.server.connectionfactory.ConnectionFactory@6af9ad started
18:22:04,746 INFO [QueueService] Queue[/queue/ExpiryQueue] started, fullSize=200000, pageSize=2000, downCacheSize=2000
18:22:04,747 INFO [ConnectionFactory] Connector bisocket://localhost:4457 has leasing enabled, lease period 10000 milliseconds
18:22:04,747 INFO [ConnectionFactory] org.jboss.jms.server.connectionfactory.ConnectionFactory@5ac953 started
18:22:04,750 INFO [ConnectionFactory] Connector bisocket://localhost:4457 has leasing enabled, lease period 10000 milliseconds
18:22:04,750 INFO [ConnectionFactory] org.jboss.jms.server.connectionfactory.ConnectionFactory@e8fa3a started
18:22:05,050 INFO [ConnectionFactoryBindingService] Bound ConnectionManager 'jboss.jca:service=ConnectionFactoryBinding,name=JmsXA' to JNDI name 'java:JmsXA'
18:22:05,073 INFO [TomcatDeployment] deploy, ctxPath=/
18:22:05,178 INFO [TomcatDeployment] deploy, ctxPath=/jmx-console
18:22:05,290 ERROR [ProfileServiceBootstrap] Failed to load profile: Summary of incomplete deployments (SEE PREVIOUS ERRORS FOR DETAILS):
DEPLOYMENTS IN ERROR:
Deployment "vfsfile:/Applications/jboss-5.0.1.GA/server/default/deploy/TwitterEAR.ear/" is in error due to the following reason(s): java.util.zip.ZipException: error in opening zip file
18:22:05,301 INFO [Http11Protocol] Starting Coyote HTTP/1.1 on http-localhost%2F127.0.0.1-8080
18:22:05,364 INFO [AjpProtocol] Starting Coyote AJP/1.3 on ajp-localhost%2F127.0.0.1-8009
18:22:05,373 INFO [ServerImpl] JBoss (Microcontainer) [5.0.1.GA (build: SVNTag=JBoss_5_0_1_GA date=200902231221)] Started in 43s:467ms

The mentioned EAR and WAR files are both in the deploy directory. Does anybody have any hints?
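One way to narrow this down is to check whether the nested TwitterPoCEJB.jar (the archive named in the stack trace) is a well-formed zip file at all, since the root cause is java.util.zip.ZipException while the JBoss VFS scans it. A minimal Java sketch along these lines; the path argument is whatever archive you want to test:

import java.io.IOException;
import java.util.Enumeration;
import java.util.zip.ZipEntry;
import java.util.zip.ZipException;
import java.util.zip.ZipFile;

public class ZipCheck {
    public static void main(String[] args) {
        // Point this at the suspect archive, e.g. the TwitterPoCEJB.jar
        // inside the exploded TwitterEAR.ear directory.
        String path = args[0];
        try (ZipFile zip = new ZipFile(path)) {
            int count = 0;
            Enumeration<? extends ZipEntry> entries = zip.entries();
            while (entries.hasMoreElements()) {
                entries.nextElement();
                count++;
            }
            System.out.println(path + " is a readable zip with " + count + " entries");
        } catch (ZipException ze) {
            // The same failure JBoss reports during deployment.
            System.out.println(path + " is corrupt or not a zip archive: " + ze.getMessage());
        } catch (IOException ioe) {
            System.out.println("Cannot read " + path + ": " + ioe.getMessage());
        }
    }
}

If the jar fails this check, re-exporting it from Eclipse (or rebuilding the EAR) is usually the fix; a common cause is an archive that was copied into deploy/ while it was still being written.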

    Read the article

  • JasperReport issue when used with Struts 2 - getting null in final PDF file

    - by Nirmal
Hello, I am writing about my JasperReports problem with Struts 2. The following is the code that I am trying to execute.

struts.xml contains:

<action name="myJasperTest1" class="temp.JasperAction1">
  <result name="success" type="jasper">
    <param name="location">/jasper/our_compiled_template.jasper</param>
    <param name="dataSource">myList</param>
    <param name="format">PDF</param>
  </result>
</action>

My JasperAction1 contains:

import java.util.ArrayList;
import java.util.List;
import com.sufalam.business.model.util.LegacyJasperInputStream;
import net.sf.jasperreports.engine.JasperCompileManager;
import com.opensymphony.xwork2.ActionSupport;
import com.sufalam.business.finance.model.bean.Account;
import java.io.FileInputStream;
import net.sf.jasperreports.engine.design.JasperDesign;
import net.sf.jasperreports.engine.xml.JRXmlLoader;

public class JasperAction1 extends ActionSupport {
    /** List to use as our JasperReports dataSource. */
    private List<Account> myList;

    public String execute() throws Exception {
        // Create some imaginary persons.
        Account a1 = new Account();
        Account a2 = new Account();
        a1.setId(77);
        a1.setName("aaa");
        a2.setId(88);
        a2.setName("bbb");
        // Store people in our dataSource list (normally would come from database).
        myList = new ArrayList<Account>();
        myList.add(a1);
        myList.add(a2);
        // Normally we would provide a pre-compiled .jrxml file
        // or check to make sure we don't compile on every request.
        try {
            JasperDesign design = JRXmlLoader.load(
                new LegacyJasperInputStream(new FileInputStream("F://backup//backup 26-5(final acegi)//SufalamERP//build//web//jasper//report2.jrxml")));
            JasperCompileManager.compileReportToFile(design, "F://backup//backup 26-5(final acegi)//SufalamERP//build//web//jasper//our_compiled_template1.jasper");
        } catch (Exception e) {
            e.printStackTrace();
            return ERROR;
        }
        return SUCCESS;
    }

    public List<Account> getMyList() {
        return myList;
    }
}

I am using the iReport plugin for NetBeans to generate the .jrxml file.
After designing my page using the iReport wizard, my our_jasper_template.jrxml file contains the following code:

<?xml version="1.0" encoding="UTF-8"?>
<jasperReport xmlns="http://jasperreports.sourceforge.net/jasperreports" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://jasperreports.sourceforge.net/jasperreports http://jasperreports.sourceforge.net/xsd/jasperreport.xsd" name="null" pageWidth="595" pageHeight="842" columnWidth="535" leftMargin="20" rightMargin="20" topMargin="20" bottomMargin="20">
  <queryString language="SQL">
    <![CDATA[SELECT account_master."name" AS account_master_name, account_master."id" AS account_master_id FROM "public"."account_master" account_master]]>
  </queryString>
  <field name="account_master_name" class="java.lang.String">
    <fieldDescription><![CDATA[]]></fieldDescription>
  </field>
  <field name="account_master_id" class="java.lang.Integer">
    <fieldDescription><![CDATA[]]></fieldDescription>
  </field>
  <background>
    <band/>
  </background>
  <title>
    <band height="58">
      <line>
        <reportElement x="0" y="8" width="555" height="1"/>
      </line>
      <line>
        <reportElement positionType="FixRelativeToBottom" x="0" y="51" width="555" height="1"/>
      </line>
      <staticText>
        <reportElement x="65" y="13" width="424" height="35"/>
        <textElement textAlignment="Center">
          <font size="26" isBold="true"/>
        </textElement>
        <text><![CDATA[Classic template]]></text>
      </staticText>
    </band>
  </title>
  <pageHeader>
    <band/>
  </pageHeader>
  <columnHeader>
    <band/>
  </columnHeader>
  <detail>
    <band height="40">
      <staticText>
        <reportElement x="0" y="0" width="139" height="20"/>
        <textElement>
          <font size="12"/>
        </textElement>
        <text><![CDATA[account_master_name]]></text>
      </staticText>
      <textField>
        <reportElement x="139" y="0" width="416" height="20"/>
        <textElement>
          <font size="12"/>
        </textElement>
        <textFieldExpression class="java.lang.String"><![CDATA[$F{account_master_name}]]></textFieldExpression>
      </textField>
      <staticText>
        <reportElement x="0" y="20" width="139" height="20"/>
        <textElement>
          <font size="12"/>
        </textElement>
        <text><![CDATA[account_master_id]]></text>
      </staticText>
      <textField>
        <reportElement x="139" y="20" width="416" height="20"/>
        <textElement>
          <font size="12"/>
        </textElement>
        <textFieldExpression class="java.lang.Integer"><![CDATA[$F{account_master_id}]]></textFieldExpression>
      </textField>
    </band>
  </detail>
  <columnFooter>
    <band/>
  </columnFooter>
  <pageFooter>
    <band height="26">
      <textField evaluationTime="Report" pattern="" isBlankWhenNull="false">
        <reportElement key="textField" x="516" y="6" width="36" height="19" forecolor="#000000" backcolor="#FFFFFF"/>
        <box>
          <topPen lineWidth="0.0" lineStyle="Solid" lineColor="#000000"/>
          <leftPen lineWidth="0.0" lineStyle="Solid" lineColor="#000000"/>
          <bottomPen lineWidth="0.0" lineStyle="Solid" lineColor="#000000"/>
          <rightPen lineWidth="0.0" lineStyle="Solid" lineColor="#000000"/>
        </box>
        <textElement>
          <font size="10"/>
        </textElement>
        <textFieldExpression class="java.lang.String"><![CDATA["" + $V{PAGE_NUMBER}]]></textFieldExpression>
      </textField>
      <textField pattern="" isBlankWhenNull="false">
        <reportElement key="textField" x="342" y="6" width="170" height="19" forecolor="#000000" backcolor="#FFFFFF"/>
        <box>
          <topPen lineWidth="0.0" lineStyle="Solid" lineColor="#000000"/>
          <leftPen lineWidth="0.0" lineStyle="Solid" lineColor="#000000"/>
          <bottomPen lineWidth="0.0" lineStyle="Solid" lineColor="#000000"/>
          <rightPen lineWidth="0.0" lineStyle="Solid" lineColor="#000000"/>
        </box>
        <textElement textAlignment="Right">
          <font size="10"/>
        </textElement>
        <textFieldExpression class="java.lang.String"><![CDATA["Page " + $V{PAGE_NUMBER} + " of "]]></textFieldExpression>
      </textField>
      <textField pattern="" isBlankWhenNull="false">
        <reportElement key="textField" x="1" y="6" width="209" height="19" forecolor="#000000" backcolor="#FFFFFF"/>
        <box>
          <topPen lineWidth="0.0" lineStyle="Solid" lineColor="#000000"/>
          <leftPen lineWidth="0.0" lineStyle="Solid" lineColor="#000000"/>
          <bottomPen lineWidth="0.0" lineStyle="Solid" lineColor="#000000"/>
          <rightPen lineWidth="0.0" lineStyle="Solid" lineColor="#000000"/>
        </box>
        <textElement>
          <font size="10"/>
        </textElement>
        <textFieldExpression class="java.util.Date"><![CDATA[new Date()]]></textFieldExpression>
      </textField>
    </band>
  </pageFooter>
  <summary>
    <band/>
  </summary>
</jasperReport>

Now the problem I am facing is that when I execute this action class, it gives me the following output in the PDF:
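One thing that stands out in the report definition: the .jrxml declares the fields account_master_name and account_master_id, while the Account bean exposes the properties name and id. JasperReports' bean-based data sources (and the Struts 2 jasper result filling from myList) typically resolve each report field by bean property name, and fields that cannot be resolved render as null, which matches the symptom here. Below is a minimal, hypothetical stand-alone sketch that fills the compiled template directly with a JRBeanCollectionDataSource; the file paths are placeholders, and renaming the report fields to name and id (or adding matching getters to Account) should make the values appear.

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;

import net.sf.jasperreports.engine.JasperExportManager;
import net.sf.jasperreports.engine.JasperFillManager;
import net.sf.jasperreports.engine.JasperPrint;
import net.sf.jasperreports.engine.data.JRBeanCollectionDataSource;

import com.sufalam.business.finance.model.bean.Account;

public class ReportSmokeTest {
    public static void main(String[] args) throws Exception {
        // Same kind of data the action builds.
        List<Account> accounts = new ArrayList<Account>();
        Account a1 = new Account();
        a1.setId(77);
        a1.setName("aaa");
        accounts.add(a1);

        // Report fields must match bean property names ("name", "id");
        // fields like account_master_name cannot be resolved on Account
        // and will therefore render as null.
        JasperPrint print = JasperFillManager.fillReport(
                "our_compiled_template.jasper",   // path to the compiled template (adjust)
                new HashMap<String, Object>(),    // no report parameters
                new JRBeanCollectionDataSource(accounts));
        JasperExportManager.exportReportToPdfFile(print, "smoke-test.pdf");
    }
}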

    Read the article

  • Why do I get a Detached Entity exception when upgrading Spring Boot 1.1.4 to 1.1.5?

    - by mmeany
On updating Spring Boot from 1.1.4 to 1.1.5, a simple web application started generating detached entity exceptions. Specifically, a post-authentication interceptor that bumped the number of visits was causing the problem. A quick check of loaded dependencies showed that Spring Data had been updated from 1.6.1 to 1.6.2, and a further check of the change log shows a couple of fixed issues relating to optimistic locking, version fields and JPA. Well, I am using a version field, and it starts out as null, following the specification's recommendation not to set it. I have produced a very simple test scenario where I get detached entity exceptions if the version field starts as null or zero. If I create an entity with version 1, however, I do not get these exceptions. Is this expected behaviour, or is there still something amiss?

Below is the test scenario. The service layer has been annotated @Transactional. Each test case makes multiple calls to the service layer; the tests work with detached entities, as this is the scenario I am dealing with in the full-blown application. The test case comprises four tests:

Test 1 - versionNullCausesAnExceptionOnUpdate(): In this test the version field in the detached object is null. This is how I would usually create the object prior to passing it to the service. This test fails with a detached entity exception. I would have expected this test to pass. If there is a flaw in the test then the rest of the scenario is probably moot.

Test 2 - versionZeroCausesExceptionOnUpdate(): In this test I have set the version to the value Long(0L). This is an edge-case test, included because I found a reference to zero version values in the Spring Data change log. This test fails with a detached entity exception. Of interest simply because the following two tests pass, leaving this as an anomaly.

Test 3 - versionOneDoesNotCausesExceptionOnUpdate(): In this test the version field is set to the value Long(1L). Not something I would usually do, but considering the notes in the Spring Data change log I decided to give it a go. This test passes. I would not usually set the version field, but this looks like a work-around until I figure out why the first test is failing.

Test 4 - versionOneDoesNotCausesExceptionWithMultipleUpdates(): Encouraged by the result of test 3, I pushed the scenario a step further and performed multiple updates on the entity that started life with a version of Long(1L). This test passes. Reinforcement that this may be a useable work-around.

The entity:

package com.mvmlabs.domain;

import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;
import javax.persistence.Table;
import javax.persistence.Version;

@Entity
@Table(name = "user_details")
public class User {
    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    private Long id;

    @Version
    private Long version;

    @Column(nullable = false, unique = true)
    private String username;

    @Column(nullable = false)
    private Integer numberOfVisits;

    public Long getId() { return id; }
    public void setId(Long id) { this.id = id; }
    public Long getVersion() { return version; }
    public void setVersion(Long version) { this.version = version; }
    public Integer getNumberOfVisits() { return numberOfVisits == null ? 0 : numberOfVisits; }
    public void setNumberOfVisits(Integer numberOfVisits) { this.numberOfVisits = numberOfVisits; }
    public String getUsername() { return username; }
    public void setUsername(String username) { this.username = username; }
}

The repository:

package com.mvmlabs.dao;

import org.springframework.data.repository.CrudRepository;
import com.mvmlabs.domain.User;

public interface UserDao extends CrudRepository<User, Long> {
}

The service interface:

package com.mvmlabs.service;

import com.mvmlabs.domain.User;

public interface UserService {
    User save(User user);
    User loadUser(Long id);
    User registerVisit(User user);
}

The service implementation:

package com.mvmlabs.service;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Propagation;
import org.springframework.transaction.annotation.Transactional;
import org.springframework.transaction.support.TransactionSynchronizationManager;

import com.mvmlabs.dao.UserDao;
import com.mvmlabs.domain.User;

@Service
@Transactional(propagation = Propagation.REQUIRED, readOnly = false)
public class UserServiceJpaImpl implements UserService {
    @Autowired
    private UserDao userDao;

    @Transactional(readOnly = true)
    @Override
    public User loadUser(Long id) {
        return userDao.findOne(id);
    }

    @Override
    public User registerVisit(User user) {
        user.setNumberOfVisits(user.getNumberOfVisits() + 1);
        return userDao.save(user);
    }

    @Override
    public User save(User user) {
        return userDao.save(user);
    }
}

The application class:

package com.mvmlabs;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.EnableAutoConfiguration;
import org.springframework.context.annotation.ComponentScan;
import org.springframework.context.annotation.Configuration;

@Configuration
@ComponentScan
@EnableAutoConfiguration
public class Application {
    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }
}

The POM:

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.mvmlabs</groupId>
  <artifactId>jpa-issue</artifactId>
  <version>0.0.1-SNAPSHOT</version>
  <packaging>jar</packaging>
  <name>spring-boot-jpa-issue</name>
  <description>JPA Issue between spring boot 1.1.4 and 1.1.5</description>
  <parent>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-parent</artifactId>
    <version>1.1.5.RELEASE</version>
    <relativePath /> <!-- lookup parent from repository -->
  </parent>
  <dependencies>
    <dependency>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-starter-data-jpa</artifactId>
    </dependency>
    <dependency>
      <groupId>org.hsqldb</groupId>
      <artifactId>hsqldb</artifactId>
      <scope>runtime</scope>
    </dependency>
    <dependency>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-starter-test</artifactId>
      <scope>test</scope>
    </dependency>
  </dependencies>
  <properties>
    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
    <start-class>com.mvmlabs.Application</start-class>
    <java.version>1.7</java.version>
  </properties>
  <build>
    <plugins>
      <plugin>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-maven-plugin</artifactId>
      </plugin>
    </plugins>
  </build>
</project>

The application properties:

spring.jpa.hibernate.ddl-auto: create
spring.jpa.hibernate.naming_strategy: org.hibernate.cfg.ImprovedNamingStrategy
spring.jpa.database: HSQL
spring.jpa.show-sql: true
spring.datasource.url=jdbc:hsqldb:file:./target/testdb
spring.datasource.username=sa
spring.datasource.password=
spring.datasource.driverClassName=org.hsqldb.jdbcDriver

The test case:

package com.mvmlabs;

import org.junit.Assert;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.SpringApplicationConfiguration;
import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;

import com.mvmlabs.domain.User;
import com.mvmlabs.service.UserService;

@RunWith(SpringJUnit4ClassRunner.class)
@SpringApplicationConfiguration(classes = Application.class)
public class ApplicationTests {
    @Autowired
    UserService userService;

    @Test
    public void versionNullCausesAnExceptionOnUpdate() throws Exception {
        User user = new User();
        user.setUsername("Version Null");
        user.setNumberOfVisits(0);
        user.setVersion(null);
        user = userService.save(user);
        user = userService.registerVisit(user);
        Assert.assertEquals(new Integer(1), user.getNumberOfVisits());
        Assert.assertEquals(new Long(1L), user.getVersion());
    }

    @Test
    public void versionZeroCausesExceptionOnUpdate() throws Exception {
        User user = new User();
        user.setUsername("Version Zero");
        user.setNumberOfVisits(0);
        user.setVersion(0L);
        user = userService.save(user);
        user = userService.registerVisit(user);
        Assert.assertEquals(new Integer(1), user.getNumberOfVisits());
        Assert.assertEquals(new Long(1L), user.getVersion());
    }

    @Test
    public void versionOneDoesNotCausesExceptionOnUpdate() throws Exception {
        User user = new User();
        user.setUsername("Version One");
        user.setNumberOfVisits(0);
        user.setVersion(1L);
        user = userService.save(user);
        user = userService.registerVisit(user);
        Assert.assertEquals(new Integer(1), user.getNumberOfVisits());
        Assert.assertEquals(new Long(2L), user.getVersion());
    }

    @Test
    public void versionOneDoesNotCausesExceptionWithMultipleUpdates() throws Exception {
        User user = new User();
        user.setUsername("Version One Multiple");
        user.setNumberOfVisits(0);
        user.setVersion(1L);
        user = userService.save(user);
        user = userService.registerVisit(user);
        user = userService.registerVisit(user);
        user = userService.registerVisit(user);
        Assert.assertEquals(new Integer(3), user.getNumberOfVisits());
        Assert.assertEquals(new Long(4L), user.getVersion());
    }
}

The first two tests fail with a detached entity exception. The last two tests pass as expected. Now change the Spring Boot version to 1.1.4 and rerun: all tests pass. Are my expectations wrong?

Edit: This code is saved to GitHub at https://github.com/mmeany/spring-boot-detached-entity-issue

    Read the article

  • WCF on Windows Phone 7 (Silverlight 4)

    - by Igor Zevaka
    Has anyone been able to communicate using WCF on the Windows Phone 7 Series emulator? I've been trying for the past two days and it's just not happening for me. I can get a normal Silverlight control to work in both Silverlight 3 and Silverlight 4, but not the phone version. Here are the two versions I've tried:

    Version 1 - Using the async pattern

        BasicHttpBinding basicHttpBinding = new BasicHttpBinding();
        EndpointAddress endpointAddress = new EndpointAddress("http://localhost/wcf/Authentication.svc");
        Wcf.IAuthentication auth1 = new ChannelFactory<Wcf.IAuthentication>(basicHttpBinding, endpointAddress).CreateChannel(endpointAddress);

        AsyncCallback callback = (result) =>
        {
            Action<string> write = (str) =>
            {
                this.Dispatcher.BeginInvoke(delegate
                {
                    // Display something
                });
            };
            try
            {
                Wcf.IAuthentication auth = result.AsyncState as Wcf.IAuthentication;
                Wcf.AuthenticationResponse response = auth.EndLogin(result);
                write(response.Success.ToString());
            }
            catch (Exception ex)
            {
                write(ex.Message);
                System.Diagnostics.Debug.WriteLine(ex.Message);
            }
        };

        auth1.BeginLogin("user0", "test0", callback, auth1);

    This version breaks on this line:

        Wcf.IAuthentication auth1 = new ChannelFactory<Wcf.IAuthentication>(basicHttpBinding, endpointAddress).CreateChannel(endpointAddress);

    throwing a System.NotSupportedException. The exception is not very descriptive, and the call stack is equally unhelpful:

        at System.ServiceModel.DiagnosticUtility.ExceptionUtility.BuildMessage(Exception x)
        at System.ServiceModel.DiagnosticUtility.ExceptionUtility.LogException(Exception x)
        at System.ServiceModel.DiagnosticUtility.ExceptionUtility.ThrowHelperError(Exception e)
        at System.ServiceModel.ChannelFactory`1.CreateChannel(EndpointAddress address)
        at WindowsPhoneApplication2.MainPage.DoLogin()
        ....

    Version 2 - Blocking WCF call

    Here is the version that doesn't use the async pattern:

        [System.ServiceModel.ServiceContract]
        public interface IAuthentication
        {
            [System.ServiceModel.OperationContract]
            AuthenticationResponse Login(string user, string password);
        }

        public class WcfClientBase<TChannel> : System.ServiceModel.ClientBase<TChannel> where TChannel : class
        {
            public WcfClientBase(string name, bool streaming)
                : base(GetBinding(streaming), GetEndpoint(name))
            {
                ClientCredentials.UserName.UserName = WcfConfig.UserName;
                ClientCredentials.UserName.Password = WcfConfig.Password;
            }

            public WcfClientBase(string name) : this(name, false) { }

            private static System.ServiceModel.Channels.Binding GetBinding(bool streaming)
            {
                System.ServiceModel.BasicHttpBinding binding = new System.ServiceModel.BasicHttpBinding();
                binding.MaxReceivedMessageSize = 1073741824;
                if (streaming)
                {
                    //binding.TransferMode = System.ServiceModel.TransferMode.Streamed;
                }
                /*if(XXXURLXXX.StartsWith("https"))
                {
                    binding.Security.Mode = BasicHttpSecurityMode.Transport;
                    binding.Security.Transport.ClientCredentialType = HttpClientCredentialType.None;
                }*/
                return binding;
            }

            private static System.ServiceModel.EndpointAddress GetEndpoint(string name)
            {
                return new System.ServiceModel.EndpointAddress(WcfConfig.Endpoint + name + ".svc");
            }

            protected override TChannel CreateChannel()
            {
                throw new System.NotImplementedException();
            }
        }

        auth.Login("test0", "password0");

    This version crashes in the System.ServiceModel.ClientBase<TChannel> constructor.
    The call stack is a bit different:

        at System.Reflection.MethodInfo.get_ReturnParameter()
        at System.ServiceModel.Description.ServiceReflector.HasNoDisposableParameters(MethodInfo methodInfo)
        at System.ServiceModel.Description.TypeLoader.CreateOperationDescription(ContractDescription contractDescription, MethodInfo methodInfo, MessageDirection direction, ContractReflectionInfo reflectionInfo, ContractDescription declaringContract)
        at System.ServiceModel.Description.TypeLoader.CreateOperationDescriptions(ContractDescription contractDescription, ContractReflectionInfo reflectionInfo, Type contractToGetMethodsFrom, ContractDescription declaringContract, MessageDirection direction)
        at System.ServiceModel.Description.TypeLoader.CreateContractDescription(ServiceContractAttribute contractAttr, Type contractType, Type serviceType, ContractReflectionInfo& reflectionInfo, Object serviceImplementation)
        at System.ServiceModel.Description.TypeLoader.LoadContractDescriptionHelper(Type contractType, Type serviceType, Object serviceImplementation)
        at System.ServiceModel.Description.TypeLoader.LoadContractDescription(Type contractType)
        at System.ServiceModel.ChannelFactory`1.CreateDescription()
        at System.ServiceModel.ChannelFactory.InitializeEndpoint(Binding binding, EndpointAddress address)
        at System.ServiceModel.ChannelFactory`1..ctor(Binding binding, EndpointAddress remoteAddress)
        at System.ServiceModel.ClientBase`1..ctor(Binding binding, EndpointAddress remoteAddress)
        at Wcf.WcfClientBase`1..ctor(String name, Boolean streaming)
        at Wcf.WcfClientBase`1..ctor(String name)
        at Wcf.AuthenticationClient..ctor()
        at WindowsPhoneApplication2.MainPage.DoLogin()
        ...

    Any ideas?
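
    One route I haven't finished exploring: a proxy generated with Add Service Reference, which exposes the event-based async pattern instead of going through ChannelFactory. A sketch of what that would look like, assuming a generated AuthenticationClient with the usual LoginAsync/LoginCompleted pair (the class and member names here are my guesses at what the tool would generate):

        var client = new AuthenticationClient(); // generated by Add Service Reference
        client.LoginCompleted += (s, e) =>
        {
            if (e.Error != null)
            {
                System.Diagnostics.Debug.WriteLine(e.Error.Message);
                return;
            }
            // e.Result would be the AuthenticationResponse
            Dispatcher.BeginInvoke(() => System.Diagnostics.Debug.WriteLine(e.Result.Success));
        };
        client.LoginAsync("user0", "test0");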

    Read the article

  • Sharepoint Search crawl not working

    - by Satish
    Search crawling is erroring out on my MOSS 2007 installation. For every web app I have, the crawl logs show the following error:

        http://mysites.devserver - URL could not be resolved. The host may be unavailable, or the proxy settings are not configured correctly on the index server.

    The Application event log also has the corresponding error:

        The start address http://mysites.devserver cannot be crawled. Context: Application 'SSPMain', Catalog 'Portal_Content'. Details: The URL of the item could not be resolved. The repository might be unavailable, or the crawler proxy settings are not configured. To configure the crawler proxy settings, use the Proxy and Timeout page in search administration. (0x80041221)

    I'm using Windows Server 2008. I tried accessing the site using the above-mentioned URL and it's available. I applied the registry setting for the loopback issue found at http://support.microsoft.com/kb/896861, but still no luck. Any ideas?
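
    For reference, the other variant of that KB (method 1, BackConnectionHostNames) would look roughly like this on the index server - the host name is taken from my crawl start address, the rest is the documented registry path:

        reg add "HKLM\SYSTEM\CurrentControlSet\Control\Lsa\MSV1_0" /v BackConnectionHostNames /t REG_MULTI_SZ /d "mysites.devserver"

    (followed by a reboot, or at least an IISRESET). I have not confirmed whether this fixes the crawler specifically.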

    Read the article

  • How to resolve "svn: Can't find a temporary directory: Internal error"?

    - by HorusKol
    I have already googled the message, and I have plenty of disk space available on the SVN server (about 4% usage of 150 GB). I have noticed that when I run echo $TMPDIR at the command prompt on the SVN server, I get nothing. What is making this a little confusing is that, so far, I only get this message from one location when I do an svn diff - the error does not come up when I try from three other computers (one of which is testing against the exact same repository; the other two use different repositories on the same SVN server). About the only difference I can see is that the broken working copy connects to the server by IP address where all the others use a server name (although this resolves over DNS to the same IP address). I'm hoping I don't have to scratch the broken working copy and check out a new one - unfortunately, this is a legacy project and not all changes have been properly revisioned.
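
    For reference, the sanity checks I'd run next on the server, assuming a Bourne-style shell (an unset TMPDIR just means the /tmp fallback is used, so /tmp itself must be intact):

        # /tmp needs to exist, have space, and be world-writable with the sticky bit
        ls -ld /tmp     # expect: drwxrwxrwt
        df -h /tmp

        # point the environment at a known-good directory and retry
        export TMPDIR=/tmp
        svn diff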

    Read the article

  • Exchange 2010 periodically stops responding to SMTP events with error 421 4.4.1 Connection timed out

    - by Michael Shimmins
    I'm after some help diagnosing why Exchange 2010 Enterprise stops responding to SMTP connections. I can't find a pattern to it. It doesn't appear to be an actual timeout, as the server responds immediately with the error. To reproduce it, I telnet into the server on port 25 and issue an EHLO; the server immediately replies with the 421:

        421 4.4.1 Connection timed out

    Once this starts happening, I've found that restarting the Exchange box is the only reliable way to get mail flowing again. Sometimes restarting the Transport service or the Mailbox Assistants service seems to fix it, but this could be coincidental, as it often has no effect.
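
    One thing that's cheap to rule out, although back pressure normally answers 452 rather than 421, so it may well not be the culprit here: transport back-pressure events in the Application log. A quick look, assuming default transport logging, from PowerShell on the Exchange box:

        Get-EventLog -LogName Application -Source MSExchangeTransport |
            Where-Object { 15004, 15006, 15007 -contains $_.EventID } |
            Select-Object TimeGenerated, EventID, Message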

    Read the article

  • HTTP response time profiling

    - by Sparsh Gupta
    Hello. I have an nginx reverse proxy serving close to 600-700 requests per second. I have a Munin HTTP load time plugin which is outputting this: http://monitor.wingify.com/munin/visualwebsiteoptimizer.com/lb1.visualwebsiteoptimizer.com-http_loadtime.html Now, the problem is I am seeing some spikes in the graph. Expected response times should always be under 200ms. I am keeping an eye on syslog and messages, but I am unable to figure out the actual cause. I was wondering if there is a good HTTP response time profiling system I can install or embed with this nginx server to get detailed reports/logs on the breakdown of time taken by different components, and what exactly is causing the spikes. The profiling system would also help me understand bottlenecks and how I can further reduce latency. Most important right now is to investigate the cause of the spikes in the HTTP load time graphs (a similar pattern is reported by external monitors - Pingdom) and to fix it to get consistent response times. Thanks
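
    One low-cost starting point (a sketch - the format name and log path are my own): nginx can log per-request timing itself, which at least separates upstream time from total time and narrows the spikes down to the proxy or the backend:

        log_format timing '$remote_addr [$time_local] "$request" $status '
                          'rt=$request_time urt=$upstream_response_time';

        access_log /var/log/nginx/timing.log timing;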

    Read the article

  • WOL for Asus M5A97 built in Realtek Network Adapter

    - by madphp
    I can't get Wake-on-LAN to work for my built-in network adapter. It's an ASUS M5A97 motherboard, and the network adapter is a Realtek PCIe GBE. In the adapter's settings I have:

        Shutdown WakeOnLan - Enabled
        Wake on Magic Packet - Enabled
        Wake on Pattern Match - Enabled
        WOL & Shutdown Link Speed - 10 Mbps

    First, I set up a magic packet client to listen, and the packet is getting through. I have also checked these in Power Management for the network adapter:

        Allow the computer to turn off this device to save power
        Allow this device to wake the computer
        Allow a magic packet to wake the computer
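
    A quick way to confirm Windows has actually armed the adapter for wake (the adapter name is whatever Device Manager shows - mine below is illustrative), from an elevated command prompt:

        powercfg -devicequery wake_armed
        powercfg -deviceenablewake "Realtek PCIe GBE Family Controller"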

    Read the article

  • Microsoft Office documents collaboration - Open Source alternative

    - by Saggi Malachi
    I am looking for a good solution for collaborating on Microsoft Office documents. We currently just edit directly on a Samba share, but it's one big mess: sometimes people leave the office with their laptops while docs are still open, so swap files remain behind and then nobody is sure what's going on. Is there any good, simple, open source solution based on Linux? I've tried Alfresco, but it is much more than what I need; we have an internal wiki for most collaboration, and I just need a solution for the stuff we do in Microsoft Office (mostly Excel files - the rest lives in the wiki). EDIT: Some more info as requested - we are a very small group: 4 full-time employees and a few freelancers. The best idea I've got so far is just managing it in a Subversion repository with a lock-modify-unlock policy, but I'd love to hear about better solutions - as sketched below. Thanks!
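
    If the Subversion idea pans out, the locking workflow would look roughly like this (the file name is just an example):

        # mark the document lock-required, so checkouts arrive read-only
        svn propset svn:needs-lock '*' budget.xlsx
        svn commit -m "Require a lock to edit the budget spreadsheet"

        # before editing
        svn lock budget.xlsx -m "Updating Q3 figures"
        # ... edit in Excel, save ...
        svn commit -m "Q3 figures"   # the commit releases the lock automatically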

    Read the article

  • ASP.NET MVC jquery.UI dialog - How to validate the dialog's input on server and return error?

    - by Rick
    I am using jQuery 1.4.2, ASP.NET MVC 2 and jQuery UI 1.8. I am creating a data input dialog which works OK when all the data is valid, but I want to validate the input on the server and return an error to the dialog describing the problem, and I am not quite sure how to do that while keeping the dialog open. The dialog is opened when a link is clicked. The solution may be to bypass more of the MVC framework's default binding that handles the submit button clicks, creates the expected ProfilePermission object and calls the controller's AddPermission POST action method, but I was hoping there might be an easier way without having to write more jQuery/JavaScript code to handle the button clicks and pass the data to the server.

    My script code looks like:

        $("#dialog").dialog({
            modal: true,
            position: ['center', 180],
            width: 500,
            height: 130,
            autoOpen: false
        });

        $(".addPermissionDialog").click(function (event) {
            event.preventDefault();
            $("#dialog").dialog('open');
            return false;
        });

    My view:

        <div id="dialog" title="Add Permission">
            <%: Html.ValidationSummary("") %>
            <% using (Html.BeginForm("AddPermission", "Profile")) { %>
                <%: Html.Hidden("PersonId") %>
                <%: Html.Hidden("ProfileId") %>
                <div class="editor-label">
                    <label for="PersonName">User Name:</label>
                    <%: Html.TextBox("PersonName") %>
                    <label for="PermissionType">Permission:</label>
                    <select name="PermissionTypeId" id="PermissionTypeId">
                        <option value="2">Edit</option>
                        <option value="3">View</option>
                    </select>
                </div>
                <br />
                <p>
                    <input type="submit" name="saveButton" value="Add Permission" />
                    <input type="submit" id="cancelButton" name="cancelButton" value="Cancel" />
                    <script type="text/javascript">
                        document.getElementById("cancelButton").disableValidation = true;
                    </script>
                </p>
            <% } %>
        </div>
        <br />
        <p>
            <%: Html.ActionLink("Add Permission", "AddPermission", new { profileId = Model.First().ProfileId }, new { @class = "addPermissionDialog" }) %>
        </p>

    My controller action:

        [AcceptVerbs("Post")]
        [HandleError]
        public ActionResult AddPermission(string cancelButton, ProfilePermission profilePermission)
        {
            ViewData["Controller"] = controllerName;
            ViewData["CurrentCategory"] = "AddPermission";
            ViewData["ProfileId"] = profilePermission.ProfileId;

            PermissionTypes permission = repository.GetAccessRights(profilePermission.ProfileId);
            if (permission == PermissionTypes.View || permission == PermissionTypes.None)
            {
                ViewData["Message"] = "You do not have access rights (Edit or Owner permissions) to this profile";
                return View("Error");
            }

            // If cancel, return to previous page
            if (cancelButton != null)
            {
                return RedirectToAction("ManagePermissions", new { profileId = profilePermission.ProfileId });
            }

            if (ModelState.IsValid)
            {
                repository.SavePermission(profilePermission);
                return RedirectToAction("ManagePermissions", new { profileId = profilePermission.ProfileId });
            }

            // IF YOU GET HERE THERE WAS AN ERROR
            return PartialView(profilePermission); // The desire is to redisplay the dialog with an error message
        }
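
    My current thinking, if there is no simpler way: intercept the submit and post via Ajax, and have the action return validation errors as JSON so the dialog can stay open. A rough sketch, not wired up yet - the { success, errors } shape is my own invention, and the controller would need a matching return Json(...) branch instead of PartialView:

        $("#dialog form").submit(function (event) {
            event.preventDefault();
            $.post($(this).attr("action"), $(this).serialize(), function (result) {
                if (result.success) {
                    $("#dialog").dialog("close");
                    window.location.reload();
                } else {
                    // redisplay the server-side validation messages inside the open dialog
                    $("#dialog .validation-summary-errors").remove();
                    var list = $("<ul class='validation-summary-errors'/>");
                    $.each(result.errors, function (i, msg) {
                        list.append($("<li/>").text(msg));
                    });
                    $("#dialog").prepend(list);
                }
            }, "json");
        });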

    Read the article

  • URL Rewrite is adding HTTPS to my canonical redirects in IIS7

    - by Derek Hunziker
    Hello, I have the following rule defined in my Web.config:

        <rule name="Enforce canonical hostname" stopProcessing="true">
            <match url="(.*)" />
            <conditions>
                <add input="{HTTP_HOST}" negate="true" pattern="^www\.mydomain\.org$" />
            </conditions>
            <action type="Redirect" url="http://www.mydomain.com/" redirectType="Permanent" />
        </rule>

    What I am experiencing is strange... I appear to be redirected to https://www.mydomain.com/, which causes my browser to hang. I do not have SSL encryption turned on, nor do I have any special authorization rules. The web server in question is behind an F5 load balancer. Any ideas?
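
    Side note, separate from the https mystery: as written, the action also discards the matched path, so every redirect lands on the root. If I understand the back-reference syntax correctly, it should carry {R:1}, something like:

        <action type="Redirect" url="http://www.mydomain.com/{R:1}" redirectType="Permanent" />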

    Read the article

  • Ninject 3.0 MVC kernel.bind error Auto Registration

    - by user295734
    Getting an error on kernel.Bind(scanner => ...); "scanner" has the little error squiggle under it in VS 2010:

        Cannot convert lambda expression to type 'System.Type[]' because it is not a delegate type

    I'm trying to auto-register the way the old kernel.Scan worked in 2.0, and I cannot figure out what I am doing wrong. I have added and removed so many Ninject packages that I am completely lost; this is getting to be a big waste of time.

        using System;
        using System.Web;
        using Microsoft.Web.Infrastructure.DynamicModuleHelper;
        using Ninject;
        using Ninject.Web.Common;
        //using Ninject.Extensions.Conventions;
        using Ninject.Web.WebApi;
        using Ninject.Web.Mvc;
        using CommonServiceLocator.NinjectAdapter;
        using System.Reflection;
        using System.IO;
        using LR.Repository;
        using LR.Repository.Interfaces;
        using LR.Service.Interfaces;
        using System.Web.Http;

        public static class NinjectWebCommon
        {
            private static readonly Bootstrapper bootstrapper = new Bootstrapper();

            /// <summary>
            /// Starts the application.
            /// </summary>
            public static void Start()
            {
                DynamicModuleUtility.RegisterModule(typeof(OnePerRequestHttpModule));
                DynamicModuleUtility.RegisterModule(typeof(NinjectHttpModule));
                bootstrapper.Initialize(CreateKernel);
            }

            /// <summary>
            /// Stops the application.
            /// </summary>
            public static void Stop()
            {
                bootstrapper.ShutDown();
            }

            /// <summary>
            /// Creates the kernel that will manage your application.
            /// </summary>
            /// <returns>The created kernel.</returns>
            private static IKernel CreateKernel()
            {
                var kernel = new StandardKernel();
                kernel.Bind<Func<IKernel>>().ToMethod(ctx => () => new Bootstrapper().Kernel);
                kernel.Bind<IHttpModule>().To<HttpApplicationInitializationHttpModule>();
                RegisterServices(kernel);
                return kernel;
            }

            /// <summary>
            /// Load your modules or register your services here!
            /// </summary>
            /// <param name="kernel">The kernel.</param>
            private static void RegisterServices(IKernel kernel)
            {
                kernel.Bind(scanner => scanner
                    .FromAssembliesInPath(Path.GetDirectoryName(Assembly.GetExecutingAssembly().Location))
                    .Select(IsServiceType)
                    .BindToDefaultInterface()
                    .Configure(binding => binding.InSingletonScope())
                );
            }

            private static bool IsServiceType(Type type)
            {
                // temp return true; .Any() is not recognized either.
                return true; // type.IsClass && type.GetInterfaces().Any(intface => intface.Name == "I" + type.Name);
            }
        }
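
    In case it helps anyone answer: my best guess so far is that the lambda overload of Bind() is an extension method living in Ninject.Extensions.Conventions - and the using for it is commented out above - so the compiler can only see Bind(params Type[]), hence the "System.Type[]" complaint. The same would explain .Any() not being recognized (no using System.Linq). If that's right, the fix would be roughly:

        PM> Install-Package Ninject.Extensions.Conventions

        // then, at the top of NinjectWebCommon.cs, re-enable/add:
        using Ninject.Extensions.Conventions;
        using System.Linq;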

    Read the article

  • Repairing yum's repositories on RHEL5

    - by The Rook
    I am using RHEL5, and yum is missing many packages, such as Apache, PHP, and all the PHP libraries. I have added the RPMforge repository, but I am still missing these packages. This is an i686 machine and there might not be many i686 packages available; I think that if I force i386 packages I'll have serious problems. How do I make sure I have a large number of compatible packages on a RHEL5 system? I didn't install this system - is it normal for RHEL5 to have virtually no useful packages in yum? How do RHEL5 administrators use yum without introducing conflicts with currently installed packages? Should I ditch yum and use apt? Thanks!
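
    For reference, the first things I'd want to verify (the EPEL URL below is illustrative - check for the current epel-release RPM for EL5 before running it):

        # what repositories does yum actually see, and which are enabled?
        yum repolist all
        yum clean all && yum makecache

        # EPEL carries most Apache/PHP extras for EL5
        rpm -Uvh http://download.fedoraproject.org/pub/epel/5/i386/epel-release-5-4.noarch.rpm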

    Read the article

< Previous Page | 236 237 238 239 240 241 242 243 244 245 246 247  | Next Page >