Search Results

Search found 10644 results on 426 pages for 'flash integration'.


  • Exalogic 2.0.1 Tea Break Snippets - Creating a ModifyJeOS VirtualBox

    - by The Old Toxophilist
    Following on from my previous blog entry "Modifying the Base Template", I decided to put together a quick blog to show how to create a small VirtualBox guest that can be used to execute ModifyJeOS and hence edit your templates. One of the main advantages of this is that templates can be created away from the Exalogic environment. For the guest OS I chose OEL 6u3 and decided to create it as a basic server because I did not require a graphical interface, but it is a simple change to create it with a GUI. Required Software: VirtualBox and Oracle Enterprise Linux. Creating the VM: I'll assume that the reader is experienced with VirtualBox and installing OEL, and hence will keep this section brief. Create a new VirtualBox guest and select Oracle Linux 64-bit. Follow through the creation process and select a dynamically allocated disk with the default 12GB disk size. The actual image will be a lot smaller than this, but the OEL install will fail with insufficient disk space if you attempt a smaller size. Once the guest has been created, attach the previously downloaded OEL 6u3 ISO to the CD drive and start the guest. Install OEL: On starting the guest, the system will boot off the attached OEL 6u3 ISO and take you through the standard installation process. Select all the appropriate information, but when you reach the installation type select Basic Server, because we do not need the additional packages and only need access through the command-line interface. Complete the installation and reboot the guest. At this point we have a basic OEL server running. Installing Guest Additions: Before we can easily access the guest we will need to add the VirtualBox Guest Additions. These provide better keyboard and mouse integration and allow access to the shared folders on the host machine. Before we can do this we will need to do two things: enable networking and install additional rpms. To enable the network interface (eth0), which appears to be disabled by default, we can execute: ifup eth0. This will start the eth0 connection, but once the guest is rebooted the network will be down again. To resolve this you will need to edit the /etc/sysconfig/network-scripts/ifcfg-eth0 file and change the ONBOOT parameter to "yes"; a typical ifcfg-eth0 after this change is sketched below.
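    A minimal ifcfg-eth0 along those lines is sketched here; apart from ONBOOT, the values are illustrative assumptions only (the device name and DHCP versus static addressing depend on your guest), not required settings:

    # /etc/sysconfig/network-scripts/ifcfg-eth0 (illustrative values)
    DEVICE=eth0
    BOOTPROTO=dhcp
    ONBOOT=yes   # bring the interface up automatically at every boot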
    Now that the network is enabled we will need to install a number of additional rpms. First we will need to configure the yum repository as follows:

    [ol6_latest]
    name=Oracle Linux $releasever Latest ($basearch)
    baseurl=http://public-yum.oracle.com/repo/OracleLinux/OL6/latest/$basearch/
    gpgkey=http://public-yum.oracle.com/RPM-GPG-KEY-oracle-ol6
    gpgcheck=1
    enabled=1

    [ol6_ga_base]
    name=Oracle Linux $releasever GA installation media copy ($basearch)
    baseurl=http://public-yum.oracle.com/repo/OracleLinux/OL6/0/base/$basearch/
    gpgkey=http://public-yum.oracle.com/RPM-GPG-KEY-oracle-ol6
    gpgcheck=1
    enabled=0

    [ol6_u1_base]
    name=Oracle Linux $releasever Update 1 installation media copy ($basearch)
    baseurl=http://public-yum.oracle.com/repo/OracleLinux/OL6/1/base/$basearch/
    gpgkey=http://public-yum.oracle.com/RPM-GPG-KEY-oracle-ol6
    gpgcheck=1
    enabled=0

    [ol6_u2_base]
    name=Oracle Linux $releasever Update 2 installation media copy ($basearch)
    baseurl=http://public-yum.oracle.com/repo/OracleLinux/OL6/2/base/$basearch/
    gpgkey=http://public-yum.oracle.com/RPM-GPG-KEY-oracle-ol6
    gpgcheck=1
    enabled=0

    [ol6_u3_base]
    name=Oracle Linux $releasever Update 3 installation media copy ($basearch)
    baseurl=http://public-yum.oracle.com/repo/OracleLinux/OL6/3/base/$basearch/
    gpgkey=http://public-yum.oracle.com/RPM-GPG-KEY-oracle-ol6
    gpgcheck=1
    enabled=0

    [ol6_UEK_latest]
    name=Latest Unbreakable Enterprise Kernel for Oracle Linux $releasever ($basearch)
    baseurl=http://public-yum.oracle.com/repo/OracleLinux/OL6/UEK/latest/$basearch/
    gpgkey=http://public-yum.oracle.com/RPM-GPG-KEY-oracle-ol6
    gpgcheck=1
    enabled=1

    [ol6_UEK_base]
    name=Unbreakable Enterprise Kernel for Oracle Linux $releasever ($basearch)
    baseurl=http://public-yum.oracle.com/repo/OracleLinux/OL6/UEK/base/$basearch/
    gpgkey=http://public-yum.oracle.com/RPM-GPG-KEY-oracle-ol6
    gpgcheck=1
    enabled=0

    Once the repository has been edited we will need to execute the following yum commands:

    yum update
    yum install gcc
    yum install kernel-uek-devel
    yum install kernel-devel
    yum install createrepo

    At this point we have all the additional packages required to install the VirtualBox Guest Additions. So select Devices -> Install Guest Additions on your running guest. This will simply place the VirtualBoxGuestAdditions.iso in the virtual CD drive, and we will need to execute the following before we can run it:

    mkdir /media/cdrom
    mount -t iso9660 -o ro /dev/cdrom /media/cdrom
    cd /media/cdrom/
    ls
    ./VBoxLinuxAdditions.run

    This will initiate the install and kernel rebuild. What you will notice is that during the installation a "Failed" status will be displayed, but this is simply because we have no graphical components. The installation will also have added the vboxsf group to the system; to access any shared folders we create, our user will need to be a member of this group, and so the next stage is to add the root user to this group as follows:

    usermod -G vboxsf root
    cat /etc/group
    cat /etc/passwd
    init 0

    Now simply shut down the guest and add the shared folder within your guest's settings. Install ModifyJeOS: Once the shared folder has been added, restart the guest and change directory into the shared folder (/media/sf_<folder name>). For the next step I am assuming the ModifyJeOS rpms are located in the shared folder.
    We can simply execute:

    rpm -ivh ovm-modify-jeos-1.1.0-17.el5.noarch.rpm
    # Test with modifyjeos

    Using ModifyJeOS: I have a modified MountSystemImg.sh script that should be copied into the /root/bin directory (you may need to create this), and from here it can be executed from any location:

    #!/bin/sh
    # The script assumes it's being run from the directory containing the System.img
    # Export for later i.e. during unmount
    export LOOP=`losetup -f`
    export SYSTEMIMG=/mnt/elsystem
    export TEMPLATEDIR=`pwd`
    # Make Temp Mount Directory
    mkdir -p $SYSTEMIMG
    # Create Loop for the System Image
    losetup $LOOP System.img
    kpartx -a $LOOP
    mount /dev/mapper/`basename $LOOP`p2 $SYSTEMIMG
    # Change Dir into mounted Image
    cd $SYSTEMIMG
    echo "######################################################################"
    echo "###                                                                ###"
    echo "###  Starting Bash shell for editing. When completed log out to   ###"
    echo "###  unmount the System.img file.                                  ###"
    echo "###                                                                ###"
    echo "######################################################################"
    echo
    bash
    cd ~
    cd $TEMPLATEDIR
    umount $SYSTEMIMG
    kpartx -d $LOOP
    losetup -d $LOOP
    rm -rf $SYSTEMIMG

    This script will simply create a mount directory, mount the System.img and then start a new shell in the mounted directory. On exiting the shell it will unmount the System.img. It only requires that you execute the script in the directory containing the System.img. These can be created under the mounted shared directory. In the example below I have extracted the base template within the shared folder and then renamed it OEL_40GB_ROOT before changing into that directory and executing the script.

    Read the article

  • Translating Your Customizations

    - by Richard Bingham
    This blog post explains the basics of translating the customizations you can make to Fusion Applications products, covering both composer-based customizations and the generic design-time customizations done via JDeveloper. Introduction Like most Oracle Applications, Fusion Applications installs on-premise with a US-English base language that is, in Release 7, supported by the option to add up to a total of 22 additional language packs (in Oracle Cloud production environments, languages are pre-installed already). As such many organizations offer their users the option of working with their local language, and logically that should also apply for any customizations as well. Composer-based UI Customizations Customizations made in Page Composer take into consideration the session LOCALE, as set in the user preferences screen, during all customization work, and store the customization in the MDS repository accordingly. As such the actual new or changed values used will only apply for the same language under which the customization was made, and text for any other languages requires a separate upload. See the Resource Bundles section below, which incidentally also applies to custom UI changes done in JDeveloper. You may have noticed this when you select the “Select Text Resource” menu option when editing the text on a page. Using this ensures that the resource bundles are used, whereas if you define a static value in Expression Builder it will never be available for translation. Notice in the screenshot below the “What’s New” custom value I have already defined using the ‘Select Text Resource’ feature is internally using the adfBundle groovy function to pull the custom value for my key (RT_S_1) from the ComposerOverrideBundle. Figure 1 – Page Composer showing the override bundle being used. Business Objects Customizing the Business Objects available in the Applications Composer tool for the CRM products, such as adding additional fields, also operates using the session language. Translating these additional values for these fields into other installed languages requires loading additional resource bundles, again as described below. Reports and Analytics Most customizations to Reports and BI Analytics are essentially reorganizations and visualizations of existing number and text data from the system, and as such will use the appropriate values based on the user's session language. Where a translated value or string exists for that session language, it will be used without the need for additional work. Extending through the addition of brand new reports and analytics requires another method of loading the translated strings, as part of what is known as ‘Localizing’ the BI Catalog and Metadata. This time it is via an export/import of XML data through the BI Administrators console, and is described in the OBIEE Admin Guide. Fusion Applications reports based on BI Publisher are already defined on a template-per-locale basis, and in addition provide an extra process for getting the data for translation and reloading. This again uses the standard resource bundle format. Loading a custom report is illustrated in this video from our YouTube channel which shows the screen for both setting the template locale and running an export for translation. 
Fusion Applications Menus Whilst the seeded Navigator and Global Menu values are fully translated when the additional language is installed, if they are customized then the change or new menu item will apply universally, not currently per language. This is set to change in a future release with the new UI Text Editor feature described below. More on Resource Bundles As mentioned above, to provide translations for most of your customizations you need to add values to a resource bundle. This is an industry open standard (OASIS) format XML file with the extension .xliff, and stores translated values for the strings used by ADF at run-time. The general process is that these values are exported from the MDS repository, manually edited, and then imported back in again. This needs to be done by an administrator, via either WLST commands or through Enterprise Manager as per the screenshot below. This is detailed out in the Fusion Applications Extensibility Guide. For SaaS environments the Cloud Operations team can assist. Figure 2 – Enterprise Manager’s MDS export, used for getting resource bundles for manual translation; they are re-imported on the same screen. All customized strings are stored in an override bundle (xliff file) for each locale, suffixed with the language initials, with English ones being saved to the default. As such each language bundle can be easily identified and updated. Similarly if you used JDeveloper to create your own applications as extensions to Fusion Applications you would use the native support for resource bundles, and add them into the faces-config.xml file for inclusion in your application. An example is this ADF customization video from our YouTube channel. JDeveloper also supports automatic synchronization between your underlying resource bundles and any translatable strings you add – very handy. For more information see chapters on “Using Automatic Resource Bundle Integration in JDeveloper” and “Manually Defining Resource Bundles and Locales” in the Oracle Fusion Middleware Web User Interface Developer’s Guide for Oracle Application Development Framework. FND Messages and Look-ups FND Messages, as defined here, are not used for UI labels (those are known as ‘strings’), but are the responses back to users as a result of an action, such as from a page submit. Each ‘message’ is defined and stored in the related database table (FND_MESSAGES_B), with another (FND_MESSAGES_TL) holding any language-specific values. These come seeded with the additional language installs; however, if you customize the messages via the “Manage Messages” task in Functional Setup Manager, or add new ones, then currently (in Release 7) you’ll need to repeat it for each language. Figure 3 – An FND Message defined in an English user session. Similarly Look-ups are stored in a translation table (FND_LOOKUP_VALUES_TL) where appropriate, and can be customized by setting the user's session language and making the change in the Setup and Maintenance task entitled “Manage [Standard|Common] Look-ups”. Online Help All the seeded help is applied as part of each language pack install during the post-install provisioning process. If you are editing or adding custom online help then the Create Help screen provides a drop-down of which language your help customization will apply to. This is shown in the video below from our YouTube channel, and obviously you’ll need to do it for each language in use. What is Coming for Translations? 
Currently planned for Release 8 is something called the User Interface (UI) Text Editor. This tool will allow the editing of all the text shown on the pages and forms of Fusion Applications. This will provide a search based on a particular term or word, say “Worker”, and will allow it to be adjusted, say to “Employee”, which then updates all the Resource Bundles that contain it. In the case of multi-language environments, it will use the user's session language (locale) to know which Resource Bundles to apply the change to. This capability will also support customization sandboxes, to help ensure changes can be tested and approved. It is also interesting to note that the design currently allows any page-specific customizations done using Page Composer or Application Composer to overwrite the global changes done via the UI Text Editor, allowing for special context-sensitive values to still be used. Further Reading and Resources The following short list provides the main resources for digging into more detail on translation support for both Composer and JDeveloper customization projects. There is a dedicated chapter entitled “Translating Custom Text” in the Fusion Applications Extensibility Guide. This has good examples and steps for many tasks, especially administering resource bundles. Using localization formatting (numbers, dates, etc.) for design-time changes is well documented in the Fusion Applications Developer Guide. For more guidelines on general design-time globalization, see either the ‘Internationalizing and Localizing Pages’ chapter in the Oracle Fusion Middleware Web User Interface Developer’s Guide for Oracle Application Development Framework (Oracle Fusion Applications Edition) or the general Oracle Database Globalization Support Guide. The Oracle Architecture ‘A-Team’ provided a recent post on customizing the user session timeout popup, using design-time changes to resource bundles. It has detailed step-by-step examples which can be a useful illustration.

    Read the article

  • Partner Blog Series: PwC Perspectives Part 2 - Jumpstarting your IAM program with R2

    - by Tanu Sood
    Identity and access management (IAM) isn’t a new concept. Over the past decade, companies have begun to address identity management through a variety of solutions that have primarily focused on provisioning. The new-age workforce is converging at a rapid pace, with an ever-increasing demand to use a diverse portfolio of applications and systems to interact and interface with their peers in the industry and customers alike. Oracle has taken a significant leap with its release of Identity and Access Management 11gR2 toward enabling this global workforce to conduct their business in a secure, efficient and effective manner. As companies deal with IAM business drivers, it becomes immediately apparent that holistic, rather than piecemeal, approaches better address their needs. When planning an enterprise-wide IAM solution, the first step is to create a common framework that serves as the foundation on which to build the cost, compliance and business process efficiencies. As a leading industry practice, IAM should be established on a foundation of accurate data for identity management, making this data available in a uniform manner to downstream applications and processes. Mature organizations are looking beyond IAM’s basic benefits to harness more advanced capabilities in user lifecycle management. For any organization looking to embark on an IAM initiative, consider the following use cases in managing and administering user access. Expanding the Enterprise Provisioning Footprint Almost all organizations have some helpdesk resources tied up in handling access requests from users, a distraction from their core job of handling problem tickets. This dependency has mushroomed from the traditional acceptance of provisioning solutions integrating and addressing only a portion of applications in the heterogeneous landscape. Oracle Identity Manager (OIM) 11gR2 solves this problem by offering integration with third-party ticketing systems as “disconnected applications”. It allows for the existing business processes to be seamlessly integrated into the system and tracked throughout their lifecycle. With minimal effort and analysis, an organization can begin integrating OIM with groups or applications that are involved with manually intensive access provisioning and de-provisioning activities. This aspect of OIM allows organizations to on-board applications and associated business processes quickly using out-of-the-box templates and frameworks. This is especially important for organizations looking to fold in users and resources from mergers and acquisitions. Simplifying Access Requests Organizations looking to implement access request solutions often find it challenging to get their users to accept and adopt the new processes. So, how do we improve the user experience, make it intuitive and personalized, and yet simplify the user access process? With R2, OIM helps organizations alleviate the challenge by placing the most-used functionality front and centre in the new user request interface. Roles, application accounts, and entitlements can all be found in the same interface as catalog items, giving business users a single location to go to whenever they need to initiate, approve or track a request. Furthermore, if a particular item is not relevant to a user’s job function or area inside the organization, it can be hidden so as to not overwhelm or confuse the user with superfluous options. 
The ability to customize the user interface to suit your needs helps in exercising the business rules effectively and avoiding access proliferation within the organization. Saving Time with Templates A typical use case that is most beneficial to business users is the flexibility to place, edit, and withdraw requests based on changing circumstances and business needs. With OIM R2, multiple catalog items can now be added and removed from the shopping cart, an ecommerce paradigm that many users are already familiar with. This feature can be especially useful when setting up a large number of new employees or granting an existing department or group access to a newly integrated application. Additionally, users can create their own shopping cart templates in order to complete subsequent requests more quickly. This feature saves the user from having to search for and select items all over again if a request is similar to a previous one. Advanced Delegated Administration A key feature of any provisioning solution should be to empower each business unit in managing their own access requests. By bringing administration closer to the user, you improve user productivity, enable efficiency and alleviate the administration overhead. Doing so requires a federated services model, so that business units capable of shouldering the onus of user lifecycle management for their business users can be enabled to do so. OIM 11gR2 offers advanced administrative options for creating, managing and controlling business logic and workflows through an easy-to-use administrative interface and tools that can be exposed to delegated business administrators. For example, these business administrators can establish or modify how certain requests and operations should be handled within their business unit based on a number of attributes, ranging from the type of request to the risk level of the individual items requested. Closed-Loop Remediation Security continues to be a major concern for most organizations. Identity management solutions bolster security by ensuring only the right users have the right access to the right resources. The ability to prevent unauthorized access and, where it already exists, to detect and remediate it, is a key requirement of a proven enterprise-grade solution. But the challenge with most solutions today is that some of this information still exists in silos. And when changes are made to systems directly, not all information is captured. With R2, Oracle is offering a comprehensive identity governance solution that our customer organizations are leveraging for closed-loop remediation, which gives administrators an automated way to revoke unauthorized access. The change is automatically captured and the action noted for continued management. Conclusion While implementing provisioning solutions, it is important to keep the near-term and the long-term goals in mind. The provisioning solution should always be a part of a larger security and identity management program, with the ability not only to integrate seamlessly with the company’s infrastructure but also to leverage the information and business models compiled and used by the other identity management solutions. This allows organizations to reduce the cost of ownership, close security gaps and leverage the existing infrastructure. And having done so at multiple clients’ sites, this is the approach we recommend. 
In our next post, we will take a journey through our experiences of advising clients looking to upgrade to R2 from a previous version or migrating from a different solution. Meet the Writers:   Praveen Krishna is a Manager in the Advisory Security practice within PwC.  Over the last decade Praveen has helped clients plan, architect and implement Oracle identity solutions across diverse industries.  His experience includes delivering security across diverse topics like network, infrastructure, application and data where he brings a holistic point of view to problem solving. Dharma Padala is a Director in the Advisory Security practice within PwC.  He has been implementing medium to large scale Identity Management solutions across multiple industries including utility, health care, entertainment, retail and financial sectors.   Dharma has 14 years of experience in delivering IT solutions out of which he has been implementing Identity Management solutions for the past 8 years. Scott MacDonald is a Director in the Advisory Security practice within PwC.  He has consulted for several clients across multiple industries including financial services, health care, automotive and retail.   Scott has 10 years of experience in delivering Identity Management solutions. John Misczak is a member of the Advisory Security practice within PwC.  He has experience implementing multiple Identity and Access Management solutions, specializing in Oracle Identity Manager and Business Process Engineering Language (BPEL). Jenny (Xiao) Zhang is a member of the Advisory Security practice within PwC.  She has consulted across multiple industries including financial services, entertainment and retail. Jenny has three years of experience in delivering IT solutions out of which she has been implementing Identity Management solutions for the past one and a half years.

    Read the article

  • Google Analytics on Android

    - by pjv
    There is a specific and official analytics SDK for native Android apps (note that I'm not talking about webpages in apps on a phone). This library basically sends pages and events to Google Analytics and you can view your analytics in exactly the same dashboard as for websites. Since my background is apps rather than websites, and since a lot of the Google Analytics terminology seems particularly inapplicable to a native app, I need some pointers. Please discuss my remarks, provide some clarification where you think I'm off-track, and above all share good experiences! 1. Page Views Pages can mostly be matched to the different Activities (and Dialogs) being displayed. Activities can be visible behind non-full-screen Activities however, though only the top-level Activity can be interacted with. This sort of clashes with a "(page) view". You'd also want at least one page view for each visit and therefore put one page view tracker in the Application class. However this does not constitute a window of sorts. Usually an Activity will open at the same time, so the time spent on that page will have been 0. This will influence your "time spent" statistics. How are these counted anyway? Moreover, there is a loose coupling between the Activities, by means of Intents. A user can, much like on any website, step in at any Activity, although usually this then concerns resuming the application where he left off. This means that the hierarchy of Activities is usually very flat. And since there are no URLs involved, what meaning would using slashes in page titles have, such as "/Home"? All pages would appear on an equal level in the reports, so no content drilldown. Non-unique page views seem to be counted as some kind of indicator of successfulness: how often does the visitor revisit the page. When the user rotates the screen, however, the Activity usually resumes again, thus making it a new page view. This happens a lot. Maybe a well-thought-through placement of the call might solve this, or placing several, I'm not sure. How to deal with Page Views? 2. Events I'd say there are two sorts: a user event, and something that happened, usually as an indirect consequence of the former. The latter particularly is giving me headaches. First of all, many events aren't written in code any more, but pieced logically together by means of Intents. This means that there is no place to put the analytics call. You'd either have to give up this advantage and start doing it the old-fashioned way in favor of good analytics, or just be missing some events. Secondly, as a developer you're not so much interested in when a user clicks a button, but in whether the action that should have been performed really was performed and what the result was. There seems to be no clear way to get resulting data into Google Analytics (what's up with the integers? I want to put in Strings!). What applies to the flat page hierarchy also goes for the event categories. You could do "vertical" categories (topically, that is), but some code is shared "horizontally" and the tracking will be equally shared. Just as with the Intents mechanism, inheritance makes it hard for you to put the tracking in the right places at all times. And I can't really imagine "horizontal" categories. Unless you start making really small categories, such as all the items from the same menu in one category, I have a hard time grasping the concept. Finally, how do you deal with cancelling? 
Usually you both have an explicit cancel mechanism by ways of a button, as well as the implicit cancel when the "back"-button is pressed to leave the activity and there were no changes. The latter also applies to "saves", when the back button is pressed and there ARE changes. How are you consequently going to catch all these if not by doing all the "back"-button work yourself? How to deal with events? 3. Goals For goal types I have choice of: URL Destination, Time on Site, and Pages/Visit. Most apps don't have a funnel that leads the user to some "registration done" or "order placed" page. Apps have either already been bought (in which case you want to stimulate the user to love your app, so that he might bring on new buyers) or are paid for by in-app ads. So URL Destination is not a very important goal. Time on Site also seems troublesome. First, I have some doubt on how this would be measured. Second, I don't necessarily want my user to spend a lot of time in my already paid app, just be active and content. Equivalently, why not mention how frequent a user uses your app? Regarding Pages/Visit I already mentioned how screen orientation changes blow up the page view numbers. In an app I'd be most interested in events/visit to measure the user's involvement/activity. If he's intensively using the app then he must be loving it right? Furthermore, I also have some small funnels (that do not lead to conversion though) that I want to see streamlined. In my mind those funnels would end in events rather than page views but that seems not to be possible. I could also measure clickthroughs on in-app ads, but then I'd need to track those as Page Views rather than Events, in view of "URL Destination". What are smart goals for apps and how can you fit them on top of Analytics? 4. Optimisation Is there a smart way to manually do what "Website Optimiser" does for websites? Most importantly, how would I track different landing page designs? 5. Traffic Sources Referrals deal with installation time referrals, if you're smart enough to get them included. But perhaps I'd also want to get some data which third-party app sends users to my app to perform some actions (this app interoperability is possible via Intents). Many of the terminologies related to "Traffic Sources" seem totally meaningless and there is no possibility of connecting in AdSense. What are smart uses of this data? 6. Visitors Of the "Browser capabilities", "Network Properties" and "Mobile" tabs, many things are pointless as they have no influence on / relation with my mostly offline app that won't use flash anyway. Only if you drill down far enough, can you get to OS versions, which do matter a lot. I even forgot where you could check what exact Android devices visited. What are smart uses of this data? How can you make the relevant info more prominent? 7. Other No in-page analytics. I have to register my app as a web-url (What!?)?
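    For context, the calls under discussion look roughly like the sketch below, using the early standalone GoogleAnalyticsTracker library; the method names (startNewSession, trackPageView, trackEvent, dispatch) are from that SDK generation and may differ in later versions, and the account ID, page path and event names are placeholders:

    import android.app.Activity;
    import android.os.Bundle;
    import com.google.android.apps.analytics.GoogleAnalyticsTracker;

    public class HomeActivity extends Activity {
        private GoogleAnalyticsTracker tracker;

        @Override
        protected void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            // Obtain the singleton tracker and start a session for the
            // app's analytics profile (placeholder account ID).
            tracker = GoogleAnalyticsTracker.getInstance();
            tracker.startNewSession("UA-XXXXXXXX-1", this);
            // Count this Activity as a "page view".
            tracker.trackPageView("/Home");
        }

        private void onSaveClicked() {
            // Record a user action as an event: category, action, label, value.
            tracker.trackEvent("Editing", "Save", "fromHomeScreen", 1);
            tracker.dispatch(); // push queued hits to Google Analytics
        }

        @Override
        protected void onDestroy() {
            super.onDestroy();
            tracker.stopSession();
        }
    }

    In practice the session would typically be started once, for example in the Application class, rather than in every Activity, which is exactly the placement question raised above.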

    Read the article

  • Webcast Q&A: Qualcomm Provides a Seamless Experience for Customers with Oracle WebCenter

    - by kellsey.ruppel
    Last Thursday we had the second webcast in our WebCenter in Action webcast series, "Qualcomm Provides a Seamless Experience for Customers with Oracle WebCenter," where customer Michael Chandler from Qualcomm and Vince Casarez & Gourav Goyal from Oracle Partner Keste shared how Oracle WebCenter is powering Qualcomm’s externally facing website and providing a seamless experience for their customers. In case you missed it, here's a recap of the Q&A. Mike Chandler, Qualcomm Q: Did you run into any issues when integrating all of the different applications together? A: Definitely, our main challenges were in the area of user provisioning and security propagation, all the standard stuff you might expect when hooking up SSO for authentication and authorization. In addition, we spent several iterations getting the UIs in sync. While everyone was given the same digital material to build to, each team interpreted and implemented it their own way. Initially as a user navigated, if you were looking for it, you could see slight variations in color or font or width, stuff like that. So we had to pull all the developers responsible for the UI together and get pixel-level agreement on a lot of things so we could ensure seamless transitions across applications. Q: What has been the biggest benefit your end users have seen? A: Wow, there have been several. An SSO-enabled environment was a huge win for our users. The portal application that this replaced had not really been invested in by the business. With this project, we had full business participation and backing, and it really showed in some key areas like the shopping experience. For example, while ordering in the previous site, the items did not have any pictures or really usable descriptions. A tremendous amount of work was done to try and make the site more intuitive and user friendly. Site performance has also drastically improved thanks to new hardware, improved database design, and of course the fact that ADF has made great strides in runtime performance. Q: Was there any resistance internally when implementing the solution? If so, how did you overcome that? A: Within a large company, I’m sure there is always going to be competition for large projects, as there was here. Once we got through the technical analysis and settled on the technology choices, there was actually no resistance to implementing the solution. This project was fully driven by the business with the aim of long-term growth. I can confidently say that the fact that this project was given the utmost importance by both the business and IT really helped put down any resistance that you would typically see while implementing a new solution. Q: Given the performance, what do you estimate to be the top-end capacity of the system? A: I think our top-end capacity is really only limited by our hardware. I’m comfortable saying we could grow 10x on our current hardware, both in terms of transactions and users. We can easily spin up new JVM instances if needed. We already use fewer JVMs than we had planned. In addition, ADF is doing a very good job with its connection pooling and application module pooling, so we see a very good ratio of users connected to the systems vs. DB connections, without impacting performance. Q: What's the overview or summary of feedback from the users interacting with the site? A: Feedback has been overwhelmingly positive from both the business and our customers. They’re very happy with the new SSO environment, the new LAF, and the performance of the site. 
Of course, it’s not all roses. No matter what, there are always going to be people that don’t like the layout or the color scheme, etc. By and large though, customers are happy and the business is happy. Q: Can you describe the impressions about the site before and after the project within Qualcomm? A: Before the project, the site worked and people were using it, but most people were not happy with it. It was slow and tended to be a bit temperamental; for example, a user would perform a transaction and the system would throw an unexpected error. The user could back up and retry the steps and things would work fine, so why didn’t it work the first time? From a UI perspective, we’d hear comments like it looked like it was built by a high school student.  Vince Casarez & Gourav Goyal, Keste Q: Did you run into any obstacles when implementing the solution? A: It's interesting: some people call them "obstacles"; on this project we just called them "dependencies".  There were both technical and business-related dependencies that we had to work out. Mike points out the SSO dependencies and the coordination and synchronization between the teams to have a seamless login experience and a seamless end user experience.  There was also a set of dependencies on the User Acceptance testing to make sure that everyone understood the use cases for how the system would be used.  With branching into a new market and trying to match the simple user experience that many consumer sites have today, there was always a tendency for the team members to provide their suggestions on how things could be simpler.  But with all the work up front on the user design and getting the business driving this set of experiences, this minimized the downstream suggestions that tend to distract a team.  In this case, all the work up front allowed us to enumerate the "dependencies" and keep the distractions to a minimum. Q: Was there a lot of custom work that needed to be done for this particular solution? A: The focus for this particular solution was really on the custom processes. The interesting thing is that with the data flows and the integration with applications, there are some pre-built integrations, but realistically for the process flow, we had to build those. The framework and tooling we used made things easier so we didn’t have to implement core functionality, like transitioning from screen to screen or from flow to flow. The design feature of Task Flows really helped speed the development and keep the component infrastructure in line with the dynamic processes. Task flows and other elements like Skins are core to the infrastructure or technology stack of Oracle. This then allowed the team to center the project focus around the business flows and use cases to meet the core requirements and keep the project on time. Q: What do you think were the keys to success for rolling out WebCenter? A: The five main keys to success were: 1) Sponsorship from the whole organization around this project, from senior executive agreement, business owners driving functionality, and IT development alignment; 2) Upfront design planning and use case definition to clearly define the project scope and requirements; 3) Focused development and project management aligned with the top-level goals and drivers; 4) User acceptance and usability testing along the way to identify potential issues and direct resolution of the issues; and 5) Constant prioritization of the issues for development to fix by the business.  
It also helps to have great team chemistry and really smart people working on the project. If you missed the webcast, be sure to catch the replay to see a live demonstration of WebCenter in action!  Qualcomm Provides a Seamless Experience for Customers with Oracle WebCenter from Oracle WebCenter

    Read the article

  • CodePlex Daily Summary for Sunday, September 29, 2013

    CodePlex Daily Summary for Sunday, September 29, 2013Popular ReleasesAudioWordsDownloader: AudioWordsDownloader 1.1 build 88: New features -------- list of words (mp3 files) is available upon typing when a download path is defined list of download paths is added paths history settings added Bug fixed ----- case mismatch in word search field fixed path not exist bug fixed when history has been used path, when filled from dialog, not stored refresh autocomplete list after path change word sought is deleted when path is changed at the end sought word list is deleted word list not refreshed download end...Activity Viewer 2012: Activity Viewer 2012 V 5.0.0.3: Planning to add new features: 1. Import/Export rules 2. Tabular mode multi servers connections.Tweetinvi a friendly Twitter C# API: Alpha 0.8.3.0: Version 0.8.3.0 emphasis on the FIlteredStream and ease how to manage Exceptions that can occur due to the network or any other issue you might encounter. Will be available through nuget the 29/09/2013. FilteredStream Features provided by the Twitter Stream API - Ability to track specific keywords - Ability to track specific users - Ability to track specific locations Additional features - Detect the reasons the tweet has been retrieved from the Filtered API. You have access to both the ma...AcDown?????: AcDown????? v4.5: ??●AcDown??????????、??、??、???????。????,????,?????????????????????????。???????????Acfun、????(Bilibili)、??、??、YouTube、??、???、??????、SF????、????????????。 ●??????AcPlay?????,??????、????????????????。 ● AcDown???????C#??,????.NET Framework 2.0??。?????"Acfun?????"。 ??v4.5 ???? AcPlay????????v3.5 ????????,???????????30% ?? ???????GoodManga.net???? ?? ?????????? ?? ??Acfun?????????? ??Bilibili??????????? ?????????flvcd???????? ??SfAcg????????????? ???????????? ???????????????? ????32...OfflineBrowser: Release v1.2: This release includes some multi-threading support, a better progress bar, more JavaScript fixes, and a help system. This release is also portable (can run with no issues from a flash drive).CtrlAltStudio Viewer: CtrlAltStudio Viewer 1.0.0.34288 Release: This release of the CtrlAltStudio Viewer includes the following significant features: Stereoscopic 3D display support. Based on Firestorm viewer 4.4.2 codebase. For more details, see the release notes linked to below. Release notes: http://ctrlaltstudio.com/viewer/release-notes/1-0-0-34288-release Support info: http://ctrlaltstudio.com/viewer/support Privacy policy: http://ctrlaltstudio.com/viewer/privacy Disclaimer: This software is not provided or supported by Linden Lab, the makers of ...CrmSvcUtil Generate Attribute Constants: Generate Attribute Constants (1.0.5018.28159): Built against version 5.0.15 of the CRM SDK Fixed issue where constant for primary key attribute was being duplicated in all entity classes Added ability to override base class for entity classesC# Intellisense for Notepad++: Release v1.0.6.0: Added support for classless scripts To avoid the DLLs getting locked by OS use MSI file for the installation.CS-Script for Notepad++: Release v1.0.6.0: Added support for classless scripts To avoid the DLLs getting locked by OS use MSI file for the installation.SimpleExcelReportMaker: Serm 0.02: SourceCode and SampleMagick.NET: Magick.NET 6.8.7.001: Magick.NET linked with ImageMagick 6.8.7.0. Breaking changes: - ToBitmap method of MagickImage returns a png instead of a bmp. - Changed the value for full transparency from 255(Q8)/65535(Q16) to 0. 
- MagickColor now uses floats instead of Byte/UInt16.Media Companion: Media Companion MC3.578b: With the feedback received over the renaming of Movie Folders, and files, there has been some refinement done. As well as I would like to introduce Blu-Ray movie folder support, for Pre-Frodo and Frodo onwards versions of XBMC. To start with, Context menu option for renaming movies, now has three sub options: Movie & Folder, Movie only & Folder only. The option Manual Movie Rename needs to be selected from Movie Preferences, but the autoscrape boxes do not need to be selected. Blu Ray Fo...WDTVHubGen - Adds Metadata, thumbnails and subtitles to WDTV Live Hubs: WDTVHubGen v2.1.3.api release: This is for the brave at heart, this is the maint release to update to the new movie api. please send feedback on fix requests.FFXIV Crafting Simulator: Crafting Simulator 2.3: - Major refactoring of the code behind. - Added a current durability and a current CP textbox.DNN CMS Platform: 07.01.02: Major HighlightsAdded the ability to manage the Vanity URL prefix Added the ability to filter members in the member directory by role Fixed issue where the user could inadvertently click the login button multiple times Fixed issues where core classes could not be used in out of process cache provider Fixed issue where profile visibility submenu was not displayed correctly Fixed issue where the member directory was broken when Convert URL to lowercase setting was enabled Fixed issu...Rawr: Rawr 5.4.1: This is the Downloadable WPF version of Rawr!For web-based version see http://elitistjerks.com/rawr.php You can find the version notes at: http://rawr.codeplex.com/wikipage?title=VersionNotes Rawr Addon (NOT UPDATED YET FOR MOP)We now have a Rawr Official Addon for in-game exporting and importing of character data hosted on Curse. The Addon does not perform calculations like Rawr, it simply shows your exported Rawr data in wow tooltips and lets you export your character to Rawr (including ba...Sample MVC4 EF Codefirst Architecture: RazMVCWebApp ver 1.1: Signal R sample is added.CODE Framework: 4.0.30923.0: See change notes in the documentation section for details on what's new. Note: If you download the class reference help file with, you have to right-click the file, pick "Properties", and then unblock the file, as many browsers flag the file as blocked during download (for security reasons) and thus hides all content.JayData -The unified data access library for JavaScript: JayData 1.3.2 - Indian Summer Edition: JayData is a unified data access library for JavaScript to CRUD + Query data from different sources like WebAPI, OData, MongoDB, WebSQL, SQLite, HTML5 localStorage, Facebook or YQL. The library can be integrated with KendoUI, Angular.js, Knockout.js or Sencha Touch 2 and can be used on Node.js as well. See it in action in this 6 minutes video KendoUI examples: JayData example site Examples for map integration JayData example site What's new in JayData 1.3.2 - Indian Summer Edition For detai...ZXing.Net: ZXing.Net 0.12.0.0: sync with rev. 
2892 of the java version new PDF417 decoder improved Aztec decoder global speed improvements direct Kinect support for ColorImageFrame better Structured Append support many other small bug fixes and improvementsNew ProjectsCACHEDB: CLIENT-DATABASE || CLIENT_CACHEDB-DATABASEClassic WiX Burn Theme: A WiX Burn theme inspired by the classic WiX wizard user interface.CryptStr.Fody: A post-build weaver that encrypts literal strings in your .NET assemblies without breaking ClickOnce.Easy Code: A setting framework.EduSoft: This is a school eg.GameStuff: GameStuff is a library of Physics and Geometrics concepts for video game. Nekora Test Project: Nekora test projectPopCorn Console Game: Simple console gameRadioController: This project started from people installing Tablets in Mustangs. You would typically loose most control of the radio. This projects brings that back!Random searcher i pochodne: Wyszukiwarka plików multimedialnych i czego dusza zapragnie.SporkRandom: A .NET (C#, Visual Basic) interface for the true random number generator service of random.org

    Read the article

  • Projected Results

    - by Sylvie MacKenzie, PMP
    Excerpt from PROFIT - ORACLE - by Monica Mehta Yasser Mahmud has seen a revolution in project management over the past decade. During that time, the former Primavera product strategist (who joined Oracle when his company was acquired in 2008) has not only observed a transformation in the way IT systems support corporate projects but also in the role project portfolio management (PPM) plays in the enterprise. “15 years ago project management was the domain of the project management office (PMO),” Mahmud recalls of earlier days. “But over the course of the past decade, we've seen it transform into a mission-critical enterprise discipline that has made Primavera indispensable in the board room. Now, as a senior manager, a board member, or a C-level executive you have direct and complete visibility into what’s kind of going on in the organization—at a level of detail that you're going to consume that information.” Now serving as Oracle’s vice president of product strategy and industry marketing, Mahmud shares his thoughts on how Oracle’s Primavera solutions have evolved and how best-in-class project portfolio management systems can help businesses stay competitive. Profit: What do you feel are the market dynamics that are changing project management today? Mahmud: First, the data explosion. We're generating data at twice the rate at which we can actually store it. The same concept applies for project-intensive organizations. A lot of data is gathered, but what are we really doing with it? Are we turning data into insight? Are we using that insight and turning it into foresight with analytics tools? This is a key driver that will separate the very good companies—the very competitive companies—from those that are not as competitive. Another trend is centered on the explosion of mobile computing. By the year 2013, an estimated 35 percent of the world’s workforce is going to be mobile. That’s one billion people. So the question is not if you're going to go mobile, it’s how fast you are going to go mobile. What kind of impact does that have on how the workforce participates in projects? What worked ten to fifteen years ago is not going to work today. It requires a real rethink around the interfaces and how data is actually presented. Profit: What is the role of project management in this new landscape? Mahmud: We recently conducted a PPM study with the Economist Intelligence Unit to determine how important project management is considered within organizations. Our target was primarily CFOs, CIOs, and senior managers, and we discovered that while 95 percent of participants believed it critical to their business, only six percent were confident that projects were delivered on time and on budget. That’s a huge gap. Most organizations are looking for efficiency, especially in these volatile financial times. But senior management can’t keep track of every project in a large organization. As a result, executives are attempting to inventory the work being conducted under their watch. What is often needed is a very high-level assessment conducted at the board level to say, “Here are the 50 initiatives that we have underway. How do they line up with our strategic drivers?” This line of questioning can provide early warning that work and strategy are out of alignment, finding the gap between what the business needs to do and the actual performance scorecard. That’s low-hanging fruit for any executive looking to increase efficiency and save money. 
But it can only be obtained through proper assessment of existing projects—and you need a project system of record to get that done. Over the next decade or so, project management is going to transform into holistic work management. Business leaders will want to make sure key projects align with corporate strategy, but they will also want the ability to drill down into daily activity and smaller projects to make sure they line up as well. Keeping employees from working on tasks—even for a few hours—that don’t line up with corporate goals will, in many ways, become a competitive differentiator. Profit: How do all of these market challenges and shifting trends impact Oracle’s Primavera solutions and meeting customers’ needs? Mahmud: For Primavera, it’s a transformation from being a project management application to a PPM system in the enterprise. Also making that system a mission-critical application by connecting to other key applications within the ecosystem, such as the enterprise resource planning (ERP), supply chain, and CRM systems. Analytics have also become a huge component. Business analytics have made Oracle’s Primavera applications pertinent in the boardroom. Now, as a senior manager, a board member, a CXO, CIO, or CEO, you have direct visibility into what’s going on in the organization at a level at which you're able to consume that information. In addition, all of this information pairs up really well with your financials and other data. Certainly, when you're an Oracle shop, you have that visibility that you didn’t have before from a project execution perspective. Profit: What new strategies and tools are being implemented to create a more efficient workplace for users? Mahmud: We believe very strongly that just because you call something an enterprise project portfolio management system doesn’t make it so—you have to get people to want to participate in the system. This can’t be mandated down from the top. It simply doesn’t work that way. A truly adoptable solution is one that makes it super easy for all types of users to participate, by providing them interfaces where they live. Keeping that in mind, a major area of development has been alternative user interfaces. This is increasingly resulting in the creation of lighter-weight, targeted interfaces such as iOS applications and smartphone interfaces for the iPhone and Android platforms. Profit: How does this translate into the development of Oracle’s Primavera solutions? Mahmud: Let me give you a few examples. We recently announced the launch of our Primavera P6 Team Member application, which is a native iOS application for the iPhone. This interface makes it easier for team members to do their jobs quickly and effectively. Similarly, we introduced the Primavera analytics application, which can be consumed via mobile devices, and when married with Oracle Spatial capabilities, users can get a geographical view of what’s going on and which projects are occurring in various locations around the world. Lastly, we introduced advanced email integration that allows project team members to update the status of their work via e-mail. This functionality leverages the fact that users are in their e-mail system throughout the day and allows them to update status without the need to launch the Primavera application. It comes back to a mantra: provide as many alternative user interfaces as possible, so you can give people the ability to work, to participate, to raise issues, to create projects, in the places where they live. 
Do it in such a way that it’s non-intrusive, do it in such a way that it’s easy and intuitive and they can get it done in a short amount of time. If you do that, workers can get back to doing what they're actually getting paid for.

    Read the article

  • Hosting and consuming WCF services without configuration files

    - by martinsj
    In this post, I'll demonstrate how to configure both the host and the client in code, without the need for configuring services in the <system.serviceModel> section of the config-file. In fact, you don't need a <system.serviceModel> section at all. What you will need (and want) sometimes is the Uri of the service in the configuration file. Configuring the Uri of the service is actually only needed for the client or when self-hosting, not when hosting in IIS. So, exactly what do we need to configure? The binding type and the binding constraints, the metadata behavior, and the debug behavior. You can of course configure even more if you want to; WCF is, after all, the king of configuration… As an example I'll be hosting and consuming a service that removes most of the default constraints for WCF services, using a BasicHttpBinding. Of course, in regard to security, it is probably better to have some constraints on the server, but this is only a demonstration. The ServiceConfig class in the code beneath is a static helper class that will be used in the examples. In this post, I’ll be using this helper class for all configuration, for both the server and the client. In WCF, the client and the server each have their own configuration. With this piece of code, they will be sharing the same configuration.

    public static class ServiceConfig
    {
        public static Binding DefaultBinding
        {
            get
            {
                var binding = new BasicHttpBinding();
                Configure(binding);
                return binding;
            }
        }

        public static void Configure(HttpBindingBase binding)
        {
            if (binding == null)
            {
                throw new ArgumentException("Argument 'binding' cannot be null. Cannot configure binding.");
            }

            binding.SendTimeout = new TimeSpan(0, 0, 30, 0); // 30 minute timeout
            binding.MaxBufferSize = Int32.MaxValue;
            binding.MaxBufferPoolSize = 2147483647;
            binding.MaxReceivedMessageSize = Int32.MaxValue;
            binding.ReaderQuotas.MaxArrayLength = Int32.MaxValue;
            binding.ReaderQuotas.MaxBytesPerRead = Int32.MaxValue;
            binding.ReaderQuotas.MaxDepth = Int32.MaxValue;
            binding.ReaderQuotas.MaxNameTableCharCount = Int32.MaxValue;
            binding.ReaderQuotas.MaxStringContentLength = Int32.MaxValue;
        }

        public static ServiceMetadataBehavior ServiceMetadataBehavior
        {
            get
            {
                return new ServiceMetadataBehavior
                {
                    HttpGetEnabled = true,
                    MetadataExporter = { PolicyVersion = PolicyVersion.Policy15 }
                };
            }
        }

        public static ServiceDebugBehavior ServiceDebugBehavior
        {
            get
            {
                var smb = new ServiceDebugBehavior();
                Configure(smb);
                return smb;
            }
        }

        public static void Configure(ServiceDebugBehavior behavior)
        {
            if (behavior == null)
            {
                throw new ArgumentException("Argument 'behavior' cannot be null. Cannot configure debug behavior.");
            }

            behavior.IncludeExceptionDetailInFaults = true;
        }
    }

    Configuring the server: There are basically two ways to host a WCF service, in IIS and self-hosting. When hosting a WCF service in a production environment using an SOA architecture, you'll most likely be hosting it in IIS. When testing the service in integration tests, it's very handy to be able to self-host services in the unit tests. In fact, you can share the WCF configuration for self-hosted services and services hosted in IIS. And that is exactly what you want to do: test the same configuration in test and production environments.
Configuring when Self-hosting
When self-hosting, in order to start the service you'll have to instantiate the ServiceHost class, configure the service and open it.

// Create the service host.
var host = new ServiceHost(typeof(MyService), endpoint);

// Configure the binding
host.AddServiceEndpoint(typeof(IMyService), ServiceConfig.DefaultBinding, endpoint);

// Configure metadata behavior
host.Description.Behaviors.Add(ServiceConfig.ServiceMetadataBehavior);

// Configure debug behavior
ServiceConfig.Configure((ServiceDebugBehavior)host.Description.Behaviors[typeof(ServiceDebugBehavior)]);

// Start listening to the service
host.Open();

Configuring when hosting in IIS
When you create a WCF service application with the wizard in Visual Studio, you'll end up with bits and pieces of code in order to get the service running: an .svc file with code-behind, an interface for the service, and a Web.config. In order to get rid of the configuration in the <system.serviceModel> section, which the wizard has generated for us, we must tell the service that we have a factory that will create the service for us. We do this by changing the markup for the svc file:

<%@ ServiceHost Language="C#" Debug="true" Service="Namespace.MyService" Factory="Namespace.ServiceHostFactory" %>

The markup tells IIS that we have a factory called ServiceHostFactory for this service. The service factory has a method we can override which will be called when someone asks IIS for the service. There are two overloads we can override:

System.ServiceModel.ServiceHostBase CreateServiceHost(string constructorString, Uri[] baseAddresses)
System.ServiceModel.ServiceHost CreateServiceHost(Type serviceType, Uri[] baseAddresses)

In this example, we'll be using the last one, so our implementation looks like this:

public class ServiceHostFactory : System.ServiceModel.Activation.ServiceHostFactory
{
    protected override System.ServiceModel.ServiceHost CreateServiceHost(Type serviceType, Uri[] baseAddresses)
    {
        var host = base.CreateServiceHost(serviceType, baseAddresses);
        host.Description.Behaviors.Add(ServiceConfig.ServiceMetadataBehavior);
        ServiceConfig.Configure((ServiceDebugBehavior)host.Description.Behaviors[typeof(ServiceDebugBehavior)]);
        return host;
    }
}

As you can see, we are using the same configuration helper we used when self-hosting. Now, when you have a factory, the <system.serviceModel> section of the configuration can be removed, because the section will be ignored when the service has a custom factory. If you want to configure something else in the config file, you can still do so in other sections.

Configuring the client
Microsoft has helpfully created a ChannelFactory class in order to create a proxy client. When using this approach, you don't have to generate those awful proxy classes for the client. If you share the contracts with the server in their own assembly, as in the layer diagram below, you can share the same piece of code.
The contracts in WCF are the interface to the service and, if any, the data contracts (custom types) the service depends on. Using the ChannelFactory with our configuration helper class is very simple:

var identity = EndpointIdentity.CreateDnsIdentity("localhost");
var endpointAddress = new EndpointAddress(endPoint, identity);
var factory = new ChannelFactory<IMyService>(ServiceConfig.DefaultBinding, endpointAddress);
using (var myService = factory.CreateChannel())
{
    myService.Hello();
}
factory.Close();

Happy configuration!
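A caveat worth adding here (an editorial note, not from the original post): the using block above only compiles if IMyService itself extends IDisposable, and disposing a channel that has faulted can throw and mask the original error. A commonly used defensive pattern is sketched below, assuming the same factory and IMyService contract as in the example; treat it as a sketch rather than the author's recommendation.

var client = factory.CreateChannel();
try
{
    client.Hello();
    // Close the underlying channel explicitly when the call succeeds.
    ((IClientChannel)client).Close();
}
catch (CommunicationException)
{
    // A faulted channel cannot be closed cleanly; abort it instead.
    ((IClientChannel)client).Abort();
    throw;
}
catch (TimeoutException)
{
    ((IClientChannel)client).Abort();
    throw;
}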

    Read the article

  • The True Cost of a Solution

    - by D'Arcy Lussier
I had a Twitter chat recently with someone suggesting Oracle and SQL Server were losing out to OSS (Open Source Software) in the enterprise due to their issues with scaling or being too generic (one size fits all). I challenged that a bit, as my experience with enterprise-sized clients has been different – averse to OSS but receptive to an established vendor. The response I got was: "Found it easier to influence change by showing how X can't solve our problems or X is extremely costly to scale. Money talks." I think this is definitely the right approach for anyone pitching an alternate or alien technology as part of a solution: identify the issue, identify the solution, then present pros and cons including a cost/benefit analysis. What can happen though is we get tunnel vision and don't present a full view of the costs associated with a solution.

An "Acura"te Example (I'm so clever…)
This is my dream vehicle, a Crystal Black Pearl coloured Acura MDX with the SH-AWD package! We're a family of 4 (5 if my daughters ever get their wish of adding a dog), and I've always wanted a luxury type of vehicle, so this is a perfect replacement in a few years when our Rav4 has hit the 8 – 10 year mark.

MSRP – $62,890

But as we all know, that's not *really* the cost of the vehicle. There's taxes and fees added on, there's the extended warranty if I choose to purchase it, there's the finance rate that needs to be factored in…

MSRP – $62,890
Taxes – $7,546
Warranty – $2,500
SubTotal – $72,936
Finance Charge – $1,094.04
Grand Total – $74,030

Well! Glad we did that exercise – we discovered an extra $11k added on to the MSRP! Well now we have our true price… or do we?

Lifetime of the Vehicle
I'm expecting to have this vehicle for 7 – 10 years. While the hard cost of the vehicle is known and dealt with, the costs to run and maintain the vehicle are on top of this. I did some research, and here's what I've found:

Fuel and Mileage
Gas prices are high as it is for regular fuel, but getting into an MDX will require that I *only* purchase premium fuel, which comes at a premium price. I need to expect my bill at the pump to be higher. Comparing the MDX to my 2007 Rav4 also shows I'll be gassing up more often. The Rav4 has a city MPG of 21, while the MDX plummets to 16! The MDX does have a bigger fuel tank though, so all in all the number of times I hit the pumps might even out. Still, I estimate I'll be spending approximately $8,000 – $10,000 more on gas over a 10 year period than with my current Rav4.

Service Options Limited
Although I have options with my Toyota here in Winnipeg (we have 4 Toyota dealerships), I do go to my original dealer for any service work. Still, I like the fact that I have options. However, there's only one Acura dealership in all of Winnipeg! So if, for whatever reason, I'm not satisfied with the level of service, I'm stuck.

Non-Warranty Service Work
Also, let's not forget that there's a bulk of work required every year that is *not* covered under warranty – oil changes, tire rotations, brake pads, etc. I expect I'll need to get new tires at the 5-year mark as well, which can easily be $1,200 – $1,500 (I just paid $1,000 for new tires for the Rav4 and we're at the 5-year mark). Now these aren't going to be *new* costs that I'm not used to from our existing vehicles, but they should still be factored in. I'd budget $500/year, or $5,000 over the 10 years I'll own the vehicle.
Final Assessment
So let's re-assess the true cost of my dream MDX:

MSRP – $62,890
Taxes – $7,546
Warranty – $2,500
Finance Charge – $1,094
Gas – $10,000
Service Work – $5,000
Grand Total – $89,030

So now I have a better idea of the 10-year cost overall, and I've identified some concerns with local service availability. And there's now much more to consider over the original $62,890 price tag.

Tying This Back to Technology Solutions
The process that we just went through is no different than what organizations do when considering implementing a new system, technology, or technology-based solution within their environments. It's easy to tout the short-term cost savings of a particular product/platform/technology in a vacuum. But it's when you consider the wider impact that the true cost comes into play. Let's create a scenario: A company is not happy with its current data reporting suite. An employee suggests moving to an open source solution. The selling points are:

- Because it's open source, it's free
- The organization would have access to the source code so they could alter it however they wished
- It provides features not available with the current reporting suite

At first this sounds great to the management and executive, but then they start asking some questions and uncover more information:

- The OSS product is built on a technology not used anywhere within the organization
- There are no vendors offering product support for the OSS product
- The OSS product requires a specific server platform to operate on, one that's not standard in the organization

All of a sudden, the true cost of implementing this solution is starting to become clearer. The company might save money on licensing costs, but their training costs would increase significantly – developers would need to learn how to develop in the technology the OSS solution was built on, IT staff would have to learn how to set up and maintain a new server platform within their existing infrastructure, and if a problem was found there was no vendor to contact for support. The true cost of implementing a "free" OSS solution is actually spinning up a project to implement it within the organization – no small cost. And that's just the short-term cost. Now the organization must ensure they maintain trained staff who can make changes to the OSS reporting solution and IT staff that will stay knowledgeable in the new server platform. If those skills are very niche, then higher labour costs could be incurred if those people are hard to find or if trained employees use that knowledge as leverage for higher pay. Maybe a vendor exists that will contract out support, but then there are those costs to consider as well. And let's not forget end-user training – in our example, anyone that runs reports will need to be trained on how to use the new system.

Here's the Point
We still tend to look at software in an "off the shelf" kind of way. It's very easy to say "oh, this product is better than vendor X's product – and it's free because it's OSS!" but the reality is that implementing any new technology within an organization has a cost regardless of the retail price of the product. Training, integration, support – these are real costs that impact an organization and span multiple departments. Whether you're pitching an improved business process, a new system, or a new technology, you need to consider the bigger-picture costs of implementation.
What you define as success (in our example, having better reporting functionality) might not be what others define as success if implementing your solution causes them issues. A true enterprise solution needs to consider the entire enterprise.

    Read the article

  • Secure Your Wireless Router: 8 Things You Can Do Right Now

    - by Chris Hoffman
    A security researcher recently discovered a backdoor in many D-Link routers, allowing anyone to access the router without knowing the username or password. This isn’t the first router security issue and won’t be the last. To protect yourself, you should ensure that your router is configured securely. This is about more than just enabling Wi-Fi encryption and not hosting an open Wi-Fi network. Disable Remote Access Routers offer a web interface, allowing you to configure them through a browser. The router runs a web server and makes this web page available when you’re on the router’s local network. However, most routers offer a “remote access” feature that allows you to access this web interface from anywhere in the world. Even if you set a username and password, if you have a D-Link router affected by this vulnerability, anyone would be able to log in without any credentials. If you have remote access disabled, you’d be safe from people remotely accessing your router and tampering with it. To do this, open your router’s web interface and look for the “Remote Access,” “Remote Administration,” or “Remote Management” feature. Ensure it’s disabled — it should be disabled by default on most routers, but it’s good to check. Update the Firmware Like our operating systems, web browsers, and every other piece of software we use, router software isn’t perfect. The router’s firmware — essentially the software running on the router — may have security flaws. Router manufacturers may release firmware updates that fix such security holes, although they quickly discontinue support for most routers and move on to the next models. Unfortunately, most routers don’t have an auto-update feature like Windows and our web browsers do — you have to check your router manufacturer’s website for a firmware update and install it manually via the router’s web interface. Check to be sure your router has the latest available firmware installed. Change Default Login Credentials Many routers have default login credentials that are fairly obvious, such as the password “admin”. If someone gained access to your router’s web interface through some sort of vulnerability or just by logging onto your Wi-Fi network, it would be easy to log in and tamper with the router’s settings. To avoid this, change the router’s password to a non-default password that an attacker couldn’t easily guess. Some routers even allow you to change the username you use to log into your router. Lock Down Wi-Fi Access If someone gains access to your Wi-Fi network, they could attempt to tamper with your router — or just do other bad things like snoop on your local file shares or use your connection to downloaded copyrighted content and get you in trouble. Running an open Wi-Fi network can be dangerous. To prevent this, ensure your router’s Wi-Fi is secure. This is pretty simple: Set it to use WPA2 encryption and use a reasonably secure passphrase. Don’t use the weaker WEP encryption or set an obvious passphrase like “password”. Disable UPnP A variety of UPnP flaws have been found in consumer routers. Tens of millions of consumer routers respond to UPnP requests from the Internet, allowing attackers on the Internet to remotely configure your router. Flash applets in your browser could use UPnP to open ports, making your computer more vulnerable. UPnP is fairly insecure for a variety of reasons. To avoid UPnP-based problems, disable UPnP on your router via its web interface. 
If you use software that needs ports forwarded — such as a BitTorrent client, game server, or communications program — you’ll have to forward ports on your router without relying on UPnP. Log Out of the Router’s Web Interface When You’re Done Configuring It Cross site scripting (XSS) flaws have been found in some routers. A router with such an XSS flaw could be controlled by a malicious web page, allowing the web page to configure settings while you’re logged in. If your router is using its default username and password, it would be easy for the malicious web page to gain access. Even if you changed your router’s password, it would be theoretically possible for a website to use your logged-in session to access your router and modify its settings. To prevent this, just log out of your router when you’re done configuring it — if you can’t do that, you may want to clear your browser cookies. This isn’t something to be too paranoid about, but logging out of your router when you’re done using it is a quick and easy thing to do. Change the Router’s Local IP Address If you’re really paranoid, you may be able to change your router’s local IP address. For example, if its default address is 192.168.0.1, you could change it to 192.168.0.150. If the router itself were vulnerable and some sort of malicious script in your web browser attempted to exploit a cross site scripting vulnerability, accessing known-vulnerable routers at their local IP address and tampering with them, the attack would fail. This step isn’t completely necessary, especially since it wouldn’t protect against local attackers — if someone were on your network or software was running on your PC, they’d be able to determine your router’s IP address and connect to it. Install Third-Party Firmwares If you’re really worried about security, you could also install a third-party firmware such as DD-WRT or OpenWRT. You won’t find obscure back doors added by the router’s manufacturer in these alternative firmwares. Consumer routers are shaping up to be a perfect storm of security problems — they’re not automatically updated with new security patches, they’re connected directly to the Internet, manufacturers quickly stop supporting them, and many consumer routers seem to be full of bad code that leads to UPnP exploits and easy-to-exploit backdoors. It’s smart to take some basic precautions. Image Credit: Nuscreen on Flickr     

    Read the article

  • ANTS CLR and Memory Profiler In Depth Review (Part 2 of 2 &ndash; Memory Profiler)

    - by ToStringTheory
One of the things that people might not know about me is my obsession with making my code as efficient as possible. Many people might not realize how much of an undertaking this can be, but it is surely a task as monumental as climbing Mount Everest, except this time it is a challenge for the mind… In trying to make code efficient, there are many different factors that play a part – size of project or solution, tiers, language used, experience and training of the programmer, technologies used, maintainability of the code – the list can go on for quite some time. I spend quite a bit of time when developing trying to determine the best way to implement a feature to achieve the efficiency I'm looking for. One product that I have recently come to learn about – Red Gate ANTS Performance (CLR) and Memory Profiler – gives me tools to accomplish that job more efficiently as well. In this review, I am going to cover some of the features of the ANTS Memory Profiler by compiling some hideous example code to test against.

Notice
As a member of the Geeks With Blogs Influencers program, one of the perks is the ability to review products in exchange for a free license to the program. I have not let this affect my opinions of the product in any way, and neither Red Gate nor Geeks With Blogs has tried to influence my opinion regarding this product in any way.

Introduction – Part 2
In my last post, I reviewed the feature-packed Red Gate ANTS Performance Profiler. Separate from the Performance Profiler is the Red Gate ANTS Memory Profiler – a simple, easy-to-use utility for checking how your application is handling memory management… a tool that I wish I had had many times in the past. This post will be focusing on the ANTS Memory Profiler and its tool set. The Memory Profiler has a large assortment of features just like the Performance Profiler, with the new session screen looking nearly identical:

ANTS Memory Profiler
Memory profiling is not something that I have to do very often… In the past, in the few cases where I've had to find a memory leak in an application, I have usually just traced the code of the operations being performed to look for oddities… Sadly, I have come across more undisposed/non-using'ed IDisposable objects, usually from ADO.NET, than I would ever like to see. Support is not fun; however, using ANTS Memory Profiler makes this task easier. For this round of testing, I am going to use the same code from my previous example, using the WPF application. This time, I will choose the 'Profile Memory' option from the ANTS menu in Visual Studio, which launches the solution in its currently configured state/start-up project, and then launches the ANTS Memory Profiler to help. It prepopulates all of the fields with the current project information, and all I have to do is select the 'Start Profiling' option. When the window comes up, it is actually quite barren, just giving ideas on how to work the profiler. You start by getting to the point in your application that you want to profile, and then taking a 'Memory Snapshot'. This performs a full garbage collection and snapshots the managed heap. Using the same WPF app as before, I will go ahead and take a snapshot now. As you can see, ANTS is already giving me lots of information regarding the snapshot, however this is just a snapshot.
The whole point of the profiler is to perform an action, usually one where a memory problem is being noticed, then take another snapshot and perform a diff between them to see what has changed. I am going to go ahead and generate 5000 primes, and then take another snapshot: As you can see, ANTS is already giving me a lot of new information about this snapshot compared to the last – differences in memory usage, fragmentation, class usage, etc… If you take more snapshots, you can use the dropdown at the top to set your actual comparison snapshots. If you look beneath the timeline, you will see a breadcrumb trail showing how best to approach profiling memory using ANTS. When you first do the comparison, you start on the Summary screen. You can either use the charts at the bottom, or switch to the class list screen to get to the next step. Here is the class list screen: As you can see, it lists information about all of the instances between the snapshots, and at the bottom gives you a way to filter by telling ANTS what your problem is. I am going to go ahead and select the Int16[] to look at the Instance Categorizer. Using the Instance Categorizer, you can travel backwards to see where all of the instances are coming from. It may be hard to see in this image, but hopefully the lightbox (click on it) will help: I can see that all of these instances are rooted to the application through the UI TextBlock control. This image will probably be even harder to see, however using the 'Instance Retention Graph' you can trace an object's memory inheritance up the chain to see its roots as well. This is a simple example, as this is simply a known element. Usually you would be profiling an actual problem, and comparing those differences. I know in the past I have spotted a problem where a new context was created per page load, and it was rooted into the application through an event. As the application began to grow, performance and reliability problems started to emerge. A tool like this would have been a great way to identify the problem quickly.

Overview
Overall, I think that the Red Gate ANTS Memory Profiler is a great utility for debugging those pesky leaks.

3 Biggest Pros:
Easy to use interface with lots of options for configuring the profiling session
Intuitive and helpful interface for drilling down from summary, to instance, to root graphs
ANTS provides an API for controlling the profiler. Not many options, but still helpful.

2 Biggest Cons:
Inability to automatically snapshot the memory by interval
Lack of complete integration with Visual Studio via an extension panel

Ratings
Ease of Use (9/10) – I really do believe that they have brought simplicity to the once difficult task of memory profiling. I especially liked how it stepped you further into the drilldown by directing you towards the best options.
Effectiveness (10/10) – I believe that the profiler does EXACTLY what it purports to do.
Features (7/10) – A really great set of features all around in the application; however, I would like to see some ability to automatically trigger snapshots based on intervals or framework-level items such as events.
Customer Service (10/10) – My entire experience with Red Gate personnel has been nothing but good. Their people are friendly, helpful, and happy!
UI / UX (9/10) – The interface is very easy to get around, and all of the options are easy to find. With a little bit of poking around, you'll be optimizing Hello World in no time flat!
Overall (9/10) – Overall, I am happy with the Memory Profiler and its features, as well as with the service I received when working with the Red Gate personnel.  Thank you for reading up to here, or skipping ahead – I told you it would be shorter!  Please, if you do try the product, drop me a message and let me know what you think!  I would love to hear any opinions you may have on the product. Code Feel free to download the code I used above – download via DropBox
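Editor's aside (not part of the original review): the "context rooted through an event" leak mentioned above is a very common pattern, and a tiny, hypothetical C# sketch of it (all names invented for illustration) looks like this:

// A long-lived publisher, for example an application-wide static event.
public class AppEvents
{
    public static event EventHandler PageLoaded;

    public static void RaisePageLoaded()
    {
        var handler = PageLoaded;
        if (handler != null)
        {
            handler(null, EventArgs.Empty);
        }
    }
}

// Created on every page load. Each instance subscribes to the static event,
// so the event's invocation list keeps every instance (and its cache) alive.
public class PageContext
{
    private readonly byte[] cache = new byte[1024 * 1024];

    public PageContext()
    {
        AppEvents.PageLoaded += OnPageLoaded;
    }

    private void OnPageLoaded(object sender, EventArgs e)
    {
        // use cache ...
    }

    // The fix: unsubscribe when the context is no longer needed,
    // so the instance can be collected.
    public void Release()
    {
        AppEvents.PageLoaded -= OnPageLoaded;
    }
}

In an instance retention graph, each leaked PageContext would show up rooted through the static event's delegate list, which is exactly the kind of chain the tool is designed to expose.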

    Read the article

  • Set the JAXB context factory initialization class to be used

    - by user1902288
I have updated our projects (Java EE based, running on WebSphere 8.5) to use a new release of a company-internal framework (and EJB 3.x deployment descriptors rather than the 2.x ones). Since then my integration tests fail with the following exception: [java.lang.ClassNotFoundException: com.ibm.xml.xlxp2.jaxb.JAXBContextFactory] I can build the application with the previous framework release and everything works fine. While debugging I noticed that within the ContextFinder (javax.xml.bind) there are two different behaviours:

Previous version (everything works just fine): None of the different places brings up a factory class, so the default factory class gets loaded, which is com.sun.xml.internal.bind.v2.ContextFactory (defined as a String constant within the class).

Upgraded version (ClassNotFound): There is a resource "META-INF/services/javax.xml.bind.JAXBContext" being loaded successfully, and the first line read makes the ContextFinder attempt to load "com.ibm.xml.xlxp2.jaxb.JAXBContextFactory", which causes the error.

I now have two questions: What sort of resource is that? Because inside our EAR there are two WARs and neither of them contains a services folder in its META-INF directory. Where could that value come from otherwise? Because a file diff showed me no new or changed properties files. Needless to say I am going to read all about the JAXB configuration possibilities, but if you have first insights on what could have gone wrong, or can help me out with that resource (is it a real file I have to look for?), I'd appreciate it a lot. Many thanks!

EDIT (in response to comments/questions):

Out of curiosity, does your framework include JAXB JARs? Did the old version of your framework include jaxb.properties? Indeed (I am a bit surprised) the framework has a customized eclipselink-2.4.1-.jar inside the EAR that includes both a JAXB implementation and a jaxb.properties file that shows the following entry in both versions (the one that finds the factory as well as the one that throws the exception): javax.xml.bind.context.factory=org.eclipse.persistence.jaxb.JAXBContextFactory I think this has nothing to do with the current issue since the jar stayed exactly the same in both EARs (the one that runs / the one with the exception).

It's also not clear to me why the old version of the framework was ever selecting the com.sun implementation: There is a class javax.xml.bind.ContextFinder which is responsible for initializing the JAXBContextFactory. This class searches various places for the existence of a jaxb.properties file or a "javax.xml.bind.JAXBContext" resource. If none of those places indicate which context factory to use, a default factory is loaded, which is hardcoded in the class itself: private static final String PLATFORM_DEFAULT_FACTORY_CLASS = "com.sun.xml.internal.bind.v2.ContextFactory";

Now back to my problem: Building with the previous version of the framework (and EJB 2.x deployment descriptors) everything works fine. While debugging I can see that no configuration is found and therefore the above-mentioned default factory is loaded. Building with the new version of the framework (and EJB 3.x deployment descriptors so I can deploy) ONLY A TESTCASE fails, but the rest of the functionality works (I can send requests to our webservice and they don't trigger any errors). While debugging I can see that a configuration is found. This resource is named "META-INF/services/javax.xml.bind.JAXBContext".
Here are the most important lines showing how this resource leads to the attempt to load 'com.ibm.xml.xlxp2.jaxb.JAXBContextFactory', which then throws the ClassNotFoundException. This is a simplified version of the source of the mentioned javax.xml.bind.ContextFinder class:

URL resourceURL = ClassLoader.getSystemResource("META-INF/services/javax.xml.bind.JAXBContext");
BufferedReader r = new BufferedReader(new InputStreamReader(resourceURL.openStream(), "UTF-8"));
String factoryClassName = r.readLine().trim();

The field factoryClassName now has the value 'com.ibm.xml.xlxp2.jaxb.JAXBContextFactory'. (The day I understand how to format source code on Stack Overflow will be my biggest step ahead... sorry for the formatting, after 20 minutes it still looks the same :() Because this has become a super large question I will also add a bounty :)

    Read the article

  • Delphi Indy IDHTTP question

    - by user327175
    I am trying to get the captcha image from a AOL, and i keep getting an error 418. unit imageunit; /// /// h t t p s://new.aol.com/productsweb/ /// interface uses Windows, Messages, SysUtils, Variants, Classes, Graphics, Controls, Forms, Dialogs, StdCtrls, IdIOHandler, IdIOHandlerSocket, IdIOHandlerStack, IdSSL, IdSSLOpenSSL, IdIntercept, IdZLibCompressorBase, IdCompressorZLib, IdCookieManager, IdBaseComponent, IdComponent, IdTCPConnection, IdTCPClient, IdHTTP,jpeg,GIFImg, ExtCtrls; type TForm2 = class(TForm) IdHTTP1: TIdHTTP; IdCookieManager1: TIdCookieManager; IdCompressorZLib1: TIdCompressorZLib; IdConnectionIntercept1: TIdConnectionIntercept; IdSSLIOHandlerSocketOpenSSL1: TIdSSLIOHandlerSocketOpenSSL; Panel1: TPanel; Image1: TImage; Panel2: TPanel; Button1: TButton; Edit1: TEdit; procedure Button1Click(Sender: TObject); private { Private declarations } public { Public declarations } end; var Form2: TForm2; implementation {$R *.dfm} procedure TForm2.Button1Click(Sender: TObject); var JPI : TJPEGImage; streamdata:TMemoryStream; begin streamdata := TMemoryStream.Create; try idhttp1.Get(edit1.Text, Streamdata); Except { Handle exceptions } On E : Exception Do Begin MessageDlg('Exception: '+E.Message,mtError, [mbOK], 0); End; End; //h t t p s://new.aol.com/productsweb/WordVerImage?20890843 //h t t p s://new.aol.com/productsweb/WordVerImage?91868359 /// /// gives error 418 unused /// streamdata.Position := 0; JPI := TJPEGImage.Create; Try JPI.LoadFromStream ( streamdata ); Finally Image1.Picture.Assign ( JPI ); JPI.Free; streamdata.Free; End; end; end. Form: object Form2: TForm2 Left = 0 Top = 0 Caption = 'Form2' ClientHeight = 247 ClientWidth = 480 Color = clBtnFace Font.Charset = DEFAULT_CHARSET Font.Color = clWindowText Font.Height = -11 Font.Name = 'Tahoma' Font.Style = [] OldCreateOrder = False PixelsPerInch = 96 TextHeight = 13 object Panel1: TPanel Left = 0 Top = 41 Width = 480 Height = 206 Align = alClient TabOrder = 0 ExplicitLeft = -8 ExplicitTop = 206 ExplicitHeight = 41 object Image1: TImage Left = 1 Top = 1 Width = 478 Height = 204 Align = alClient ExplicitLeft = 5 ExplicitTop = 17 ExplicitWidth = 200 ExplicitHeight = 70 end end object Panel2: TPanel Left = 0 Top = 0 Width = 480 Height = 41 Align = alTop TabOrder = 1 ExplicitLeft = 40 ExplicitTop = 32 ExplicitWidth = 185 object Button1: TButton Left = 239 Top = 6 Width = 75 Height = 25 Caption = 'Button1' TabOrder = 0 OnClick = Button1Click end object Edit1: TEdit Left = 16 Top = 8 Width = 217 Height = 21 TabOrder = 1 end end object IdHTTP1: TIdHTTP Intercept = IdConnectionIntercept1 IOHandler = IdSSLIOHandlerSocketOpenSSL1 MaxAuthRetries = 100 AllowCookies = True HandleRedirects = True RedirectMaximum = 100 ProxyParams.BasicAuthentication = False ProxyParams.ProxyPort = 0 Request.ContentLength = -1 Request.Accept = 'image/gif, image/jpeg, image/pjpeg, image/pjpeg, application/x-s' + 'hockwave-flash, application/cade, application/xaml+xml, applicat' + 'ion/vnd.ms-xpsdocument, application/x-ms-xbap, application/x-ms-' + 'application, /' Request.BasicAuthentication = False Request.Referer = 'http://www.yahoo.com' Request.UserAgent = 'Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.2.1) Gecko/201001' + '22 firefox/3.6.1' HTTPOptions = [hoForceEncodeParams] CookieManager = IdCookieManager1 Compressor = IdCompressorZLib1 Left = 240 Top = 80 end object IdCookieManager1: TIdCookieManager Left = 360 Top = 136 end object IdCompressorZLib1: TIdCompressorZLib Left = 368 Top = 16 end object IdConnectionIntercept1: TIdConnectionIntercept 
Left = 304 Top = 72 end object IdSSLIOHandlerSocketOpenSSL1: TIdSSLIOHandlerSocketOpenSSL Intercept = IdConnectionIntercept1 MaxLineAction = maException Port = 0 DefaultPort = 0 SSLOptions.Mode = sslmUnassigned SSLOptions.VerifyMode = [] SSLOptions.VerifyDepth = 0 Left = 192 Top = 136 end end If you go to: h t t p s://new.aol.com/productsweb/ you will notice the captcha image has a url like: h t t p s://new.aol.com/productsweb/WordVerImage?91868359 I put that url in the edit box and get an error. What is wrong with this code.

    Read the article

  • Grails - Simple hasMany Problem - Using CheckBoxes rather than HTML Select in create.gsp

    - by gav
My problem is this: I want to create a Grails domain instance, defining the 'many' instances of another domain that it has. I have the actual source in a Google Code project but the following should illustrate the problem.

class Person {
    String name
    static hasMany = [skills: Skill]
    static constraints = {
        id (visible:false)
        skills (nullable:false, blank:false)
    }
}

class Skill {
    String name
    String description
    static constraints = {
        id (visible:false)
        name (nullable:false, blank:false)
        description (nullable:false, blank:false)
    }
}

If you use this model and def scaffold for the two controllers, then you end up with a form like this that doesn't work:

My own attempt to get this to work enumerates the Skills as checkboxes and looks like this:

But when I save the Volunteer the skills are null! This is the code for my save method:

def save = {
    log.info "Saving: " + params.toString()
    def skills = params.skills
    log.info "Skills: " + skills
    def volunteerInstance = new Volunteer(params)
    log.info volunteerInstance
    if (volunteerInstance.save(flush: true)) {
        flash.message = "${message(code: 'default.created.message', args: [message(code: 'volunteer.label', default: 'Volunteer'), volunteerInstance.id])}"
        redirect(action: "show", id: volunteerInstance.id)
        log.info volunteerInstance
    } else {
        render(view: "create", model: [volunteerInstance: volunteerInstance])
    }
}

This is my log output (I have custom toString() methods):

2010-05-10 21:06:41,494 [http-8080-3] INFO bumbumtrain.VolunteerController - Saving: ["skills":["1", "2"], "name":"Ian", "_skills":["", ""], "create":"Create", "action":"save", "controller":"volunteer"]
2010-05-10 21:06:41,495 [http-8080-3] INFO bumbumtrain.VolunteerController - Skills: [1, 2]
2010-05-10 21:06:41,508 [http-8080-3] INFO bumbumtrain.VolunteerController - Volunteer[ id: null | Name: Ian | Skills [Skill[ id: 1 | Name: Carpenter ] , Skill[ id: 2 | Name: Sound Engineer ] ]]

Note that in the final log line the right Skills have been picked up and are part of the object instance. When the volunteer is saved, the skills are ignored and not committed to the database even though the in-memory version clearly does have the items. Is it not possible to pass the Skills at construction time? There must be a way round this? I need a single form to allow a person to register, but I want to normalise the data so that I can add more skills at a later time. If you think this should 'just work' then a link to a working example would be great. If I use the HTML Select then it works fine!
Such as the following to make the Create page; <tr class="prop"> <td valign="top" class="name"> <label for="skills"><g:message code="volunteer.skills.label" default="Skills" /></label> </td> <td valign="top" class="value ${hasErrors(bean: volunteerInstance, field: 'skills', 'errors')}"> <g:select name="skills" from="${uk.co.bumbumtrain.Skill.list()}" multiple="yes" optionKey="id" size="5" value="${volunteerInstance?.skills}" /> </td> </tr> But I need it to work with checkboxes like this; <tr class="prop"> <td valign="top" class="name"> <label for="skills"><g:message code="volunteer.skills.label" default="Skills" /></label> </td> <td valign="top" class="value ${hasErrors(bean: volunteerInstance, field: 'skills', 'errors')}"> <g:each in="${skillInstanceList}" status="i" var="skillInstance"> <label for="${skillInstance?.name}"><g:message code="${skillInstance?.name}.label" default="${skillInstance?.name}" /></label> <g:checkBox name="skills" value="${skillInstance?.id.toString()}"/> </g:each> </td> </tr> The log output is exactly the same! With both style of form the Volunteer instance is created with the Skills correctly referenced in the 'Skills' variable. When saving, the latter fails with a null reference exception as shown at the top of this question. Hope this makes sense, thanks in advance! Gav

    Read the article

  • How to tell Seam to inject a local EJB interface (SLSB) and not the remote EJB interface (SLSB)?

    - by Harshad V
    Hello, I am using Seam with JBoss AS. In my application I have a SLSB which is also declared as a seam component using the @Name annotation. I am trying to inject and use this SLSB in another seam component using the @In annotation. My problem is that sometimes Seam injects the local interface (then the code runs fine) and sometimes seam injects the remote interface (then there is an error in execution of the code). I have tried doing all the things specified on this link: http://docs.jboss.org/seam/2.2.0.GA/reference/en-US/html/configuration.html#config.integration.ejb.container The SeamInterceptor is configured, I have specified the jndi pattern in components.xml file ( < core:init debug="true" jndi-pattern="earName/#{ejbName}/local"/ ), I have also tried using the @JndiName("earName/ejbName/local") annotation for every SLSB, I have tried setting this property ( org.jboss.seam.core.init.jndiPattern=earName/#{ejbName}/local ) in the seam.properties file. I have also tried putting the text below in web.xml file <context-param> <param-name>org.jboss.seam.core.init.jndiPattern</param-name> <param-value>earName/#{ejbName}/local</param-value> </context-param> Even after doing all the above mentioned things, the seam still injects the remote interface sometimes. Am I missing something here? Can anyone tell me how to resolve this issue and tell seam to always inject the local interface? My components.xml file looks like: <?xml version="1.0" encoding="UTF-8"?> <components xmlns="http://jboss.com/products/seam/components" xmlns:core="http://jboss.com/products/seam/core" xmlns:persistence="http://jboss.com/products/seam/persistence" xmlns:drools="http://jboss.com/products/seam/drools" xmlns:bpm="http://jboss.com/products/seam/bpm" xmlns:security="http://jboss.com/products/seam/security" xmlns:mail="http://jboss.com/products/seam/mail" xmlns:web="http://jboss.com/products/seam/web" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation= "http://jboss.com/products/seam/core http://jboss.com/products/seam/core-2.1.xsd http://jboss.com/products/seam/persistence http://jboss.com/products/seam/persistence-2.1.xsd http://jboss.com/products/seam/drools http://jboss.com/products/seam/drools-2.1.xsd http://jboss.com/products/seam/bpm http://jboss.com/products/seam/bpm-2.1.xsd http://jboss.com/products/seam/security http://jboss.com/products/seam/security-2.1.xsd http://jboss.com/products/seam/mail http://jboss.com/products/seam/mail-2.1.xsd http://jboss.com/products/seam/web http://jboss.com/products/seam/web-2.1.xsd http://jboss.com/products/seam/components http://jboss.com/products/seam/components-2.1.xsd"> <core:init debug="true" jndi-pattern="myEarName/#{ejbName}/local"/> <core:manager concurrent-request-timeout="500" conversation-timeout="120000" conversation-id-parameter="cid" parent-conversation-id-parameter="pid"/> <web:hot-deploy-filter url-pattern="*.seam"/> <persistence:managed-persistence-context name="entityManager" auto-create="true" persistence-unit-jndi-name="@puJndiName@"/> <drools:rule-base name="securityRules"> <drools:rule-files> <value>/security.drl</value> </drools:rule-files> </drools:rule-base> <security:rule-based-permission-resolver security-rules="#{securityRules}"/> <security:identity authenticate-method="#{authenticator.authenticate}" remember-me="true"/> <event type="org.jboss.seam.security.notLoggedIn"> <action execute="#{redirect.captureCurrentView}"/> </event> <event type="org.jboss.seam.security.loginSuccessful"> <action 
execute="#{redirect.returnToCapturedView}"/> </event> <component name="org.jboss.seam.core.init"> <property name="jndiPattern">myEarName/#{ejbName}/local</property> </component> </components> And my EJB component looks like: @Stateless @Name("myEJBComponent") @AutoCreate public class MyEJBComponentImpl implements MyEJBComponentRemote, MyEJBComponentLocal { public void doSomething() { } }

    Read the article

  • ASP.net FFMPEG video conversion receiving error: "Error number -2 occurred"

    - by Pete
    Hello, I am attempting to integrate FFMPEG into my asp.net website. The process I am trying to complete is to upload a video, check if it is .avi, .mov, or .wmv and then convert this video into an mp4 using x264 so my flash player can play it. I am using an http handler (ashx) file to handle my upload. This is where I am also putting my conversion code. I am not sure if this is the best place to put it, but I wanted to see if i could at least get it working. Additionally, I was able to complete the conversion manually through cmd line. The error -2 comes up when i output the standard error from the process I executed. This is the error i receive: FFmpeg version SVN-r23001, Copyright (c) 2000-2010 the FFmpeg developers built on May 1 2010 06:06:15 with gcc 4.4.2 configuration: --enable-memalign-hack --cross-prefix=i686-mingw32- --cc=ccache-i686-mingw32-gcc --arch=i686 --target-os=mingw32 --enable-runtime-cpudetect --enable-avisynth --enable-gpl --enable-version3 --enable-bzlib --enable-libgsm --enable-libfaad --enable-pthreads --enable-libvorbis --enable-libtheora --enable-libspeex --enable-libmp3lame --enable-libopenjpeg --enable-libxvid --enable-libschroedinger --enable-libx264 --enable-libopencore_amrwb --enable-libopencore_amrnb libavutil 50.15. 0 / 50.15. 0 libavcodec 52.66. 0 / 52.66. 0 libavformat 52.61. 0 / 52.61. 0 libavdevice 52. 2. 0 / 52. 2. 0 libswscale 0.10. 0 / 0.10. 0 532010_Robotica_720.wmv: Error number -2 occurred here is the code below: <%@ WebHandler Language="VB" Class="upload" %> Imports System Imports System.Web Imports System.IO Imports System.Diagnostics Imports System.Threading Public Class upload : Implements IHttpHandler Public currentTime As System.DateTime Public Sub ProcessRequest(ByVal context As HttpContext) Implements IHttpHandler.ProcessRequest currentTime = System.DateTime.Now If (Not context.Request.Files("Filedata") Is Nothing) Then Dim file As HttpPostedFile : file = context.Request.Files("Filedata") Dim targetDirectory As String : targetDirectory = HttpContext.Current.Server.MapPath(context.Request("folder")) Dim targetFilePath As String : targetFilePath = Path.Combine(targetDirectory, currentTime.Month & currentTime.Day & currentTime.Year & "_" & file.FileName) Dim fileNameArray As String() fileNameArray = Split(file.FileName, ".") If (System.IO.File.Exists(targetFilePath)) Then System.IO.File.Delete(targetFilePath) End If file.SaveAs(targetFilePath) Select Case fileNameArray(UBound(fileNameArray)) Case "avi", "mov", "wmv" Dim fileargs As String = fileargs = "-y -i " & currentTime.Month & currentTime.Day & currentTime.Year & "_" & file.FileName & " -ab 96k -vcodec libx264 -vpre normal -level 41 " fileargs += "-crf 25 -bufsize 20000k -maxrate 25000k -g 250 -r 20 -s 900x506 -coder 1 -flags +loop " fileargs += "-cmp +chroma -partitions +parti4x4+partp8x8+partb8x8 -subq 7 -me_range 16 -keyint_min 25 " fileargs += "-sc_threshold 40 -i_qfactor 0.71 -rc_eq 'blurCplx^(1-qComp)' -bf 16 -b_strategy 1 -bidir_refine 1 " fileargs += "-refs 6 -deblockalpha 0 -deblockbeta 0 -f mp4 " & currentTime.Month & currentTime.Day & currentTime.Year & "_" & file.FileName & ".mp4" Dim proc As New Diagnostics.Process() proc.StartInfo.FileName "ffmpeg.exe" proc.StartInfo.Arguments = fileargs proc.StartInfo.UseShellExecute = False proc.StartInfo.CreateNoWindow = True proc.StartInfo.RedirectStandardOutput = True proc.StartInfo.RedirectStandardError = True AddHandler proc.OutputDataReceived, AddressOf SaveTextToFile proc.Start() SaveTextToFile2(proc.StandardError.ReadToEnd()) 
proc.WaitForExit() proc.Close() End Select Catch ex As System.IO.IOException Thread.Sleep(2000) GoTo Conversion Finally context.Response.Write("1") End Try End If End Sub Public ReadOnly Property IsReusable() As Boolean Implements IHttpHandler.IsReusable Get Return False End Get End Property Private Shared Sub SaveTextToFile(ByVal sendingProcess As Object, ByVal strData As DataReceivedEventArgs) Dim FullPath As String = "text.txt" Dim Contents As String = "" Dim objReader As StreamWriter objReader = New StreamWriter(FullPath) If Not String.IsNullOrEmpty(strData.Data) Then objReader.Write(Environment.NewLine + strData.Data) End If objReader.Close() End Sub Private Sub SaveTextToFile2(ByVal strData As String) Dim FullPath As String = "texterror.txt" Dim Contents As String = "" Dim objReader As StreamWriter objReader = New StreamWriter(FullPath) objReader.Write(Environment.NewLine + strData) objReader.Close() End Sub End Class

    Read the article

  • Streaming a webcam from Silverlight 4 (Beta)

    - by Ken Smith
The new webcam stuff in Silverlight 4 is darned cool. By exposing it as a brush, it allows scenarios that are way beyond anything that Flash has. At the same time, accessing the webcam locally seems like it's only half the story. Nobody buys a webcam so they can take pictures of themselves and make funny faces out of them. They buy a webcam because they want other people to see the resulting video stream, i.e., they want to stream that video out to the Internet, à la Skype or any of the dozens of other video chat sites/applications. And so far, I haven't figured out how to do that with Silverlight. It turns out that it's pretty simple to get a hold of the raw (Format32bppArgb formatted) bytestream, as demonstrated here. But unless we want to transmit that raw bytestream to a server (which would chew up way too much bandwidth), we need to encode that in some fashion. And that's more complicated. MS has implemented several codecs in Silverlight, but so far as I can tell, they're all focused on decoding a video stream, not encoding it in the first place. And that's apart from the fact that I can't figure out how to get direct access to, say, the H.264 codec in the first place. There are a ton of open-source codecs (for instance, in the ffmpeg project here), but they're all written in C, and they don't look easy to port to C#. Unless translating 10000+ lines of code that look like this is your idea of fun :-)

const int b_xy= h->mb2b_xy[left_xy[i]] + 3;
const int b8_xy= h->mb2b8_xy[left_xy[i]] + 1;
*(uint32_t*)h->mv_cache[list][cache_idx ]= *(uint32_t*)s->current_picture.motion_val[list][b_xy + h->b_stride*left_block[0+i*2]];
*(uint32_t*)h->mv_cache[list][cache_idx+8]= *(uint32_t*)s->current_picture.motion_val[list][b_xy + h->b_stride*left_block[1+i*2]];
h->ref_cache[list][cache_idx ]= s->current_picture.ref_index[list][b8_xy + h->b8_stride*(left_block[0+i*2]>>1)];
h->ref_cache[list][cache_idx+8]= s->current_picture.ref_index[list][b8_xy + h->b8_stride*(left_block[1+i*2]>>1)];

The mooncodecs folder within the Mono project (here) has several audio codecs in C# (ADPCM and Ogg Vorbis), and one video codec (Dirac), but they all seem to implement just the decode portion of their respective formats, as do the Java implementations from which they were ported. I found a C# codec for Ogg Theora (csTheora, http://www.wreckedgames.com/forum/index.php?topic=1053.0), but again, it's decode-only, as is the jheora codec on which it's based. Of course, it would presumably be easier to port a codec from Java than from C or C++, but the only Java video codecs that I found were decode-only (such as jheora, or jirac). So I'm kinda back at square one. It looks like our options for hooking up a webcam (or microphone) through Silverlight to the Internet are: (1) Wait for Microsoft to provide some guidance on this; (2) Spend the brain cycles porting one of the C or C++ codecs over to Silverlight-compatible C#; (3) Send the raw, uncompressed bytestream up to a server (or perhaps compressed slightly with something like zlib), and then encode it server-side; or (4) Wait for someone smarter than me to figure this out and provide a solution. Does anybody else have any better guidance? Have I missed something that's just blindingly obvious to everyone else? (For instance, does Silverlight 4 somewhere have some classes I've missed that take care of this?)
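To make option (3) above concrete, here is a minimal, hypothetical sketch of the Silverlight 4 capture side: a VideoSink that hands each raw ARGB frame to a callback, which could then be compressed (zlib or similar) and pushed to a server. The callback, endpoint, and upload strategy are assumptions added for illustration, not part of the original question, and OnSample runs on a background thread, so anything UI-bound (including WebClient) would need to be marshalled back to the dispatcher.

// Wiring the webcam up (typically from a button click, since device access
// must be requested from a user-initiated action):
//
//   var source = new CaptureSource();
//   source.VideoCaptureDevice = CaptureDeviceConfiguration.GetDefaultVideoCaptureDevice();
//   if (CaptureDeviceConfiguration.RequestDeviceAccess())
//   {
//       var sink = new RawFrameSink(frame => { /* compress and send to a server */ });
//       sink.CaptureSource = source;
//       source.Start();
//   }

public class RawFrameSink : VideoSink
{
    private readonly Action<byte[]> onFrame;
    private VideoFormat format;

    public RawFrameSink(Action<byte[]> onFrame)
    {
        this.onFrame = onFrame;
    }

    protected override void OnCaptureStarted() { }
    protected override void OnCaptureStopped() { }

    protected override void OnFormatChange(VideoFormat videoFormat)
    {
        // Width, height, stride and pixel format of the raw frames.
        format = videoFormat;
    }

    protected override void OnSample(long sampleTimeInHundredNanoseconds,
                                     long frameDurationInHundredNanoseconds,
                                     byte[] sampleData)
    {
        // sampleData is one uncompressed Format32bppArgb frame.
        // This is called on a background thread, not the UI thread.
        onFrame(sampleData);
    }
}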

    Read the article

  • MXMLC Ant task results in java.lang.OutOFMemoryError

    - by Mims H. Wright
    I'm making a change to a set of code for a Flex project that I didn't write and was set up to compile using ant tasks. I assume that the codebase was stable at the last checkin but I'm running into memory issues when trying to build a project using MXMLC and ant (see stack trace below). Before, I was just getting an out of memory error. I tried using a different machine and got this more verbose exception (including problems with the image fetcher). I've tried using various versions of the SDK, I've tried replacing the <mxmlc> tag with <exec executable="mxmlc"> with no luck. Here is my java version in case that has anything to do with it: » java -version java version "1.6.0_20" Java(TM) SE Runtime Environment (build 1.6.0_20-b02-279-10M3065) Java HotSpot(TM) 64-Bit Server VM (build 16.3-b01-279, mixed mode) Any help would be appreciated. Thanks! Buildfile: build.xml compileSWF: [echo] Compiling main.swf... [mxmlc] Loading configuration file /Applications/Adobe Flash Builder 4 Plug-in/sdks/4.0.0beta2/frameworks/flex-config.xml [mxmlc] Exception in thread "Image Fetcher 0" java.lang.OutOfMemoryError: Java heap space [mxmlc] at java.awt.image.PixelGrabber.setDimensions(PixelGrabber.java:360) [mxmlc] at sun.awt.image.ImageDecoder.setDimensions(ImageDecoder.java:62) [mxmlc] at sun.awt.image.JPEGImageDecoder.sendHeaderInfo(JPEGImageDecoder.java:71) [mxmlc] at sun.awt.image.JPEGImageDecoder.readImage(Native Method) [mxmlc] at sun.awt.image.JPEGImageDecoder.produceImage(JPEGImageDecoder.java:119) [mxmlc] at sun.awt.image.InputStreamImageSource.doFetch(InputStreamImageSource.java:246) [mxmlc] at sun.awt.image.ImageFetcher.fetchloop(ImageFetcher.java:172) [mxmlc] at sun.awt.image.ImageFetcher.run(ImageFetcher.java:136) [mxmlc] /src/com/amtrak/components/map/MapAsset.mxml: Error: exception during transcoding: Failed to grab pixels for image /src/assets/embed_assets/images/zoomed_map_wide.jpg [mxmlc] [mxmlc] /src/com/amtrak/components/map/MapAsset.mxml: Error: Unable to transcode /assets/embed_assets/images/zoomed_map_wide.jpg. 
[mxmlc] [mxmlc] Error: Java heap space [mxmlc] [mxmlc] java.lang.OutOfMemoryError: Java heap space [mxmlc] at java.util.ArrayList.<init>(ArrayList.java:112) [mxmlc] at macromedia.asc.util.ObjectList.<init>(ObjectList.java:30) [mxmlc] at macromedia.asc.parser.ArgumentListNode.<init>(ArgumentListNode.java:30) [mxmlc] at macromedia.asc.parser.NodeFactory.argumentList(NodeFactory.java:116) [mxmlc] at macromedia.asc.parser.NodeFactory.argumentList(NodeFactory.java:97) [mxmlc] at flex2.compiler.mxml.ImplementationGenerator.generateBinding(ImplementationGenerator.java:563) [mxmlc] at flex2.compiler.mxml.ImplementationGenerator.generateBindingsSetupFunction(ImplementationGenerator.java:864) [mxmlc] at flex2.compiler.mxml.ImplementationGenerator.generateBindingsSetup(ImplementationGenerator.java:813) [mxmlc] at flex2.compiler.mxml.ImplementationGenerator.generateInitializerSupportDefs(ImplementationGenerator.java:1813) [mxmlc] at flex2.compiler.mxml.ImplementationGenerator.generateClassDefinition(ImplementationGenerator.java:1005) [mxmlc] at flex2.compiler.mxml.ImplementationGenerator.<init>(ImplementationGenerator.java:201) [mxmlc] at flex2.compiler.mxml.ImplementationCompiler.generateImplementationAST(ImplementationCompiler.java:498) [mxmlc] at flex2.compiler.mxml.ImplementationCompiler.parse1(ImplementationCompiler.java:196) [mxmlc] at flex2.compiler.mxml.MxmlCompiler.parse1(MxmlCompiler.java:168) [mxmlc] at flex2.compiler.CompilerAPI.parse1(CompilerAPI.java:2851) [mxmlc] at flex2.compiler.CompilerAPI.parse1(CompilerAPI.java:2804) [mxmlc] at flex2.compiler.CompilerAPI.batch2(CompilerAPI.java:446) [mxmlc] at flex2.compiler.CompilerAPI.batch(CompilerAPI.java:1274) [mxmlc] at flex2.compiler.CompilerAPI.compile(CompilerAPI.java:1488) [mxmlc] at flex2.compiler.CompilerAPI.compile(CompilerAPI.java:1375) [mxmlc] at flex2.tools.Mxmlc.mxmlc(Mxmlc.java:282) [mxmlc] at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) [mxmlc] at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) [mxmlc] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) [mxmlc] at java.lang.reflect.Method.invoke(Method.java:597) [mxmlc] at flex.ant.FlexTask.executeInProcess(FlexTask.java:280) [mxmlc] at flex.ant.FlexTask.execute(FlexTask.java:225) [mxmlc] at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:288) [mxmlc] at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) [mxmlc] at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) [mxmlc] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) [mxmlc] at java.lang.reflect.Method.invoke(Method.java:597) BUILD FAILED /src/build.xml:49: mxmlc task failed

    Read the article

  • ADO.NET (WCF) Data Services Query Interceptor Hangs IIS

    - by PreMagination
    I have an ADO.NET Data Service that's supposed to provide read-only access to a somewhat complex database. Logically I have table-per-type (TPT) inheritance in my data model but the EDM doesn't implement inheritance. (Limitation of EF and navigation properties on derived types. STILL not fixed in EF4!) I can query my EDM directly (using a separate project) using a copy of the query I'm trying to run against the web service, results are returned within 10 seconds. Disabling the query interceptors I'm able to make the same query against the web service, results are returned similarly quickly. I can enable some of the query interceptors and the results are returned slowly, up to a minute or so later. Alternatively, I can enable all the query interceptors, expand less of the properties on the main object I'm querying, and results are returned in a similar period of time. (I've increased some of the timeout periods) Up til this point Sql Profiler indicates the slow-down is the database. (That's a post for a different day) But when I enable all my query interceptors and expand all the properties I'd like to have the IIS worker process pegs the CPU for 20 minutes and a query is never even made against the database. This implies to me that yes, my implementation probably sucks but regardless the Data Services "tier" is having an issue it shouldn't. WCF tracing didn't reveal anything interesting to my untrained eye. Details: Data model: Agent-Person-Student Student has a collection of referrals Students and referrals are private, queries against the web service should only return "your" students and referrals. This means Person and Agent need to be filtered too. Other entities (Agent-Organization-School) can be accessed by anyone who has authenticated. The existing security model is poorly suited to perform this type of filtering for this type of data access, the query interceptors are complicated and cause EF to generate some entertaining sql queries. Sample Interceptor [QueryInterceptor("Agents")] public Expression<Func<Agent, Boolean>> OnQueryAgents() { //Agent is a Person(1), Educator(2), Student(3), or Other Person(13); allow if scope permissions exist return ag => (ag.AgentType.AgentTypeId == 1 || ag.AgentType.AgentTypeId == 2 || ag.AgentType.AgentTypeId == 3 || ag.AgentType.AgentTypeId == 13) && ag.Person.OrganizationPersons.Count<OrganizationPerson>(op => op.Organization.ScopePermissions.Any<ScopePermission> (p => p.ApplicationRoleAccount.Account.UserName == HttpContext.Current.User.Identity.Name && p.ApplicationRoleAccount.Application.ApplicationId == 124) || op.Organization.HierarchyDescendents.Any<OrganizationsHierarchy>(oh => oh.AncestorOrganization.ScopePermissions.Any<ScopePermission> (p => p.ApplicationRoleAccount.Account.UserName == HttpContext.Current.User.Identity.Name && p.ApplicationRoleAccount.Application.ApplicationId == 124))) > 0; } The query interceptors for Person, Student, Referral are all very similar, ie they traverse multiple same/similar tables to look for ScopePermissions as above. 
    Sample Query:

    var referrals = (from r in service.Referrals
                         .Expand("Organization/ParentOrganization")
                         .Expand("Educator/Person/Agent")
                         .Expand("Student/Person/Agent")
                         .Expand("Student")
                         .Expand("Grade")
                         .Expand("ProblemBehavior")
                         .Expand("Location")
                         .Expand("Motivation")
                         .Expand("AdminDecision")
                         .Expand("OthersInvolved")
                     where r.DateCreated >= coupledays && r.DateDeleted == null
                     select r);

    Any suggestions or tips would be greatly appreciated, for fixing my current implementation or for developing a new one, with the caveat that the database can't be changed and that ultimately I need to expose a large portion of the database via a web service that limits access to the data each party is authorized for, for the purpose of data integration with multiple outside parties. THANK YOU!!!

    Read the article

  • JSF: Cannot catch ViewExpiredException

    - by ifischer
    I'm developing a JSF 2.0 application on Glassfish v3 and I'm trying to handle the ViewExpiredException. But whatever I do, I always get a Glassfish error report instead of my own error page. To simulate the occurrence of the VEE, I inserted the following function into my backing bean, which fires the VEE. I'm triggering this function from my JSF page through a commandLink. The code:

    @Named
    public class PersonHome {
        (...)
        public void throwVEE() {
            throw new ViewExpiredException();
        }
    }

    At first I tried it by simply adding an error-page to my web.xml:

    <error-page>
        <exception-type>javax.faces.application.ViewExpiredException</exception-type>
        <location>/error.xhtml</location>
    </error-page>

    But this doesn't work: I'm not redirected to my error page but shown the Glassfish error page, which is an HTTP Status 500 page with the following content:

    description: The server encountered an internal error () that prevented it from fulfilling this request.
    exception: javax.servlet.ServletException: javax.faces.application.ViewExpiredException
    root cause: javax.faces.el.EvaluationException: javax.faces.application.ViewExpiredException
    root cause: javax.faces.application.ViewExpiredException

    The next thing I tried was to write an ExceptionHandlerFactory and a CustomExceptionHandler, as described in JavaServer Faces 2.0 - The Complete Reference. So I inserted the following tag into faces-config.xml:

    <factory>
        <exception-handler-factory>
            exceptions.ExceptionHandlerFactory
        </exception-handler-factory>
    </factory>

    And added these classes. The factory:

    package exceptions;

    import javax.faces.context.ExceptionHandler;

    public class ExceptionHandlerFactory extends javax.faces.context.ExceptionHandlerFactory {

        private javax.faces.context.ExceptionHandlerFactory parent;

        public ExceptionHandlerFactory(javax.faces.context.ExceptionHandlerFactory parent) {
            this.parent = parent;
        }

        @Override
        public ExceptionHandler getExceptionHandler() {
            ExceptionHandler result = parent.getExceptionHandler();
            result = new CustomExceptionHandler(result);
            return result;
        }
    }

    The custom exception handler:

    package exceptions;

    import java.util.Iterator;
    import javax.faces.FacesException;
    import javax.faces.application.NavigationHandler;
    import javax.faces.application.ViewExpiredException;
    import javax.faces.context.ExceptionHandler;
    import javax.faces.context.ExceptionHandlerWrapper;
    import javax.faces.context.FacesContext;
    import javax.faces.event.ExceptionQueuedEvent;
    import javax.faces.event.ExceptionQueuedEventContext;

    class CustomExceptionHandler extends ExceptionHandlerWrapper {

        private ExceptionHandler parent;

        public CustomExceptionHandler(ExceptionHandler parent) {
            this.parent = parent;
        }

        @Override
        public ExceptionHandler getWrapped() {
            return this.parent;
        }

        @Override
        public void handle() throws FacesException {
            for (Iterator<ExceptionQueuedEvent> i = getUnhandledExceptionQueuedEvents().iterator(); i.hasNext();) {
                ExceptionQueuedEvent event = i.next();
                System.out.println("Iterating over ExceptionQueuedEvents. Current:" + event.toString());
                ExceptionQueuedEventContext context = (ExceptionQueuedEventContext) event.getSource();
                Throwable t = context.getException();
                if (t instanceof ViewExpiredException) {
                    ViewExpiredException vee = (ViewExpiredException) t;
                    FacesContext fc = FacesContext.getCurrentInstance();
                    NavigationHandler nav = fc.getApplication().getNavigationHandler();
                    try {
                        // Push some useful stuff to the flash scope for use in the page
                        fc.getExternalContext().getFlash().put("expiredViewId", vee.getViewId());
                        nav.handleNavigation(fc, null, "/login?faces-redirect=true");
                        fc.renderResponse();
                    } finally {
                        i.remove();
                    }
                }
            }
            // At this point, the queue will not contain any ViewExpiredEvents.
            // Therefore, let the parent handle them.
            getWrapped().handle();
        }
    }

    But STILL I'm NOT redirected to my error page - I'm getting the same HTTP 500 error as above. What am I doing wrong? What could be missing in my implementation so that the exception is handled correctly?
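    One thing that may be worth checking here (an assumption on my part, not something established in the question): because the exception is thrown from an action method, it appears to reach the handler wrapped in an EvaluationException/FacesException, which is exactly what the "root cause" chain on the 500 page suggests, so the "t instanceof ViewExpiredException" test on the outer exception may never match. A minimal sketch of unwrapping it inside handle(), using the getRootCause helper that CustomExceptionHandler inherits from ExceptionHandlerWrapper:

    // Sketch only: unwrap the queued exception before testing its type.
    // getRootCause() is inherited from ExceptionHandlerWrapper and strips the
    // FacesException/ELException wrappers added around the original exception.
    ExceptionQueuedEventContext context = (ExceptionQueuedEventContext) event.getSource();
    Throwable t = getRootCause(context.getException());
    if (t instanceof ViewExpiredException) {
        ViewExpiredException vee = (ViewExpiredException) t;
        // ... same flash/navigation handling as in the handler above ...
    }

    If the unwrapped cause does turn out to be the ViewExpiredException, the rest of the existing handler should work unchanged.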

    Read the article

  • Cisco PIX 8.0.4, static address mapping not working?

    - by Bill
    upgrading a working Pix running 5.3.1 to 8.0.4. The memory/IOS upgrade went fine, but the 8.0.4 configuration is not quite working 100%. The 5.3.1 config on which it was based is working fine. Basically, I have three networks (inside, outside, dmz) with some addresses on the dmz statically mapped to outside addresses. The problem seems to be that those addresses can't send or receive traffic from the outside (Internet.) Stuff on the DMZ that does not have a static mapping seems to work fine. So, basically: Inside - outside: works Inside - DMZ: works DMZ - inside: works, where the rules allow it DMZ (non-static) - outside: works But: DMZ (static) - outside: fails Outside - DMZ: fails (So, say, udp 1194 traffic to .102, http to .104) I suspect there's something I'm missing with the nat/global section of the config, but can't for the life of me figure out what. Help, anyone? The complete configuration is below. Thanks for any thoughts! ! PIX Version 8.0(4) ! hostname firewall domain-name asasdkpaskdspakdpoak.com enable password xxxxxxxx encrypted passwd xxxxxxxx encrypted names ! interface Ethernet0 nameif outside security-level 0 ip address XX.XX.XX.100 255.255.255.224 ! interface Ethernet1 nameif inside security-level 100 ip address 192.168.68.1 255.255.255.0 ! interface Ethernet2 nameif dmz security-level 10 ip address 192.168.69.1 255.255.255.0 ! boot system flash:/image.bin ftp mode passive dns server-group DefaultDNS domain-name asasdkpaskdspakdpoak.com access-list acl_out extended permit udp any host XX.XX.XX.102 eq 1194 access-list acl_out extended permit tcp any host XX.XX.XX.104 eq www access-list acl_dmz extended permit tcp host 192.168.69.10 host 192.168.68.17 eq ssh access-list acl_dmz extended permit tcp 10.71.83.0 255.255.255.0 192.168.68.0 255.255.255.0 eq ssh access-list acl_dmz extended permit tcp 10.71.83.0 255.255.255.0 192.168.68.0 255.255.255.0 eq 5901 access-list acl_dmz extended permit udp host 192.168.69.103 any eq ntp access-list acl_dmz extended permit udp host 192.168.69.103 any eq domain access-list acl_dmz extended permit tcp host 192.168.69.103 any eq www access-list acl_dmz extended permit tcp host 192.168.69.100 host 192.168.68.101 eq 3306 access-list acl_dmz extended permit tcp host 192.168.69.100 host 192.168.68.102 eq 3306 access-list acl_dmz extended permit tcp host 192.168.69.101 host 192.168.68.101 eq 3306 access-list acl_dmz extended permit tcp host 192.168.69.101 host 192.168.68.102 eq 3306 access-list acl_dmz extended permit tcp 10.71.83.0 255.255.255.0 host 192.168.68.101 eq 3306 access-list acl_dmz extended permit tcp 10.71.83.0 255.255.255.0 host 192.168.68.102 eq 3306 access-list acl_dmz extended permit tcp host 192.168.69.104 host 192.168.68.101 eq 3306 access-list acl_dmz extended permit tcp host 192.168.69.104 host 192.168.68.102 eq 3306 access-list acl_dmz extended permit tcp 10.71.83.0 255.255.255.0 host 192.168.69.104 eq 8080 access-list acl_dmz extended permit tcp 10.71.83.0 255.255.255.0 host 192.168.69.104 eq 8099 access-list acl_dmz extended permit tcp host 192.168.69.105 any eq www access-list acl_dmz extended permit tcp host 192.168.69.103 any eq smtp access-list acl_dmz extended permit tcp host 192.168.69.105 host 192.168.68.103 eq ssh access-list acl_dmz extended permit tcp host 192.168.69.104 any eq www access-list acl_dmz extended permit tcp host 192.168.69.100 any eq www access-list acl_dmz extended permit tcp host 192.168.69.100 any eq https pager lines 24 mtu outside 1500 mtu inside 1500 mtu dmz 1500 icmp unreachable 
rate-limit 1 burst-size 1 no asdm history enable arp timeout 14400 global (outside) 1 interface nat (inside) 1 0.0.0.0 0.0.0.0 nat (dmz) 1 0.0.0.0 0.0.0.0 static (dmz,outside) XX.XX.XX.103 192.168.69.11 netmask 255.255.255.255 static (inside,dmz) 192.168.68.17 192.168.68.17 netmask 255.255.255.255 static (inside,dmz) 192.168.68.100 192.168.68.100 netmask 255.255.255.255 static (inside,dmz) 192.168.68.101 192.168.68.101 netmask 255.255.255.255 static (inside,dmz) 192.168.68.102 192.168.68.102 netmask 255.255.255.255 static (inside,dmz) 192.168.68.103 192.168.68.103 netmask 255.255.255.255 static (dmz,outside) XX.XX.XX.104 192.168.69.100 netmask 255.255.255.255 static (dmz,outside) XX.XX.XX.105 192.168.69.105 netmask 255.255.255.255 static (dmz,outside) XX.XX.XX.102 192.168.69.10 netmask 255.255.255.255 access-group acl_out in interface outside access-group acl_dmz in interface dmz route outside 0.0.0.0 0.0.0.0 XX.XX.XX.97 1 route dmz 10.71.83.0 255.255.255.0 192.168.69.10 1 timeout xlate 3:00:00 timeout conn 1:00:00 half-closed 0:10:00 udp 0:02:00 icmp 0:00:02 timeout sunrpc 0:10:00 h323 0:05:00 h225 1:00:00 mgcp 0:05:00 mgcp-pat 0:05:00 timeout sip 0:30:00 sip_media 0:02:00 sip-invite 0:03:00 sip-disconnect 0:02:00 timeout sip-provisional-media 0:02:00 uauth 0:05:00 absolute dynamic-access-policy-record DfltAccessPolicy no snmp-server location no snmp-server contact snmp-server enable traps snmp authentication linkup linkdown coldstart crypto ipsec security-association lifetime seconds 28800 crypto ipsec security-association lifetime kilobytes 4608000 telnet 192.168.68.17 255.255.255.255 inside telnet timeout 5 ssh timeout 5 console timeout 0 threat-detection basic-threat threat-detection statistics access-list no threat-detection statistics tcp-intercept ! class-map inspection_default match default-inspection-traffic ! ! policy-map type inspect dns preset_dns_map parameters message-length maximum 512 policy-map global_policy class inspection_default inspect dns preset_dns_map inspect ftp inspect h323 h225 inspect h323 ras inspect netbios inspect rsh inspect rtsp inspect skinny inspect esmtp inspect sqlnet inspect sunrpc inspect tftp inspect sip inspect xdmcp ! service-policy global_policy global prompt hostname context Cryptochecksum:2d1bb2dee2d7a3e45db63a489102d7de

    Read the article

  • Simple animation using C#/Windows Forms

    - by clintp
    I need to knock out a quick animation in C#/Windows Forms for a Halloween display. Just some 2D shapes moving about on a solid background. Since this is just a quick one-off project I really don't want to install and learn an entire new set of tools for this. (DirectX dev kits, Silverlight, Flash, etc..) I also have to install this on multiple computers so anything beyond the basic .Net framework (2.0) would be a pain in the arse. For tools I've got VS2k8, 25 years of development experience, a wheelbarrow, holocaust cloak, and about 2 days to knock this out. I haven't done animation since using assembler on my Atari 130XE (hooray for page flipping and player/missile graphics!) Advice? Here's some of the things I'd like to know: I can draw on any empty widget (like a panel) by fiddling with it's OnPaint handler, right? That's how I'd draw a custom widget. Is there a better technique than this? Is there a page-flipping technique for this kind of thing in Windows Forms? I'm not looking for a high frame rate, just as little flicker/drawing as necessary. Thanks. Post Mortem Edit ... "a couple of coding days later" Well, the project is done. The links below came in handy although a couple of them were 404. (I wish SO would allow more than one reply to be marked "correct"). The biggest problem I had to overcome was flickering, and a persistent bug when I tried to draw on the form directly. Using the OnPaint event for the Form: bad idea. I never got that to work; lots of mysterious errors (stack overflows, or ArgumentNullExceptions). I wound up using a panel sized to fill the form and that worked fine. Using the OnPaint method is slow anyway. Somewhere online I read that building the PaintEventArgs was slow, and they weren't kidding. Lots of flickering went away when I abandoned this. Skip the OnPaint/Invalidate() and just paint it yourself. Setting all of the "double buffering" options on the form still left some flicker that had to be fixed. (And I found conflicting docs that said "set them on the control" and "set them on the form". Well controls don't have a .SetStyle() method.) I haven't tested without them, so they might be doing something (this is the form): this.SetStyle(ControlStyles.UserPaint, true); this.SetStyle(ControlStyles.OptimizedDoubleBuffer, true); this.SetStyle(ControlStyles.AllPaintingInWmPaint, true); So the workhorse of the code wound up looking like (pf is the panel control): void PaintPlayField() { Bitmap bufl = new Bitmap(pf.Width, pf.Height); using (Graphics g = Graphics.FromImage(bufl)) { g.FillRectangle(Brushes.Black, new Rectangle(0, 0, pf.Width, pf.Height)); DrawItems(g); DrawMoreItems(g); pf.CreateGraphics().DrawImageUnscaled(bufl, 0, 0); } } And I just called PaintPlayField from the inside of my Timer loop. No flicker at all.

    Read the article

  • RAILS: authlogic authentication / session error, "session contains objects whose class definition isn't available"

    - by Surya
    Session contains objects whose class definition isn't available. Remember to require the classes for all objects kept in the session. I am trying to integrate http://github.com/binarylogic/authlogic for authentication into my rails application. I followed all the steps mentioned in the documentation. Now I seem to be getting this error when I hit a controller. Looks like I am missing something obvious. Stacktrace: /Library/Ruby/Gems/1.8/gems/actionpack-2.3.5/lib/action_controller/session/abstract_store.rb:77:in `stale_session_check!' /Library/Ruby/Gems/1.8/gems/actionpack-2.3.5/lib/action_controller/session/abstract_store.rb:61:in `load!' /Library/Ruby/Gems/1.8/gems/actionpack-2.3.5/lib/action_controller/session/abstract_store.rb:28:in `[]' /Library/Ruby/Gems/1.8/gems/authlogic-2.1.3/lib/authlogic/session/session.rb:48:in `session_credentials' /Library/Ruby/Gems/1.8/gems/authlogic-2.1.3/lib/authlogic/session/session.rb:33:in `persist_by_session' /Library/Ruby/Gems/1.8/gems/activesupport-2.3.5/lib/active_support/callbacks.rb:178:in `send' /Library/Ruby/Gems/1.8/gems/activesupport-2.3.5/lib/active_support/callbacks.rb:178:in `evaluate_method' /Library/Ruby/Gems/1.8/gems/activesupport-2.3.5/lib/active_support/callbacks.rb:166:in `call' /Library/Ruby/Gems/1.8/gems/activesupport-2.3.5/lib/active_support/callbacks.rb:93:in `run' /Library/Ruby/Gems/1.8/gems/activesupport-2.3.5/lib/active_support/callbacks.rb:92:in `each' /Library/Ruby/Gems/1.8/gems/activesupport-2.3.5/lib/active_support/callbacks.rb:92:in `send' /Library/Ruby/Gems/1.8/gems/activesupport-2.3.5/lib/active_support/callbacks.rb:92:in `run' /Library/Ruby/Gems/1.8/gems/activesupport-2.3.5/lib/active_support/callbacks.rb:276:in `run_callbacks' /Library/Ruby/Gems/1.8/gems/authlogic-2.1.3/lib/authlogic/session/callbacks.rb:79:in `persist' /Library/Ruby/Gems/1.8/gems/authlogic-2.1.3/lib/authlogic/session/persistence.rb:55:in `persisting?' 
/Library/Ruby/Gems/1.8/gems/authlogic-2.1.3/lib/authlogic/session/persistence.rb:39:in `find' /Users/suryagaddipati/myprojects/groceryplanner/app/controllers/application_controller.rb:12:in `current_user_session' /Users/suryagaddipati/myprojects/groceryplanner/app/controllers/application_controller.rb:17:in `current_user' /Users/suryagaddipati/myprojects/groceryplanner/app/controllers/application_controller.rb:30:in `require_no_user' /Library/Ruby/Gems/1.8/gems/activesupport-2.3.5/lib/active_support/callbacks.rb:178:in `send' /Library/Ruby/Gems/1.8/gems/activesupport-2.3.5/lib/active_support/callbacks.rb:178:in `evaluate_method' /Library/Ruby/Gems/1.8/gems/activesupport-2.3.5/lib/active_support/callbacks.rb:166:in `call' /Library/Ruby/Gems/1.8/gems/actionpack-2.3.5/lib/action_controller/filters.rb:225:in `call' /Library/Ruby/Gems/1.8/gems/actionpack-2.3.5/lib/action_controller/filters.rb:629:in `run_before_filters' /Library/Ruby/Gems/1.8/gems/actionpack-2.3.5/lib/action_controller/filters.rb:615:in `call_filters' /Library/Ruby/Gems/1.8/gems/actionpack-2.3.5/lib/action_controller/filters.rb:610:in `perform_action_without_benchmark' /Library/Ruby/Gems/1.8/gems/actionpack-2.3.5/lib/action_controller/benchmarking.rb:68:in `perform_action_without_rescue' /Library/Ruby/Gems/1.8/gems/activesupport-2.3.5/lib/active_support/core_ext/benchmark.rb:17:in `ms' /Library/Ruby/Gems/1.8/gems/activesupport-2.3.5/lib/active_support/core_ext/benchmark.rb:10:in `realtime' /Library/Ruby/Gems/1.8/gems/activesupport-2.3.5/lib/active_support/core_ext/benchmark.rb:17:in `ms' /Library/Ruby/Gems/1.8/gems/actionpack-2.3.5/lib/action_controller/benchmarking.rb:68:in `perform_action_without_rescue' /Library/Ruby/Gems/1.8/gems/actionpack-2.3.5/lib/action_controller/rescue.rb:160:in `perform_action_without_flash' /Library/Ruby/Gems/1.8/gems/actionpack-2.3.5/lib/action_controller/flash.rb:146:in `perform_action' /Library/Ruby/Gems/1.8/gems/actionpack-2.3.5/lib/action_controller/base.rb:532:in `send' /Library/Ruby/Gems/1.8/gems/actionpack-2.3.5/lib/action_controller/base.rb:532:in `process_without_filters' /Library/Ruby/Gems/1.8/gems/actionpack-2.3.5/lib/action_controller/filters.rb:606:in `process' /Library/Ruby/Gems/1.8/gems/actionpack-2.3.5/lib/action_controller/base.rb:391:in `process' /Library/Ruby/Gems/1.8/gems/actionpack-2.3.5/lib/action_controller/base.rb:386:in `call' /Library/Ruby/Gems/1.8/gems/actionpack-2.3.5/lib/action_controller/routing/route_set.rb:437:in `call' /Library/Ruby/Gems/1.8/gems/actionpack-2.3.5/lib/action_controller/dispatcher.rb:87:in `dispatch' /Library/Ruby/Gems/1.8/gems/actionpack-2.3.5/lib/action_controller/dispatcher.rb:121:in `_call' /Library/Ruby/Gems/1.8/gems/actionpack-2.3.5/lib/action_controller/dispatcher.rb:130:in `build_middleware_stack' /Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/query_cache.rb:29:in `call' /Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/query_cache.rb:29:in `call' /Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract/query_cache.rb:34:in `cache' /Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/query_cache.rb:9:in `cache' /Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/query_cache.rb:28:in `call' /Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract/connection_pool.rb:361:in `call' /Library/Ruby/Gems/1.8/gems/actionpack-2.3.5/lib/action_controller/string_coercion.rb:25:in `call' 
/Users/suryagaddipati/.gem/ruby/1.8/gems/rack-1.0.1/lib/rack/head.rb:9:in `call'

    Read the article

  • Android webview doesn't play mp4 video in the same page

    - by user1668105
    I'm trying to show a local HTML file that contains code for playing a video, and display that HTML file in an Android WebView. I've used the following code for playing the video:

    WebViewLoadVideoActivity.java:

    // DECLARE webview variable outside of onCreate function so we can access it in other functions (menu)
    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);
        WebView webView = (WebView) findViewById(R.id.webView1);
        WebSettings webSettings = webView.getSettings(); // Fetches the WebSettings import
        WebViewClient webViewClient = new WebViewClient();
        webView.setWebViewClient(webViewClient);
        // Enabling mp4
        webSettings.setPluginsEnabled(true); // Allows plugins to run which are normally disabled in webView
        webView.getSettings().setBuiltInZoomControls(true); // Allows the Android built in zoom control
        webView.getSettings().setSaveFormData(true);
        webView.getSettings().setLoadsImagesAutomatically(true);
        webView.getSettings().setPluginsEnabled(true);
        webView.getSettings().setLoadsImagesAutomatically(true);
        webView.getSettings().setSupportMultipleWindows(true);
        webView.getSettings().setPluginsEnabled(true);
        webView.getSettings().setLightTouchEnabled(true);
        webView.getSettings().setAllowFileAccess(true); // To allow file downloads/streams such as mp4, mpeg, and 3gp files
        webView.getSettings().setJavaScriptEnabled(true); // Enables HTML Javascript to run in webview
        webView.getSettings().setJavaScriptCanOpenWindowsAutomatically(true);
        webView.getSettings().setSupportZoom(true); // Support the zoom feature
        webView.getSettings().setSavePassword(true); // Allow users to save passwords in forms
        webView.setWebViewClient(new WebViewClient() { // Opens web links clicked by user in the webview
            @Override
            public void onReceivedError(WebView view, int errorCode, String description, String failingUrl) {
                // Handle the error
            }

            @Override
            public boolean shouldOverrideUrlLoading(WebView view, String url) {
                view.loadUrl(url);
                return true;
            }
        });
        webView.loadUrl("file:///android_asset/test.html"); // test.html file from assets folder
        // ... Rest of activity code ...

    test.html:

    <!DOCTYPE html>
    <html>
    <body>
        <video width="320" height="240" controls="controls">
            <source src="http://www.w3schools.com/html/movie.mp4" type="video/mp4" />
            Your browser does not support the video tag.
        </video>
    </body>
    </html>

    Problem area: the Android WebView (and the Android default browser) shows the video content in a separate video view when the play button is clicked. My requirement is that the video should open inline in the same HTML page, so the user can navigate to other parts of the page while the video is playing or buffering.

    Research area: I tried many other approaches, like the HTML5 video tag, the HTML embed tag, and the HTML object tag, as well as other video player integrations I've checked so far (Flare Video, jPlayer), but none worked for my requirement.

    Please suggest any way that suits this requirement. It is very simple: I want to play the video inline in an HTML file inside Android's WebView widget. Thanks in advance.
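    A possible direction (my assumption, not something the post above confirms): HTML5 video inside a WebView generally needs a WebChromeClient attached before the video element will render and play in-page, and on Android 3.0 and later the activity usually also needs hardware acceleration enabled (android:hardwareAccelerated="true" in the manifest). A minimal sketch of the extra setup, reusing the webView1 id and test.html page from the post:

    // Sketch only: additions to the existing onCreate(), assuming the same webView1 id.
    // Requires: import android.webkit.WebChromeClient;
    // The WebChromeClient is what lets the WebView handle HTML5 <video> rendering;
    // without one, many devices show a blank player or hand playback to an external view.
    WebView webView = (WebView) findViewById(R.id.webView1);
    webView.getSettings().setJavaScriptEnabled(true);
    webView.setWebChromeClient(new WebChromeClient());
    webView.loadUrl("file:///android_asset/test.html");

    Whether the video then plays inline or still goes full screen depends on the device's WebView implementation, so treat this as a starting point rather than a guaranteed fix.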

    Read the article

< Previous Page | 408 409 410 411 412 413 414 415 416 417 418 419  | Next Page >