Search Results

Search found 10921 results on 437 pages for 'latex environment'.


  • Meet Matthijs, Dutch Inside Sales Representative for Oracle Direct

    - by Maria Sandu
    Today we would like to share some information about the Dutch Core Technology team in Malaga. Matthijs is one of the team members who decided to relocate from the Netherlands to Malaga to join Oracle Direct two years ago.

    Matthijs: “For the past two years I have been working as an Oracle Direct Core Technology Inside Sales representative for Named Accounts in the Netherlands, based in Malaga, Spain. In my case, working for the Dutch OD Core Technology team means that I am responsible for the account management of larger companies in the Travel & Transportation and the Manufacturing, Retail & Distribution sectors. I work together with the Oracle Field Account Managers and our Field Sales Management in the Netherlands, where I am often the main point of contact for customers. This means that I deal with their requests and manage their various issues, providing solutions and suggestions based on the Oracle Core Technology portfolio. I work on interesting projects with end customers, making financial proposals and building business cases. It is a very interesting sales environment, and over the last two years I have improved my skills substantially. This month I will finish my Inside Sales career in Malaga to move to a position within Field Sales in the Netherlands. Oracle Direct has proven to be a great stepping stone for my career.

    Boost your personal development
    One of the reasons for joining Oracle was to boost my personal and career development. You can choose from various trainings all over Europe which enable you to reach both your personal and professional goals. Furthermore, you can decide your own career path and plan the steps necessary to achieve your goal. Many people aim to grow into Field Sales in their native countries, Business Development or Sales Management, but there are many possibilities once you decide to join Oracle. Overall, working at Oracle means working for an international company and one of the worldwide leaders in enterprise hardware and software. Here you get all the tools necessary to develop yourself personally and professionally. Another great advantage of working for Oracle Direct is working from our office in Malaga, Southern Spain, where we have over 400 employees from many countries across EMEA. It is a truly international environment! Working and living in Spain gives you an excellent opportunity to learn Spanish and of course enjoy the Spanish lifestyle, cuisine, beaches and much, much more!”

    Interview day Utrecht
    If you are inspired by the story of Matthijs and would like to explore the opportunity to join the Technology Sales team for the Dutch market in Malaga, let us know! We will organise an interview day in the Oracle office in Utrecht on the 18th and 19th of September. We currently have multiple openings in the Core Technology team that focus on selling our Database portfolio in the Dutch market. We are looking for native Dutch speakers with a Bachelor's degree and 2-5 years of sales experience (ideally in IT) who are willing to relocate to Malaga for at least 2 years! For more information please contact [email protected] or [email protected].

    Read the article

  • PowerShell Script To Find Where SharePoint 2007 Features Are Activated

    - by Brian T. Jackett
    Recently I posted a script to find where SharePoint 2010 features are activated. I built the original version to use the SharePoint 2010 PowerShell cmdlets, as that saved me a number of steps for filtering and gathering features at each level. If there was ever demand for a 2007 version, I could modify the script to handle that by using the object model instead of cmdlets. Just the other week a fellow SharePoint PFE, Jason Gallicchio, had a customer asking about a version for SharePoint 2007. With a little bit of work I was able to convert the script to work against SharePoint 2007.

    Solution

    Below is the converted script that works against a SharePoint 2007 farm. Note: there appears to be a bug with the 2007 version that does not give accurate results against a SharePoint 2010 farm. I ran the 2007 version against a 2010 farm and got fewer results than with my 2010 version of the script. Discussing with some fellow PFEs, I think the discrepancy may be due to sandboxed features, a new concept in SharePoint 2010. I have not had enough time to test or confirm. For the time being, only use the 2007 version of the script against SharePoint 2007 farms and the 2010 version against SharePoint 2010 farms.

    Note: this script is not optimized for medium to large farms. In my testing it took 1-3 minutes to recurse through my demo environment. This script is provided as-is with no warranty. Run this in a smaller dev / test environment first.

        function Get-SPFeatureActivated
        {
            # see full script for help info, removed for formatting
            [CmdletBinding()]
            param(
                [Parameter(position = 1, valueFromPipeline=$true)]
                [string]
                $Identity
            )#end param

            Begin
            {
                # load SharePoint assembly to access object model
                [void][System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SharePoint")

                # declare empty array to hold results. Will add custom member for Url to show where activated at on objects returned from Get-SPFeature.
                $results = @()

                $params = @{}
            }
            Process
            {
                if([string]::IsNullOrEmpty($Identity) -eq $false)
                {
                    $params = @{Identity = $Identity}
                }

                # create hashtable of farm features to lookup definition ids later
                $farm = [Microsoft.SharePoint.Administration.SPFarm]::Local

                # check farm features
                $results += ($farm.FeatureDefinitions | Where-Object {$_.Scope -eq "Farm"} |
                             Where-Object {[string]::IsNullOrEmpty($Identity) -or ($_.DisplayName -eq $Identity)} |
                             % {Add-Member -InputObject $_ -MemberType noteproperty -Name Url -Value ([string]::Empty) -PassThru} |
                             Select-Object -Property Scope, DisplayName, Id, Url)

                # check web application features
                $contentWebAppServices = $farm.services | ? {$_.typename -like "Windows SharePoint Services Web Application"}

                foreach($webApp in $contentWebAppServices.WebApplications)
                {
                    $results += ($webApp.Features | Select-Object -ExpandProperty Definition |
                                 Where-Object {[string]::IsNullOrEmpty($Identity) -or ($_.DisplayName -eq $Identity)} |
                                 % {Add-Member -InputObject $_ -MemberType noteproperty -Name Url -Value $webApp.GetResponseUri(0).AbsoluteUri -PassThru} |
                                 Select-Object -Property Scope, DisplayName, Id, Url)

                    # check site collection features in current web app
                    foreach($site in ($webApp.Sites))
                    {
                        $results += ($site.Features | Select-Object -ExpandProperty Definition |
                                     Where-Object {[string]::IsNullOrEmpty($Identity) -or ($_.DisplayName -eq $Identity)} |
                                     % {Add-Member -InputObject $_ -MemberType noteproperty -Name Url -Value $site.Url -PassThru} |
                                     Select-Object -Property Scope, DisplayName, Id, Url)

                        # check site features in current site collection
                        foreach($web in ($site.AllWebs))
                        {
                            $results += ($web.Features | Select-Object -ExpandProperty Definition |
                                         Where-Object {[string]::IsNullOrEmpty($Identity) -or ($_.DisplayName -eq $Identity)} |
                                         % {Add-Member -InputObject $_ -MemberType noteproperty -Name Url -Value $web.Url -PassThru} |
                                         Select-Object -Property Scope, DisplayName, Id, Url)

                            $web.Dispose()
                        }
                        $site.Dispose()
                    }
                }
            }
            End
            {
                $results
            }
        } #end Get-SPFeatureActivated

        Get-SPFeatureActivated

    Conclusion

    I have posted this script to the TechNet Script Repository (click here). As always, I appreciate any feedback on scripts. If anyone is motivated to run this 2007 version of the script against a SharePoint 2010 farm to see if they find any differences in the number of features reported versus what they get with the 2010 version, I'd love to hear from you.

    -Frog Out
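    For reference, a hypothetical invocation of the Get-SPFeatureActivated function above might look like the following (the feature display name is made up for the example, not taken from the original post):

        # list every activated feature in the farm, with the URL where it is activated
        Get-SPFeatureActivated | Format-Table Scope, DisplayName, Url -AutoSize

        # restrict the output to a single feature by display name (hypothetical name)
        Get-SPFeatureActivated -Identity "MyCompany.Branding" | Format-Table Scope, DisplayName, Url -AutoSize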

    Read the article

  • Apache config that uses two document roots based on whether the requested resource exists in the first [closed]

    - by mattalexx
    Background

    I have a client site that consists of a CakePHP installation and a Magento installation:

        /web/example.com/
        /web/example.com/app/                 <== CakePHP
        /web/example.com/app/webroot/         <== DocumentRoot
        /web/example.com/app/webroot/store/   <== Magento
        /web/example.com/config/              <== Site-wide config
        /web/example.com/vendors/             <== Site-wide libraries

    The server runs Apache 2.2.3.

    The problem

    The whole company has FTP access and got used to clogging up the /web/example.com/, /web/example.com/app/webroot/, and /web/example.com/app/webroot/store/ directories with their own files. Sometimes these files need HTTP access and sometimes they don't. In any case, this mess makes my job harder when it comes to maintaining the site. Code merges, tarring the live code, etc., are very complicated and usually require a bunch of filters.

    Abandoned solution

    At first, I thought I would set up a new subdomain on the same server, move all of their files there, and change their FTP chroot. But that wouldn't work, for these reasons: Firstly, I have no idea (and neither do they remember) what marketing materials they've sent out that contain URLs to certain resources they've uploaded to the server, using the main domain, and also using arbitrary subdomains that resolve to the main virtual host because it has ServerAlias *.example.com. So suddenly having them only use static.example.com isn't feasible. Secondly, the PHP scripts in their projects are potentially very non-portable. I want their files to stay in as similar an environment as the one they were built in as I can. Also, I do not want to debug their code to make it portable.

    Half-baked solution

    After some thought, I decided to find a way to section off the actual website files into another directory that they would not touch. The company's uploaded files would stay where they were. This would ensure that I didn't break any of their projects that needed HTTP access. It would look something like this:

        /web/example.com/                          <== A bunch of their files are in here
        /web/example.com/app/webroot/              <== 1st DocumentRoot; a bunch of their files are in here
        /web/example.com/app/webroot/store/        <== Some more are in here
        /web/example.com/site/                     <== New dir; contains only site files
        /web/example.com/site/app/                 <== CakePHP
        /web/example.com/site/app/webroot/         <== 2nd DocumentRoot
        /web/example.com/site/app/webroot/store/   <== Magento
        /web/example.com/site/config/              <== Site-wide config
        /web/example.com/site/vendors/             <== Site-wide libraries

    After I made this change, I would not need to pay attention to anything except the stuff within /web/example.com/site/, and my job would be a lot easier. I would be the only one changing stuff in there. So here's where the Apache magic would happen: I need an HTTP request to http://www.example.com/ to first use /web/example.com/app/webroot/ as the document root. If nothing is found (no miscellaneous uploaded company projects are found), try finding something within /web/example.com/site/app/webroot/. Another thing to keep in mind: the site might have some problems if the $_SERVER['DOCUMENT_ROOT'] variable reads /web/example.com/app/webroot/ but the actual files are within /web/example.com/site/app/webroot/. It would be better if the DOCUMENT_ROOT environment variable could be /web/example.com/site/app/webroot/ for anything served from the /web/example.com/site/app/webroot/ directory.

    Conclusion

    Is my half-baked solution possible with Apache 2.2.3? Is there a better way to solve this problem?
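    One common way to get the "serve from the first tree, fall back to the second" behaviour on Apache 2.2 is mod_rewrite with file-existence checks. The following is only a sketch of that idea against the layout above, untested here; note that it would not change $_SERVER['DOCUMENT_ROOT'], which is exactly the caveat already raised:

        <VirtualHost *:80>
            ServerName www.example.com
            ServerAlias *.example.com
            # 1st tree: the company's miscellaneous uploads
            DocumentRoot /web/example.com/app/webroot

            RewriteEngine On
            # If no real file or directory exists under the 1st tree,
            # serve the request from the 2nd tree instead.
            RewriteCond %{DOCUMENT_ROOT}%{REQUEST_URI} !-f
            RewriteCond %{DOCUMENT_ROOT}%{REQUEST_URI} !-d
            RewriteRule ^/(.*)$ /web/example.com/site/app/webroot/$1 [L]
        </VirtualHost>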

    Read the article

  • Do Great Work

    - by user12601034
    Have you ever attended an online conference and actually had a desire to attend all of it? Yesterday I attended the first day of the Great Work MBA program, sponsored by Box of Crayons and hosted by Michael Bungay Stanier. The topic of the day was “Grounding Yourself,” and the day featured five speakers on five different topics. I have to admit that I started the first session with kind of a “blech” feeling that I didn’t really want to participate, but for some reason I did. So I listened to the first session, and I was hooked. I ended up listening to all of the sessions for the day, and I had some great take-aways from the sessions – my highlights included:

    The opposite of bravery isn’t fear, it’s settling. In essence, you need to be brave in order to accomplish anything. If you’re settling, you’re not being brave, and your accomplishments will likely be lackluster.

    Bravery requires confidence and permission. You need to work at being brave by taking small wins, building them up and then taking slightly larger risks. Additionally, you need to “claim your own crown.” Nobody in the business world is going to give you permission to be a guru in X – you need to give yourself permission to become a guru in X and then do it.

    Fall in love with obstacles. Everyone is going to face some form of failure. One way to deal with this is to fall in love with solving the puzzle of obstacles. You don’t have to hit it if you can go around it.

    Understanding purpose brings out the best in people and the best people. As a leader, drawing in people who are passionate and highly motivated about their work creates velocity for your organization. Being clear about purpose is the first step in doing this.

    You must own your own story. Everything about you creates a “unique you” that is distinct from everyone else. As you take ownership of this, it becomes part of your strength. It’s not a strength if you’re running away from it.

    Focus on what’s right. Be aware of your tendency to interpret a situation a certain way and differentiate between helpful and unhelpful interpretations.

    Three questions for how to think differently: 1) Why? 2) Who says so? 3) What would happen if? These three questions can help you build alternative perspectives and options that can increase resiliency.

    Even though this first day was focused on “Grounding Yourself,” I see plenty of application in the corporate environment for both individuals and leaders of teams. To apply these highlights to my work environment, I would do the following:

    Understand the purpose – of my company, of my team and of my role on the team. If I know the purpose, I know what I need to bring to the table to make me, my team and my company successful.

    Declare your goals… your BHAGs (big, hairy, audacious goals). Have the confidence to declare what you and/or your team is going to accomplish. Sure, you might have to re-state those goals down the line, but you can learn from that as well.

    Get creative about achieving your goals. Break down your obstacles by asking yourself what is going to stop you from achieving your goals and then, for each obstacle, ask those three questions: Why? Who says so? What would happen if?

    Focus on what’s right. I had a manager who asked us to write status reports every week. “Status” consisted of 1) What did I accomplish; 2) What will I accomplish next week; 3) How can my manager help me. The focus of our status reports was always “what’s right” (“what’s wrong” was always a conversation at the point in time it was needed).

    I’m normally a skeptic of online webcasts/conferences, and I normally expect to take away maybe one or two ideas. I’m really glad, however, that I took the time to listen to all of the sessions yesterday, and I hope that my take-aways inspire you to think about how you might do great work also.

    Read the article

  • The Growing Importance of Network Virtualization

    - by user12608550
    The Growing Importance of Network Virtualization We often focus on server virtualization when we discuss cloud computing, but just as often we neglect to consider some of the critical implications of that technology. The ability to create virtual environments (or VEs [1]) means that we can create, destroy, activate and deactivate, and more importantly, MOVE them around within the cloud infrastructure. This elasticity and mobility has profound implications for how network services are defined, managed, and used to provide cloud services. It's not just servers that benefit from virtualization, it's the network as well. Network virtualization is becoming a hot topic, and not just for discussion but for companies like Oracle and others who have recently acquired net virtualization companies [2,3]. But even before this topic became so prominent, Solaris engineers were working on technologies in Solaris 11 to virtualize network services, known as Project Crossbow [4]. And why is network virtualization so important? Because old assumptions about network devices, topology, and management must be re-examined in light of the self-service, elasticity, and resource sharing requirements of cloud computing infrastructures. Static, hierarchical network designs, and inter-system traffic flows, need to be reconsidered and quite likely re-architected to take advantage of new features like virtual NICs and switches, bandwidth control, load balancing, and traffic isolation. For example, traditional multi-tier Web services (Web server, App server, DB server) that share net traffic over Ethernet wires can now be virtualized and hosted on shared-resource systems that communicate within a larger server at system bus speeds, increasing performance and reducing wired network traffic. And virtualized traffic flows can be monitored and adjusted as needed to optimize network performance for dynamically changing cloud workloads. Additionally, as VEs come and go and move around in the cloud, static network configuration methods cannot easily accommodate the routing and addressing flexibility that VE mobility implies; virtualizing the network itself is a requirement. Oracle Solaris 11 [5] includes key network virtualization technologies needed to implement cloud computing infrastructures. It includes features for the creation and management of virtual NICs and switches, and for the allocation and control of the traffic flows among VEs [6]. Additionally it allows for both sharing and dedication of hardware components to network tasks, such as allocating specific CPUs and vNICs to VEs, and even protocol-specific management of traffic. So, have a look at your current network topology and management practices in view of evolving cloud computing technologies. And don't simply duplicate the physical architecture of servers and connections in a virtualized environment…rethink the traffic flows among VEs and how they can be optimized using Oracle Solaris 11 and other Oracle products and services. [1] I use the term "virtual environment" or VE here instead of the more commonly used "virtual machine" or VM, because not all virtualized operating system environments are full OS kernels under the control of a hypervisor…in other words, not all VEs are VMs. In particular, VEs include Oracle Solaris zones, as well as SPARC VMs (previously called LDoms), and x86-based Solaris and Linux VMs running under hypervisors such as OEL, Xen, KVM, or VMware. 
[2] Oracle follows VMware into network virtualization space with Xsigo purchase; http://www.mercurynews.com/business/ci_21191001/oracle-follows-vmware-into-network-virtualization-space-xsigo [3] Oracle Buys Xsigo; http://www.oracle.com/us/corporate/press/1721421 [4] Oracle Solaris 11 Networking Virtualization Technology, http://www.oracle.com/technetwork/server-storage/solaris11/technologies/networkvirtualization-312278.html [5] Oracle Solaris 11; http://www.oracle.com/us/products/servers-storage/solaris/solaris11/overview/index.html [6] For example, the Solaris 11 'dladm' command can be used to limit the bandwidth of a virtual NIC, as follows: dladm create-vnic -l net0 -p maxbw=100M vnic0
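    Building on the dladm example in [6], a minimal command-line sketch of the virtual NIC and virtual switch features mentioned above might look like the following (the link names are illustrative; net0 is assumed to be an existing physical datalink):

        # create a virtual switch (etherstub) and attach two virtual NICs to it
        dladm create-etherstub stub0
        dladm create-vnic -l stub0 vnic1
        dladm create-vnic -l stub0 vnic2

        # cap a virtual NIC's bandwidth on a physical link, as in reference [6]
        dladm create-vnic -l net0 -p maxbw=100M vnic0

        # inspect the virtual links
        dladm show-vnic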

    Read the article

  • Gnome-shell fails to load on 12.10

    - by Githlar
    I'm usually the one answering questions, but on this one I'm thoroughly stumped!

    My setup: Ubuntu 12.10 (dist upgrade from 12.04), ATI M96 [Mobility Radeon HD 4650].

    Upon the first installation of 12.10 I had all kinds of issues getting the legacy ATI drivers to install (I guess the source for the drivers isn't kosher with kernel 3.5). So, I added the repository ppa:makson96/fglrx - which has a version of the ATI source patched to work with kernel 3.5. After installation of fglrx-legacy from that PPA, gnome-shell and all my graphics worked fine... until today.

    The problem: I unsuspended my computer today and the screen was black (not off, the black from the gnome lock screen). I'd move my mouse/hit a key and the background would flash and then it'd go back to black. I restarted via VT1 and logged into a Gnome (gnome-shell) session, but no gnome-shell!

    Investigation: First, I went to VT1 and tried export DISPLAY=:0;gnome-shell --replace. It appeared to work fine, but when I switched back to X, nothing. Went back to VT1 and saw this error message:

        JS ERROR: !!! Exception was: TypeError: Object 0x7fc748129c30 is not a subclass of (null), it's a xO
        JS ERROR: !!! message = '"Object 0x7fc748129c30 is not a subclass of (null), it's a xO"'
        JS ERROR: !!! fileName = '"/usr/share/gnome-shell/js/ui/tweener.js"'
        JS ERROR: !!! lineNumber = '218'
        JS ERROR: !!! stack = '"()@/usr/share/gnome-shell/js/ui/tweener.js:218 wrapper()@/usr/share/gjs-1.0/lang.js:204 ()@/usr/share/gjs-1.0/lang.js:145 ()@/usr/share/gjs-1.0/lang.js:239 init()@/usr/share/gnome-shell/js/ui/tweener.js:49 init()@/usr/share/gnome-shell/js/ui/environment.js:96 @<main>:1 "'
        Window manager warning: Log level 32: Execution of main.js threw exception: TypeError: Object 0x7fc748129c30 is not a subclass of (null), it's a xO

    Note: everywhere it says "it's a xO", xO is actually garbled and changes every time (I'm thinking memory corruption?). This error is thrown by line 96 of /usr/share/gnome-shell/js/ui/environment.js: tweener.Init()

    I did a purge of fglrx-legacy, rebooted, reinstalled fglrx-legacy, rebooted... same thing. Did a ppa-purge of ppa:gnome3-team/gnome3, and reinstalled gnome-shell and ubuntu-desktop from the standard repositories... same thing. I'm really at a loss here. I love gnome-shell, and after using it for nearly a year now gnome classic just seems so archaic.
Additional Information Apt log from the day I first suspended my machine (these are upgrades from the gnome3-team/gnome3 ppa and ubuntu-wine/ppa ppa): Start-Date: 2012-11-24 17:30:28 Commandline: aptdaemon role='role-commit-packages' sender=':1.618' Install: gkbd-capplet:amd64 (3.6.0-0ubuntu1), gnome-control-center-unity:amd64 (1.0-0ubuntu1~ubuntu12.10.1) Upgrade: nautilus:amd64 (3.6.2-0ubuntu0.1~quantal1, 3.6.3-0ubuntu2~ubuntu12.10.1), libgnome-control-center1:amd64 (3.4.2-0ubuntu19, 3.6.3-0ubuntu6~ubuntu12.10.1), wine1.5-i386:i386 (1.5.17-0ubuntu4, 1.5.18-0ubuntu1), wine1.5:amd64 (1.5.17-0ubuntu4, 1.5.18-0ubuntu1), gnome-settings-daemon:amd64 (3.4.2-0ubuntu14, 3.6.3-0ubuntu1~ubuntu12.10.1), gnome-control-center-data:amd64 (3.4.2-0ubuntu19, 3.6.3-0ubuntu6~ubuntu12.10.1), gnome-accessibility-themes:amd64 (3.6.0.2-0ubuntu1, 3.6.2-0ubuntu2~ubuntu12.10.1), gnome-themes-standard:amd64 (3.6.0.2-0ubuntu1, 3.6.2-0ubuntu2~ubuntu12.10.1), wine1.5-amd64:amd64 (1.5.17-0ubuntu4, 1.5.18-0ubuntu1), nautilus-data:amd64 (3.6.2-0ubuntu0.1~quantal1, 3.6.3-0ubuntu2~ubuntu12.10.1), gnome-control-center:amd64 (3.4.2-0ubuntu19, 3.6.3-0ubuntu6~ubuntu12.10.1), libnautilus-extension1a:amd64 (3.6.2-0ubuntu0.1~quantal1, 3.6.3-0ubuntu2~ubuntu12.10.1) End-Date: 2012-11-24 17:31:32 fglrxinfo (driver appears to be working): display: :0 screen: 0 OpenGL vendor string: Advanced Micro Devices, Inc. OpenGL renderer string: ATI Mobility Radeon HD 4650 OpenGL version string: 3.3.11653 Compatibility Profile Context Does anybody have any further ideas?

    Read the article

  • Consolidation in a Database Cloud

    - by B R Clouse
    Consolidation of multiple databases onto a shared infrastructure is the next step after Standardization. The potential consolidation density is a function of the extent to which the infrastructure is shared. The three models provide increasing degrees of sharing:

    Server: each database is deployed in a dedicated VM. Hardware is shared, but most of the software infrastructure is not. Standardization is often applied incompletely since operating environments can be moved as-is onto the shared platform. The potential for VM sprawl is an additional downside.

    Database: multiple database instances are deployed on a shared software / hardware infrastructure. This model is very efficient and easily implemented with the features in the Oracle Database and supporting products. Many customers have moved to this model and achieved significant, measurable benefits.

    Schema: multiple schemas are deployed within a single database instance. The most efficient model, it places constraints on the environment. Usually this model will be implemented only by customers deploying their own applications. (Note that a single deployment can combine Database and Schema consolidations.)

    Customer value: lower costs, better system utilization

    In this phase of the maturity model, under-utilized hardware can be used to host more workloads, or retired and those workloads migrated to consolidation platforms. Customers benefit from higher utilization of the hardware resources, resulting in reduced data center floor space, and lower power and cooling costs. And the OpEx savings from Standardization are multiplied, since there are fewer physical components (both hardware and software) to manage.

    Customer value: higher productivity

    The OpEx benefits from Standardization are compounded since not only are there fewer types of things to manage, now there are fewer entities to manage. In this phase, customers discover that their IT staff has time to move away from "day-to-day" tasks and start investing in higher-value activities. Database users benefit from consolidating onto shared infrastructures by relieving themselves of the requirement to maintain their own dedicated servers. Also, if the shared infrastructure offers capabilities such as High Availability / Disaster Recovery, which are often beyond the budget and skillset of a standalone database environment, then moving to the consolidation platform can provide access to those capabilities, resulting in less downtime.

    Capabilities / Characteristics

    In this phase, customers will typically deploy fixed-size clusters and consolidate on a cluster until that cluster is deemed "full," at which point a new cluster is built. Customers will define one or a few cluster architectures that are used wherever possible; occasionally there may be deployments which must be handled as exceptions. The "full" policy may be based on the number of databases deployed on the cluster, or observed peak workload, etc. IT will own the provisioning of new databases on a cluster, making the decision of when and where to place new workloads. Resources may be managed dynamically, e.g., as a priority workload increases, it may be given more CPU and memory to handle the spike. Users will be charged at a fixed, relatively coarse level; or in some cases, no charging will be applied.

    Activities / Tasks

    Oracle offers several tools to plan a successful consolidation. Real Application Testing (RAT) has a feature to help plan and validate database consolidations. Enterprise Manager 12c's Cloud Management Pack for Database includes a planning module. Looking ahead, customers should start planning for the Services phase by defining the Service Catalog that will be made available for database services.

    Read the article

  • Try the Oracle Database Appliance Manager Configurator - For Fun!

    - by pwstephe-Oracle
    If you would like to get a first-hand glimpse of how easy it is to configure an ODA, even if you don’t have access to one, it’s possible to download the Appliance Manager Configurator from the Oracle Technology Network and run it standalone on your PC or Linux/Unix workstation. The configurator is packaged in a zip file that contains the complete Java environment to run standalone. Once the package is downloaded and unzipped, it’s simply a matter of launching it using the config command or shell, depending on your runtime environment.

    Oracle Appliance Manager Configurator is a Java-based tool that enables you to input your deployment plan and validate your network settings before an actual deployment, or you can just preview and experiment with it. Simply download and run the configurator on a local client system, which can be a Windows, Linux, or UNIX system. (For Windows, launch the batch file config.bat; for Linux/Unix environments, run ./config.sh.) You will be presented with the very same dialogs and options used to configure a production ODA, but on your workstation.

    At the end of a configurator session, you may save your deployment plan in a configuration file. If you were actually ready to deploy, you could copy this configuration file to a real ODA, where the online Oracle Appliance Manager Configurator would use the contents to deploy your plan in production. You may also print the file’s content and use the printout as a checklist for setting up your production external network configuration. Be sure to use the actual production network addresses you intend to use, as this will only work correctly if your client system is connected to the same network that will be used for the ODA. (This step is not necessary if you are just previewing the Configurator.)

    This is a great way to get an introductory look at the simple and intuitive Database Appliance configuration interface and the steps to configure a system.
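    As a rough sketch of the standalone workflow described above (the zip file and directory names here are illustrative, not the actual OTN file names):

        # download the standalone configurator zip from the Oracle Technology Network first
        unzip oda-configurator.zip            # hypothetical file name
        cd oda-configurator                   # hypothetical directory name
        ./config.sh                           # Linux/UNIX; on Windows run config.bat instead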

    Read the article

  • Using WKA in Large Coherence Clusters (Disabling Multicast)

    - by jpurdy
    Disabling hardware multicast (by configuring well-known addresses, aka WKA) will place significant stress on the network. For messages that must be sent to multiple servers, rather than having a server send a single packet to the switch and having the switch broadcast that packet to the rest of the cluster, the server must send a packet to each of the other servers. While hardware varies significantly, consider that a server with a single gigabit connection can send at most ~70,000 packets per second. To continue with some concrete numbers, in a cluster with 500 members, that means that each server can send at most 140 cluster-wide messages per second (each cluster-wide message now costs roughly 500 unicast packets, and 70,000 / 500 = 140). And if there are 10 cluster members on each physical machine, that number shrinks to 14 cluster-wide messages per second (or with only mild hyperbole, roughly zero). It is also important to keep in mind that network I/O is not only expensive in terms of the network itself, but also in the consumption of CPU required to send (or receive) a message (due to things like copying the packet bytes, processing an interrupt, etc).

    Fortunately, Coherence is designed to rely primarily on point-to-point messages, but there are some features that are inherently one-to-many:

    - Announcing the arrival or departure of a member
    - Updating partition assignment maps across the cluster
    - Creating or destroying a NamedCache
    - Invalidating a cache entry from a large number of client-side near caches
    - Distributing a filter-based request across the full set of cache servers (e.g. queries, aggregators and entry processors)
    - Invoking clear() on a NamedCache

    The first few of these are operations that are primarily routed through a single senior member, and also occur infrequently, so they usually are not a primary consideration. There are cases, however, where the load from introducing new members can be substantial (to the point of destabilizing the cluster). Consider the case where the cluster in the first paragraph grows from 500 members to 1000 members (holding the number of physical machines constant). During this period, there will be 500 new member introductions, each of which may consist of several cluster-wide operations (for the cluster membership itself as well as the partitioned cache services, replicated cache services, invocation services, management services, etc). Note that all of these introductions will route through that one senior member, which is sharing its network bandwidth with several other members (which will be communicating to a lesser degree with other members throughout this process). While each service may have a distinct senior member, there's a good chance during initial startup that a single member will be the senior for all services (if those services start on the senior before the second member joins the cluster). It's obvious that this could cause CPU and/or network starvation.

    In the current release of Coherence (3.7.1.3 as of this writing), the pure unicast code path also has less sophisticated flow control for cluster-wide messages (compared to the multicast-enabled code path), which may also result in significant heap consumption on the senior member's JVM (from the message backlog). This is almost never a problem in practice, but with sufficient CPU or network starvation, it could become critical. For the non-operational concerns (near caches, queries, etc), the application itself will determine how much load is placed on the cluster.
Applications intended for deployment in a pure unicast environment should be careful to avoid excessive dependence on these features. Even in an environment with multicast support, these operations may scale poorly since even with a constant request rate, the underlying workload will increase at roughly the same rate as the underlying resources are added. Unless there is an infrastructural requirement to the contrary, multicast should be enabled. If it can't be enabled, care should be taken to ensure the added overhead doesn't lead to performance or stability issues. This is particularly crucial in large clusters.
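    For reference, well-known addresses are typically configured in an operational override file (tangosol-coherence-override.xml). The snippet below is only a minimal sketch of that configuration for a Coherence 3.7-style deployment; the host addresses and port are illustrative:

        <coherence>
          <cluster-config>
            <unicast-listener>
              <well-known-addresses>
                <!-- a small, stable subset of cluster hosts; other members locate the cluster through them -->
                <socket-address id="1">
                  <address>10.0.0.11</address>
                  <port>8088</port>
                </socket-address>
                <socket-address id="2">
                  <address>10.0.0.12</address>
                  <port>8088</port>
                </socket-address>
              </well-known-addresses>
            </unicast-listener>
          </cluster-config>
        </coherence>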

    Read the article

  • MSBuild (.NET 4.0) access problems

    - by JMP
    I'm using Cruise Control .NET as my build server (Windows 2008 Server). Yesterday I upgraded my ASP.NET MVC project from VS 2008/.NET 3.5 to VS 2010/.NET 4.0. The only change I made to my ccnet.config's MSBuild task was the location of MSBuild.exe. Ever since I made that change, the build has been broken with the error: MSB4019 - The imported project "C:\Program Files (x86)\MSBuild\Microsoft\VisualStudio\v10.0\WebApplications\Microsoft.WebApplication.targets" was not found. Confirm that the path in the declaration is correct, and that the file exists on disk. This file does, in fact, exist in the location specified (I solved a problem similar to this when setting up the build server for VS2008/.NET 3.5 by copying the files from my dev environment to my build environment). So I RDP'ed into the build machine and opened a command prompt, used MSBUILD to attempt to build my project. MSBUILD returns the error: MSB3021 - Unable to copy file "obj\debug....dll". Access to the path 'bin....dll' is denied. Since I'm running MSBUILD from the command prompt, logged in with an account that has administrative privileges, I'm assuming that MSBUILD is running with the same privileges that I have. Next, I tried to copy the file that MSBUILD was attempting to copy. In this case, I get the UAC dialog that makes me click the [Continue] button to complete the copy. I'd like to avoid installing Visual Studio 2010 on my build machine, can anyone suggest other fixes I might try?

    Read the article

  • Server cannot set status after HTTP headers have been sent IIS7.5

    - by marcinn
    Hi, sometimes I get an exception in my production environment:

    Process information: Process ID: 3832; Process name: w3wp.exe; Account name: NT AUTHORITY\NETWORK SERVICE

    Exception information: Exception type: System.Web.HttpException; Exception message: Server cannot set status after HTTP headers have been sent.

    Request information: Request URL: http://www.myulr.pl/logon; Request path: /logon; User host address: 10.11.9.1; User: user001; Is authenticated: True; Authentication Type: Forms; Thread account name: NT AUTHORITY\NETWORK SERVICE

    Thread information: Thread ID: 10; Thread account name: NT AUTHORITY\NETWORK SERVICE; Is impersonating: False

    Stack trace:

        at System.Web.HttpResponse.set_StatusCode(Int32 value)
        at System.Web.HttpResponseWrapper.set_StatusCode(Int32 value)
        at System.Web.Mvc.HandleErrorAttribute.OnException(ExceptionContext filterContext)
        at System.Web.Mvc.ControllerActionInvoker.InvokeExceptionFilters(ControllerContext controllerContext, IList`1 filters, Exception exception)
        at System.Web.Mvc.ControllerActionInvoker.InvokeAction(ControllerContext controllerContext, String actionName)
        at System.Web.Mvc.Controller.ExecuteCore()
        at System.Web.Mvc.MvcHandler.<>c__DisplayClass8.<BeginProcessRequest>b__4()
        at System.Web.Mvc.Async.AsyncResultWrapper.<>c__DisplayClass1.<MakeVoidDelegate>b__0()
        at System.Web.Mvc.Async.AsyncResultWrapper.<>c__DisplayClass8`1.<BeginSynchronous>b__7(IAsyncResult _)
        at System.Web.Mvc.Async.AsyncResultWrapper.WrappedAsyncResult`1.End()
        at System.Web.Mvc.MvcHandler.EndProcessRequest(IAsyncResult asyncResult)
        at System.Web.HttpApplication.CallHandlerExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute()
        at System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously)

    I didn't notice this error in my test environment. What should I check? I am using ASP.NET MVC 2 (Release Candidate 2).

    Read the article

  • Making TeamCity integrate the Subversion build number into the assembly version.

    - by Lasse V. Karlsen
    I want to adjust the output from my TeamCity build configuration of my class library so that the produced dll files have the following version number: 3.5.0.x, where x is the Subversion revision number that TeamCity has picked up. I've found that I can use the BUILD_NUMBER environment variable to get x, but unfortunately I don't understand what else I need to do. The "tutorials" I find all say "You just add this to the script", but they don't say which script, and "this" is usually referring to the AssemblyInfo task from the MSBuild Community Extensions. Do I need to build a custom MSBuild script somehow to use this? Is the "script" the same as either the solution file or the C# project file? I don't know much about the MSBuild process at all, except that I can pass a solution file directly to MSBuild, but what I need to add to "the script" is XML, and the solution file decidedly does not look like XML. So, can anyone point me to a step-by-step guide on how to make this work?

    This is what I ended up with:

    - Install the MSBuild Community Tasks.
    - Edit the .csproj file of my core class library, and change the bottom so that it reads:

        <Import Project="$(MSBuildToolsPath)\Microsoft.CSharp.targets" />
        <Import Project="$(MSBuildExtensionsPath)\MSBuildCommunityTasks\MSBuild.Community.Tasks.Targets" />
        <Target Name="BeforeBuild">
          <AssemblyInfo Condition=" '$(BUILD_NUMBER)' != '' "
                        CodeLanguage="CS"
                        OutputFile="$(MSBuildProjectDirectory)\..\GlobalInfo.cs"
                        AssemblyVersion="3.5.0.0"
                        AssemblyFileVersion="$(BUILD_NUMBER)" />
        </Target>
        <Target Name="AfterBuild">

    - Change all my AssemblyInfo.cs files so that they don't specify either AssemblyVersion or AssemblyFileVersion (in retrospect, I'll look into putting AssemblyVersion back).
    - Add a link to the now-global GlobalInfo.cs that is located just outside all the projects.
    - Make sure this file is built once, so that I have a default file in source control.

    This will now update GlobalInfo.cs only if the environment variable BUILD_NUMBER is set, which it is when I build through TeamCity. I opted for keeping AssemblyVersion constant, so that references still work, and only update AssemblyFileVersion, so that I can see which build a dll is from.

    Read the article

  • HintPath vs ReferencePath in Visual Studio

    - by toasteroven
    What exactly is the difference between the HintPath in a .csproj file and the ReferencePath in a .csproj.user file? We're trying to commit to a convention where dependency DLLs are in a "releases" svn repo and all projects point to a particular release. Since different developers have different folder structures, relative references won't work, so we came up with a scheme to use an environment variable pointing to the particular developer's releases folder to create an absolute reference. So after a reference is added, we manually edit the project file to change the reference to an absolute path using the environment variable. I've noticed that this can be done with both the HintPath and the ReferencePath, but the only difference I could find between them is that HintPath is resolved at build-time and ReferencePath when the project is loaded into the IDE. I'm not really sure what the ramifications of that are though. I have noticed that VS sometimes rewrites the .csproj.user and I have to rewrite the ReferencePath, but I'm not sure what triggers that. I've heard that it's best not to check in the .csproj.user file since it's user-specific, so I'd like to aim for that, but I've also heard that the HintPath-specified DLL isn't "guaranteed" to be loaded if the same DLL is e.g. located in the project's output directory. Any thoughts on this?
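    For illustration, the environment-variable scheme described above typically ends up looking something like the sketch below in the .csproj; the variable name and assembly are made up for the example. MSBuild exposes environment variables as build properties, so $(RELEASES_DIR) would resolve when the HintPath is evaluated at build time:

        <!-- Sketch only: RELEASES_DIR and the assembly name are illustrative -->
        <Reference Include="MyCompany.Utilities">
          <SpecificVersion>False</SpecificVersion>
          <HintPath>$(RELEASES_DIR)\MyCompany.Utilities\1.2.0\MyCompany.Utilities.dll</HintPath>
        </Reference>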

    Read the article

  • Spring, JPA, Hibernate, Jetty 7 Integration

    - by Jewel
    Have anyone successfully run any spring and JPA application in jetty 7? I am getting following exception. This application throws no error in jetty 6. INFO [main] org.eclipse.jetty.util.log - Logging to org.slf4j.impl.Log4jLoggerAdapter(org.eclipse.jetty.util.log) via org.eclipse.jetty.util.log.Slf4jLog INFO [main] org.eclipse.jetty.util.log - jetty-7.1.2.v20100523 INFO [main] org.eclipse.jetty.util.log - Deployment monitor G:\_Java\_Jetty\jetty-distribution-7.1.2.v20100523\contexts at interval 5 INFO [main] org.eclipse.jetty.util.log - Deployment monitor G:\_Java\_Jetty\jetty-distribution-7.1.2.v20100523\webapps at interval 5 INFO [main] org.eclipse.jetty.util.log - Deployable added: G:\_Java\_Jetty\jetty-distribution-7.1.2.v20100523\webapps\gwtrpc-spring.war INFO [main] org.eclipse.jetty.util.log - Copying WEB-INF/lib jar:file:/G:/_Java/_Jetty/jetty-distribution-7.1.2.v20100523/webapps/gwtrpc-spring.war!/WEB-INF/lib/ to C:\Documents and Settings\Jewel2\Local Settings\Temp\Jetty_0_0_0_0_8080_gwtrpc.spring.war__gwtrpc.spring__az1wdj\webinf\WEB-INF\lib INFO [main] org.eclipse.jetty.util.log - Copying WEB-INF/classes from jar:file:/G:/_Java/_Jetty/jetty-distribution-7.1.2.v20100523/webapps/gwtrpc-spring.war!/WEB-INF/classes/ to C:\Documents and Settings\Jewel2\Local Settings\Temp\Jetty_0_0_0_0_8080_gwtrpc.spring.war__gwtrpc.spring__az1wdj\webinf\WEB-INF\classes INFO [main] /gwtrpc-spring - Initializing Spring root WebApplicationContext INFO [main] org.springframework.web.context.ContextLoader - Root WebApplicationContext: initialization started INFO [main] org.springframework.web.context.support.XmlWebApplicationContext - Refreshing Root WebApplicationContext: startup date [Thu Jun 10 00:13:32 GMT+06:00 2010]; root of context hierarchy INFO [main] org.springframework.beans.factory.xml.XmlBeanDefinitionReader - Loading XML bean definitions from ServletContext resource [/WEB-INF/applicationContext.xml] INFO [main] org.springframework.beans.factory.support.DefaultListableBeanFactory - Pre-instantiating singletons in org.springframework.beans.factory.support.DefaultListableBeanFactory@467991: defining beans [org.springframework.context.annotation.internalConfigurationAnnotationProcessor,org.springframework.context.annotation.internalAutowiredAnnotationProcessor,org.springframework.context.annotation.internalRequiredAnnotationProcessor,org.springframework.context.annotation.internalCommonAnnotationProcessor,org.springframework.context.annotation.internalPersistenceAnnotationProcessor,greetingServiceImpl,testService,testService2,taskExecutor,dataSource,entityManagerFactory,org.springframework.aop.config.internalAutoProxyCreator,org.springframework.transaction.annotation.AnnotationTransactionAttributeSource#0,org.springframework.transaction.interceptor.TransactionInterceptor#0,org.springframework.transaction.config.internalTransactionAdvisor,transactionManager,org.springframework.orm.jpa.support.PersistenceAnnotationBeanPostProcessor#0]; root of factory hierarchy INFO [main] org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor - Initializing ExecutorService 'taskExecutor' INFO [main] org.springframework.jdbc.datasource.DriverManagerDataSource - Loaded JDBC driver: com.mysql.jdbc.Driver INFO [main] org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean - Building JPA container EntityManagerFactory for persistence unit 'gwtrpc-spring-data-source' INFO [main] org.hibernate.cfg.annotations.Version - Hibernate Annotations 3.4.0.GA INFO [main] org.hibernate.cfg.Environment - 
Hibernate 3.3.1.GA INFO [main] org.hibernate.cfg.Environment - hibernate.properties not found INFO [main] org.hibernate.cfg.Environment - Bytecode provider name : javassist INFO [main] org.hibernate.cfg.Environment - using JDK 1.4 java.sql.Timestamp handling INFO [main] org.hibernate.annotations.common.Version - Hibernate Commons Annotations 3.1.0.GA INFO [main] org.hibernate.ejb.Version - Hibernate EntityManager 3.4.0.GA INFO [main] org.hibernate.ejb.Ejb3Configuration - Processing PersistenceUnitInfo [ name: gwtrpc-spring-data-source ...] INFO [main] org.springframework.beans.factory.support.DefaultListableBeanFactory - Destroying singletons in org.springframework.beans.factory.support.DefaultListableBeanFactory@467991: defining beans [org.springframework.context.annotation.internalConfigurationAnnotationProcessor,org.springframework.context.annotation.internalAutowiredAnnotationProcessor,org.springframework.context.annotation.internalRequiredAnnotationProcessor,org.springframework.context.annotation.internalCommonAnnotationProcessor,org.springframework.context.annotation.internalPersistenceAnnotationProcessor,greetingServiceImpl,testService,testService2,taskExecutor,dataSource,entityManagerFactory,org.springframework.aop.config.internalAutoProxyCreator,org.springframework.transaction.annotation.AnnotationTransactionAttributeSource#0,org.springframework.transaction.interceptor.TransactionInterceptor#0,org.springframework.transaction.config.internalTransactionAdvisor,transactionManager,org.springframework.orm.jpa.support.PersistenceAnnotationBeanPostProcessor#0]; root of factory hierarchy INFO [main] org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor - Shutting down ExecutorService 'taskExecutor' ERROR [main] org.springframework.web.context.ContextLoader - Context initialization failed org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'entityManagerFactory' defined in ServletContext resource [/WEB-INF/applicationContext.xml]: Invocation of init method failed; nested exception is javax.persistence.PersistenceException: [PersistenceUnit: gwtrpc-spring-data-source] class or package not found at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1403) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:513) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:450) at org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:290) at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:222) at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:287) at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:189) at org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:545) at org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:871) at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:423) at org.springframework.web.context.ContextLoader.createWebApplicationContext(ContextLoader.java:272) at 
org.springframework.web.context.ContextLoader.initWebApplicationContext(ContextLoader.java:196) at org.springframework.web.context.ContextLoaderListener.contextInitialized(ContextLoaderListener.java:47) at org.eclipse.jetty.server.handler.ContextHandler.startContext(ContextHandler.java:636) at org.eclipse.jetty.servlet.ServletContextHandler.startContext(ServletContextHandler.java:188) at org.eclipse.jetty.webapp.WebAppContext.startContext(WebAppContext.java:995) at org.eclipse.jetty.server.handler.ContextHandler.doStart(ContextHandler.java:579) at org.eclipse.jetty.webapp.WebAppContext.doStart(WebAppContext.java:381) at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:55) at org.eclipse.jetty.deploy.bindings.StandardStarter.processBinding(StandardStarter.java:36) at org.eclipse.jetty.deploy.AppLifeCycle.runBindings(AppLifeCycle.java:182) at org.eclipse.jetty.deploy.DeploymentManager.requestAppGoal(DeploymentManager.java:497) at org.eclipse.jetty.deploy.DeploymentManager.addApp(DeploymentManager.java:135) at org.eclipse.jetty.deploy.providers.ScanningAppProvider$1.fileAdded(ScanningAppProvider.java:61) at org.eclipse.jetty.util.Scanner.reportAddition(Scanner.java:436) at org.eclipse.jetty.util.Scanner.reportDifferences(Scanner.java:349) at org.eclipse.jetty.util.Scanner.scan(Scanner.java:306) at org.eclipse.jetty.util.Scanner.start(Scanner.java:242) at org.eclipse.jetty.deploy.providers.ScanningAppProvider.doStart(ScanningAppProvider.java:136) at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:55) at org.eclipse.jetty.deploy.DeploymentManager.startAppProvider(DeploymentManager.java:562) at org.eclipse.jetty.deploy.DeploymentManager.doStart(DeploymentManager.java:212) at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:55) at org.eclipse.jetty.server.Server.doStart(Server.java:209) at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:55) at org.eclipse.jetty.xml.XmlConfiguration$1.run(XmlConfiguration.java:1018) at java.security.AccessController.doPrivileged(Native Method) at org.eclipse.jetty.xml.XmlConfiguration.main(XmlConfiguration.java:983) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source) at java.lang.reflect.Method.invoke(Unknown Source) at org.eclipse.jetty.start.Main.invokeMain(Main.java:447) at org.eclipse.jetty.start.Main.start(Main.java:605) at org.eclipse.jetty.start.Main.parseCommandLine(Main.java:238) at org.eclipse.jetty.start.Main.main(Main.java:77) Caused by: javax.persistence.PersistenceException: [PersistenceUnit: gwtrpc-spring-data-source] class or package not found at org.hibernate.ejb.Ejb3Configuration.addNamedAnnotatedClasses(Ejb3Configuration.java:1093) at org.hibernate.ejb.Ejb3Configuration.addClassesToSessionFactory(Ejb3Configuration.java:871) at org.hibernate.ejb.Ejb3Configuration.configure(Ejb3Configuration.java:758) at org.hibernate.ejb.Ejb3Configuration.configure(Ejb3Configuration.java:425) at org.hibernate.ejb.HibernatePersistence.createContainerEntityManagerFactory(HibernatePersistence.java:131) at org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean.createNativeEntityManagerFactory(LocalContainerEntityManagerFactoryBean.java:225) at org.springframework.orm.jpa.AbstractEntityManagerFactoryBean.afterPropertiesSet(AbstractEntityManagerFactoryBean.java:308) at 
org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.invokeInitMethods(AbstractAutowireCapableBeanFactory.java:1460) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1400) ... 45 more Caused by: java.lang.ClassNotFoundException: WEB-INF.classes.org.gwtrpcspring.example.server.Person at java.net.URLClassLoader$1.run(Unknown Source) at java.security.AccessController.doPrivileged(Native Method) at java.net.URLClassLoader.findClass(Unknown Source) at java.lang.ClassLoader.loadClass(Unknown Source) at java.lang.ClassLoader.loadClass(Unknown Source) at java.lang.ClassLoader.loadClassInternal(Unknown Source) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Unknown Source) at org.hibernate.util.ReflectHelper.classForName(ReflectHelper.java:135) at org.hibernate.ejb.Ejb3Configuration.classForName(Ejb3Configuration.java:1009) at org.hibernate.ejb.Ejb3Configuration.addNamedAnnotatedClasses(Ejb3Configuration.java:1081) ... 53 more WARN [main] org.eclipse.jetty.util.log - Failed startup of context WebAppContext@149f041@149f041/gwtrpc-spring,file:/C:/Documents and Settings/Jewel2/Local Settings/Temp/Jetty_0_0_0_0_8080_gwtrpc.spring.war__gwtrpc.spring__az1wdj/webinf/;jar:file:/G:/_Java/_Jetty/jetty-distribution-7.1.2.v20100523/webapps/gwtrpc-spring.war!/;,G:\_Java\_Jetty\jetty-distribution-7.1.2.v20100523\webapps\gwtrpc-spring.war org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'entityManagerFactory' defined in ServletContext resource [/WEB-INF/applicationContext.xml]: Invocation of init method failed; nested exception is javax.persistence.PersistenceException: [PersistenceUnit: gwtrpc-spring-data-source] class or package not found at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1403) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:513) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:450) at org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:290) at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:222) at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:287) at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:189) at org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:545) at org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:871) at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:423) at org.springframework.web.context.ContextLoader.createWebApplicationContext(ContextLoader.java:272) at org.springframework.web.context.ContextLoader.initWebApplicationContext(ContextLoader.java:196) at org.springframework.web.context.ContextLoaderListener.contextInitialized(ContextLoaderListener.java:47) at org.eclipse.jetty.server.handler.ContextHandler.startContext(ContextHandler.java:636) at org.eclipse.jetty.servlet.ServletContextHandler.startContext(ServletContextHandler.java:188) at 
org.eclipse.jetty.webapp.WebAppContext.startContext(WebAppContext.java:995) at org.eclipse.jetty.server.handler.ContextHandler.doStart(ContextHandler.java:579) at org.eclipse.jetty.webapp.WebAppContext.doStart(WebAppContext.java:381) at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:55) at org.eclipse.jetty.deploy.bindings.StandardStarter.processBinding(StandardStarter.java:36) at org.eclipse.jetty.deploy.AppLifeCycle.runBindings(AppLifeCycle.java:182) at org.eclipse.jetty.deploy.DeploymentManager.requestAppGoal(DeploymentManager.java:497) at org.eclipse.jetty.deploy.DeploymentManager.addApp(DeploymentManager.java:135) at org.eclipse.jetty.deploy.providers.ScanningAppProvider$1.fileAdded(ScanningAppProvider.java:61) at org.eclipse.jetty.util.Scanner.reportAddition(Scanner.java:436) at org.eclipse.jetty.util.Scanner.reportDifferences(Scanner.java:349) at org.eclipse.jetty.util.Scanner.scan(Scanner.java:306) at org.eclipse.jetty.util.Scanner.start(Scanner.java:242) at org.eclipse.jetty.deploy.providers.ScanningAppProvider.doStart(ScanningAppProvider.java:136) at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:55) at org.eclipse.jetty.deploy.DeploymentManager.startAppProvider(DeploymentManager.java:562) at org.eclipse.jetty.deploy.DeploymentManager.doStart(DeploymentManager.java:212) at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:55) at org.eclipse.jetty.server.Server.doStart(Server.java:209) at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:55) at org.eclipse.jetty.xml.XmlConfiguration$1.run(XmlConfiguration.java:1018) at java.security.AccessController.doPrivileged(Native Method) at org.eclipse.jetty.xml.XmlConfiguration.main(XmlConfiguration.java:983) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source) at java.lang.reflect.Method.invoke(Unknown Source) at org.eclipse.jetty.start.Main.invokeMain(Main.java:447) at org.eclipse.jetty.start.Main.start(Main.java:605) at org.eclipse.jetty.start.Main.parseCommandLine(Main.java:238) at org.eclipse.jetty.start.Main.main(Main.java:77) Caused by: javax.persistence.PersistenceException: [PersistenceUnit: gwtrpc-spring-data-source] class or package not found at org.hibernate.ejb.Ejb3Configuration.addNamedAnnotatedClasses(Ejb3Configuration.java:1093) at org.hibernate.ejb.Ejb3Configuration.addClassesToSessionFactory(Ejb3Configuration.java:871) at org.hibernate.ejb.Ejb3Configuration.configure(Ejb3Configuration.java:758) at org.hibernate.ejb.Ejb3Configuration.configure(Ejb3Configuration.java:425) at org.hibernate.ejb.HibernatePersistence.createContainerEntityManagerFactory(HibernatePersistence.java:131) at org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean.createNativeEntityManagerFactory(LocalContainerEntityManagerFactoryBean.java:225) at org.springframework.orm.jpa.AbstractEntityManagerFactoryBean.afterPropertiesSet(AbstractEntityManagerFactoryBean.java:308) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.invokeInitMethods(AbstractAutowireCapableBeanFactory.java:1460) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1400) ... 
45 more
Caused by: java.lang.ClassNotFoundException: WEB-INF.classes.org.gwtrpcspring.example.server.Person
  at java.net.URLClassLoader$1.run(Unknown Source)
  at java.security.AccessController.doPrivileged(Native Method)
  at java.net.URLClassLoader.findClass(Unknown Source)
  at java.lang.ClassLoader.loadClass(Unknown Source)
  at java.lang.ClassLoader.loadClass(Unknown Source)
  at java.lang.ClassLoader.loadClassInternal(Unknown Source)
  at java.lang.Class.forName0(Native Method)
  at java.lang.Class.forName(Unknown Source)
  at org.hibernate.util.ReflectHelper.classForName(ReflectHelper.java:135)
  at org.hibernate.ejb.Ejb3Configuration.classForName(Ejb3Configuration.java:1009)
  at org.hibernate.ejb.Ejb3Configuration.addNamedAnnotatedClasses(Ejb3Configuration.java:1081)
  ... 53 more
INFO [main] org.eclipse.jetty.util.log - Started [email protected]:8080

And this is my applicationContext.xml file:

<?xml version="1.0" encoding="UTF-8"?>
<beans>
  <context:annotation-config />
  <context:component-scan base-package="org.gwtrpcspring.example.server" />
  <bean id="taskExecutor" class="org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor">
    <property name="corePoolSize" value="5" />
    <property name="maxPoolSize" value="10" />
    <property name="queueCapacity" value="25" />
  </bean>
  <bean id="dataSource" class="org.springframework.jdbc.datasource.DriverManagerDataSource">
    <property name="driverClassName" value="com.mysql.jdbc.Driver" />
    <property name="url" value="jdbc:mysql://localhost:3306/test" />
    <property name="username" value="jewel" />
    <property name="password" value="jewel" />
  </bean>
  <bean id="entityManagerFactory" class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean">
    <property name="dataSource" ref="dataSource" />
    <property name="jpaVendorAdapter">
      <bean class="org.springframework.orm.jpa.vendor.HibernateJpaVendorAdapter">
        <property name="databasePlatform" value="org.hibernate.dialect.MySQLDialect" />
        <!-- <property name="databasePlatform" value="org.hibernate.dialect.HSQLDialect" /> -->
        <property name="showSql" value="false" />
        <property name="generateDdl" value="true" />
      </bean>
    </property>
  </bean>
  <tx:annotation-driven />
  <bean id="transactionManager" class="org.springframework.orm.jpa.JpaTransactionManager">
    <property name="entityManagerFactory" ref="entityManagerFactory" />
  </bean>
  <bean class="org.springframework.orm.jpa.support.PersistenceAnnotationBeanPostProcessor" />
</beans>
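The root cause at the bottom of the trace is telling: Hibernate is asked to load a class literally named WEB-INF.classes.org.gwtrpcspring.example.server.Person, which suggests the persistence unit (or the annotated-class list handed to it) names the entity by its path under WEB-INF/classes rather than by its fully qualified class name. A minimal corrected META-INF/persistence.xml is sketched below; only the unit name is taken from the error message, the rest is an assumption about the original file:

    <?xml version="1.0" encoding="UTF-8"?>
    <persistence xmlns="http://java.sun.com/xml/ns/persistence" version="1.0">
      <persistence-unit name="gwtrpc-spring-data-source">
        <!-- Use the fully qualified class name, not the path under WEB-INF/classes -->
        <class>org.gwtrpcspring.example.server.Person</class>
      </persistence-unit>
    </persistence>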

    Read the article

  • nhibernate : a different object with the same identifier value was already associated with the session

    - by frosty
    I am getting the following error when I try to save my "Company" entity in my MVC application: "a different object with the same identifier value was already associated with the session: 2, of entity:". I am using an IoC container:

    private class EStoreDependencies : NinjectModule
    {
        public override void Load()
        {
            Bind<ICompanyRepository>().To<CompanyRepository>()
                .WithConstructorArgument("session", NHibernateHelper.OpenSession());
        }
    }

    My CompanyRepository:

    public class CompanyRepository : ICompanyRepository
    {
        private ISession _session;

        public CompanyRepository(ISession session)
        {
            _session = session;
        }

        public void Update(Company company)
        {
            using (ITransaction transaction = _session.BeginTransaction())
            {
                _session.Update(company);
                transaction.Commit();
            }
        }
    }

    And my session helper:

    public class NHibernateHelper
    {
        private static ISessionFactory _sessionFactory;
        const string SessionKey = "MySession";

        private static ISessionFactory SessionFactory
        {
            get
            {
                if (_sessionFactory == null)
                {
                    var configuration = new Configuration();
                    configuration.Configure();
                    configuration.AddAssembly(typeof(UserProfile).Assembly);
                    configuration.SetProperty(NHibernate.Cfg.Environment.ConnectionStringName, System.Environment.MachineName);
                    _sessionFactory = configuration.BuildSessionFactory();
                }
                return _sessionFactory;
            }
        }

        public static ISession OpenSession()
        {
            var context = HttpContext.Current; //.GetCurrentSession()
            if (context != null && context.Items.Contains(SessionKey))
            {
                // Return already open ISession
                return (ISession)context.Items[SessionKey];
            }
            else
            {
                // Create new ISession and store in HttpContext
                var newSession = SessionFactory.OpenSession();
                if (context != null)
                    context.Items[SessionKey] = newSession;
                return newSession;
            }
        }
    }

    My MVC action:

    [HttpPost]
    public ActionResult Edit(EStore.Domain.Model.Company company)
    {
        if (company.Id > 0)
        {
            _companyRepository.Update(company);
            _statusResponses.Add(StatusResponseHelper.Create(Constants.RecordUpdated(), StatusResponseLookup.Success));
        }
        else
        {
            company.CreatedByUserId = currentUserId;
            _companyRepository.Add(company);
        }

        var viewModel = EditViewModel(company.Id, _statusResponses);
        return View("Edit", viewModel);
    }
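    This NHibernate error usually means the session already holds a Company instance with Id 2 (loaded earlier in the same request) and Update is then handed a second, detached instance with the same identifier. One way out, shown here as a sketch against the repository above rather than as the definitive fix, is to Merge the detached object instead of calling Update, or to Evict the tracked copy first:

    public void Update(Company company)
    {
        using (ITransaction transaction = _session.BeginTransaction())
        {
            // Merge copies the detached instance's state onto the persistent
            // instance the session is already tracking (loading it if needed).
            _session.Merge(company);

            // Alternative: evict the tracked copy, then Update the detached one.
            // _session.Evict(_session.Get<Company>(company.Id));
            // _session.Update(company);

            transaction.Commit();
        }
    }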

    Read the article

  • Installing The ruby-gmail rubygem on Mac OS Snow Leopard

    - by johnnygoodman
    I'm working off these instructions: http://github.com/dcparker/ruby-gmail. From the home directory I do a standard install and good stuff happens:

    Johnny-Goodmans-MacBook-Pro:gmail johnnygoodman$ sudo gem install ruby-gmail
    Successfully installed ruby-gmail-0.2.1
    1 gem installed
    Installing ri documentation for ruby-gmail-0.2.1...
    Installing RDoc documentation for ruby-gmail-0.2.1...

    I head over to my ~/www dir, where I run scripts that include other rubygems successfully, and create a gmail directory. I create a script that includes rubygems and gmail, but does nothing else:

    Johnny-Goodmans-MacBook-Pro:gmail johnnygoodman$ pwd
    /Users/johnnygoodman/www/gmail
    Johnny-Goodmans-MacBook-Pro:gmail johnnygoodman$ ls
    test-send.rb
    Johnny-Goodmans-MacBook-Pro:gmail johnnygoodman$ cat test-send.rb
    require 'rubygems'
    require 'gmail'

    I run this script and the errors begin:

    Johnny-Goodmans-MacBook-Pro:gmail johnnygoodman$ ruby test-send.rb
    /Library/Ruby/Site/1.8/rubygems/custom_require.rb:31:in `gem_original_require': no such file to load -- mime/message (LoadError)
        from /Library/Ruby/Site/1.8/rubygems/custom_require.rb:31:in `require'
        from /Library/Ruby/Gems/1.8/gems/ruby-gmail-0.2.1/lib/gmail/message.rb:1
        from /Library/Ruby/Site/1.8/rubygems/custom_require.rb:31:in `gem_original_require'
        from /Library/Ruby/Site/1.8/rubygems/custom_require.rb:31:in `require'
        from /Library/Ruby/Gems/1.8/gems/ruby-gmail-0.2.1/lib/gmail.rb:168
        from /Library/Ruby/Site/1.8/rubygems/custom_require.rb:36:in `gem_original_require'
        from /Library/Ruby/Site/1.8/rubygems/custom_require.rb:36:in `require'
        from test-send.rb:2
    Johnny-Goodmans-MacBook-Pro:gmail johnnygoodman$

    Here's my gem env:

    Johnny-Goodmans-MacBook-Pro:gmail johnnygoodman$ gem environment
    RubyGems Environment:
      - RUBYGEMS VERSION: 1.3.7
      - RUBY VERSION: 1.8.7 (2009-06-08 patchlevel 173) [universal-darwin10.0]
      - INSTALLATION DIRECTORY: /Library/Ruby/Gems/1.8
      - RUBY EXECUTABLE: /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/bin/ruby
      - EXECUTABLE DIRECTORY: /usr/bin
      - RUBYGEMS PLATFORMS:
        - ruby
        - universal-darwin-10
      - GEM PATHS:
        - /Library/Ruby/Gems/1.8
        - /Users/johnnygoodman/.gem/ruby/1.8
        - /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/gems/1.8
      - GEM CONFIGURATION:
        - :update_sources => true
        - :verbose => true
        - :benchmark => false
        - :backtrace => false
        - :bulk_threshold => 1000
        - :sources => ["http://rubygems.org/", "http://gems.github.com"]
      - REMOTE SOURCES:
        - http://rubygems.org/
        - http://gems.github.com

    The path that the errors give when I run the script is not the same as the GEM PATHS given in the env output. However, I don't know how to make them match or if that's the significant thing here.
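    The failing require, mime/message, comes from a gem that ruby-gmail depends on but that evidently was not pulled in during the install. A hedged first step (the gem names below are assumptions; the authoritative list comes from the dependency command itself):

    # Show what ruby-gmail actually declares as dependencies
    gem dependency ruby-gmail

    # Install the gem assumed to supply mime/message, plus shared-mime-info,
    # which ruby-gmail is commonly reported to need as well
    sudo gem install mime shared-mime-info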

    Read the article

  • Developer certificate vs purchased certificate for WCF

    - by RemotecUk
    I understand that if I want to use authentication in WCF then I need to install a certificate on my server, which WCF will use to encrypt data passing between my server and client. For development purposes I believe I can use the makecert.exe utility to make a development certificate. What is the worst that can happen if I use this certificate in the production environment? And why can't I use this certificate in the production environment? And what is the certificate actually going to do in this scenario? [Edit: Added another question] Finally, in a scenario where the website has a certificate installed to provide HTTPS support, can the same certificate be used for the WCF services as well? Note on my application: it's a NetTCP client and server service. The users will log in using the same username and password they use for the website, which is passed in clear text. I would be happy to pass the username and password in cleartext to WCF, but this isn't allowed by the framework and a certificate must be in place. However, I don't want to buy a certificate due to budget constraints! (Sorry for the possibly stupid question, but I really don't understand this, so I would welcome some help with this.)
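    For the development side, the usual sketch looks like the command below (the subject name and stores are placeholders, not a recommendation for production). The short answer to the production questions is that a makecert certificate is not chained to any trusted root, so clients outside your own machines will either fail certificate validation or have to switch validation off, at which point the certificate no longer proves they are talking to the right server; a purchased or internal-CA certificate exists to supply that trusted chain. An HTTPS certificate can generally also be used for the WCF services, provided its subject matches the service host name and the service account can read its private key.

    rem Dev-only, self-signed service certificate in the LocalMachine\My (Personal) store
    rem "CN=MyDevServer" is a placeholder; it should match the endpoint identity you configure
    makecert -r -pe -n "CN=MyDevServer" -ss My -sr LocalMachine -sky exchange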

    Read the article

  • Getting MSDeploy working on our build/integration server - Is an MSBuild upgrade necessary?

    - by Jeff D
    We have what I think is a fairly standard build process:

    1. Developer: checks in code
    2. Build server: polls the repo, sees the change, and kicks off a build that:
    3. Build server: updates from the repo, builds with MSBuild, runs unit tests with NUnit
    4. Build server: creates the installer package

    Our security team allows us to pull from the build server, but does not allow the build server to push. So we generally RDP in, download the installers, and run them, which rules out the slick deployment services, so I would need to generate packages instead. I'd like to use MSDeploy, except that we have the following issues: We're on .NET 3.5, and the MSBuild target (Package) that uses MSDeploy requires 4.0. Is there anything I'd need to install other than .NET 4.0 RC for this? (Would MSBuild be part of that upgrade?) When I generate packages with MSDeploy, I see that I don't have just one file. There's a zip, deploy.cmd, SourceManifest.xml, and SetParameters.xml. What are all the other files for, and why wouldn't they all be in the 'package'? It sounds as if you can create packages by telling the system to look at a working IIS site. But if the packages are built from a CI environment, aren't you basically out of luck here? It feels like they designed some of this for small-scale developers deploying from their dev environment. That's a fine use case, but I'm interested to see what everyone's enterprise experience is with the tool. Any suggestions?
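    On the packaging question, the usual CI-side approach (a sketch, with project and output paths invented for illustration) is to call the Package target directly from the .NET 4.0 MSBuild once the VS2010/.NET 4.0 web publishing targets are on the build box. Of the files produced, the .zip is the actual Web Deploy package, deploy.cmd is a convenience wrapper around msdeploy.exe, SetParameters.xml carries the environment-specific values you can edit per target before running the cmd, and SourceManifest.xml records the providers used to build the package.

    rem Hypothetical paths; run with the .NET 4.0 MSBuild on the build server
    C:\Windows\Microsoft.NET\Framework\v4.0.30319\msbuild.exe MyWebApp.csproj ^
        /t:Package ^
        /p:Configuration=Release ^
        /p:PackageLocation=C:\Builds\Output\MyWebApp.zip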

    Read the article

  • Label in PyQt4 GUI not updating with every loop of FOR loop

    - by user297920
    I'm having a problem where I wish to run several command-line functions from a Python program using a GUI. I don't know if my problem is specific to PyQt4 or if it has to do with my bad use of Python code. What I wish to do is have a label on my GUI change its text value to inform the user which command is being executed. My problem, however, arises when I run several commands using a for loop. I would like the label to update itself with every loop; however, the program is not updating the GUI label with every loop. Instead, it only updates once the entire loop is completed, and displays only the last command that was executed. I am using PyQt4 for my GUI environment. I have established that the text variable for the label is indeed being updated with every loop, but it is not actually showing up visually in the GUI. Is there a way for me to force the label to update itself? I have tried the update() and repaint() methods within the loop, but they don't make any difference. I would really appreciate any help. Thank you. Ronny. Here is the code I am using:

    # -*- coding: utf-8 -*-
    import sys, os
    from PyQt4 import QtGui, QtCore

    Gui = QtGui
    Core = QtCore

    # ================================================== CREATE WINDOW OBJECT CLASS
    class Win(Gui.QWidget):
        def __init__(self, parent = None):
            Gui.QWidget.__init__(self, parent)
            # --------------------------------------------------- SETUP PLAY BUTTON
            self.but1 = Gui.QPushButton("Run Commands", self)
            self.but1.setGeometry(10, 10, 200, 100)
            # -------------------------------------------------------- SETUP LABELS
            self.label1 = Gui.QLabel("No Commands running", self)
            self.label1.move(10, 120)
            # ------------------------------------------------------- SETUP ACTIONS
            self.connect(self.but1, Core.SIGNAL("clicked()"), runCommands)

    # ======================================================= RUN COMMAND FUNCTION
    def runCommands():
        for i in commands:
            win.label1.setText(i)     # Make label display the command being run
            print win.label1.text()   # This shows that the value is actually
                                      # changing with every loop, but it's just not
                                      # being reflected in the GUI label
            os.system(i)

    # ======================================================================== MAIN
    # ------------------------------------------------------ THE TERMINAL COMMANDS
    com1 = "espeak 'senntence 1'"
    com2 = "espeak 'senntence 2'"
    com3 = "espeak 'senntence 3'"
    com4 = "espeak 'senntence 4'"
    com5 = "espeak 'senntence 5'"
    commands = (com1, com2, com3, com4, com5)

    # --------------------------------------------------- SETUP THE GUI ENVIRONMENT
    app = Gui.QApplication(sys.argv)
    win = Win()
    win.show()
    sys.exit(app.exec_())
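    A common way to make the label repaint between iterations, shown here as a sketch of the loop above rather than the only possible fix, is to let Qt flush its pending events before blocking in os.system. For long-running commands, moving the work into a QThread and signalling back to the GUI is the more robust design.

    def runCommands():
        for i in commands:
            win.label1.setText(i)
            # Give Qt a chance to process the pending repaint before we block;
            # otherwise the event loop never runs until the for loop finishes.
            Gui.QApplication.processEvents()
            os.system(i)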

    Read the article

  • How do I use PerformanceCounterType AverageTimer32?

    - by Patrick J Collins
    I'm trying to measure the time it takes to execute a piece of code on my production server. I'd like to monitor this information in real time, so I decided to give Performance Analyser a whizz. I understand from MSDN that I need to create both an AverageTimer32 and an AverageBase performance counter, which I duly have. I increment the counter in my program, and I can see the CallCount go up and down, but the AverageTime is always zero. What am I doing wrong? Thanks! Here's a snippet of the code:

    long init_call_time = Environment.TickCount;
    // ***
    // Lots and lots of code...
    // ***

    // Count number of calls
    PerformanceCounter perf = new PerformanceCounter("Cat", "CallCount", "Instance", false);
    perf.Increment();
    perf.Close();

    // Count execution time
    PerformanceCounter perf2 = new PerformanceCounter("Cat", "CallTime", "Instance", false);
    perf2.NextValue();
    perf2.IncrementBy(Environment.TickCount - init_call_time);
    perf2.Close();

    // Average base for execution time
    PerformanceCounter perf3 = new PerformanceCounter("Cat", "CallTimeBase", "Instance", false);
    perf3.Increment();
    perf3.Close();

    perf2.NextValue();
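    AverageTimer32 expects its increments in high-resolution performance-counter ticks, and the monitor computes ((N1 - N0) / frequency) / (B1 - B0) between two samples, so incrementing by Environment.TickCount milliseconds (and creating/closing the counters on every call) tends to read as zero. A hedged sketch of the usual pattern, reusing the category and counter names from the snippet above:

    using System;
    using System.Diagnostics;

    static class CallTimer
    {
        // Created once and kept open for the lifetime of the app, not per call
        static readonly PerformanceCounter CallTime =
            new PerformanceCounter("Cat", "CallTime", "Instance", false);
        static readonly PerformanceCounter CallTimeBase =
            new PerformanceCounter("Cat", "CallTimeBase", "Instance", false);

        public static void Time(Action work)
        {
            long start = Stopwatch.GetTimestamp();   // high-resolution ticks, the unit AverageTimer32 expects
            work();
            CallTime.IncrementBy(Stopwatch.GetTimestamp() - start);  // numerator: elapsed ticks
            CallTimeBase.Increment();                                // denominator: one completed operation
        }
    }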

    Read the article

  • Problem using FtpWebRequest to append to file on a mainframe

    - by MusiGenesis
    I am using FtpWebRequest to append data to a mainframe file. Each record appended is 50 characters long, and I am adding them one record at a time. In our development environment, we do not have a mainframe, so my code was written and tested FTPing to a Windows-based FTP site instead of a mainframe. Initially, I was writing each record using a StreamWriter (using the stream from the FtpWebRequest) and writing each record using WriteLine (which automatically adds a CR/LF to the end). When we ran this for the first time in the test environment (in which we're writing to an actual MVS mainframe), our mainframe contact said the CR/LFs were not able to be read by his program (a green-screen mainframe program of some sort - he's sent me screen captures, which is all I know of it). I changed our code to use Write instead of WriteLine, but now my code executes successfully (i.e no thrown exceptions) when writing multiple records, but no matter how many records we append, he is only able to "see" the first record - according to his mainframe program, there is only one 50-character record in the file. I'm guessing that to fix this, I need to write some other line-delimiting character into the end of the stream (instead of CR/LF) that the mainframe will recognize as a record delimiter. Anybody know what this is, or how else I can fix this problem?
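    One detail worth checking, offered as an assumption rather than the confirmed fix: FtpWebRequest transfers in binary (image) mode by default, and in image mode the MVS FTP server does no line-to-record translation, so any terminators you write end up inside the data instead of splitting records. Switching to ASCII mode and writing each 50-character record as a CR/LF-terminated line lets the server map one line to one record, provided the target data set has compatible attributes (e.g. RECFM=FB LRECL=50, set on the mainframe side). A sketch:

    using System.IO;
    using System.Net;
    using System.Text;

    static class MainframeAppender
    {
        public static void AppendRecord(string uri, string user, string password, string record)
        {
            var request = (FtpWebRequest)WebRequest.Create(uri);
            request.Method = WebRequestMethods.Ftp.AppendFile;
            request.UseBinary = false;                        // ASCII mode: server handles record boundaries
            request.Credentials = new NetworkCredential(user, password);

            using (Stream stream = request.GetRequestStream())
            using (var writer = new StreamWriter(stream, Encoding.ASCII))
            {
                writer.Write(record.PadRight(50).Substring(0, 50));  // fixed 50-character record
                writer.Write("\r\n");                                // one text line per record
            }

            using (var response = (FtpWebResponse)request.GetResponse()) { }
        }
    }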

    Read the article

  • Rspec2, Rails3, Authlogic: Can't run specs

    - by Sam
    When I do rspec spec in my rails project, I get No examples were matched. Perhaps {:if=>#<Proc:0x0000010126e998@/Users/samliu/.rvm/gems/ruby-1.9.2-p0@rails3/gems/rspec-core-2.3.1/lib/rspec/core/configuration.rb:50 (lambda)>, :unless=>#<Proc:0x0000010126e970@/Users/samliu/.rvm/gems/ruby-1.9.2-p0@rails3/gems/rspec-core-2.3.1/lib/rspec/core/configuration.rb:51 (lambda)>} is excluding everything? Finished in 0.00004 seconds 0 examples, 0 failures Now, this seems like maybe if I wrote a spec it would work, but as soon as I write a spec (and I do include spec_helper) /Users/samliu/.rvm/gems/ruby-1.9.2-p0@rails3/gems/rspec-core-2.3.1/lib/rspec/core/backward_compatibility.rb:20:in `const_missing': uninitialized constant Authlogic (NameError) from /{myapp}/app/models/user_session.rb:1:in `<top (required)>' from /Users/samliu/.rvm/gems/ruby-1.9.2-p0@rails3/gems/railties-3.0.3/lib/rails/engine.rb:138:in `block (2 levels) in eager_load!' from /Users/samliu/.rvm/gems/ruby-1.9.2-p0@rails3/gems/railties-3.0.3/lib/rails/engine.rb:137:in `each' from /Users/samliu/.rvm/gems/ruby-1.9.2-p0@rails3/gems/railties-3.0.3/lib/rails/engine.rb:137:in `block in eager_load!' from /Users/samliu/.rvm/gems/ruby-1.9.2-p0@rails3/gems/railties-3.0.3/lib/rails/engine.rb:135:in `each' from /Users/samliu/.rvm/gems/ruby-1.9.2-p0@rails3/gems/railties-3.0.3/lib/rails/engine.rb:135:in `eager_load!' from /Users/samliu/.rvm/gems/ruby-1.9.2-p0@rails3/gems/railties-3.0.3/lib/rails/application.rb:108:in `eager_load!' from /Users/samliu/.rvm/gems/ruby-1.9.2-p0@rails3/gems/railties-3.0.3/lib/rails/application/finisher.rb:41:in `block in <module:Finisher>' from /Users/samliu/.rvm/gems/ruby-1.9.2-p0@rails3/gems/railties-3.0.3/lib/rails/initializable.rb:25:in `instance_exec' from /Users/samliu/.rvm/gems/ruby-1.9.2-p0@rails3/gems/railties-3.0.3/lib/rails/initializable.rb:25:in `run' from /Users/samliu/.rvm/gems/ruby-1.9.2-p0@rails3/gems/railties-3.0.3/lib/rails/initializable.rb:50:in `block in run_initializers' from /Users/samliu/.rvm/gems/ruby-1.9.2-p0@rails3/gems/railties-3.0.3/lib/rails/initializable.rb:49:in `each' from /Users/samliu/.rvm/gems/ruby-1.9.2-p0@rails3/gems/railties-3.0.3/lib/rails/initializable.rb:49:in `run_initializers' from /Users/samliu/.rvm/gems/ruby-1.9.2-p0@rails3/gems/railties-3.0.3/lib/rails/application.rb:134:in `initialize!' 
from /Users/samliu/.rvm/gems/ruby-1.9.2-p0@rails3/gems/railties-3.0.3/lib/rails/application.rb:77:in `method_missing' from /{myapp}/config/environment.rb:5:in `<top (required)>' from <internal:lib/rubygems/custom_require>:29:in `require' from <internal:lib/rubygems/custom_require>:29:in `require' from /{myapp}/spec/spec_helper.rb:3:in `<top (required)>' from <internal:lib/rubygems/custom_require>:29:in `require' from <internal:lib/rubygems/custom_require>:29:in `require' from /{myapp}/spec/controllers/pages_controller_spec.rb:1:in `<top (required)>' from /Users/samliu/.rvm/gems/ruby-1.9.2-p0@rails3/gems/rspec-core-2.3.1/lib/rspec/core/configuration.rb:388:in `load' from /Users/samliu/.rvm/gems/ruby-1.9.2-p0@rails3/gems/rspec-core-2.3.1/lib/rspec/core/configuration.rb:388:in `block in load_spec_files' from /Users/samliu/.rvm/gems/ruby-1.9.2-p0@rails3/gems/rspec-core-2.3.1/lib/rspec/core/configuration.rb:388:in `map' from /Users/samliu/.rvm/gems/ruby-1.9.2-p0@rails3/gems/rspec-core-2.3.1/lib/rspec/core/configuration.rb:388:in `load_spec_files' from /Users/samliu/.rvm/gems/ruby-1.9.2-p0@rails3/gems/rspec-core-2.3.1/lib/rspec/core/command_line.rb:18:in `run' from /Users/samliu/.rvm/gems/ruby-1.9.2-p0@rails3/gems/rspec-core-2.3.1/lib/rspec/core/runner.rb:55:in `run_in_process' from /Users/samliu/.rvm/gems/ruby-1.9.2-p0@rails3/gems/rspec-core-2.3.1/lib/rspec/core/runner.rb:46:in `run' from /Users/samliu/.rvm/gems/ruby-1.9.2-p0@rails3/gems/rspec-core-2.3.1/lib/rspec/core/runner.rb:10:in `block in autorun' The important line here seems to be /core/backward_compatibility.rb:20:in `const_missing': uninitialized constant Authlogic (NameError) Now if this were rails 2.3.8, I'd simply put config.gem "authlogic" into the environment.rb, in the initialization code block. However, the rails 3 environment.rb looks way different (there is no config code block, so putting it in arbitrarily causes an error where config is not defined). So my questions are 1) Do I actually have to put the gem config anywhere? I looked at https://github.com/trevmex/authlogic_rails3_example/ and it seems he didn't put it anywhere. 2) Does anyone know what I'm doing wrong in terms of rspec? 
My gem list is *** LOCAL GEMS *** abstract (1.0.0) actionmailer (3.0.3, 3.0.1, 3.0.0, 3.0.0.rc2, 2.3.4) actionpack (3.0.3, 3.0.1, 3.0.0, 3.0.0.rc2, 2.3.4) activemodel (3.0.3, 3.0.1, 3.0.0, 3.0.0.rc2) activerecord (3.0.3, 3.0.1, 3.0.0, 3.0.0.rc2, 2.3.4) activeresource (3.0.3, 3.0.1, 3.0.0, 3.0.0.rc2, 2.3.4) activesupport (3.0.3, 3.0.1, 3.0.0, 3.0.0.rc2, 2.3.4) arel (2.0.6, 1.0.1) asdf (0.5.0) authlogic (2.1.6, 2.1.3) autotest (4.4.6, 4.4.1) autotest-fsevent (0.2.4) autotest-growl (0.2.9) autotest-rails (4.1.0) autotest-rails-pure (4.1.2) bluecloth (2.0.9) builder (2.1.2) bundler (1.0.7, 1.0.2) cgi_multipart_eof_fix (2.5.0) commonwatir (1.6.2) couchrest (0.33) cri (1.0.1) cucumber (0.4.4, 0.4.3, 0.3.11) daemons (1.1.0, 1.0.10) dependencies (0.0.7) diff-lcs (1.1.2) erubis (2.6.6) fastercsv (1.5.0) fastthread (1.0.7) firewatir (1.6.2) flay (1.4.0) flog (2.2.0) funfx (0.2.2) gem_plugin (0.2.3) gemsonrails (0.7.2) giraffesoft-resource_controller (0.6.5) haml (2.2.14) hoe (2.3.3) i18n (0.4.1) jscruggs-metric_fu (1.1.5) json_pure (1.1.9) kramdown (0.12.0) mail (2.2.13, 2.2.6.1) memcache-client (1.8.5) mime-types (1.16) mojombo-chronic (0.3.0) mongrel (1.1.5) monk (0.0.7) nanoc (3.1.5) nanoc3 (3.1.5) nokogiri (1.4.3.1, 1.4.0) open4 (0.9.6) polyglot (0.3.1, 0.2.9) rack (1.2.1, 1.0.1) rack-mount (0.6.13) rack-test (0.5.6) rails (3.0.0, 2.3.4) rails3-generators (0.17.0, 0.14.0) railties (3.0.3, 3.0.1, 3.0.0, 3.0.0.rc2) rake (0.8.7) relevance-rcov (0.9.2.1) rest-client (1.0.3) rspec (2.3.0, 2.0.0.rc, 1.2.9) rspec-core (2.3.1, 2.0.0.rc) rspec-expectations (2.3.0, 2.0.0.rc) rspec-mocks (2.3.0, 2.0.0.rc) rspec-rails (2.3.1, 2.0.0.rc, 1.2.9) ruby_parser (2.0.4) rubyforge (2.0.3) rubygems-update (1.3.6, 1.3.5) rvm (1.0.13) s4t-utils (1.0.4) safariwatir (0.3.7) sexp_processor (3.0.3) spork (0.7.3) sqlite3-ruby (1.3.1, 1.2.5) sys-uname (0.8.5) term-ansicolor (1.0.4) text-format (1.0.0) text-hyphen (1.0.0) thor (0.14.6, 0.14.3, 0.12.0) treetop (1.4.8, 1.4.2) tzinfo (0.3.23) user-choices (1.1.6) vlad (2.0.0) vlad-git (2.1.0) webrat (0.7.1, 0.6.0, 0.5.3) xml-simple (1.0.12) ZenTest (4.4.2) I am using ruby 1.9.2 and rails 3.0.3 installed using RVM on OSX 10.6 Snow Leopard. I just want to be able to run my specs like I used to. As a separate issue, autotest yields an error about an include for autotest/growl but I installed autotest-growl. Maybe this is a gem issue? I tried doing the same things and get the same error when it comes to using my ubuntu 10.04 server machine though. Gemfile source 'http://rubygems.org' gem 'rails', '3.0.3' # Bundle edge Rails instead: # gem 'rails', :git => 'git://github.com/rails/rails.git' gem 'sqlite3-ruby', :require => 'sqlite3' group :couch do gem 'couchrest' end group :user_auth do gem 'authlogic' gem "rails3-generators" gem 'facebooker' end group :markup do gem 'haml' gem 'sass' end group :testing do gem 'rspec-rails' gem 'rspec' gem 'webrat' gem 'cucumber' gem 'capybara' gem 'factory_girl' gem 'shoulda' gem 'autotest' end group :server do gem 'unicorn' end # Use unicorn as the web server # gem 'unicorn' # Deploy with Capistrano # gem 'capistrano' # To use debugger # gem 'ruby-debug' # Bundle the extra gems: # gem 'bj' # gem 'nokogiri' # gem 'sqlite3-ruby', :require => 'sqlite3' # gem 'aws-s3', :require => 'aws/s3' # Bundle gems for the local environment. 
Make sure to # put test-only gems in this group so their generators # and rake tasks are available in development mode: # group :development, :test do # gem 'webrat' # end Gemfile.lock GEM remote: http://rubygems.org/ specs: ZenTest (4.4.2) abstract (1.0.0) actionmailer (3.0.3) actionpack (= 3.0.3) mail (~> 2.2.9) actionpack (3.0.3) activemodel (= 3.0.3) activesupport (= 3.0.3) builder (~> 2.1.2) erubis (~> 2.6.6) i18n (~> 0.4) rack (~> 1.2.1) rack-mount (~> 0.6.13) rack-test (~> 0.5.6) tzinfo (~> 0.3.23) activemodel (3.0.3) activesupport (= 3.0.3) builder (~> 2.1.2) i18n (~> 0.4) activerecord (3.0.3) activemodel (= 3.0.3) activesupport (= 3.0.3) arel (~> 2.0.2) tzinfo (~> 0.3.23) activeresource (3.0.3) activemodel (= 3.0.3) activesupport (= 3.0.3) activesupport (3.0.3) arel (2.0.6) authlogic (2.1.6) activesupport autotest (4.4.6) ZenTest (>= 4.4.1) builder (2.1.2) capybara (0.4.0) celerity (>= 0.7.9) culerity (>= 0.2.4) mime-types (>= 1.16) nokogiri (>= 1.3.3) rack (>= 1.0.0) rack-test (>= 0.5.4) selenium-webdriver (>= 0.0.27) xpath (~> 0.1.2) celerity (0.8.6) childprocess (0.1.6) ffi (~> 0.6.3) couchrest (1.0.1) json (>= 1.4.6) mime-types (>= 1.15) rest-client (>= 1.5.1) cucumber (0.10.0) builder (>= 2.1.2) diff-lcs (~> 1.1.2) gherkin (~> 2.3.2) json (~> 1.4.6) term-ansicolor (~> 1.0.5) culerity (0.2.13) diff-lcs (1.1.2) erubis (2.6.6) abstract (>= 1.0.0) facebooker (1.0.75) json_pure (>= 1.0.0) factory_girl (1.3.2) ffi (0.6.3) rake (>= 0.8.7) gherkin (2.3.2) json (~> 1.4.6) term-ansicolor (~> 1.0.5) haml (3.0.25) i18n (0.5.0) json (1.4.6) json_pure (1.4.6) kgio (2.0.0) mail (2.2.13) activesupport (>= 2.3.6) i18n (>= 0.4.0) mime-types (~> 1.16) treetop (~> 1.4.8) mime-types (1.16) nokogiri (1.4.4) polyglot (0.3.1) rack (1.2.1) rack-mount (0.6.13) rack (>= 1.0.0) rack-test (0.5.6) rack (>= 1.0) rails (3.0.3) actionmailer (= 3.0.3) actionpack (= 3.0.3) activerecord (= 3.0.3) activeresource (= 3.0.3) activesupport (= 3.0.3) bundler (~> 1.0) railties (= 3.0.3) rails3-generators (0.17.0) railties (>= 3.0.0) railties (3.0.3) actionpack (= 3.0.3) activesupport (= 3.0.3) rake (>= 0.8.7) thor (~> 0.14.4) rake (0.8.7) rest-client (1.6.1) mime-types (>= 1.16) rspec (2.3.0) rspec-core (~> 2.3.0) rspec-expectations (~> 2.3.0) rspec-mocks (~> 2.3.0) rspec-core (2.3.1) rspec-expectations (2.3.0) diff-lcs (~> 1.1.2) rspec-mocks (2.3.0) rspec-rails (2.3.1) actionpack (~> 3.0) activesupport (~> 3.0) railties (~> 3.0) rspec (~> 2.3.0) rubyzip (0.9.4) sass (3.1.0.alpha.206) selenium-webdriver (0.1.2) childprocess (~> 0.1.5) ffi (~> 0.6.3) json_pure rubyzip shoulda (2.11.3) sqlite3-ruby (1.3.2) term-ansicolor (1.0.5) thor (0.14.6) treetop (1.4.9) polyglot (>= 0.3.1) tzinfo (0.3.23) unicorn (3.1.0) kgio (~> 2.0.0) rack webrat (0.7.2) nokogiri (>= 1.2.0) rack (>= 1.0) rack-test (>= 0.5.3) xpath (0.1.2) nokogiri (~> 1.3) PLATFORMS ruby DEPENDENCIES authlogic autotest capybara couchrest cucumber facebooker factory_girl haml rails (= 3.0.3) rails3-generators rspec rspec-rails sass shoulda sqlite3-ruby unicorn webrat
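    One explanation that fits this Gemfile, offered as an assumption since config/application.rb isn't shown: authlogic sits in the custom :user_auth group, and a stock Rails 3 application.rb only calls Bundler.require(:default, Rails.env), so nothing in :user_auth, :couch, or :markup is ever required. That would leave the Authlogic constant undefined when user_session.rb is eager-loaded, which is exactly the error in the backtrace. A sketch of the relevant change:

    # config/application.rb (Rails 3.0.x)
    require File.expand_path('../boot', __FILE__)
    require 'rails/all'

    if defined?(Bundler)
      # The stock template only requires :default and the current Rails.env group,
      # which skips the custom :user_auth, :couch and :markup groups.
      Bundler.require(:default, :user_auth, :couch, :markup, Rails.env)
    end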

    Read the article

  • Ruby Gems Not Installing, Hangs While Getting Gems

    - by Tim Hoolihan
    I recently cleared out all of my Ruby install and installed from sources using the instructions at hivelogic. I have been able to install a few gems, but most of the time "sudo gem install rails" hangs. I've added the -V flag, and it just seems to hang; I don't get any error. And the process cannot be killed; I can only reboot to kill the process. My ruby info:

    [tim@ ~]# ruby -v
    ruby 1.8.7 (2010-01-10 patchlevel 249) [i686-darwin10.2.0]
    [tim@ ~]# gem -v
    1.3.6
    [tim@ ~]# gem environment
    RubyGems Environment:
      - RUBYGEMS VERSION: 1.3.6
      - RUBY VERSION: 1.8.7 (2010-01-10 patchlevel 249) [i686-darwin10.2.0]
      - INSTALLATION DIRECTORY: /usr/local/lib/ruby/gems/1.8
      - RUBY EXECUTABLE: /usr/local/bin/ruby
      - EXECUTABLE DIRECTORY: /usr/local/bin
      - RUBYGEMS PLATFORMS:
        - ruby
        - x86-darwin-10
      - GEM PATHS:
        - /usr/local/lib/ruby/gems/1.8
        - /Users/tim/.gem/ruby/1.8
      - GEM CONFIGURATION:
        - :update_sources => true
        - :verbose => true
        - :benchmark => false
        - :backtrace => false
        - :bulk_threshold => 1000
        - :sources => ["http://gems.rubyforge.org/", "http://gems.rubyforge.org"]
      - REMOTE SOURCES:
        - http://gems.rubyforge.org/
        - http://gems.rubyforge.org
    [tim@ ~]# which ruby
    /usr/local/bin/ruby
    [tim@ ~]# which gem
    /usr/local/bin/gem
    [tim@ ~]# uname -a
    Darwin tim-hoolihans-macbook-pro-15.local 10.2.0 Darwin Kernel Version 10.2.0: Tue Nov 3 10:37:10 PST 2009; root:xnu-1486.2.11~1/RELEASE_I386 i386
    [tim@ ~]#

    Any ideas?
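    When gem install hangs with no error on this version of RubyGems, it is very often stuck fetching the legacy index from gems.rubyforge.org (which is listed twice in the sources above) rather than anything local. A hedged first thing to try is to repoint the sources at rubygems.org and rerun with full debugging so the hang point becomes visible:

    sudo gem sources -r http://gems.rubyforge.org/
    sudo gem sources -r http://gems.rubyforge.org
    sudo gem sources -a http://rubygems.org/
    sudo gem update --system
    sudo gem install rails -V --debug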

    Read the article

  • Is Rails Metal (& Rack) a good way to implement a high traffic web service api?

    - by Greg
    I am working on a very typical web application. The main component of the user experience is a widget that a site owner would install on their front page. Every time their front page loads, the widget talks to our server and displays some of the data that returns. So there are two components to this web application: the front end UI that the site owner uses to configure their widget the back end component that responds to the widget's web api call Previously we had all of this running in PHP. Now we are experimenting with Rails, which is fantastic for #1 (the front end UI). The question is how to do #2, the back serving of widget information, efficiently. Obviously this is much higher load than the front end, since it is called every time the front page loads on one of our clients' websites. I can see two obvious approaches: A. Parallel Stack: Set up a parallel stack that uses something other than rails (e.g. our old PHP-based approach) but accesses the same database as the front end B. Rails Metal: Use Rails Metal/Rack to bypass the Rails routing mechanism, but keep the api call responder within the Rails app My main question: Is Rails/Metal a reasonable approach for something like this? But also... Will the overhead of loading the Rails environment still be too heavy? Is there a way to get even closer to the metal with Rails, bypassing most of the environment? Will Rails/Metal performance approach the perf of a similar task on straight PHP (just looking for ballpark here)? And... Is there a 'C' option that would be much better than both A and B? That is, something before going to the lengths of C code compiled to binary and installed as an nginx or apache module? Thanks in advance for any insights.
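    Rails Metal is a reasonable fit for option B: a Metal endpoint runs before routing and most of the middleware stack but still boots inside the app, so it can reach the models and database while skipping most per-request Rails overhead. It will not match a bare PHP script, but it usually closes much of the gap. A sketch of such an endpoint for Rails 2.3-era Metal; the endpoint name, path, and JSON payload are invented for illustration:

    # app/metal/widget_api.rb (hypothetical name; generated with: script/generate metal widget_api)
    # Allow the metal piece to run in isolation
    require(File.dirname(__FILE__) + "/../../config/environment") unless defined?(Rails)

    class WidgetApi
      def self.call(env)
        if env["PATH_INFO"] =~ %r{^/widget_api}
          # Look up the widget data here; ActiveRecord is available
          body = '{"views": 42}'
          [200, { "Content-Type" => "application/json" }, [body]]
        else
          # Anything else falls through to the rest of the Rails stack
          [404, { "Content-Type" => "text/html" }, ["Not Found"]]
        end
      end
    end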

    Read the article

  • Android SDK fails to install

    - by Paul Breed
    When I try to install the Android SDK it fails to install. My OS is Windows XP. I just downloaded and installed Java JDK 1.6. java -version from the command line returns:

    java version "1.6.0_17"
    Java(TM) SE Runtime Environment (build 1.6.0_17-b04)
    Java HotSpot(TM) Client VM (build 14.3-b01, mixed mode, sharing)

    My environment vars have: JAVA_HOME=c:\progra~1\java\jdk1.6.0_11. I downloaded android-sdk-r04-windows.zip and unzipped it in V:\AndroidInstall\. When I go to V:\androindinstall\android-sdk-windows and type "SDK Install.exe", nothing happens. When I do this from a graphical file viewer I get a quick flash that looks like a command line window, and then nothing. When I try to run android list targets from the tools directory I get:

    Error: Error parsing the sdk.
    Error: V:\androindinstall\android-sdk-windows\platforms is missing.
    Error: Unable to parse SDK content.

    So the basic install setup is not happening. Additional clues: I have a G1, and Android 1.0 was running on this machine (almost a year ago). I've updated my G1 to 1.6, so I thought I'd update my SDK before starting new development. When I tried to upgrade, it tried and then died because the "directory was in use". So I cleaned out all the Android directories, rebooted, and redownloaded everything from scratch. Now it won't run at all. I've clearly got something in an unhappy state, but I've cleaned up all the directories, no remnants seem to be running, and I've rebooted. I've missed something; I just can't figure out what. Paul
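    Two things stand out in the output above, offered as likely suspects rather than a confirmed diagnosis: JAVA_HOME still points at jdk1.6.0_11 while the installed JDK is 1.6.0_17, and the r04 tools refuse to parse an SDK directory with no platforms (or add-ons) folder yet. A hedged sequence to try from a command prompt (the JDK path and the launcher name "SDK Setup.exe" are assumptions; use whatever actually exists on disk):

    rem Point JAVA_HOME at the JDK that is actually installed
    set JAVA_HOME=C:\Progra~1\Java\jdk1.6.0_17

    rem The SDK tools expect these folders to exist even before any platform is downloaded
    mkdir V:\androindinstall\android-sdk-windows\platforms
    mkdir V:\androindinstall\android-sdk-windows\add-ons

    rem Then launch the updater from the same prompt so any error output stays visible
    "V:\androindinstall\android-sdk-windows\SDK Setup.exe"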

    Read the article

< Previous Page | 141 142 143 144 145 146 147 148 149 150 151 152  | Next Page >