Search Results

Search found 18024 results on 721 pages for 'ruby enterprise edition'.

Page 159 of 721

  • Oracle Enterprise Manager Cloud Control 12c: Contributing to emerging Cloud standards

    - by Anand Akela
    Contributed by Tony Di Cenzo, Director for Standards Strategy and Architecture, and Mark Carlson, Principal Cloud Architect, for Oracle's Systems Management and Storage Products Groups.

    As one would expect of an industry leader, Oracle's participation in industry standards bodies is extensive. We participate in dozens of organizations that produce open standards which apply to our products, and our commitment to the success of these organizations is manifest in several ways: we support them financially through our memberships; our senior engineers are active participants, often serving in leadership positions on boards, technical working groups and committees; and when it makes good business sense we contribute our intellectual property. We believe supporting the development of open standards is fundamental to Oracle meeting customer demands for product choice, seamless interoperability, and a lower cost of ownership. Nowhere is this truer than in the area of cloud standards, and for the most recent release of our flagship management product, Oracle Enterprise Manager Cloud Control 12c (EM Cloud Control 12c).

    There is a fundamental rule that standards follow architecture. This was true of distributed computing, it was true of service-oriented architecture (SOA), and it's true of cloud. If you are familiar with Enterprise Manager it is likely no surprise that EM Cloud Control 12c is a source of technology that can be considered for adoption within cloud management standards. The reason, quite simply, is that the Oracle integrated stack architecture aligns with the cloud architecture models being adopted by the industry, and EM Cloud Control 12c has been developed to manage this architecture. EM Cloud Control 12c has facilities for managing the various underlying capabilities of the integrated stack in IaaS, PaaS, and SaaS clouds, and enables essential characteristics such as on-demand self-service provisioning, centralized policy-based resource management, integrated chargeback and capacity planning, and complete visibility of the physical and virtual environment from applications to disk.

    Our most recent contribution in support of cloud management standards to come out of the EM Cloud Control 12c work was the Oracle Cloud Elemental Resource Model API. Oracle contributed the Elemental Resource Model API to the Distributed Management Task Force (DMTF) in 2011, where it was assigned to DMTF's Cloud Management Working Group (CMWG). The CMWG is considering the Oracle specification and those of several other vendors in their effort to produce a best practices specification for managing IaaS clouds.
    DMTF's Cloud Infrastructure Management Interface specification, called CIMI for short, is currently out for public review and is expected to be released by DMTF later this year. We are proud to be playing an important role in the development of what is expected to become a major cloud standard. You can find more information on DMTF CIMI at http://dmtf.org/standards/cloud. You can find the work-in-progress release of CIMI at http://dmtf.org/content/cimi-work-progress-specifications-now-available-public-comment. The Oracle Cloud API specification is available on the Oracle Technology Network. You can find more information about the Oracle Cloud Elemental Resource Model API on the Oracle Technology Network (OTN), including a webcast featuring the API engineering manager Jack Yu (see TechCast Live: Inside the Oracle Cloud Resource Model API). If you have not seen this video we recommend you take the time to view it. If you have a question about the Oracle Cloud API or want to learn more about Oracle's participation in cloud management standards efforts, drop us a line. We'd love to hear from you.

    The Enterprise Manager Standards Blogs are written by Tony Di Cenzo, Director for Standards Strategy and Architecture, and Mark Carlson, Principal Cloud Architect, for Oracle's Systems Management and Storage Products Groups. They can be reached at Tony.DiCenzo at Oracle.com and Mark.Carlson at Oracle.com respectively.

    Stay Connected: Twitter | Facebook | YouTube | LinkedIn | Newsletter

    Read the article

  • Winners of the Oracle Excellence Award—Eco-Enterprise Innovation

    - by Evelyn Neumayr
    Did you get a chance to attend Oracle OpenWorld in San Francisco? With 60,000 attendees and hundreds of sessions to choose from—there was a lot going on. One of my favorite sessions was the Eco-Enterprise Awards and Sustainability Executive Panel Discussion. During this session, Jeff Henley, Oracle Chairman of the Board, announced the winners of the 2013 Oracle Excellence Award—Eco-Enterprise Innovation. It was an enlightening session as we heard several of the winning customers discuss the importance of sustainability to their company and how they’re using various Oracle products to help with their sustainability initiatives.

    The winning customers include: Centennial Coal, Indaver nv, Korea Enterprise Data, National Guard Health Affairs, Schneider National, SThree, Telstra International Group, Trex Company, University of Salzburg, Walmart, and Yeoncheon County Office. Stay tuned for additional blogs where you’ll learn more about these winning companies’ environmental best practices and why they won this award.

    Several partners were also recognized for helping these customers with their sustainability initiatives. Those partners include: CSS International, Daesang Information Technology, i4BI, Infosys, Knowledge Global, Solutions for Retails Brands Limited, and SysGen.

    During this same session, Jeff Henley also awarded Robert Kaplan, Director of Sustainability at Walmart, with Oracle’s Chief Sustainability Officer of the Year award. Robert was honored for helping improve Walmart’s supply chain efficiency with their Sustainability Hub. The Sustainability Hub, powered by Oracle Service Cloud, is a central location for Walmart suppliers, associates and business partners to learn, connect, inspire and drive sustainability through collaboration. While at Oracle OpenWorld, I also got a chance to hear Robert Kaplan discuss their Sustainability Hub during an Oracle OpenWorld Live taping.

    Read the article

  • Virtual Brown Bag: Ruby Newbies, Mockups, There *is* an I in SOLID, fuv

    - by Brian Schroer
    At this week's Virtual Brown Bag meeting:

    - Claudio pointed us to Try Ruby! and Rails For Zombies, two sites to educate Ruby newbies
    - We looked at the free version of Balsamiq, and other online mockup sites
    - George walked us through a refactoring to isolate roles and adhere to the Interface Segregation Principle (the "I" in SOLID); a small illustrative sketch follows below
    - We laughed at fuv, the code editor for "real programmers"

    For detailed notes, links, and the video recording, go to the VBB wiki page: https://sites.google.com/site/vbbwiki/main_page/2011-02-10
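    The Interface Segregation Principle item is easier to picture with a small example. The sketch below is not George's refactoring from the meeting (that is on the wiki page); it is a generic Ruby illustration, with hypothetical names (Report, HtmlRenderable, Emailable, WeeklyReport), of isolating roles so that callers depend only on the methods they actually use.

        # Before: every consumer depends on one wide interface,
        # even if it only needs a fraction of it.
        class Report
          def render_html; "<h1>Report</h1>"; end
          def render_pdf;  "%PDF-1.4 ..."; end
          def email_to(address); "sent to #{address}"; end
        end

        # After: each role lives in its own small module, and objects mix in
        # only the roles they actually play. A caller can now depend on
        # HtmlRenderable (or Emailable) alone instead of the whole Report.
        module HtmlRenderable
          def render_html; "<h1>#{title}</h1>"; end
        end

        module Emailable
          def email_to(address); "#{title} sent to #{address}"; end
        end

        class WeeklyReport
          include HtmlRenderable
          include Emailable

          def title; "Weekly Report"; end
        end

        puts WeeklyReport.new.render_html                   # => <h1>Weekly Report</h1>
        puts WeeklyReport.new.email_to("team@example.com")  # => Weekly Report sent to team@example.com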

    Read the article

  • OS Analytics with Oracle Enterprise Manager (by Eran Steiner)

    - by Zeynep Koch
    Oracle Enterprise Manager Ops Center provides a feature called "OS Analytics". This feature allows you to get a better understanding of how the Operating System is being utilized. You can research the historical usage as well as real time data. This post will show how you can benefit from OS Analytics and how it works behind the scenes.

    The recording of our call to discuss this blog is available here: https://oracleconferencing.webex.com/oracleconferencing/ldr.php?AT=pb&SP=MC&rID=71517797&rKey=4ec9d4a3508564b3
    Download the presentation here.
    See also: Blog about Alert Monitoring and Problem Notification | Blog about Using Operational Profiles to Install Packages and other content

    Here is a quick summary of what you can do with OS Analytics in Ops Center:

    - View historical charts and real time values of CPU, memory, network and disk utilization
    - Find the top CPU and memory processes in real time or at a certain historical day
    - Determine proper monitoring thresholds based on historical data
    - Drill down into a process's details

    Where to start

    To start with OS Analytics, choose the OS asset in the tree and click the Analytics tab. You can see the CPU utilization, memory utilization and network utilization, along with the current real time top 5 processes in each category (click the image to see a larger version). In that screen, you can click each of the top 5 processes to see a more detailed view of that process. One of the cool things is that you can see the process tree for the process along with some port bindings and open file descriptors.

    Next, click the "Processes" tab to see real time information for all the processes on the machine. An interesting column is the "Target" column. If you configured Ops Center to work with Enterprise Manager Cloud Control, then the two products will talk to each other and Ops Center will display the correlated target from Cloud Control in this table. If you are only using Ops Center, this column will remain empty.

    The "Threshold" tab is particularly helpful: you can view historical trends of different monitored values and, based on the graph, determine what the monitoring values should be. You can ask Ops Center to suggest monitoring levels based on the historical values or you can set your own. The different colors in the graph represent the current set levels: red for critical, yellow for warning and blue for information, allowing you to quickly see how they're positioned against real data. It's important to note that when looking at longer periods, Ops Center smooths out the data and uses averages. So when looking at values such as CPU usage, try shorter time frames which are more detailed, such as one hour or one day.

    Applying new monitoring values

    When first applying new values to monitored attributes, a popup will come up asking if it's OK to take you out of the current Monitoring Policy. This is OK if you want to either have custom monitoring for a specific machine, or if you want to use the current machine as a "gold image" and extract a Monitoring Policy from it. You can later apply the new Monitoring Policy to other machines and also set it as a default Monitoring Profile. Once you're done applying the different monitoring values, you can review and change them in the "Monitoring" tab. You can also click "Extract a Monitoring Policy" in the actions pane on the right to save all the new values to a new Monitoring Policy, which can then be found under "Plan Management" -> "Monitoring Policies".

    Visiting the past

    Under the "History" tab you can "go back in time". This is very helpful when you know that a machine was busy a few hours ago (perhaps in the middle of the night?), but you were not around to take a look at it in real time. Looking at yesterday's data on one of the machines, you can see an interesting CPU spike happening at around 3:30 am along with some memory use. In the bottom table you can see the top 5 CPU and memory consumers at the requested time. Very quickly you can see that this spike is related to the Solaris 11 IPS repository synchronization process using the "pkgrecv" command. The "time machine" doesn't stop here: you can also view historical data to determine which of the zones was the busiest at a given time.

    Under the hood

    The data collected is stored on each of the agents under /var/opt/sun/xvm/analytics/historical/

    - An "os.zip" file exists for the main OS. Inside you will find many small text files, named after the Epoch time stamp at which they were taken.
    - If you have any zones, there will be a file called "guests.zip" containing the same small files for all the zones, as well as a folder with the name of each zone with an "os.zip" in it.
    - If this is the Enterprise Controller or the Proxy Controller, you will have folders called "proxy" and "sat" in which you will find the "os.zip" for that controller.

    The actual script collecting the data can be viewed for debugging purposes as well. On Linux, the location is: /opt/sun/xvmoc/private/os_analytics/collect
    If you would like to redirect all the standard error into a file for debugging, touch the following file and the output will go into it:

        # touch /tmp/.collect.stderr

    The temporary data is collected under /var/opt/sun/xvm/analytics/.collectdb until it is zipped. If you would like to review the properties for the Analytics, you can view those per agent in /opt/sun/n1gc/lib/XVM.properties. Find the section "Analytics configurable properties for OS and VSC" to view the Analytics specific values.

    I hope you find this helpful! Please post questions in the comments below.

    Eran Steiner
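    The layout described under "Under the hood" can be explored directly on an agent. Here is a minimal Ruby sketch doing just that; the directory paths come from the post above, while the script itself and the sample epoch file name are only an illustration and are not shipped with Ops Center.

        # List the analytics archives an agent has collected (path from the post above).
        historical_dir = '/var/opt/sun/xvm/analytics/historical'

        Dir.glob(File.join(historical_dir, '**', '*.zip')).sort.each do |archive|
          puts archive   # e.g. .../historical/os.zip, .../historical/<zone>/os.zip
        end

        # The small text files inside each archive are named after the Epoch time
        # stamp at which the sample was taken, so converting a name back to a
        # readable time is a one-liner:
        sample_name = '1334908800'            # hypothetical file name taken from os.zip
        puts Time.at(sample_name.to_i).utc    # => 2012-04-20 08:00:00 UTC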

    Read the article

  • Windows Azure Recipe: Enterprise LOBs

    - by Clint Edmonson
    Enterprises are more dependent than ever on their specialized internal Line of Business (LOB) applications. Naturally, the more software they leverage on-premises, the more infrastructure they need to manage. It’s frequently the case that our customers simply can’t scale up their hardware purchases and operational staff as fast as internal demand for software requires. The result is that getting new or enhanced applications into the hands of business users becomes slower and more expensive every day. Being able to quickly deliver applications in a rapidly changing business environment while maintaining high standards of corporate security is a challenge that can be met right now by moving enterprise LOBs out into the cloud and leveraging Azure’s Access Control services. In fact, we’re seeing many of our customers (both large and small) see huge benefits from moving their web based business applications such as corporate help desks, expense tracking, travel portals, timesheets, and more to Windows Azure.

    Drivers: cost reduction, time to market, security.

    Solution: here’s a sketch of how many Windows Azure Enterprise LOBs are being architected and deployed.

    Ingredients:

    - Web Role – this will host the core of the application. Each web role is a virtual machine hosting an application written in ASP.NET (or optionally PHP or node.js). The number of web roles can be scaled up or down as needed to handle peak and non-peak traffic loads. Many Java based applications are also being deployed to Windows Azure with a little more effort.
    - Database – every modern web application needs to store data. SQL Azure databases look and act exactly like their on-premise siblings but are fault tolerant and have data redundancy built in.
    - Access Control – this service is necessary to establish federated identity between the cloud hosted application and an enterprise’s corporate network. It works in conjunction with a secure token service (STS) that is hosted on-premises to establish the corporate user’s identity and credentials. The source code for an on-premises STS is provided in the Windows Azure training kit and merely needs to be customized for the corporate environment and published on a publicly accessible corporate web site. Once set up, corporate users see a near seamless single sign-on experience.
    - Reporting – businesses live and die by their reports, and SQL Azure Reporting, based on SQL Server Reporting 2008 R2, can serve up reports with tables, charts, maps, gauges, and more. These reports can be accessed from the Windows Azure Portal, through a web browser, or directly from applications.
    - Service Bus (optional) – if deep integration with other applications and systems is needed, the service bus is the answer. It enables secure service layer communication between applications hosted behind firewalls in on-premises or partner datacenters and applications hosted inside Windows Azure. The Service Bus provides the ability to securely expose just the information and services that are necessary to create a simpler, more secure architecture than opening up a full blown VPN.
    - Data Sync (optional) – in cases where the data stored in the cloud needs to be shared internally, establishing a secure one-way or two-way data-sync connection between the on-premises and off-premises databases is a perfect option. It can be very granular, allowing us to specify exactly which tables and columns to synchronize, set up filters to sync only a subset of rows, set the conflict resolution policy for two-way sync, and specify how frequently data should be synchronized.

    Training Labs

    These links point to online Windows Azure training labs where you can learn more about the individual ingredients described above. (Note: The entire Windows Azure Training Kit can also be downloaded for offline use.)

    - Windows Azure (16 labs): Windows Azure is an internet-scale cloud computing and services platform hosted in Microsoft data centers, which provides an operating system and a set of developer services which can be used individually or together. It gives developers the choice to build web applications; applications running on connected devices, PCs, or servers; or hybrid solutions offering the best of both worlds. New or enhanced applications can be built using existing skills with the Visual Studio development environment and the .NET Framework. With its standards-based and interoperable approach, the services platform supports multiple internet protocols, including HTTP, REST, SOAP, and plain XML.
    - SQL Azure (7 labs): Microsoft SQL Azure delivers on the Microsoft Data Platform vision of extending the SQL Server capabilities to the cloud as web-based services, enabling you to store structured, semi-structured, and unstructured data.
    - Windows Azure Services (9 labs): As applications collaborate across organizational boundaries, ensuring secure transactions across disparate security domains is crucial but difficult to implement. Windows Azure Services provides hosted authentication and access control using powerful, secure, standards-based infrastructure.

    See my Windows Azure Resource Guide for more guidance on how to get started, including links to web portals, training kits, samples, and blogs related to Windows Azure.

    Read the article

  • Equivalent of #map in ruby in golang

    - by Oct
    I'm playing with Go and have run into something I'm unable to find in Google, although there is certainly something that exists. I'm using the following structs:

        type Syntax struct {
            name       string
            extensions *regexp.Regexp
        }

        type Scanner struct {
            classifier           *bayesian.Classifier
            save_file            string
            name_to_syntax       map[string]*Syntax
            extensions_to_syntax map[*regexp.Regexp]*Syntax
        }

    I'd like to perform the following using Go; I'm quoting Ruby because it's how I'd do it in Ruby:

        test_regexpes = my_scanner.extensions_to_syntax.keys

    My goal is to get an array of *regexp.Regexp. Any idea on how to do that in a simple way? Thank you!

    Read the article

  • Ruby on Rails - A Competent Web Development Application

    Written in the Ruby programming language, Ruby on Rails is one of the most frequently used web application development frameworks. Often termed RoR or Rails, it is an open source web development framework built on an object-oriented programming language, encouraging the simple development of complete and powerful web applications with rich interactivity and functionality.

    Read the article

  • Authlogic openid: getting undefined method openid_identifier? error in functional test

    - by usr
    I use Authlogic with the Authlogic openid add-on (I gem installed ruby-openid and ran script/plugin install git://github.com/rails/open_id_authentication.git) and get two errors.

    First, when running functional tests, I get an undefined method openid_identifier? message on a line in my new.html.erb file when running the UsersControllerTest. The line is:

        <% if @user.openid_identifier? %>

    When running script/console I can access this method without any problem.

    Second, when testing the openid functionality and registering a new user to my application using openid (using my blogspot account for that), I get the following in my log file:

        Generated checkid_setup request to http://www.blogger.com/openid-server.g with association ...
        Redirected to http://www.blogger.com/openid-server.g?openid.assoc_handle=...
        NoMethodError (You have a nil object when you didn't expect it! The error occurred while evaluating nil.call):
        app/controllers/users_controller.rb:44:in `create'
        /usr/lib/ruby/1.8/webrick/httpserver.rb:104:in `service'
        /usr/lib/ruby/1.8/webrick/httpserver.rb:65:in `run'
        /usr/lib/ruby/1.8/webrick/server.rb:173:in `start_thread'
        /usr/lib/ruby/1.8/webrick/server.rb:162:in `start'
        /usr/lib/ruby/1.8/webrick/server.rb:162:in `start_thread'
        /usr/lib/ruby/1.8/webrick/server.rb:95:in `start'
        /usr/lib/ruby/1.8/webrick/server.rb:92:in `each'
        /usr/lib/ruby/1.8/webrick/server.rb:92:in `start'
        /usr/lib/ruby/1.8/webrick/server.rb:23:in `start'
        /usr/lib/ruby/1.8/webrick/server.rb:82:in `start'

    The code in the users_controller is straightforward:

        def create
          respond_to do |format|
            @user.save do |result|
              if result
                flash[:notice] = t('Thanks for signing up!')
                format.html { redirect_to :action => 'index' }
                format.xml  { render :xml => @user, :status => :created, :location => @user }
              else
                format.html { render :action => "new" }
                format.xml  { render :xml => @user.errors, :status => :unprocessable_entity }
              end
            end
          end
        end

    The line giving the error is @user.save do |result|... I feel I'm missing something pretty basic, but I have been staring at this for too long because I can't find what it is. I checked against the code in Railscasts episodes 160 and 170 and the bones GitHub project but found nothing.

    Thanks for your help,
    usr

    Read the article

  • Why won't ruby recognize Haml under ubuntu64 while using jekyll static blog generator?

    - by oldmanjoyce
    I have been trying, quite unsuccessfully, to run henrik's fork of the jekyll static blog generator on Ubuntu 64-bit. I just can't seem to figure this out and I've tried a bunch of different things. Originally I posted this over at stackoverflow, but this is probably the better spot for it.

    The base stats of my machine: Ubuntu 9.04, 64 bit, ruby 1.8.7 (2008-08-11 patchlevel 72) [x86_64-linux], rubygems 1.3.1.

    When I attempt to build the site, this is what happens:

        $ jekyll --pygments
        Configuration from ./_config.yml
        Using Sass for CSS generation
        You must have the haml gem installed first
        Using rdiscount for Markdown
        Building site: . - ./_site
        /home/chris/.gem/gems/henrik-jekyll-0.5.2/bin/../lib/jekyll/core_ext.rb:27:in `method_missing': undefined method 'header' for #, page=# ..... cut ..... (NoMethodError)
        from (haml):9:in `render'
        from /home/chris/.gem/gems/haml-2.2.3/lib/haml/engine.rb:167:in 'render'
        from /home/chris/.gem/gems/haml-2.2.3/lib/haml/engine.rb:167:in 'instance_eval'
        from /home/chris/.gem/gems/haml-2.2.3/lib/haml/engine.rb:167:in 'render'
        from /home/chris/.gem/gems/henrik-jekyll-0.5.2/bin/../lib/jekyll/convertible.rb:72:in 'render_haml_in_context'
        from /home/chris/.gem/gems/henrik-jekyll-0.5.2/bin/../lib/jekyll/convertible.rb:105:in 'do_layout'
        from /home/chris/.gem/gems/henrik-jekyll-0.5.2/bin/../lib/jekyll/post.rb:226:in 'render'
        from /home/chris/.gem/gems/henrik-jekyll-0.5.2/bin/../lib/jekyll/site.rb:172:in 'read_posts'
        from /home/chris/.gem/gems/henrik-jekyll-0.5.2/bin/../lib/jekyll/site.rb:171:in 'each'
        from /home/chris/.gem/gems/henrik-jekyll-0.5.2/bin/../lib/jekyll/site.rb:171:in 'read_posts'
        from /home/chris/.gem/gems/henrik-jekyll-0.5.2/bin/../lib/jekyll/site.rb:210:in 'transform_pages'
        from /home/chris/.gem/gems/henrik-jekyll-0.5.2/bin/../lib/jekyll/site.rb:126:in 'process'
        from /home/chris/.gem/gems/henrik-jekyll-0.5.2/bin/jekyll:135
        from /home/chris/.gem/bin/jekyll:19:in `load'
        from /home/chris/.gem/bin/jekyll:19

    I added spaces to the left of the ClosedStruct to enable better visibility - sorry that my inline html/formatting isn't perfect. I also cut out some middle text that is just data.

        $ gem list

        *** LOCAL GEMS ***

        actionmailer (2.3.4)
        actionpack (2.3.4)
        activerecord (2.3.4)
        activeresource (2.3.4)
        activesupport (2.3.4)
        classifier (1.3.1)
        directory_watcher (1.2.0)
        haml (2.2.3)
        haml-edge (2.3.27)
        henrik-jekyll (0.5.2)
        liquid (2.0.0)
        maruku (0.6.0)
        open4 (0.9.6)
        rack (1.0.0)
        rails (2.3.4)
        rake (0.8.7)
        rdiscount (1.3.5)
        RedCloth (4.2.2)
        stemmer (1.0.1)
        syntax (1.0.0)

    Some showing for path verification:

        $ echo $PATH
        /home/chris/.gem/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games
        $ which haml
        /home/chris/.gem/bin/haml
        $ which jekyll
        /home/chris/.gem/bin/jekyll

    Read the article

  • Three Easy Ways of Providing Feedback to the Oracle AutoVue Team

    - by Celine Beck
    Customer feedback is essential in helping us deliver best-in-class Enterprise Visualization solutions which are centered around real-world usage. As the Oracle AutoVue Product Management team is busy prioritizing the next round of improvements, enhancements and new innovation to the AutoVue platform, I thought it would be a good idea to provide our blog readers with a recap of how best to provide product feedback to the AutoVue Product Management team. This gives you the opportunity to help shape our future agenda and make our solutions better for you.

    Enterprise Visualization Special Interest Group (EV SIG): the AutoVue EV SIG is a customer-driven initiative that has recently been created to share knowledge and information between members and discuss common and best practices around Enterprise Visualization. The EV SIG also serves as a mechanism for establishing and communicating to AutoVue Product Management users’ collective priorities for the future development, direction and enhancement of the AutoVue product family, with the objective of ensuring their continuous improvement. Essentially, EV SIG members meet in order to share and prioritize feedback and use this input to begin dialog with the AutoVue Product Management team on what they deem to be the most important improvements to Enterprise Visualization solutions. The AutoVue EV SIG is by far the best platform for sharing and relaying feedback to our Product Strategy / Management team regarding general product enhancements, industry-specific scenarios, new use cases, usability, support, deployability, etc., and helping us shape the future direction of Enterprise Visualization solutions. We strongly encourage ALL our customers to sign up for the SIG; here is how you can do so:

    a. Sign up for the EVSIG mailing list
    b. Visit the group’s website
    c. Contact Dennis Walker at Harris Corporation directly should you have any questions: dwalke22-AT-harris-DOT-com

    Customer / Partner Advisory Boards: the AutoVue Product Strategy / Management team also periodically runs Customer and Partner Advisory Boards. These invitation-only events bring together individuals chosen from Oracle AutoVue’s top customers that are using AutoVue at the enterprise level, as well as strategic partners. The idea here is to establish open lines of communication between top customers and partners and the Oracle AutoVue Product Strategy team, help us communicate AutoVue’s product direction, share perspectives on today’s and tomorrow’s challenges and needs for Enterprise Visualization, and validate that proposed additions to the product are valid industry solutions. Our next Customer / Partner Advisory Board will be held in San Francisco during Oracle OpenWorld; please contact your account manager to find out more about the CAB members’ nomination process.

    Enhancement Requests: enhancement requests are requests logged by customers or partners with Product Development for a feature that is not currently available in Oracle AutoVue. Enhancement requests (ERs) can be logged easily via the My Oracle Support portal. This is the best way to share feedback with us at the functionality level; for instance, if you would like to see a new format supported in AutoVue or make suggestions as to how certain functionality can be improved or should behave. Once the ER is logged, it is evaluated by Product Management based on feasibility, product fit and business justification. Product Management then decides whether or not to consider the ER for a future release.

    What helps accelerate the process is hearing from a large number of customers who urgently need a particular feature or configuration. Hence the importance of logging Metalink Service Requests and describing your business expectations in detail. You can include key milestone dates and justifications as to why this request is important and the benefits your organization stands to gain should this request be accepted. Again, feedback from customers and partners is critical to ensure we offer solutions that have the biggest impact on customers’ business processes and day-to-day operations. All feedback is welcome, so please don’t be shy!

    Read the article

  • Redmine with Apache 2 + Passenger nightmare --- site is up and available, but Redmine doesn't execute

    - by CptSupermrkt
    I was determined to figure this out myself, but I've been at it for a total of more than 10 hours, and I just can't figure this out.

    First, let me detail my environment (which I cannot change):

    - Server version: Apache/2.2.15 (Unix)
    - Ruby version: ruby 1.9.3p448
    - Rails version: Rails 4.0.1
    - Passenger version: Phusion Passenger version 4.0.5
    - Redmine version: 2.3.3

    I have followed the Redmine instructions all the way through the test webserver to check that installation was successful with this command:

        ruby script/rails server webrick -e production

    The roadblock which I cannot overcome is getting Apache and Passenger to interpret and properly serve Redmine. I have searched pretty much every possible link within the first 10 pages or so of Google results. Everywhere I go I come across conflicting/contradicting/outdated information. We have a "weird" setup with Apache (which I inherited and cannot change). Redmine needs to be served through SSL, but Apache already has another website it's serving through SSL called Twiki. By "weird", what I mean is that our file structure is entirely different from all the tutorials out there on this version of Apache, which have directories like "available-sites" and such.

    Here are the abbreviated versions of some of our config files.

    /etc/httpd/conf/httpd.conf (the global configuration file --- note that NO VirtualHost is defined here):

        ServerRoot "/etc/httpd"
        ...
        LoadModule passenger_module /usr/local/pkg/ruby/1.9.3-p448/lib/ruby/gems/1.9.1/gems/passenger-4.0.5/libout/apache2/mod_passenger.so
        PassengerRoot /usr/local/pkg/ruby/1.9.3-p448/lib/ruby/gems/1.9.1/gems/passenger-4.0.5
        PassengerDefaultRuby /usr/local/pkg/ruby/1.9.3-p448/bin/ruby
        Include conf.d/*.conf
        ...
        User apache
        Group apache
        ...
        DocumentRoot "/var/www/html"

    So just to clarify, the above httpd.conf file does NOT have a VirtualHost section.

    /etc/httpd/conf.d/ssl.conf (defines the VirtualHost for SSL):

        Listen 443
        <VirtualHost _default_:443>
            SSLEngine on
            ...
            SSLCertificateFile /etc/pki/tls/certs/localhost.crt
        </VirtualHost>

    /etc/httpd/conf.d/twiki.conf (this works just fine --- note this does NOT define a VirtualHost):

        ScriptAlias /twiki/bin/ "/var/www/twiki/bin/"
        Alias /twiki/ "/var/www/twiki/"
        <Directory "/var/www/twiki/bin">
            AllowOverride None
            Order Deny,Allow
            Deny from all
            AuthType Basic
            AuthName "our team"
            AuthBasicProvider ldap
            ...a lot of ldap and authorization stuff
            Options ExecCGI FollowSymLinks
            SetHandler cgi-script
        </Directory>

    /etc/httpd/conf.d/redmine.conf:

        Alias /redmine/ "/var/www/redmine/public/"
        <Directory "/var/www/redmine/public">
            Options Indexes ExecCGI FollowSymLinks
            Order allow,deny
            Allow from all
            AllowOverride all
        </Directory>

    The amazing thing is that this doesn't completely NOT work: I can successfully open up https://someserver/redmine/ with SSL and the https://someserver/twiki/ site remains unaffected. This tells me that it IS possible to have two separate sites up with one SSL configuration, so I don't think that's the problem. The problem is that it opens up to the file index. I can navigate around my Redmine file structure, but no code ever gets executed. For example, there is a file included with Redmine called dispatch.fcgi in the public folder. https://someserver/redmine/dispatch.fcgi opens, but just as plain text code in the browser. As I understand it, in the case of using Passenger, CGI and FastCGI stuff is irrelevant/unused.
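    Not an answer from the original post, but context that may help readers: with only an Alias and a plain <Directory> block, Apache serves /var/www/redmine/public as static files and Passenger is never asked to run the Rails application, which matches the symptom above (a file index, and dispatch.fcgi shown as plain text). The sketch below follows the sub-URI deployment pattern from the Passenger 4 Apache documentation; the paths are taken from the question, but the directive choices are an assumption to adapt and test, not a verified fix for this server.

        # Hypothetical /etc/httpd/conf.d/redmine.conf sketch (sub-URI deployment),
        # assuming Passenger 4.x directives; adjust and test before relying on it.
        Alias /redmine "/var/www/redmine/public"

        <Location /redmine>
            # Tell Passenger the app is mounted under /redmine and where it lives.
            PassengerBaseURI /redmine
            PassengerAppRoot /var/www/redmine
        </Location>

        <Directory "/var/www/redmine/public">
            Options -MultiViews
            Order allow,deny
            Allow from all
            AllowOverride all
        </Directory>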

    Read the article

  • Why did Matz choose to make Strings mutable by default in Ruby?

    - by Seth Tisue
    It's the reverse of this question: http://stackoverflow.com/questions/93091/why-cant-strings-be-mutable-in-java-and-net

    Was this choice made in Ruby only because operations (appends and such) are efficient on mutable strings, or was there some other reason? (If it's only efficiency, that would seem peculiar, since the design of Ruby seems otherwise to not put a high premium on facilitating efficient implementation.)
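    For readers unfamiliar with the premise of the question, here is a short generic example (not from the original discussion) of what "mutable by default" means in practice in Ruby:

        s = "ruby"
        t = s                 # t and s reference the same String object

        s << " enterprise"    # << appends in place; no new object is allocated
        puts t                # => "ruby enterprise" -- the change is visible through every reference

        u = s + " edition"    # + builds a brand-new String instead
        puts s                # => "ruby enterprise" (unchanged)
        puts u                # => "ruby enterprise edition"

        s.freeze              # opting back into immutability is explicit
        # s << "!"            # would raise RuntimeError/FrozenError: can't modify frozen String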

    Read the article

  • how can I capture response from twitter.com? ( ruby + twitter gem)

    - by Radek
    How can I capture the response from twitter.com, to make sure that everything went OK? I am using Ruby and the ruby twitter gem, and my code is basically like this:

        oauth = Twitter::OAuth.new('consumer token', 'consumer secret')
        oauth.authorize_from_access('access token', 'access secret')

        client = Twitter::Base.new(oauth)
        client.update('Heeeyyyyoooo from Twitter Gem!')
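    One hedged way to check the outcome, assuming the pre-1.0 twitter gem used above: update returns the newly created status as a hash-like object, and failures raise exceptions, so capturing the return value and rescuing errors covers both cases. The exact error classes and response fields vary between gem versions, so treat this as a sketch rather than the gem's documented contract.

        status = nil
        begin
          status = client.update('Heeeyyyyoooo from Twitter Gem!')
        rescue StandardError => e
          # Authorization problems, rate limiting, network errors, etc. surface here.
          warn "Tweet failed: #{e.class}: #{e.message}"
        end

        if status
          # With this gem version the returned object usually exposes the new tweet's fields.
          id   = status.respond_to?(:id)   ? status.id   : status['id']
          text = status.respond_to?(:text) ? status.text : status['text']
          puts "Posted tweet #{id}: #{text}"
        end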

    Read the article

  • Can someone explain/annotate this Ruby snippet with comments?

    - by Ronnie
    Could someone please add comments to this code? Or, alternatively, what would be the pseudocode equivalent of this Ruby code? It seems simple enough, but I just don't know enough Ruby to convert this to PHP.

        data = Hash.new({})
        mysql_results.each { |r| data[r['year']][r['week']] = r['count'] }

        (year_low..year_high).each do |year|
          (1..52).each do |week|
            puts "#{year} #{week} #{data[year][week]}"
          end
        end

    Any help whatsoever would be really appreciated. Thanks!
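    An annotated reading of the snippet (my own interpretation, not an answer from the original thread): it indexes the MySQL counts by year and week, then prints one "year week count" line for every week 1-52 of every year in the range, with the count left blank where no row matched. The Hash.new({}) default deserves its own comment:

        # Build a lookup so a count can be read as data[year][week].
        # Note: Hash.new({}) shares ONE default hash across all missing keys,
        # so every year ends up reading and writing the same inner hash;
        # Hash.new { |h, k| h[k] = {} } is the usual idiom to give each
        # year its own sub-hash.
        data = Hash.new({})

        # For each result row, store the count under its year and week.
        mysql_results.each { |r| data[r['year']][r['week']] = r['count'] }

        # For every year in the requested range and every week 1..52,
        # print "year week count" (the count prints as empty when missing).
        (year_low..year_high).each do |year|
          (1..52).each do |week|
            puts "#{year} #{week} #{data[year][week]}"
          end
        end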

    Read the article
