Search Results

Search found 6580 results on 264 pages for 'require'.

  • Ruby Nokogiri uninitialized constant

    - by donald
    Running `ruby test.rb` fails with:

        `<main>': uninitialized constant Object::Nakogiri (NameError)

    The code is simple:

        require 'rubygems'
        require 'nokogiri'
        require 'open-uri'

        url = "http://www.walmart.com/cp/Baby-Days/1035659?povid=cat14503-env172199-module122910-lLinksptBABY"
        doc = Nakogiri::HTML(open(url))
        puts doc.at_css("title").text

    I have the gem installed:

        ~/Code $ gem list --local | grep nokogiri
        nokogiri (1.4.4, 1.4.3.1)
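
    The error itself points at the fix: the gem and the require are spelled nokogiri, but the constant on the Nakogiri::HTML line is written Nakogiri. Correcting the spelling makes the snippet run:

        require 'rubygems'
        require 'nokogiri'
        require 'open-uri'

        url = "http://www.walmart.com/cp/Baby-Days/1035659?povid=cat14503-env172199-module122910-lLinksptBABY"
        doc = Nokogiri::HTML(open(url))  # Nokogiri, not Nakogiri
        puts doc.at_css("title").text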

  • Easiest way to split up a large controller file

    - by timpone
    I have a Rails controller file that is too large (~900 lines, api_controller). I'd like to split it into files like this:

        api_controller.rb
        api_controller_item_admin.rb
        api_controller_web.rb

    I don't want to split it into multiple controllers. What would be the preferred way to do this? Could I just require the new parts at the end, like:

        require './api_controller_item_admin'
        require './api_controller_web'
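
    A common way to keep a single controller while splitting the file is to move groups of actions into modules and mix them back in, rather than requiring files that reopen the class. A minimal sketch (the module names and load paths here are hypothetical):

        # api_controller_item_admin.rb
        module ApiControllerItemAdmin
          def admin_list
            # actions moved out of the main file work as before,
            # because they are mixed back into ApiController below
          end
        end

        # api_controller.rb
        require './api_controller_item_admin'
        require './api_controller_web'

        class ApiController < ApplicationController
          include ApiControllerItemAdmin
          include ApiControllerWeb
        end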

  • Same-directory includes failing on a Fedora server with PHP.

    - by JimmySawczuk
    I have a couple of files that look like this:

    index.php:

        <?php
        include('includes/header.php');
        ...

    includes/header.php:

        <?php
        include('config.php');
        ...

    The error I get is:

        Warning: require(config.php) [function.require]: failed to open stream: No such file or directory in [dir]/includes/header.php on line 2
        Fatal error: require() [function.require]: Failed opening required 'config.php' (include_path='.:/usr/share/pear:/usr/share/php') in [dir]/includes/header.php on line 2

    I did some further debugging: when I add the call system('pwd'); to includes/header.php, it shows [dir], where it should say [dir]/includes. Adding 'includes/' to the include path works, but isn't desirable because that would fail on the production server. The above code works on a production server, and worked fine on my development Fedora server, until I changed my development environment so that the Fedora server's document root is a mounted CIFS share. Any ideas? Thanks.

  • How should I loop a Nokogiri search in Ruby?

    - by kim
    I have the following, which retrieves the title of each URL from an array that contains a list of URLs:

        require 'rubygems'
        require 'nokogiri'
        require 'open-uri'

        @urls = ["http://google.com", "http://yahoo.com", "http://rubyonrails.org"]
        @found_titles = Array.new

        @found_titles[0] = Nokogiri::HTML(open("#{@urls[0]}")).search("title").inner_html
        # this can go on forever...
        #@found_titles[1] = Nokogiri::HTML(open("#{@urls[1]}")).search("title").inner_html
        #@found_titles[2] = Nokogiri::HTML(open("#{@urls[2]}")).search("title").inner_html

        puts "#{@found_titles[0]}"

    How should I form a loop for this so I get the titles even when the list in the @urls array gets longer?
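
    Since the work is identical for every element, the whole thing collapses into a single Enumerable#map over the array:

        require 'rubygems'
        require 'nokogiri'
        require 'open-uri'

        @urls = ["http://google.com", "http://yahoo.com", "http://rubyonrails.org"]

        # One title per URL, in the same order as @urls.
        @found_titles = @urls.map do |url|
          Nokogiri::HTML(open(url)).search("title").inner_html
        end

        @found_titles.each { |title| puts title }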

  • Rake won't create XML file

    - by user296507
    Hi, I'm a bit lost here as to why my rake task will not create the desired XML file; it works fine when I have the method build_xml in the .rb file.

        require 'rubygems'
        require 'nokogiri'
        require 'open-uri'

        namespace :xml do
          desc "xml build test"
          task :xml_build => :environment do
            build_xml
          end
        end

        def build_xml
          # build xml document
          builder = Nokogiri::XML::Builder.new do |xml|
            xml.root {
              xml.location {
                xml.value "test"
              }
            }
          end
          File.open("test.xml", 'w') { |f| f.write(builder.to_xml) }
        end
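
    One thing worth ruling out is the working directory: rake resolves the relative path "test.xml" against the directory it runs from, so the file may be created somewhere other than where you are looking. A sketch that pins the output next to the file defining the task (the use of __FILE__ here is an assumption about where build_xml lives):

        def build_xml
          builder = Nokogiri::XML::Builder.new do |xml|
            xml.root { xml.location { xml.value "test" } }
          end
          # Absolute path: independent of the directory rake is invoked from.
          path = File.expand_path("test.xml", File.dirname(__FILE__))
          File.open(path, 'w') { |f| f.write(builder.to_xml) }
        end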

  • Rails 3: Delete, Destroy, and Routing

    - by Maximus S
    The problem is the code below:

        <%= button_to t('.delete'), @post, :method => :delete, :class => :destroy %>

    My Post model has many dependent relations that should be removed along with it. However, the code above only removes the post, leaving its relations intact. The problem is that the methods delete and destroy are different: delete doesn't instantiate the object, so its callbacks never run. So I need to "destroy" my post instead of "delete" it.

        <%= button_to t('.delete'), @post, :method => :destroy %>

    gives me a routing error:

        No route matches [POST] "/posts/2"

    and

        <%= button_to t('.delete'), @post, Post.destroy(@post) %>

    deletes the post without the button being clicked. Could anyone help me with this?

    UPDATE:

    application.js:

        //= require jquery
        //= require jquery-ui
        //= require jquery_ujs
        //= require bootstrap-modal
        //= require bootstrap-typeahead
        //= require_tree .

    rake routes:

        DELETE (/:locale)/posts/:id(.:format) posts#destroy

    Post model:

        has_many :tag_links, :dependent => :destroy
        has_many :tags, :through => :tag_links

    Tag model:

        has_many :tag_links, :dependent => :destroy
        has_many :posts, :through => :tag_links

    Problem: When I delete a post, all the tag_links are destroyed but the tags still exist.
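
    In button_to, :method => :delete sets the HTTP verb, which is why :method => :destroy has no route; the delete-vs-destroy distinction lives in the controller, not the view. As long as the destroy action calls destroy rather than delete, the :dependent => :destroy callbacks run. A sketch of the conventional action (assuming a standard PostsController):

        # app/controllers/posts_controller.rb
        def destroy
          @post = Post.find(params[:id])
          @post.destroy  # destroy, not delete: runs :dependent => :destroy callbacks
          redirect_to posts_path
        end

    The surviving tags are expected with this setup: destroying a post destroys its tag_links join rows, and has_many :through does not cascade from there to the shared tags on the other side.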

  • In PLT Scheme, can I export functions after another function has been called?

    - by Jason Baker
    I'm trying to create a binding to libpython using Scheme's FFI. To do this, I have to get the location of Python, create the ffi-lib, and then create functions from it. So for instance I could do this:

        (module pyscheme scheme
          (require foreign)
          (unsafe!)
          (define (link-python [lib "/usr/lib/libpython2.6.so"])
            (ffi-lib lib))

    This is all well and good, but I can't think of a way to export functions. For instance, I could do something like this:

        (define Py_Initialize
          (get-ffi-obj "Py_Initialize" libpython (_fun -> _void)))

    ...but then I'd have to store a reference to libpython (created by link-python) globally somehow. Is there any way to export these functions once link-python is called? In other words, I'd like someone using the module to be able to do this:

        (require pyscheme)
        (link-python)
        (Py_Initialize)

    ...or this:

        (require pyscheme)
        (link-python "/weird/location/for/libpython.so")
        (Py_Initialize)

    ...but have this give an error:

        (require pyscheme)
        (Py_Initialize)

    How can I do this?

  • How can I change the Ruby log level in unit tests based on context

    - by Stuart
    I'm new to Ruby, so forgive me if this is simple or I get some terminology wrong. I've got a bunch of unit tests (actually they're integration tests for another project, but they use Ruby test/unit) and they all include a module that sets up an instance variable for the log object. When I run the individual tests I'd like log.level to be DEBUG, but when I run a suite I'd like log.level to be ERROR. Is it possible to do this with the approach I'm taking, or does the code need to be restructured?

    Here's a small example of what I have so far. The logging module:

        #!/usr/bin/env ruby
        require 'logger'

        module MyLog
          def setup
            @log = Logger.new(STDOUT)
            @log.level = Logger::DEBUG
          end
        end

    A test:

        #!/usr/bin/env ruby
        require 'test/unit'
        require 'mylog'

        class Test1 < Test::Unit::TestCase
          include MyLog

          def test_something
            @log.info("About to test something")
            # Test goes here
            @log.info("Done testing something")
          end
        end

    A test suite made up of all the tests in its directory:

        #!/usr/bin/env ruby
        Dir.foreach(".") do |path|
          if /it-.*\.rb/.match(File.basename(path))
            require path
          end
        end
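
    One way to vary the level by context without restructuring is to let the module read it from the environment, and have the suite override the default before it requires the tests. A sketch (the LOG_LEVEL variable name is an assumption):

        # mylog.rb
        require 'logger'

        module MyLog
          def setup
            @log = Logger.new(STDOUT)
            # Individual runs default to DEBUG; the suite exports LOG_LEVEL=ERROR.
            @log.level = Logger.const_get(ENV['LOG_LEVEL'] || 'DEBUG')
          end
        end

        # suite file
        ENV['LOG_LEVEL'] = 'ERROR'
        Dir.foreach(".") do |path|
          require path if /it-.*\.rb/.match(File.basename(path))
        end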

  • Is the SAN dying???

    - by RickHeiges
    Is the SAN dying? The reason that I ask this question is that MSFT has unleashed technologies this year that point in that direction:

        - AlwaysOn Availability Groups shun shared storage
        - Windows 2012 has Storage Replication technology that does not require a SAN
        - Windows 2012 has Hyper-V Replica technology that does not require a SAN
        - PDW v2 continues to reinforce the approach of avoiding shared storage

    I'm not saying that SAN technology does not have its place or does not have benefits inherent to the beast... (read more)

  • Developing Mobile Applications: Web, Native, or Hybrid?

    - by Michelle Kimihira
    Authors: Joe Huang, Senior Principal Product Manager, Oracle Mobile Application Development Framework, and Carlos Chang, Senior Principal Product Director

    The proliferation of mobile devices and platforms represents a game-changing technology shift on a number of levels. Companies must decide not only the best strategic use of mobile platforms, but also how to most efficiently implement them. Inevitably, this conversation devolves to the developers, who face the task of developing and supporting mobile applications—not a simple task in light of the number of devices and platforms. Essentially, developers can choose from the following three application approaches, each with its own set of pros and cons.

    Native Applications: This refers to apps built for and installed on a specific platform, such as iOS or Android, using a platform-specific software development kit (SDK). For example, apps for Apple's iPhone and iPad are designed to run specifically on iOS and are written in Xcode/Objective-C. Android has its own variation of Java, Windows uses C#, and so on. Native apps written for one platform cannot be deployed on another. Native apps offer fast performance and access to native-device services, but require additional resources to develop and maintain each platform, which can be expensive and time consuming.

    Mobile Web Applications: Unlike native apps, mobile web apps are not installed on the device; rather, they are accessed via a Web browser. These are server-side applications that render HTML, typically adjusting the design depending on the type of device making the request. There are no programming-language constraints for writing server-side apps—they can be written in Java, C, PHP, etc.; it doesn't matter. Instead, the server detects what type of mobile browser is pinging it and adjusts accordingly. For example, it can deliver fully JavaScript- and CSS-enabled content to smartphone browsers, while downgrading gracefully to basic HTML for feature-phone browsers. Mobile web apps work across platforms, but are limited to what you can do through a browser and require Internet connectivity. For certain types of applications, these constraints may not be an issue. Oracle supports mobile web applications via ADF Faces (for tablets) and ADF Mobile browser (Trinidad) for smartphones and feature phones.

    Hybrid Applications: As the name implies, hybrid apps combine technologies from native and mobile web apps to gain the benefits of each. For example, these apps are installed on a device, like their pure native app counterparts, while the user interface (UI) is based on HTML5. This UI runs locally within the native container, which usually leverages the device's browser engine. The advantage of using HTML5 is a consistent, cross-platform UI that works well on most devices. Combining this with the native container, which is installed on-device, provides mobile users with access to local device services, such as camera, GPS, and local device storage. Native apps may offer greater flexibility in integrating with device-native services. However, since hybrid applications already provide the device integrations that typical enterprise applications need, this is usually less of an issue. The new Oracle ADF Mobile release is an HTML5 and Java hybrid framework that targets mobile app development to iOS and Android from one code base.

    So, Which is the Best Approach?

    The short answer is: the best choice depends on the type of application you are developing. For instance, animation-intensive apps such as games favor native apps, while hybrid applications may be better suited for enterprise mobile apps because they provide multi-platform support. Just for starters, the following issues must be considered when choosing a development path.

        - Application Complexity: How complex is the application? A quick app that accesses a database or Web service for some data to display? Keep it simple; a mobile web app may suffice. However, for mobile/field-worker types of applications that support mission-critical functionality, hybrid or native applications are typically needed.

        - Richness of User Interactivity: What type of user experience is required? A mobile browser-based app optimized for a mobile UI may suffice for quick-lookup or productivity applications. However, a hybrid or native application would typically be required to deliver the highly interactive user experiences needed for field-worker applications: for example, interactive BI charts/graphs, maps, and voice/email integration. In the most extreme cases, like gaming applications, native applications may be necessary to deliver the highly animated and graphically intensive user experience.

        - Performance: What type of performance is required by the application functionality? For instance, for real-time lookup of data over the network, mobile app performance depends on network latency and server infrastructure capabilities. If consistent performance is required, data would typically need to be cached, which is supported on hybrid or native applications only.

        - Connectivity and Availability: What sort of connectivity will your application require? Does the app require Web access all the time in order to always retrieve the latest data from the server? Or do the requirements dictate offline support? While native and hybrid apps can be built to operate offline, mobile web apps require Web connectivity.

        - Multi-platform Requirements: The terms "consumerization of IT" and BYOD (bring your own device) effectively mean that the line between consumer and enterprise devices has become blurred. Employees are bringing their personal mobile devices to work and often expect them to work on the corporate network and access back-office applications. Even if companies restrict access to the big dogs (iPad, iPhone, Android phones and tablets, possibly Windows Phone and tablets), trying to support each platform natively will require increasing resources and domain expertise with each new language/platform. And let's not forget the maintenance costs involved in upgrading to new versions of each platform. Where multi-platform support is needed, mobile web or hybrid apps probably have the advantage. Going native and trying to support multiple operating systems may be cost prohibitive with existing resources and developer skills.

        - Device-Services Access: If your app needs to access local device services, such as the camera, the contacts app, or the accelerometer, then your choices are limited to native or hybrid applications.

        - Fragmentation: Apple controls Apple iOS, and the only concern is which version of iOS is running on any given device. Not so Android, which is open source. There are many, many versions and variants of Android running on different devices, which can be a nightmare for app developers trying to support different devices running different flavors of Android. (Is it an Amazon Kindle Fire? A Samsung Galaxy? A Barnes & Noble Nook?) This is a nightmare scenario for native apps—on the other hand, a mobile web or hybrid app, when properly designed, can shield you from these complexities because they are based on common frameworks.

        - Resources: How many developers can you dedicate to building and supporting mobile application development? What are their existing skill sets? If you're considering native application development due to the complexity of the application under development, factor in the costs of becoming proficient in each platform's OS and programming language. Add another platform, and that's another language, another SDK. On the other side of the equation, mobile web or hybrid applications are simpler to build and readily support more platforms, but there may be performance trade-offs.

    Conclusion

    This only scratches the surface. However, I hope to have suggested some food for thought in choosing your mobile development strategy. Do your due diligence: search the Web, read up on mobile, talk to peers, attend events. The development team at Oracle is working hard on mobile technologies to help customers extend enterprise applications to mobile faster and more effectively. To learn more about what Oracle has to offer, check out the Oracle ADF Mobile (hybrid) and ADF Faces/ADF Mobile browser (mobile web) solutions from Oracle.

    Additional Information

        - Blog: ADF Blog
        - Product Information on OTN: ADF Mobile
        - Product Information on Oracle.com: Oracle Fusion Middleware
        - Follow us on Twitter and Facebook
        - Subscribe to our regular Fusion Middleware Newsletter

  • Gosu Ruby on Windows: no allocator for Image [on hold]

    - by user2812818
    I am trying to run the Gosu tutorial on Windows XP with Ruby 1.9.3. It quits with:

        `new': allocator undefined for Gosu::Image (TypeError)

    when trying to initialize a new Image:

        require 'gosu'
        require 'rubygems'

        class GameWindow < Gosu::Window
          def initialize
            super(640, 480, false)
            self.caption = "Gosu Tutorial Game"
            @background_image = Gosu::Image.new(self, "/media/123.bmp", true)
          end
        end

    I made sure the image is there and is png/bmp. I know it is something simple, maybe to do with the DLLs required? Just not sure what... thanks sgv

  • Don't list all users at login with LightDM

    - by Bryan
    I just upgraded to Ubuntu 11.10, and I was wondering if it's possible to not list all the current users and instead require the user to type in their username. My company's IT policies require that users not be listed on login screens. In Ubuntu 11.04, I was able to do this with the following command:

        $ sudo -u gdm gconftool-2 --type boolean --set /apps/gdm/simple-greeter/disable_user_list true
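
    Under LightDM the user list is a greeter setting rather than a gconf key. A sketch of the equivalent change, assuming the default greeter on 11.10 reads /etc/lightdm/lightdm.conf:

        # /etc/lightdm/lightdm.conf
        [SeatDefaults]
        greeter-hide-users=true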

  • The Mindset of the Enterprise DBA: Creating and Applying Standards to Our Work

    Although many professionals, such as pilots, surgeons and IT administrators, require judgement and skill, they also require the ability to perform many repeated standard procedures in a consistent and methodical manner. These procedures leave little room for creativity, since they must be done right, and in the right order. For DBAs, standardization involves providing and following checklists, notes and instructions so that the results are predictable, correct and easy to maintain.

  • Subversion all-or-nothing access to repo tree

    - by Glader
    I'm having some problems setting up access to my Subversion repositories on a Linux server. The problem is that I can only seem to get an all-or-nothing structure going: either everyone gets read access to everything, or no one gets read or write access to anything.

    The setup: SVN repos are located in /www/svn/repoA,repoB,repoC... Repositories are served by Apache, with Locations defined in /etc/httpd/conf.d/subversion.conf as:

        <Location /svn/repoA>
          DAV svn
          SVNPath /var/www/svn/repoA
          AuthType Basic
          AuthName "svn repo"
          AuthUserFile /var/www/svn/svn-auth.conf
          AuthzSVNAccessFile /var/www/svn/svn-access.conf
          Require valid-user
        </Location>

        <Location /svn/repoB>
          DAV svn
          SVNPath /var/www/svn/repoB
          AuthType Basic
          AuthName "svn repo"
          AuthUserFile /var/www/svn/svn-auth.conf
          AuthzSVNAccessFile /var/www/svn/svn-access.conf
          Require valid-user
        </Location>
        ...

    svn-access.conf is set up as:

        [/]
        * =

        [/repoA]
        * =
        userA = rw

        [/repoB]
        * =
        userB = rw

    But checking out URL/svn/repoA as userA results in Access Forbidden. Changing it to:

        [/]
        * =
        userA = r

        [/repoA]
        * =
        userA = rw

        [/repoB]
        * =
        userB = rw

    gives userA read access to ALL repositories (including repoB) but still only read access to repoA! So in order for userA to get read-write access to repoA, I need to add:

        [/]
        userA = rw

    which is mental. I also tried changing Require valid-user to Require user userA for repoA in subversion.conf, but that only gave me read access to it. I need a way to deny everyone access to every repository by default, giving read/write access only when explicitly defined. Can anyone tell me what I'm doing wrong here? I have spent a couple of hours testing and googling but come up empty, so now I'm doing the post of shame.
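
    One thing to check with a shared AuthzSVNAccessFile: a section written as [/repoA] names the path /repoA inside every repository, not the repository repoA itself, which produces exactly this all-or-nothing behavior. Sections can be qualified with a repository name before a colon. A sketch of the intended default-deny policy, assuming mod_authz_svn's repo:path section syntax:

        [/]
        * =

        [repoA:/]
        userA = rw

        [repoB:/]
        userB = rw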

  • Use Apache authentication to segregate access to Subversion subdirectories

    - by Stefan Lasiewski
    I've inherited a Subversion repository, running on FreeBSD and using Apache 2.2. Currently, we have one project, which looks like this. We use both local files and LDAP for authentication.

        <Location />
          DAV svn
          SVNParentPath /var/svn
          AuthName "Staff only"
          AuthType Basic

          # Authentication through local file (mod_authn_file), then LDAP (mod_authnz_ldap)
          AuthBasicProvider file ldap

          # Allow some automated programs to check content into the repo
          # mod_authn_file
          AuthUserFile /usr/local/etc/apache22/htpasswd
          Require user robotA robotB

          # Allow any staff to access the repo
          # mod_authnz_ldap
          Require ldap-group cn=staff,ou=PosixGroup,ou=foo,ou=Host,o=ldapsvc,dc=example,dc=com
        </Location>

    We would like to allow customers access to certain subdirectories without giving them global access to the entire repository. We would prefer to do this without migrating these subdirectories to their own repositories. Staff also need access to these subdirectories. Here's what I tried:

        <Location /www.customerA.com>
          DAV svn
          SVNParentPath /var/svn

          # mod_authn_file
          AuthType Basic
          AuthBasicProvider file
          AuthUserFile /usr/local/etc/apache22/htpasswd-customerA
          Require user customerA
        </Location>

        <Location /www.customerB.com>
          DAV svn
          SVNParentPath /var/svn

          # mod_authn_file
          AuthType Basic
          AuthBasicProvider file
          AuthUserFile /usr/local/etc/apache22/htpasswd-customerB
          Require user customerB
        </Location>

    I've tried the above. Access to '/' works for staff. However, access to /www.customerA.com and /www.customerB.com does not work. It looks like Apache is trying to authenticate 'customerB' against LDAP and doesn't try the local password file. The error is:

        [Mon May 03 15:27:45 2010] [warn] [client 192.168.8.13] [1595] auth_ldap authenticate: user stefantest authentication failed; URI /www.customerB.com [User not found][No such object]
        [Mon May 03 15:27:45 2010] [error] [client 192.168.8.13] user stefantest not found: /www.customerB.com

    What am I missing?

  • Apache ScriptAlias and access error?

    - by Parhs
    First of all, after much pain, I figured out how to make this work with Apache 2.4 on Windows. Here is my configuration, which seems to work successfully for git clone and push and everything.

    Problem: my configuration works, but there is a "Require all denied" at the / directory, since I want only git functionality and nothing else. An example request from a git client:

        192.168.100.252 - - [07/Oct/2012:04:44:51 +0300] "GET /git/simple/info/refs?service=git-upload-pack HTTP/1.1" 200 264

    The error caused by that request:

        [Sun Oct 07 04:44:51.903334 2012] [authz_core:error] [pid 6988:tid 956] [client 192.168.100.252:13493] AH01630: client denied by server configuration: C:/git-server/web/simple

    There isn't any error at the git client, and everything works fine, but I get this in the error log. Is there any solution so this error does not appear? I worry about log size.

        <VirtualHost *:80>
            DocumentRoot "C:\git-server\web"
            ServerName git.****censored****
            DirectoryIndex index.php
            SetEnv GIT_PROJECT_ROOT c:/git-server/repositories
            SetEnv GIT_HTTP_EXPORT_ALL
            SetEnv REMOTE_USER=$REDIRECT_REMOTE_USER
            ScriptAlias /git/ "C:/Program Files (x86)/Git/libexec/git-core/git-http-backend.exe/"
            <LocationMatch "^/.*/git-receive-pack$">
                Options +ExecCGI
                AuthType Basic
                AuthName intranet
                AuthUserFile "C:/git-server/config/users"
                Require valid-user
            </LocationMatch>
            <Directory />
                Options All
                Require all denied
            </Directory>
            <Directory "C:\Program Files (x86)\Git\libexec\git-core">
                Options +ExecCGI
                Options All
                Require all granted
            </Directory>
        </VirtualHost>

  • Active Directory problems while trying to perform compare operation

    - by Alex
    I have CentOS 5.5 with Apache 2.2 and SVN installed. I also have Windows 2003 R2 with Active Directory. I'm trying to authorize users via AD so that each user has access to a repo if he is a member of the corresponding group in AD. Here is my Apache config:

        LoadModule dav_svn_module modules/mod_dav_svn.so
        LoadModule authz_svn_module modules/mod_authz_svn.so

        LDAPVerifyServerCert off
        ServerName svn.mydomain.com
        DocumentRoot /var/www/svn.mydomain.com/htdocs
        RewriteEngine On

        <Location />
          AuthType basic
          AuthBasicProvider ldap
          AuthzLDAPAuthoritative on
          AuthLDAPURL ldaps://comp1.mydomain.com:636/DC=mydomain,DC=com?sAMAccountName?sub?(objectClass=*)
          AuthLDAPBindDN [email protected]
          AuthLDAPBindPassword binduserpassword
        </Location>

        <Location /repos/test>
          DAV svn
          SVNPath /var/svn/repos/test
          AuthName "SVN repository for test"
          Require ldap-group CN=test,CN=ProjectGroups,DC=mydomain,DC=com
        </Location>

    When I'm using "Require valid-user" everything goes fine, and "Require ldap-user" also works. But as soon as I use "Require ldap-group", authorization fails. There are no errors in the Apache logs, but Active Directory shows the following error:

        Event Type: Information
        Event Source: NTDS LDAP
        Event Category: LDAP Interface
        Event ID: 1138
        Date: 10/9/2010
        Time: 1:28:52 PM
        User: MYDOMAIN\binduser
        Computer: COMP1
        Description: Internal event: Function ldap_compare entered.

        Event Type: Error
        Event Source: NTDS General
        Event Category: Internal Processing
        Event ID: 1481
        Date: 10/9/2010
        Time: 1:28:52 PM
        User: MYDOMAIN\binduser
        Computer: COMP1
        Description: Internal error: The operation on the object failed.
        Additional Data
        Error value: 2 0000208D: NameErr: DSID-031001CD, problem 2001 (NO_OBJECT), data 0, best match of: 'DC=mydomain,DC=com'

    I'm confused by this problem. What am I doing wrong?

  • value types in the vm

    - by john.rose
    Or, enduring values for a changing world.

    Introduction

    A value type is a data type which, generally speaking, is designed for being passed by value in and out of methods, and stored by value in data structures. The only value types which the Java language directly supports are the eight primitive types. Java indirectly and approximately supports value types, if they are implemented in terms of classes. For example, both Integer and String may be viewed as value types, especially if their usage is restricted to avoid operations appropriate to Object. In this note, we propose a definition of value types in terms of a design pattern for Java classes, accompanied by a set of usage restrictions. We also sketch the relation of such value types to tuple types (which are a JVM-level notion), and point out JVM optimizations that can apply to value types.

    This note is a thought experiment to extend the JVM's performance model in support of value types. The demonstration has two phases. Initially the extension can simply use design patterns, within the current bytecode architecture, and in today's Java language. But if the performance model is to be realized in practice, it will probably require new JVM bytecode features, changes to the Java language, or both. We will look at a few possibilities for these new features.

    An Axiom of Value

    In the context of the JVM, a value type is a data type equipped with construction, assignment, and equality operations, and a set of typed components, such that, whenever two variables of the value type produce equal corresponding values for their components, the values of the two variables cannot be distinguished by any JVM operation. Here are some corollaries:

        - A value type is immutable, since otherwise a copy could be constructed and the original could be modified in one of its components, allowing the copies to be distinguished.
        - Changing the component of a value type requires construction of a new value.
        - The equals and hashCode operations are strictly component-wise.
        - If a value type is represented by a JVM reference, that reference cannot be successfully synchronized on, and cannot be usefully compared for reference equality.

    A value type can be viewed in terms of what it doesn't do. We can say that a value type omits all value-unsafe operations, which could violate the constraints on value types.
    These operations, which are ordinarily allowed for Java object types, are pointer equality comparison (the acmp instruction), synchronization (the monitor instructions), all the wait and notify methods of class Object, and non-trivial finalize methods. The clone method is also value-unsafe, although for value types it could be treated as the identity function. Finally, and most importantly, any side effect on an object (however visible) also counts as a value-unsafe operation. A value type may have methods, but such methods must not change the components of the value. It is reasonable and useful to define methods like toString, equals, and hashCode on value types, and also methods which are specifically valuable to users of the value type.

    Representations of Value

    Value types have two natural representations in the JVM, unboxed and boxed. An unboxed value consists of the components, as simple variables. For example, the complex number x=(1+2i), in rectangular coordinate form, may be represented in unboxed form by the following pair of variables:

        /*Complex x = Complex.valueOf(1.0, 2.0):*/
        double x_re = 1.0, x_im = 2.0;

    These variables might be locals, parameters, or fields. Their association as components of a single value is not defined to the JVM. Here is a sample computation which computes the norm of the difference between two complex numbers:

        double distance(/*Complex x:*/ double x_re, double x_im,
                /*Complex y:*/ double y_re, double y_im) {
            /*Complex z = x.minus(y):*/
            double z_re = x_re - y_re, z_im = x_im - y_im;
            /*return z.abs():*/
            return Math.sqrt(z_re*z_re + z_im*z_im);
        }

    A boxed representation groups component values under a single object reference. The reference is to a 'wrapper class' that carries the component values in its fields. (A primitive type can naturally be equated with a trivial value type with just one component of that type. In that view, the wrapper class Integer can serve as a boxed representation of value type int.)

    The unboxed representation of complex numbers is practical for many uses, but it fails to cover several major use cases: return values, array elements, and generic APIs. The two components of a complex number cannot be directly returned from a Java function, since Java does not support multiple return values. The same story applies to array elements: Java has no 'array of structs' feature. (Double-length arrays are a possible workaround for complex numbers, but not for value types with heterogeneous components.) By generic APIs I mean both those which use generic types, like Arrays.asList, and those which have special-case support for primitive types, like String.valueOf and PrintStream.println. Those APIs do not support unboxed values, and offer some problems to boxed values. Any 'real' JVM type should have a story for returns, arrays, and API interoperability.

    The basic problem here is that value types fall between primitive types and object types. Value types are clearly more complex than primitive types, and object types are slightly too complicated. Objects are a little bit dangerous to use as value carriers, since object references can be compared for pointer equality, and can be synchronized on. Also, as many Java programmers have observed, there is often a performance cost to using wrapper objects, even on modern JVMs. Even so, wrapper classes are a good starting point for talking about value types.
    If there were a set of structural rules and restrictions which would prevent value-unsafe operations on value types, wrapper classes would provide a good notation for defining value types. This note attempts to define such rules and restrictions.

    Let's Start Coding

    Now it is time to look at some real code. Here is a definition, written in Java, of a complex number value type.

        @ValueSafe
        public final class Complex implements java.io.Serializable {
            // immutable component structure:
            public final double re, im;
            private Complex(double re, double im) {
                this.re = re; this.im = im;
            }

            // interoperability methods:
            public String toString() { return "Complex("+re+","+im+")"; }
            public List<Double> asList() { return Arrays.asList(re, im); }
            public boolean equals(Complex c) {
                return re == c.re && im == c.im;
            }
            public boolean equals(@ValueSafe Object x) {
                return x instanceof Complex && equals((Complex) x);
            }
            public int hashCode() {
                return 31*Double.valueOf(re).hashCode()
                        + Double.valueOf(im).hashCode();
            }

            // factory methods:
            public static Complex valueOf(double re, double im) {
                return new Complex(re, im);
            }
            public Complex changeRe(double re2) { return valueOf(re2, im); }
            public Complex changeIm(double im2) { return valueOf(re, im2); }
            public static Complex cast(@ValueSafe Object x) {
                return x == null ? ZERO : (Complex) x;
            }

            // utility methods and constants:
            public Complex plus(Complex c)  { return new Complex(re+c.re, im+c.im); }
            public Complex minus(Complex c) { return new Complex(re-c.re, im-c.im); }
            public double abs() { return Math.sqrt(re*re + im*im); }
            public static final Complex PI = valueOf(Math.PI, 0.0);
            public static final Complex ZERO = valueOf(0.0, 0.0);
        }

    This is not a minimal definition, because it includes some utility methods and other optional parts. The essential elements are as follows:

        - The class is marked as a value type with an annotation.
        - The class is final, because it does not make sense to create subclasses of value types.
        - The fields of the class are all non-private and final. (I.e., the type is immutable and structurally transparent.)
        - From the supertype Object, all public non-final methods are overridden.
        - The constructor is private.

    Beyond these bare essentials, we can observe the following features in this example, which are likely to be typical of all value types:

        - One or more factory methods are responsible for value creation, including a component-wise valueOf method.
        - There are utility methods for complex arithmetic and instance creation, such as plus and changeIm.
        - There are static utility constants, such as PI.
        - The type is serializable, using the default mechanisms.
        - There are methods for converting to and from dynamically typed references, such as asList and cast.

    The Rules

    In order to use value types properly, the programmer must avoid value-unsafe operations. A helpful Java compiler should issue errors (or at least warnings) for code which provably applies value-unsafe operations, and should issue warnings for code which might be correct but does not provably avoid value-unsafe operations. No such compilers exist today, but to simplify our account here, we will pretend that they do exist.

    A value-safe type is any class, interface, or type parameter marked with the @ValueSafe annotation, or any subtype of a value-safe type. If a value-safe class is marked final, it is in fact a value type.
    All other value-safe classes must be abstract. The non-static fields of a value class must be non-public and final, and all its constructors must be private.

    Under the above rules, a standard interface could be helpful to define value types like Complex. Here is an example:

        @ValueSafe
        public interface ValueType extends java.io.Serializable {
            // All methods listed here must get redefined.
            // Definitions must be value-safe, which means
            // they may depend on component values only.
            List<? extends Object> asList();
            int hashCode();
            boolean equals(@ValueSafe Object c);
            String toString();
        }

        //@ValueSafe inherited from supertype:
        public final class Complex implements ValueType { …

    The main advantage of such a conventional interface is that (unlike an annotation) it is reified in the runtime type system. It could appear as an element type or parameter bound, for facilities which are designed to work on value types only. More broadly, it might assist the JVM to perform dynamic enforcement of the rules for value types.

    Besides types, the annotation @ValueSafe can mark fields, parameters, local variables, and methods. (This is redundant when the type is also value-safe, but may be useful when the type is Object or another supertype of a value type.) Working forward from these annotations, an expression E is defined as value-safe if it satisfies one or more of the following:

        - The type of E is a value-safe type.
        - E names a field, parameter, or local variable whose declaration is marked @ValueSafe.
        - E is a call to a method whose declaration is marked @ValueSafe.
        - E is an assignment to a value-safe variable, field reference, or array reference.
        - E is a cast to a value-safe type from a value-safe expression.
        - E is a conditional expression E0 ? E1 : E2, and both E1 and E2 are value-safe.

    Assignments to value-safe expressions and initializations of value-safe names must take their values from value-safe expressions. A value-safe expression may not be the subject of a value-unsafe operation. In particular, it cannot be synchronized on, nor can it be compared with the "==" operator, not even with a null or with another value-safe type.

    In a program where all of these rules are followed, no value-type value will be subject to a value-unsafe operation. Thus, the prime axiom of value types will be satisfied: no two value-type values will be distinguishable as long as their component values are equal.
    More Code

    To illustrate these rules, here are some usage examples for Complex:

        Complex pi = Complex.valueOf(Math.PI, 0);
        Complex zero = pi.changeRe(0);  //zero = pi; zero.re = 0;
        ValueType vtype = pi;
        @SuppressWarnings("value-unsafe")
          Object obj = pi;
        @ValueSafe Object obj2 = pi;
        obj2 = new Object();  // ok
        List<Complex> clist = new ArrayList<Complex>();
        clist.add(pi);  // (ok assuming List.add param is @ValueSafe)
        List<ValueType> vlist = new ArrayList<ValueType>();
        vlist.add(pi);  // (ok)
        List<Object> olist = new ArrayList<Object>();
        olist.add(pi);  // warning: "value-unsafe"
        boolean z = pi.equals(zero);
        boolean z1 = (pi == zero);  // error: reference comparison on value type
        boolean z2 = (pi == null);  // error: reference comparison on value type
        boolean z3 = (pi == obj2);  // error: reference comparison on value type
        synchronized (pi) { }  // error: synch of value, unpredictable result
        synchronized (obj2) { }  // unpredictable result
        Complex qq = pi;
        qq = null;  // possible NPE; warning: "null-unsafe"
        qq = (Complex) obj;  // warning: "null-unsafe"
        qq = Complex.cast(obj);  // OK
        @SuppressWarnings("null-unsafe")
          Complex empty = null;  // possible NPE
        qq = empty;  // possible NPE (null pollution)

    The Payoffs

    It follows from this that either the JVM or the Java compiler can replace boxed value-type values with unboxed ones, without affecting normal computations. Fields and variables of value types can be split into their unboxed components. Non-static methods on value types can be transformed into static methods which take the components as value parameters.

    Some common questions arise around this point in any discussion of value types. Why burden the programmer with all these extra rules? Why not detect programs automagically and perform unboxing transparently? The answer is that it is easy to break the rules accidentally unless they are agreed to by the programmer and enforced. Automatic unboxing optimizations are a tantalizing but (so far) unreachable ideal. In the current state of the art, it is possible to exhibit benchmarks in which automatic unboxing provides the desired effects, but it is not possible to provide a JVM with a performance model that assures the programmer when unboxing will occur. This is why I'm writing this note: to enlist help from, and provide assurances to, the programmer. Basically, I'm shooting for a good set of user-supplied "pragmas" to frame the desired optimization.

    Again, the important thing is that the unboxing must be done reliably, or else programmers will have no reason to work with the extra complexity of the value-safety rules. There must be a reasonably stable performance model, wherein using a value type has approximately the same performance characteristics as writing the unboxed components as separate Java variables.

    There are some rough corners to the present scheme. Since Java fields and array elements are initialized to null, value-type computations which incorporate uninitialized variables can produce null pointer exceptions. One workaround for this is to require such variables to be null-tested, and the result replaced with a suitable all-zero value of the value type. That is what the "cast" method does above.

    Generically typed APIs like List<T> will continue to manipulate boxed values always, at least until we figure out how to do reification of generic type instances. Use of such APIs will elicit warnings until their type parameters (and/or relevant members) are annotated or typed as value-safe.
    Retrofitting List<T> is likely to expose flaws in the present scheme, which we will need to engineer around. Here are a couple of first approaches:

        public interface java.util.List<@ValueSafe T> extends Collection<T> { …
        public interface java.util.List<T extends Object|ValueType> extends Collection<T> { …

    (The second approach would require disjunctive types, in which value-safety is "contagious" from the constituent types.)

    With more transformations, the return value types of methods can also be unboxed. This may require significant bytecode-level transformations, and would work best in the presence of a bytecode representation for multiple value groups, which I have proposed elsewhere under the title "Tuples in the VM". But for starters, the JVM can apply this transformation under the covers, to internally compiled methods. This would give a way to express multiple return values and structured return values, which is a significant pain-point for Java programmers, especially those who work with low-level structure types favored by modern vector and graphics processors. The lack of multiple return values has a strong distorting effect on many Java APIs.

    Even if the JVM fails to unbox a value, there is still potential benefit to the value type. Clustered computing systems sometimes have copy operations (serialization or something similar) which apply implicitly to command operands. When copying JVM objects, it is extremely helpful to know when an object's identity is important or not. If an object reference is a copied operand, the system may have to create a proxy handle which points back to the original object, so that side effects are visible. Proxies must be managed carefully, and this can be expensive. On the other hand, value types are exactly those types which a JVM can "copy and forget" with no downside.

    Array types are crucial to bulk data interfaces. (As data sizes and rates increase, bulk data becomes more important than scalar data, so arrays are definitely accompanying us into the future of computing.) Value types are very helpful for adding structure to bulk data, so a successful value type mechanism will make it easier for us to express richer forms of bulk data. Unboxing arrays (i.e., arrays containing unboxed values) will provide better cache and memory density, and more direct data movement within clustered or heterogeneous computing systems. They require the deepest transformations, relative to today's JVM. There is an impedance mismatch between value-type arrays and Java's covariant array typing, so compromises will need to be struck with existing Java semantics. It is probably worth the effort, since arrays of unboxed value types are inherently more memory-efficient than standard Java arrays, which rely on dependent pointer chains.

    It may be sufficient to extend the "value-safe" concept to array declarations, and allow low-level transformations to change value-safe array declarations from the standard boxed form into an unboxed tuple-based form. Such value-safe arrays would not be convertible to Object[] arrays. Certain connection points, such as Arrays.copyOf and System.arraycopy, might need additional input/output combinations, to allow smooth conversion between arrays with boxed and unboxed elements.

    Alternatively, the correct solution may have to wait until we have enough reification of generic types, and enough operator overloading, to enable an overhaul of Java arrays.
    Implicit Method Definitions

    The example of class Complex above may be unattractively complex. I believe most or all of the elements of the example class are required by the logic of value types. If this is true, a programmer who writes a value type will have to write lots of error-prone boilerplate code. On the other hand, I think nearly all of the code (except for the domain-specific parts like plus and minus) can be implicitly generated.

    Java has a rule for implicitly defining a class's constructor, if it defines no constructors explicitly. Likewise, there are rules for providing default access modifiers for interface members. Because of the highly regular structure of value types, it might be reasonable to perform similar implicit transformations on value types. Here's an example of a "highly implicit" definition of a complex number type:

        public class Complex implements ValueType {  // implicitly final
            public double re, im;  // implicitly public final
            //implicit methods are defined elementwise from the fields:
            //  toString, asList, equals(2), hashCode, valueOf, cast
            //optionally, explicit methods (plus, abs, etc.) would go here
        }

    In other words, with the right defaults, a simple value type definition can be a one-liner. The observant reader will have noticed the similarities (and suitable differences) between the explicit methods above and the corresponding methods for List<T>.

    Another way to abbreviate such a class would be to make an annotation the primary trigger of the functionality, and to add the interface(s) implicitly:

        public @ValueType class Complex { … // implicitly final, implements ValueType

    (But to me it seems better to communicate the "magic" via an interface, even if it is rooted in an annotation.)

    Implicitly Defined Value Types

    So far we have been working with nominal value types, which is to say that the sequence of typed components is associated with a name and additional methods that convey the intention of the programmer. A simple ordered pair of floating point numbers can be variously interpreted as (to name a few possibilities) a rectangular or polar complex number or Cartesian point. The name and the methods convey the intended meaning.

    But what if we need a truly simple ordered pair of floating point numbers, without any further conceptual baggage? Perhaps we are writing a method (like "divideAndRemainder") which naturally returns a pair of numbers instead of a single number. Wrapping the pair of numbers in a nominal type (like "QuotientAndRemainder") makes as little sense as wrapping a single return value in a nominal type (like "Quotient"). What we need here are structural value types, commonly known as tuples.

    For the present discussion, let us assign a conventional, JVM-friendly name to tuples, roughly as follows:

        public class java.lang.tuple.$DD extends java.lang.tuple.Tuple {
            double $1, $2;
        }

    Here the component names are fixed and all the required methods are defined implicitly. The supertype is an abstract class which has suitable shared declarations. The name itself mentions a JVM-style method parameter descriptor, which may be "cracked" to determine the number and types of the component fields.

    The odd thing about such a tuple type (and structural types in general) is it must be instantiated lazily, in response to linkage requests from one or more classes that need it.
    The JVM and/or its class loaders must be prepared to spin a tuple type on demand, given a simple name reference, $xyz, where the xyz is cracked into a series of component types. (Specifics of naming and name mangling need some tasteful engineering.)

    Tuples also seem to demand, even more than nominal types, some support from the language. (This is probably because notations for non-nominal types work best as combinations of punctuation and type names, rather than named constructors like Function3 or Tuple2.) At a minimum, languages with tuples usually (I think) have some sort of simple bracket notation for creating tuples, and a corresponding pattern-matching syntax (or "destructuring bind") for taking tuples apart, at least when they are parameter lists. Designing such a syntax is no simple thing, because it ought to play well with nominal value types, and also with pre-existing Java features, such as method parameter lists, implicit conversions, generic types, and reflection. That is a task for another day.

    Other Use Cases

    Besides complex numbers and simple tuples there are many use cases for value types. Many tuple-like types have natural value-type representations. These include rational numbers, point locations and pixel colors, and various kinds of dates and addresses.

    Other types have a variable-length 'tail' of internal values. The most common example of this is String, which is (mathematically) a sequence of UTF-16 character values. Similarly, bit vectors, multiple-precision numbers, and polynomials are composed of sequences of values. Such types include, in their representation, a reference to a variable-sized data structure (often an array) which (somehow) represents the sequence of values. The value type may also include 'header' information.

    Variable-sized values often have a length distribution which favors short lengths. In that case, the design of the value type can make the first few values in the sequence be direct 'header' fields of the value type. In the common case where the header is enough to represent the whole value, the tail can be a shared null value, or even just a null reference. Note that the tail need not be an immutable object, as long as the header type encapsulates it well enough. This is the case with String, where the tail is a mutable (but never mutated) character array.

    Field types and their order must be a globally visible part of the API. The structure of the value type must be transparent enough to have a globally consistent unboxed representation, so that all callers and callees agree about the type and order of components that appear as parameters, return types, and array elements. This is a trade-off between efficiency and encapsulation, which is forced on us when we remove an indirection enjoyed by boxed representations. A JVM-only transformation would not care about such visibility, but a bytecode transformation would need to take care that (say) the components of complex numbers would not get swapped after a redefinition of Complex and a partial recompile. Perhaps constant pool references to value types need to declare the field order as assumed by each API user.

    This brings up the delicate status of private fields in a value type. It must always be possible to load, store, and copy value types as coordinated groups, and the JVM performs those movements by moving individual scalar values between locals and stack.
    If a component field is not public, what is to prevent hostile code from plucking it out of the tuple using a rogue aload or astore instruction? Nothing but the verifier, so we may need to give it more smarts, so that it treats value types as inseparable groups of stack slots or locals (something like long or double).

    My initial thought was to make the fields always public, which would make the security problem moot. But public is not always the right answer; consider the case of String, where the underlying mutable character array must be encapsulated to prevent security holes. I believe we can win back both sides of the tradeoff, by training the verifier never to split up the components in an unboxed value. Just as the verifier encapsulates the two halves of a 64-bit primitive, it can encapsulate the header and body of an unboxed String, so that no code other than that of class String itself can take apart the values.

    Similar to String, we could build an efficient multi-precision decimal type along these lines:

        public final class DecimalValue implements ValueType {
            protected final long header;
            protected final BigInteger digits;
            private DecimalValue(long header, BigInteger digits) {
                this.header = header; this.digits = digits;
            }
            public static DecimalValue valueOf(int value, int scale) {
                assert(scale >= 0);
                return new DecimalValue(((long)value << 32) + scale, null);
            }
            public static DecimalValue valueOf(long value, int scale) {
                if (value == (int) value)
                    return valueOf((int)value, scale);
                return new DecimalValue(-scale, BigInteger.valueOf(value));
            }
        }

    Values of this type would be passed between methods as two machine words. Small values (those with a significand which fits into 32 bits) would be represented without any heap data at all, unless the DecimalValue itself were boxed.

    (Note the tension between encapsulation and unboxing in this case. It would be better if the header and digits fields were private, but depending on where the unboxing information must "leak", it is probably safer to make a public revelation of the internal structure.)

    Note that, although an array of Complex can be faked with a double-length array of double, there is no easy way to fake an array of unboxed DecimalValues. (Either an array of boxed values or a transposed pair of homogeneous arrays would be reasonable fallbacks, in a current JVM.) Getting the full benefit of unboxing and arrays will require some new JVM magic.

    Although the JVM emphasizes portability, system-dependent code will benefit from using machine-level types larger than 64 bits. For example, the back end of a linear algebra package might benefit from value types like Float4 which map to stock vector types. This is probably only worthwhile if the unboxing arrays can be packed with such values.

    More Daydreams

    A more finely-divided design for dynamic enforcement of value safety could feature separate marker interfaces for each invariant. An empty marker interface Unsynchronizable could cause suitable exceptions for monitor instructions on objects in marked classes. More radically, an Interchangeable marker interface could cause JVM primitives that are sensitive to object identity to raise exceptions; the strangest result would be that the acmp instruction would have to be specified as raising an exception.
        @ValueSafe
        public interface ValueType extends java.io.Serializable,
                Unsynchronizable, Interchangeable { …

        public class Complex implements ValueType {
            // inherits Serializable, Unsynchronizable, Interchangeable, @ValueSafe
            …

    It seems possible that Integer and the other wrapper types could be retro-fitted as value-safe types. This is a major change, since wrapper objects would be unsynchronizable and their references interchangeable. It is likely that code which violates value-safety for wrapper types exists but is uncommon. It is less plausible to retro-fit String, since the prominent operation String.intern is often used with value-unsafe code.

    We should also reconsider the distinction between boxed and unboxed values in code. The design presented above obscures that distinction. As another thought experiment, we could imagine making a first-class distinction in the type system between boxed and unboxed representations. Since only primitive types are named with a lower-case initial letter, we could define that the capitalized version of a value type name always refers to the boxed representation, while the initial lower-case variant always refers to the unboxed one. For example:

        complex pi = complex.valueOf(Math.PI, 0);
        Complex boxPi = pi;  // convert to boxed
        myList.add(boxPi);
        complex z = myList.get(0);  // unbox

    Such a convention could perhaps absorb the current difference between int and Integer, double and Double. It might also allow the programmer to express a helpful distinction among array types.

    As said above, array types are crucial to bulk data interfaces, but are limited in the JVM. Extending arrays beyond the present limitations is worth thinking about; for example, the Maxine JVM implementation has a hybrid object/array type. Something like this which can also accommodate value type components seems worthwhile. On the other hand, does it make sense for value types to contain short arrays? And why should random-access arrays be the end of our design process, when bulk data is often sequentially accessed, and it might make sense to have heterogeneous streams of data as the natural "jumbo" data structure. These considerations must wait for another day and another note.

    More Work

    It seems to me that a good sequence for introducing such value types would be as follows:

        1. Add the value-safety restrictions to an experimental version of javac.
        2. Code some sample applications with value types, including Complex and DecimalValue.
        3. Create an experimental JVM which internally unboxes value types but does not require new bytecodes to do so.
        4. Ensure the feasibility of the performance model for the sample applications.
        5. Add tuple-like bytecodes (with or without generic type reification) to a major revision of the JVM, and teach the Java compiler to switch in the new bytecodes without code changes.

    A staggered roll-out like this would decouple language changes from bytecode changes, which is always a convenient thing.

    A similar investigation should be applied (concurrently) to array types. In this case, it seems to me that the starting point is in the JVM:

        1. Add an experimental unboxing array data structure to a production JVM, perhaps along the lines of Maxine hybrids. No bytecode or language support is required at first; everything can be done with encapsulated unsafe operations and/or method handles.
        2. Create an experimental JVM which internally unboxes value types but does not require new bytecodes to do so.
As said above, array types are crucial to bulk data interfaces, but are limited in the JVM. Extending arrays beyond the present limitations is worth thinking about; for example, the Maxine JVM implementation has a hybrid object/array type. Something like this which can also accommodate value type components seems worthwhile. On the other hand, does it make sense for value types to contain short arrays? And why should random-access arrays be the end of our design process, when bulk data is often sequentially accessed, and it might make sense to have heterogeneous streams of data as the natural “jumbo” data structure? These considerations must wait for another day and another note.

More Work

It seems to me that a good sequence for introducing such value types would be as follows:

1. Add the value-safety restrictions to an experimental version of javac.
2. Code some sample applications with value types, including Complex and DecimalValue.
3. Create an experimental JVM which internally unboxes value types but does not require new bytecodes to do so. Ensure the feasibility of the performance model for the sample applications.
4. Add tuple-like bytecodes (with or without generic type reification) to a major revision of the JVM, and teach the Java compiler to switch in the new bytecodes without code changes.

A staggered roll-out like this would decouple language changes from bytecode changes, which is always a convenient thing.

A similar investigation should be applied (concurrently) to array types. In this case, it seems to me that the starting point is in the JVM:

1. Add an experimental unboxing array data structure to a production JVM, perhaps along the lines of Maxine hybrids. No bytecode or language support is required at first; everything can be done with encapsulated unsafe operations and/or method handles.
2. Create an experimental JVM which internally unboxes value types but does not require new bytecodes to do so. Ensure the feasibility of the performance model for the sample applications.
3. Add tuple-like bytecodes (with or without generic type reification) to a major revision of the JVM, and teach the Java compiler to switch in the new bytecodes without code changes.

That's enough musing from me for now. Back to work!

    Read the article

  • Specify IPSEC port range using ipsec-tools

    - by Sandman4
Is it possible to require IPSEC on a port range? I want to require IPSEC for all incoming connections except a few public ports like 80 and 443, but don't want to restrict outgoing connections. My SPD rules would look something like:

    spdadd 0.0.0.0/0 0.0.0.0/0[80] tcp -P in none;
    spdadd 0.0.0.0/0 0.0.0.0/0[443] tcp -P in none;
    spdadd 0.0.0.0/0 0.0.0.0/0[0....32767] tcp -P in ipsec esp/transport//require;

In the setkey manpage I see IP ranges, but no mention of port ranges. (The idea is to use IPSEC as a sort of VPN to protect internal communications between multiple servers. Instead of configuring permissions based on source IPs, or configuring specific ports, I want to demand IPSEC on anything which is not meant to be public - I feel it's less error-prone this way.)
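One possible direction, sketched here as an untested assumption rather than a verified answer: setkey accepts [any] as a port specifier, so the public ports can be exempted with specific "none" rules while everything else is caught by an [any] rule, sidestepping the need for a range. This assumes the kernel prefers the more specific per-port policies over the catch-all, which should be verified against the target setkey/kernel version:

    # Sketch: exempt the public ports, then require transport-mode ESP
    # for all other inbound TCP.
    spdadd 0.0.0.0/0 0.0.0.0/0[80]  tcp -P in none;
    spdadd 0.0.0.0/0 0.0.0.0/0[443] tcp -P in none;
    spdadd 0.0.0.0/0 0.0.0.0/0[any] tcp -P in ipsec esp/transport//require;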

    Read the article

  • can't run cucumber scenarios due to test-unit version issue on Rails 2.3.5, Ruby 1.9.1

    - by Jeff D
I've been trying to follow along in the RSpec book (I'm new to all of this), and I have what appears to be some kind of versioning issue. If I try to run some simple scenarios, I get this error:

    can't activate test-unit (= 1.2.3, runtime) for [], already activated test-unit-2.0.7 for [] (Gem::LoadError)
    /Users/jeffdeville/.rvm/rubies/ruby-1.9.1-p378/lib/ruby/site_ruby/1.9.1/rubygems.rb:230:in `activate'
    /Users/jeffdeville/.rvm/rubies/ruby-1.9.1-p378/lib/ruby/site_ruby/1.9.1/rubygems.rb:1056:in `gem'
    /Users/jeffdeville/.rvm/gems/ruby-1.9.1-p378/gems/rspec-1.3.0/lib/spec/interop/test.rb:4:in `<top (required)>'
    /Users/jeffdeville/.rvm/gems/ruby-1.9.1-p378/gems/polyglot-0.3.1/lib/polyglot.rb:64:in `require'
    /Users/jeffdeville/.rvm/gems/ruby-1.9.1-p378/gems/polyglot-0.3.1/lib/polyglot.rb:64:in `require'
    /Users/jeffdeville/.rvm/gems/ruby-1.9.1-p378/gems/activesupport-2.3.5/lib/active_support/dependencies.rb:158:in `require'
    /Users/jeffdeville/.rvm/gems/ruby-1.9.1-p378/gems/rspec-1.3.0/lib/spec/test/unit.rb:1:in `<top (required)>'
    /Users/jeffdeville/.rvm/gems/ruby-1.9.1-p378/gems/polyglot-0.3.1/lib/polyglot.rb:64:in `require'
    /Users/jeffdeville/.rvm/gems/ruby-1.9.1-p378/gems/polyglot-0.3.1/lib/polyglot.rb:64:in `require'
    /Users/jeffdeville/.rvm/gems/ruby-1.9.1-p378/gems/activesupport-2.3.5/lib/active_support/dependencies.rb:158:in `require'
    /Users/jeffdeville/.rvm/gems/ruby-1.9.1-p378/gems/rspec-rails-1.3.2/lib/spec/rails.rb:13:in `<top (required)>'
    /Users/jeffdeville/.rvm/gems/ruby-1.9.1-p378/gems/polyglot-0.3.1/lib/polyglot.rb:64:in `require'
    /Users/jeffdeville/.rvm/gems/ruby-1.9.1-p378/gems/polyglot-0.3.1/lib/polyglot.rb:64:in `require'
    /Users/jeffdeville/.rvm/gems/ruby-1.9.1-p378/gems/activesupport-2.3.5/lib/active_support/dependencies.rb:158:in `require'
    /Users/jeffdeville/.rvm/gems/ruby-1.9.1-p378/gems/cucumber-rails-0.3.0/lib/cucumber/rails/rspec.rb:15:in `rescue in <top (required)>'
    /Users/jeffdeville/.rvm/gems/ruby-1.9.1-p378/gems/cucumber-rails-0.3.0/lib/cucumber/rails/rspec.rb:3:in `<top (required)>'
    /Users/jeffdeville/.rvm/gems/ruby-1.9.1-p378/gems/polyglot-0.3.1/lib/polyglot.rb:64:in `require'
    /Users/jeffdeville/.rvm/gems/ruby-1.9.1-p378/gems/polyglot-0.3.1/lib/polyglot.rb:64:in `require'
    /Users/jeffdeville/.rvm/gems/ruby-1.9.1-p378/gems/activesupport-2.3.5/lib/active_support/dependencies.rb:158:in `require'
    /Users/jeffdeville/code/showtime/Features/support/env.rb:11:in `<top (required)>'
    /Users/jeffdeville/.rvm/gems/ruby-1.9.1-p378/gems/polyglot-0.3.1/lib/polyglot.rb:64:in `require'
    /Users/jeffdeville/.rvm/gems/ruby-1.9.1-p378/gems/polyglot-0.3.1/lib/polyglot.rb:64:in `require'
    /Users/jeffdeville/.rvm/gems/ruby-1.9.1-p378/gems/cucumber-0.6.4/lib/cucumber/rb_support/rb_language.rb:124:in `load_code_file'
    /Users/jeffdeville/.rvm/gems/ruby-1.9.1-p378/gems/cucumber-0.6.4/lib/cucumber/step_mother.rb:85:in `load_code_file'
    /Users/jeffdeville/.rvm/gems/ruby-1.9.1-p378/gems/cucumber-0.6.4/lib/cucumber/step_mother.rb:77:in `block in load_code_files'
    /Users/jeffdeville/.rvm/gems/ruby-1.9.1-p378/gems/cucumber-0.6.4/lib/cucumber/step_mother.rb:76:in `each'
    /Users/jeffdeville/.rvm/gems/ruby-1.9.1-p378/gems/cucumber-0.6.4/lib/cucumber/step_mother.rb:76:in `load_code_files'
    /Users/jeffdeville/.rvm/gems/ruby-1.9.1-p378/gems/cucumber-0.6.4/lib/cucumber/cli/main.rb:48:in `execute!'
    /Users/jeffdeville/.rvm/gems/ruby-1.9.1-p378/gems/cucumber-0.6.4/lib/cucumber/cli/main.rb:20:in `execute'
    /Users/jeffdeville/.rvm/gems/ruby-1.9.1-p378/gems/cucumber-0.6.4/bin/cucumber:8:in `<top (required)>'
    script/cucumber:9:in `load'
    script/cucumber:9:in `<main>'

however, uninstalling 2.0.7 yields the error:

    Missing these required gems:
      test-unit = 2.0.7

    You're running:
      ruby 1.9.1.378 at /Users/jeffdeville/.rvm/rubies/ruby-1.9.1-p378/bin/ruby
      rubygems 1.3.6 at /Users/jeffdeville/.rvm/gems/ruby-1.9.1-p378, /Users/jeffdeville/.rvm/gems/ruby-1.9.1-p378@global

    Run `rake gems:install` to install the missing gems.

Sorry, this is probably something easy, but I just don't know ruby or rails well enough yet.

    Read the article

  • Windows 2008 CAL vs RDS CAL

    - by g8keepa82
Looking at the Win2k8 licensing page here, it appears to me that if I want to have a server accept Remote Desktop connections from, say, 30 users concurrently, I would require: a Windows 2008 Server license and Windows 2008 CALs. Is this correct logic? Or would I require RDS CALs instead? Or would I actually require RDS CALs on top of that? From what I can gather, the RDS CALs are only required if I were to use the additional RDS services like App-V, etc. This question may have been answered here before, but I just wanted to clarify. Can anyone help?

    Read the article

  • phpmyadmin forbidden after changing config for my IP

    - by Jonathan Kushner
I followed the phpMyAdmin setup and changed the config to "require ip my ipaddress" and "allow from my ipaddress", and it's still telling me Forbidden - "You don't have permission to access /phpmyadmin on this server." - when I try to access the page in my browser (my server is not located on my machine). I installed everything using root. I also ran chmod 775 on the entire phpMyAdmin folder. I'm running RHEL 6.1. Any idea what to do at this point? Here is my /etc/httpd/conf.d/phpMyAdmin.conf:

    <Directory /usr/share/phpMyAdmin/>
       <IfModule mod_authz_core.c>
         # Apache 2.4
         <RequireAny>
           Require ip myserveripaddress
           Require ip ::1
         </RequireAny>
       </IfModule>
       <IfModule !mod_authz_core.c>
         # Apache 2.2
         Order Deny,Allow
         Deny from All
         Allow from myserveripaddress
         Allow from ::1
       </IfModule>
    </Directory>
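One hedged observation, not a confirmed diagnosis: Require ip and Allow from match the address of the connecting client, so if the browser runs on a different machine than the server, these directives need the workstation's address rather than the server's own IP. A sketch with a placeholder client address:

    <RequireAny>
      # Placeholder: the IP your browser connects *from*, not the server's IP
      Require ip 203.0.113.45
      Require ip ::1
      Require local
    </RequireAny>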

    Read the article

  • apc.stat causes 500 internal server error

    - by Legit
When I turn off apc.stat, it causes a 500 internal server error. I checked the Apache error_log and it's something about:

    [Tue Jun 26 10:02:59 2012] [error] [client 127.0.0.1] PHP Warning: require(): Filename cannot be empty in /var/www/site1/public/index.php on line 17
    [Tue Jun 26 10:02:59 2012] [error] [client 127.0.0.1] PHP Fatal error: require(): Failed opening required '' (include_path='.:/usr/share/pear:/usr/share/php') in /var/www/site1/public/index.php on line 17

I checked that line and here's what it contains:

    require('./wp-blog-header.php');

I don't see anything wrong with it. Here's my current APC config: APC version 3.1.10, PHP version 5.4.4. How do I resolve this error when I disable apc.stat?
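A hedged guess at a workaround, based on a pattern commonly reported with apc.stat=0 rather than a confirmed diagnosis of this setup: with stat disabled, relative include paths can resolve against stale cache entries, so anchoring the require to the including file's directory often avoids the empty-filename failure:

    <?php
    // Sketch: resolve the include relative to this file instead of the
    // working directory, so the cache lookup uses an absolute path.
    require dirname(__FILE__) . '/wp-blog-header.php';
    // On PHP 5.3+, __DIR__ is equivalent:
    // require __DIR__ . '/wp-blog-header.php';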

    Read the article

  • Is it possible to restrict fileserver access to domain users using computers that are members of the domain?

    - by Chris Madden
It seems domain isolation could be used to accomplish this, but I'd like a solution that doesn't require IPsec, or more accurately, doesn't require IPsec on the fileserver. IPsec done in software has a large CPU overhead, and our NAS boxes don't support any kind of offload. The goal is to keep authenticated users from using non-managed machines to access network resources. Network Access Protection (NAP) and the various enforcement points looked promising, but I couldn't find a bulletproof way to use them [which doesn't require IPsec on the fileserver]. I was thinking that when a domain user accesses the NAS box, it will first need a Kerberos ticket from AD, so if AD could somehow verify that the computer requesting the ticket was in the domain, I'd have a solution.

    Read the article

  • How to place additional access-restrictions on a subdirectory in Apache?

    - by Mikhail T.
We have a list of "internal" IP addresses and only allow access to the server (Location /) from that list:

    <Location />
      Require ip x.x.x.x
      Require ip y.y.y.y
    </Location>

I need to further restrict access to a sub-directory (Location /foo) to authenticated users (Require valid-user). Whatever I do, I never get prompted for a login when accessing /foo -- Apache simply grants me access, because my IP address is on the list (for Location /). I cycled through all three different values of AuthMerging (off, and, or) to no avail... Must be something really stupid :-/ Using httpd-2.4.6. Thank you!
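For reference, a hedged sketch of one way to spell out the intended policy explicitly in 2.4, combining the IP allowlist with authentication instead of relying on section merging (the auth directives and file path are placeholders, assuming basic auth is acceptable):

    <Location /foo>
      AuthType Basic
      AuthName "Restricted"
      AuthUserFile /etc/httpd/conf/htpasswd   # placeholder path
      <RequireAll>
        <RequireAny>
          Require ip x.x.x.x
          Require ip y.y.y.y
        </RequireAny>
        Require valid-user
      </RequireAll>
    </Location>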

    Read the article

< Previous Page | 13 14 15 16 17 18 19 20 21 22 23 24  | Next Page >