Search Results

Search found 6580 results on 264 pages for 'require'.

Page 16/264 | < Previous Page | 12 13 14 15 16 17 18 19 20 21 22 23  | Next Page >

  • puppet rspec no such file to load -- rspec-puppet (LoadError)

    - by Vorsprung
    I have no prior experience at all with Ruby. I am not interested in Ruby as such (and so have no knowledge of Rails etc.) but am using Puppet to manage a group of servers. I have written some modules, and the rspec-puppet system looks like it would be very useful. However, I cannot get rspec-puppet to work. I am using Ubuntu LTS 10.04 and installed rspec-puppet using the directions on its web page. What I actually did:

        apt-get install rubygems   # (installs 1.8)
        gem install rspec-expectations
        gem install rspec-puppet

    I also installed librspec-ruby1.8. Then I ran rspec-puppet-init in a puppet module directory I'd already made (it's a working Puppet module) and made a file as defined in the tutorial:

        $ more spec/defines/rule_spec.rb
        require 'spec_helper'
        describe 'vanusers::rule' do
          let(:title) { 't1' }
          it { should contain_class('vanusers::JamieA') }
        end

    But when I try to run it there is a mysterious dependency issue:

        $ spec spec/defines/rule_spec.rb
        /home/jamie/git/puppet/modules/vanusers/spec/spec_helper.rb:1:in `require': no such file to load -- rspec-puppet (LoadError)
        from /home/jamie/git/puppet/modules/vanusers/spec/spec_helper.rb:1
        from ./spec/defines/rule_spec.rb:1:in `require'
        from ./spec/defines/rule_spec.rb:1
        from /usr/lib/ruby/1.8/spec/runner/example_group_runner.rb:15:in `load'
        from /usr/lib/ruby/1.8/spec/runner/example_group_runner.rb:15:in `load_files'
        from /usr/lib/ruby/1.8/spec/runner/example_group_runner.rb:14:in `each'
        from /usr/lib/ruby/1.8/spec/runner/example_group_runner.rb:14:in `load_files'
        from /usr/lib/ruby/1.8/spec/runner/options.rb:132:in `run_examples'
        from /usr/lib/ruby/1.8/spec/runner/command_line.rb:9:in `run'
        from /usr/bin/spec:3

    Here is the solution I came up with in the end:

        apt-get install rubygems
        gem install rspec-expectations rspec-puppet puppet-lint puppetlabs_spec_helper

    Make sure your PATH picks up the gem executables:

        export PATH=/var/lib/gems/1.8/bin:$PATH

    Then cd into the module, rm spec/spec_helper.rb, rerun rspec-puppet-init, and replace the Rakefile with:

        require 'rake'
        require 'rspec/core/rake_task'
        require 'puppetlabs_spec_helper/rake_tasks'

    Then "rake spec" runs the tests and "rake lint" checks the files. http://sysadvent.blogspot.co.uk/2013/12/day-22-getting-started-testing-your.html was an excellent source of info.
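    For anyone hitting the same LoadError, the usual culprit is a spec/spec_helper.rb that requires rspec-puppet before the gem is actually visible to Ruby. As a rough sketch (not the exact file rspec-puppet-init writes; the fixture paths are assumptions), a working spec_helper.rb is either the puppetlabs_spec_helper one-liner or the plain rspec-puppet form:

        # minimal form, relying on puppetlabs_spec_helper
        require 'puppetlabs_spec_helper/module_spec_helper'

        # or the plain rspec-puppet form
        require 'rspec-puppet'
        RSpec.configure do |c|
          c.module_path  = File.join(File.dirname(__FILE__), 'fixtures', 'modules')
          c.manifest_dir = File.join(File.dirname(__FILE__), 'fixtures', 'manifests')
        end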

    Read the article

  • Why Oracle Delivers More Value than IBM in Data Integration Solutions

    - by irem.radzik(at)oracle.com
    For data integration projects, IT organizations look for a robust but easy-to-use solution that simplifies enterprise data architecture while providing exceptional value -- not one that adds complexity and cost. This is a major challenge today for customers who are using IBM InfoSphere products like DataStage or Change Data Capture, whereas Oracle consistently delivers higher value with its data integration products, such as Oracle Data Integrator and Oracle GoldenGate. There are many differentiators for Oracle's data integration offering in comparison to IBM. Here are the top five:

    1. Lower cost of ownership
    2. Higher performance in both real-time and bulk data movement
    3. Ease of use and flexibility
    4. Reliability
    5. A complete, open, and integrated middleware offering

    Architectural differences between the products contribute a great deal to these differences. First of all, Oracle's ETL architecture does not require a middle-tier transformation server, something IBM does require. Not only does an additional transformation server cost more to manage (including energy costs), it adds a performance bottleneck as well. In addition, IBM's data integration products are complex and often require lengthy professional services engagements to integrate, which translates to higher costs and delayed time to market. Then there's the reliability factor. Our customers choose Oracle GoldenGate over IBM's InfoSphere Change Data Capture product because Oracle GoldenGate is designed for mission-critical systems that require guaranteed data delivery and automatic recovery in case of process interruptions. On Thursday we will discuss these key differentiators in detail and provide examples of customers that chose Oracle over IBM in data integration projects. Join us on Thursday, Feb 10th at 11am PT to learn how Oracle delivers more value than IBM in data integration solutions.

    Read the article

  • Organising levels / rooms in a MUD-style text based world

    - by Polynomial
    I'm thinking of writing a small text-based adventure game, but I'm not particularly sure how I should design the world from a technical standpoint. My first thought is to do it in XML, designed something like the following. Apologies for the huge pile of XML, but I felt it important to fully explain what I'm doing.

    <level>
      <start> <!-- start in kitchen with empty inventory -->
        <room>Kitchen</room>
        <inventory></inventory>
      </start>
      <rooms>
        <room>
          <name>Kitchen</name>
          <description>A small kitchen that looks like it hasn't been used in a while. It has a table in the middle, and there are some cupboards. There is a door to the north, which leads to the garden.</description>
          <!-- IDs of the objects the room contains -->
          <objects>
            <object>Cupboards</object>
            <object>Knife</object>
            <object>Batteries</object>
          </objects>
        </room>
        <room>
          <name>Garden</name>
          <description>The garden is wild and full of prickly bushes. To the north there is a path, which leads into the trees. To the south there is a house.</description>
          <objects>
          </objects>
        </room>
        <room>
          <name>Woods</name>
          <description>The woods are quite dark, with little light bleeding in from the garden. It is eerily quiet.</description>
          <objects>
            <object>Trees01</object>
          </objects>
        </room>
      </rooms>
      <doors>
        <!-- a door isn't necessarily a door. each door has a type, i.e. "There is a <type> leading to..."
             from and to reference the rooms that this door joins.
             direction specifies the direction (N,S,E,W,Up,Down) from <from> to <to> -->
        <door>
          <type>door</type>
          <direction>N</direction>
          <from>Kitchen</from>
          <to>Garden</to>
        </door>
        <door>
          <type>path</type>
          <direction>N</direction>
          <from>Garden</from>
          <to>Woods</to>
        </door>
      </doors>
      <variables>
        <!-- variables set by actions -->
        <variable name="cupboard_open">0</variable>
      </variables>
      <objects>
        <!-- definitions for objects -->
        <object>
          <name>Trees01</name>
          <displayName>Trees</displayName>
          <actions>
            <!-- any actions not defined will show the default failure message -->
            <action>
              <command>EXAMINE</command>
              <message>The trees are tall and thick. There aren't any low branches, so it'd be difficult to climb them.</message>
            </action>
          </actions>
        </object>
        <object>
          <name>Cupboards</name>
          <displayName>Cupboards</displayName>
          <actions>
            <action>
              <!-- requirements make the command only work when they are met -->
              <requirements>
                <!-- equivalent of "if(cupboard_open == 1)" -->
                <require operation="equal" value="1">cupboard_open</require>
              </requirements>
              <command>EXAMINE</command>
              <!-- fail message is the message displayed when the requirements aren't met -->
              <failMessage>The cupboard is closed.</failMessage>
              <message>The cupboard contains some batteries.</message>
            </action>
            <action>
              <requirements>
                <require operation="equal" value="0">cupboard_open</require>
              </requirements>
              <command>OPEN</command>
              <failMessage>The cupboard is already open.</failMessage>
              <message>You open the cupboard. It contains some batteries.</message>
              <!-- assigns is a list of operations performed on variables when the action succeeds -->
              <assigns>
                <assign operation="set" value="1">cupboard_open</assign>
              </assigns>
            </action>
            <action>
              <requirements>
                <require operation="equal" value="1">cupboard_open</require>
              </requirements>
              <command>CLOSE</command>
              <failMessage>The cupboard is already closed.</failMessage>
              <message>You closed the cupboard.</message>
              <assigns>
                <assign operation="set" value="0">cupboard_open</assign>
              </assigns>
            </action>
          </actions>
        </object>
        <object>
          <name>Batteries</name>
          <displayName>Batteries</displayName>
          <!-- by setting inventory to non-zero, we can put it in our bag -->
          <inventory>1</inventory>
          <actions>
            <action>
              <requirements>
                <require operation="equal" value="1">cupboard_open</require>
              </requirements>
              <command>GET</command>
              <!-- failMessage isn't required here, it'll just show the usual "You can't see any <blank>." message -->
              <message>You picked up the batteries.</message>
            </action>
          </actions>
        </object>
      </objects>
    </level>

    Obviously there'd need to be more to it than this. Interaction with people and enemies, as well as death and completion, are necessary additions. Since the XML is quite difficult to work with, I'd probably create some sort of world editor. I'd like to know if this method has any downfalls, and if there's a "better" or more standard way of doing it.
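    Whatever format wins out, it helps to check early that the level file loads cleanly into plain data structures. A minimal Ruby sketch using REXML from the standard library (the Room struct and load_level helper are made up for illustration, not part of any framework):

        require 'rexml/document'

        Room = Struct.new(:name, :description, :objects, :exits)

        def load_level(path)
          doc   = REXML::Document.new(File.read(path))
          rooms = {}
          doc.elements.each('level/rooms/room') do |r|
            name    = r.elements['name'].text
            objects = r.get_elements('objects/object').map(&:text)
            rooms[name] = Room.new(name, r.elements['description'].text, objects, {})
          end
          # wire up doors as directional exits on the "from" room
          doc.elements.each('level/doors/door') do |d|
            from = d.elements['from'].text
            rooms[from].exits[d.elements['direction'].text] = d.elements['to'].text
          end
          rooms
        end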

    Read the article

  • Benchmarking ORM associations

    - by barerd
    I am trying to benchmark two cases of self-referential many-to-many as described in the DataMapper associations docs. Both cases consist of an Item class, which may require many other items. In both cases, I required the Ruby benchmark library and the source file, created two items and benchmarked the require/unrequire functions as below:

        Benchmark.bmbm do |x|
          x.report("require:")   { item_1.require_item item_2, 10 }
          x.report("unrequire:") { item_1.unrequire_item item_2 }
        end

    To be clear, both functions are DataMapper add/modify calls like:

        componentMaps.create :component_id => item.id, :quantity => quantity
        componentMaps.all(:component_id => item.id).destroy!

    and

        links_to_components.create :component_id => item.id, :quantity => quantity
        links_to_components.all(:component_id => item.id).destroy!

    The results are variable and in the range of 0.018001 to 0.022001 for the require function in both cases, and 0.006 to 0.01 for the unrequire function in both cases. This made me suspicious about the correctness of my test method.

    Edit: I went ahead and compared a "get by primary key" case to a "find first matching record" case with:

        (1..10000).each do |i|
          Item.create :name => "item_#{i}"
        end

        Benchmark.bmbm do |x|
          x.report("Get")   { item = Item.get 9712 }
          x.report("First") { item = Item.first :name => "item_9712" }
        end

    where the results were very different, roughly 0 sec compared to 0.0312, as expected. This suggests that the benchmarking works. I wonder whether I benchmarked the two types of associations correctly, and whether a difference between 0.018 and 0.022 sec is significant.
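    One note on method: timings in the 0.006-0.022 s range for a single call are dominated by noise (allocation, GC, connection state), so a 0.018 vs 0.022 difference on one invocation says little by itself. Repeating the operation many times per report makes the comparison more meaningful. A sketch under that assumption (N is arbitrary; pairing require with unrequire keeps the data set from growing between iterations):

        require 'benchmark'

        N = 1_000
        Benchmark.bmbm do |x|
          x.report("require/unrequire x#{N}") do
            N.times do
              item_1.require_item   item_2, 10
              item_1.unrequire_item item_2
            end
          end
        end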

    Read the article

  • Database structure of a subscriber list

    - by foodil
    I am building an application that allows different users to store subscriber information. To store subscriber information, a user first creates a list, and each list has a ListID. Subscribers may have different attributes: email, phone, fax, and so on. Because each list's settings are different, a require_attribute table is introduced as a bridge between Subscriber and List. It stores ListID, SubID, attribute and datatype. That means the system has a lot of lists, each user has their own lists, and the lists have different attributes -- some lists have email and phone, some may have phone, address, name and mail -- and the datatypes differ too: some may use 'name' as integer, some may use 'name' as varchar. Here "attribute" means things like email or phone (it defines which subscriber attributes a given list has) and "datatype" means, for each attribute, what its type is.

        Table Subscriber:        fields: SubID, name, email
        Table RequireAttribute:  fields: ListID, SubID, attribute, datatype

    The attributes here are {name, email}, so a simple data set is:

        Subscriber:        1, MyName, MyEmail
        RequireAttribute:  ListID, 1, 'email', 'integer'
                           ListID, 1, 'name',  'varchar'

    I found that this kind of storage is too complex to handle. Since a subscriber is shared with everybody, if one person changes the datatype of name, it also affects the data of the other users. A simple error situation:

        Subscriber:
          list1, Subscriber 1, name1, email1
          list2, Subscriber 2, name2, email2
        RequireAttribute:
          List1, Subscriber 1, 'email', 'varchar'
          List1, Subscriber 1, 'name',  'varchar'
          List2, Subscriber 2, 'email', 'varchar'
          List2, Subscriber 2, 'name',  'integer'

    If user B changes the datatype of name in RequireAttribute from varchar to integer, it causes a problem: list 1 is owned by user A, who wants the datatype to be varchar, but list 2 is owned by user B, who wants the datatype to be integer. So how can I redesign the structure?
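    One way out of the shared-datatype problem is to make the attribute definition belong to the list alone, and to keep per-list values separate from the shared subscriber row. A rough sketch in ActiveRecord-style migration syntax (the table and column names are made up for illustration, not a prescribed schema):

        class CreatePerListAttributes < ActiveRecord::Migration
          def change
            # the datatype lives on the list's own attribute definition,
            # so changing it never touches another user's list
            create_table :list_attributes do |t|
              t.integer :list_id
              t.string  :name       # e.g. 'email', 'phone'
              t.string  :datatype   # e.g. 'varchar', 'integer'
            end

            # values are stored per list + subscriber and cast according
            # to the owning list's datatype when read
            create_table :subscriber_values do |t|
              t.integer :list_id
              t.integer :subscriber_id
              t.string  :name
              t.text    :value
            end
          end
        end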

    Read the article

  • What is a good way to share internal helpers?

    - by toplel32
    All my projects share the same base library that I have built up over quite some time. It contains utilities and static helper classes to assist them where .NET doesn't exactly offer what I want. Originally all the helpers were written mainly to serve an internal purpose, and it has to stay that way, but sometimes they prove very useful to other assemblies. Now, making them public in a reliable way is more complicated than most would think; for example, all methods that assume nullable types must now contain argument checking, while not charging internal utilities with the price of doing so. The price might be negligible, but it is far from right. While refactoring, I have revisited this case multiple times and I've come up with the following solutions so far:

    1. Have an internal and a public class for each helper. The internal class contains the actual code while the public class serves as an access point which does argument checking.
       Cons: the internal class requires a prefix to avoid ambiguity (the best presentation should be reserved for public types), and it isn't possible to discriminate methods that don't need argument checking.

    2. Have one class that contains both internal and public members (as conventionally implemented in the .NET Framework). At first this might sound like the best possible solution, but it has the same first unpleasant con as solution 1.
       Cons: internal methods require a prefix to avoid ambiguity.

    3. Have an internal class which is implemented by a public class that overrides any members requiring argument checking.
       Cons: it is non-static, so at least one instantiation is required. This doesn't really fit the helper-class idea: since a helper generally consists of independent fragments of code, it should not require instantiation. Non-static methods are also slower by a negligible degree, which doesn't really justify this option either.

    There is one general and unavoidable consequence: a lot of maintenance is necessary, because every internal member will require a public counterpart. A note on solution 1: the first consequence can be avoided by putting both classes in different namespaces; for example, you can have the real helper in the root namespace and the public helper in a namespace called "Helpers".

    Read the article

  • Will new Twitter API 1.1 allow hashtag/tweet/trend queries without any authentication, i.e. for a client that does not use a user's account at all?

    - by P5music
    I see that, even when not logged in to Twitter with an account, if I google hashtags or Twitter accounts, Twitter shows them. I think it should also be possible to get those tweets programmatically, but I do not know it for sure, so I am asking for confirmation here, especially with the new Twitter API restrictions coming. I mean: will it be possible to get tweets from hashtags or accounts without logging in to a user account -- and so without accessing the user's settings, subscriptions, etc. (because I do not need them) -- and thus without having to respect any per-token limit? I found these API 1.1 FAQs; do I have to be concerned?

    Will an application have to request user authorization just to make public API calls?
    When API v1.1 is released, user authorization (and access tokens) are required for all API 1.1 requests. In the weeks following release, some methods will require only application-based authentication for certain "userless" contexts.

    Will the Search API require authentication?
    The Search API is now part of the official REST API in version 1.1. In addition to serving results in a format consistent with other Tweet resources, usage will also require authentication.
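    For what it's worth, the "application-based authentication for userless contexts" mentioned in the FAQ is the application-only (bearer token) flow, which lets a client search public tweets without any end-user account, though still within per-application rate limits. A rough Ruby sketch of that flow as documented at the time (Ruby 1.9+; the key and secret are placeholders for your registered app's credentials):

        require 'net/http'
        require 'json'
        require 'base64'

        key, secret = ENV['TWITTER_KEY'], ENV['TWITTER_SECRET']

        # exchange consumer key/secret for an app-only bearer token
        uri = URI('https://api.twitter.com/oauth2/token')
        req = Net::HTTP::Post.new(uri.request_uri)
        req['Authorization'] = 'Basic ' + Base64.strict_encode64("#{key}:#{secret}")
        req.set_form_data('grant_type' => 'client_credentials')
        res = Net::HTTP.start(uri.host, uri.port, use_ssl: true) { |h| h.request(req) }
        bearer = JSON.parse(res.body)['access_token']

        # search a hashtag with no user context at all
        search = URI('https://api.twitter.com/1.1/search/tweets.json?q=%23ruby')
        req = Net::HTTP::Get.new(search.request_uri)
        req['Authorization'] = "Bearer #{bearer}"
        res = Net::HTTP.start(search.host, search.port, use_ssl: true) { |h| h.request(req) }
        puts JSON.parse(res.body)['statuses'].length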

    Read the article

  • Problems with Widgets in dojox DataGrid

    - by Kitson
    I am trying to include some editing Widgets in my dojox.grid.DataGrid seem to be having a lot of difficulty. I have tried everything I can think of to get it to work, but something just isn't going right. When I started having problems, I tried to copy almost exactly from the grid tests and model my "breakout" of code just like that, but without success. Basic editing of the Grid seems to work. In the example below, the "Events" column allows edits, but the two columns that are using the cellType attribute don't work. In fact they also seem to ignore the other attributes (like the styles) which would seem to indicate that some sort of issue was run into, but there is nothing in FireBug. Also I get the same behaviour between Chrome and Firefox. <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html4/strict.dtd"> <html> <head> <title>Insert title here</title> <link id="themeStyles" rel="stylesheet" href="javascript/dojotoolkit/dijit/themes/tundra/tundra.css"> <style type="text/css"> @import "css/gctilog.css"; @import "javascript/dojotoolkit/dojo/resources/dojo.css"; @import "javascript/dojotoolkit/dijit/themes/tundra/tundra.css"; @import "javascript/dojotoolkit/dojox/grid/resources/Grid.css"; @import "javascript/dojotoolkit/dojox/grid/resources/tundraGrid.css"; @import "javascript/dojotoolkit/ocp/resources/MultiStateCheckBox.css"; </style> <script type="text/javascript" src="javascript/dojotoolkit/dojo/dojo.js" djConfig="parseOnLoad:true, isDebug:true, locale:'en-gb'"></script> <script type="text/javascript"> dojo.require("dojo.currency"); dojo.require("dijit.dijit"); dojo.require("dijit.form.HorizontalSlider"); dojo.require("dojox.data.JsonRestStore"); dojo.require("dojox.grid.DataGrid"); dojo.require("dojox.layout.ExpandoPane"); dojo.require("dojox.timing"); dojo.require("ocp.MultiStateCheckBox"); dojo.require("dojo.parser"); formatCurrency = function(inDatum){ return isNaN(inDatum) ? '...' : dojo.currency.format(inDatum, this.constraint); } </script> <script type="text/javascript" src="javascript/formatter.js"></script> <script type="text/javascript" src="javascript/utilities.js"></script> </head> <body class="tundra"> <div name="labelCallids">Call IDs</div> <div dojoType="dojox.data.JsonRestStore" id="callidStore4" jsId="callidStore4" target="logmap/maps.php/maps/4/callids/" idAttribute="callid"></div> <table dojoType="dojox.grid.DataGrid" id="callidGrid4" store="callidStore4" query="{ callid: '*' }" style="width: 950px; border: 1px solid rgb(0,156,221); margin-left: 15px;" clientSort="false" autoHeight="10" noDataMessage="No Call IDs Available..."> <thead> <tr> <th field="callid" width="375px">Call ID</th> <th cellType="dojox.grid.cells.ComboBox" field="type" options="SIP,TLib" editable="true" width="10em" styles='text-align: center;'>Type</th> <th field="event_count" width="40px" editable="true" styles="text-align: right;">Events</th> <th field="start_ts" width="75px" formatter="secToHourMinSecMS">Start</th> <th field="end_ts" width="75px" formatter="secToHourMinSecMS">End</th> <th field="duration" width="75px" formatter="secToHourMinSecMS">Duration</th> <th cellType="dojox.grid.cells._Widget" widgetClass="dijit.form.HorizontalSlider" field="include" formatter="formatCurrency" constraint="{currency:'EUR'}" editable="true" width="10em" styles='text-align: right;'>Amount</th> </tr> </thead> </table> </body> </html> Is there anything that I am missing. It would seem to be fundamental, but I just can't seem to see it. 
    [EDIT] What I have done instead is use the column's formatter to return a dijit widget. So in the declarative markup I specify something like this:

        <th field="type" formatter="getMultiField" width="10em" styles='text-align: center;'>Type</th>

    And then I wrote a JavaScript function like the one below to return the widget I wanted:

        function getMultiField(value) {
            // I provide the value of the widget as JSON from my data store, so I need to parse it
            var jsonValue = JSON.parse(value);
            var control = new ocp.MultiStateCheckBox({   // my custom widget
                id: "dMSCB" + (new Date).getTime() + Math.ceil(Math.random() * 100000), // generate a unique ID
                value: jsonValue.value,
                onChange: function (value) { /* code to manipulate the underlying data store */ }
            });
            return control;   // the dojo 1.4 grid can handle a returned widget
        }

    Read the article

  • high tweet status IDs causing failed to open stream errors?

    - by escarp
    Erg. Starting in the past few days high tweet IDs (at least, it appears it's ID related, but I suppose it could be some recent change in the api returns) are breaking my code. At first I tried passing the ID as a string instead of an integer to this function, and I thought this worked, but in reality it was just the process of uploading the file from my end. In short, a php script generates these function calls, and when it does so, they fail. If I download the php file the call is generated into, delete the server copy and re-upload the exact same file without changing it, it works fine. Does anyone know what could be causing this behavior? Below is what I suspect to be the most important part of the individual files that are pulling the errors. Each of the files is named for a status ID (e.g. the below file is named 12058543656.php) <?php require "singlePost.php"; SinglePost(12058543656) ?> Here's the code that writes the above files: $postFileName = $single_post_id.".php"; if(!file_exists($postFileName)){ $created_at_full = date("l, F jS, Y", strtotime($postRow[postdate])-(18000)); $postFileHandle = fopen($postFileName, 'w+'); fwrite($postFileHandle, '<html> <head> <title><?php $thisTITLE = "escarp | A brief poem or short story by '.$authorname.' on '.$created_at_full.'"; echo $thisTITLE;?></title><META NAME="Description" CONTENT="This brief poem or short story, by '.$authorname.', was published on '.$created_at_full.'"> <?php include("head.php");?> To receive other poems or short stories like this one from <a href=http://twitter.com/escarp>escarp</a> on your cellphone, <a href=http://twitter.com/signup>create</a> and/or <a href=http://twitter.com/devices>associate</a> a Twitter account with your cellphone</a>, follow <a href=http://twitter.com/escarp>us</a>, and turn device updates on. <pre><?php require "singlePost.php"; SinglePost("'.$single_post_id.'") ?> </div></div></pre><?php include("foot.php");?> </body> </html>'); fclose($postFileHandle);} $postcounter++; } I can post more if you don't see anything here, but there are several files involved and I'm trying to avoid dumping tons of irrelevant code. Error: Warning: include(head.php) [function.include]: failed to open stream: No such file or directory in /f2/escarp/public/12177797583.php on line 4 Warning: include(head.php) [function.include]: failed to open stream: No such file or directory in /f2/escarp/public/12177797583.php on line 4 Warning: include() [function.include]: Failed opening 'head.php' for inclusion (include_path='.:/nfsn/apps/php5/lib/php/:/nfsn/apps/php/lib/php/') in /f2/escarp/public/12177797583.php on line 4 To receive other poems or short stories like this one from escarp on your cellphone, create and/or associate a Twitter account with your cellphone, follow us, and turn device updates on. 
Warning: require(singlePost.php) [function.require]: failed to open stream: No such file or directory in /f2/escarp/public/12177797583.php on line 7 Warning: require(singlePost.php) [function.require]: failed to open stream: No such file or directory in /f2/escarp/public/12177797583.php on line 7 Fatal error: require() [function.require]: Failed opening required 'singlePost.php' (include_path='.:/nfsn/apps/php5/lib/php/:/nfsn/apps/php/lib/php/') in /f2/escarp/public/12177797583.php on line 7 <?php function SinglePost($statusID) { require "nicetime.php"; $db = sqlite_open("db.escarp"); $updates = sqlite_query($db, "SELECT * FROM posts WHERE postID = '$statusID'"); $row = sqlite_fetch_array($updates, SQLITE_ASSOC); $id = $row[authorID]; $result = sqlite_query($db, "SELECT * FROM authors WHERE authorID = '$id'"); $row5 = sqlite_fetch_array($result, SQLITE_ASSOC); $created_at_full = date("l, F jS, Y", strtotime($row[postdate])-(18000)); $created_at = nicetime($row[postdate]); if($row5[url]==""){ $authorurl = ''; } else{ /*I'm omitting a few pages of output code and associated regex*/ return; } ?>

    Read the article

  • BizTalk Server 2009 - Architecture Options

    - by StuartBrierley
    I recently needed to put forward a proposal for a BizTalk 2009 implementation and as a part of this needed to describe some of the basic architecture options available for consideration.  While I already had an idea of the type of environment that I would be looking to recommend, I felt that presenting a range of options while trying to explain some of the strengths and weaknesses of those options was a good place to start.  These outline architecture options should be equally valid for any version of BizTalk Server from 2004, through 2006 and R2, up to 2009.   The following diagram shows a crude representation of the common implementation options to consider when designing a BizTalk environment.         Each of these options provides differing levels of resilience in the case of failure or disaster, with the later options also providing more scope for performance tuning and scalability.   Some of the options presented above make use of clustering. Clustering may best be described as a technology that automatically allows one physical server to take over the tasks and responsibilities of another physical server that has failed. Given that all computer hardware and software will eventually fail, the goal of clustering is to ensure that mission-critical applications will have little or no downtime when such a failure occurs. Clustering can also be configured to provide load balancing, which should generally lead to performance gains and increased capacity and throughput.   (A) Single Servers   This option is the most basic BizTalk implementation that should be considered. It involves the deployment of a single BizTalk server in conjunction with a single SQL server. This configuration does not provide for any resilience in the case of the failure of either server. It is however the cheapest and easiest to implement option of those available.   Using a single BizTalk server does not provide for the level of performance tuning that is otherwise available when using more than one BizTalk server in a cluster.   The common edition of BizTalk used in single server implementations is the standard edition. It should be noted however that if future demand requires increased capacity for a solution, this BizTalk edition is limited to scaling up the implementation and not scaling out the number of servers in use. Any need to scale out the solution would require an upgrade to the enterprise edition of BizTalk.   (B) Single BizTalk Server with Clustered SQL Servers   This option uses a single BizTalk server with a cluster of SQL servers. By utilising clustered SQL servers we can ensure that there is some resilience to the implementation in respect of the databases that BizTalk relies on to operate. The clustering of two SQL servers is possible with the standard edition but to go beyond this would require the enterprise level edition. While this option offers improved resilience over option (A) it does still present a potential single point of failure at the BizTalk server.   Using a single BizTalk server does not provide for the level of performance tuning that is otherwise available when using more than one BizTalk server in a cluster.   The common edition of BizTalk used in single server implementations is the standard edition. It should be noted however that if future demand requires increased capacity for a solution, this BizTalk edition is limited to scaling up the implementation and not scaling out the number of servers in use. 
You are also unable to take advantage of multiple message boxes, which would allow us to balance the SQL load in the event of any bottlenecks in this area of the implementation. Any need to scale out the solution would require an upgrade to the enterprise edition of BizTalk.   (C) Clustered BizTalk Servers with Clustered SQL Servers   This option makes use of a cluster of BizTalk servers with a cluster of SQL servers to offer high availability and resilience in the case of failure of either of the server types involved. Clustering of BizTalk is only available with the enterprise edition of the product. Clustering of two SQL servers is possible with the standard edition but to go beyond this would require the enterprise level edition.    The use of a BizTalk cluster also provides for the ability to balance load across the servers and gives more scope for performance tuning any implemented solutions. It is also possible to add more BizTalk servers to an existing cluster, giving scope for scaling out the solution as future demand requires.   This might be seen as the middle cost option, providing a good level of protection in the case of failure, a decent level of future proofing, but at a higher cost than the single BizTalk server implementations.   (D) Clustered BizTalk Servers with Clustered SQL Servers – with disaster recovery/service continuity   This option is similar to that offered by (C) and makes use of a cluster of BizTalk servers with a cluster of SQL servers to offer high availability and resilience in case of failure of either of the server types involved. Clustering of BizTalk is only available with the enterprise edition of the product. Clustering of two SQL servers is possible with the standard edition but to go beyond this would require the enterprise level edition.    As with (C) the use of a BizTalk cluster also provides for the ability to balance load across the servers and gives more scope for performance tuning the implemented solution. It is also possible to add more BizTalk servers to an existing cluster, giving scope for scaling the solution out as future demand requires.   In this scenario however, we would be including some form of disaster recovery or service continuity. An example of this would be making use of multiple sites, with the BizTalk server cluster operating across sites to offer resilience in case of the loss of one or more sites. In this scenario there are options available for the SQL implementation depending on the network implementation; making use of either one cluster per site or a single SQL cluster across the network. A multi-site SQL implementation would require some form of data replication across the sites involved.   This is obviously an expensive and complex option, but does provide an extraordinary amount of protection in the case of failure.

    Read the article

  • Having trouble getting cucumber 0.6.3 to run on rails 2.3.4

    - by Yak
    Hi, I am trying to to get cucumber to run with no luck. Here is the error I am seeing: cucumber features Using the default profile... no such file to load -- test/ (MissingSourceFile) /Library/Ruby/Site/1.8/rubygems/custom_require.rb:31:in gem_original_require' /Library/Ruby/Site/1.8/rubygems/custom_require.rb:31:in polyglot_original_require' /Library/Ruby/Gems/1.8/gems/polyglot-0.3.0/lib/polyglot.rb:65:in require' /Library/Ruby/Gems/1.8/gems/activesupport-2.3.4/lib/active_support/dependencies.rb:158:in require' /Users/yakovrabinovich/Starstreet/starstreet/vendor/gems/cucumber-0.6.3/bin/../lib/cucumber/rails/world.rb:11 /Library/Ruby/Site/1.8/rubygems/custom_require.rb:31:in gem_original_require' /Library/Ruby/Site/1.8/rubygems/custom_require.rb:31:in polyglot_original_require' /Library/Ruby/Gems/1.8/gems/polyglot-0.3.0/lib/polyglot.rb:65:in require' /Library/Ruby/Gems/1.8/gems/activesupport-2.3.4/lib/active_support/dependencies.rb:158:in require' /Library/Ruby/Gems/1.8/gems/cucumber-rails-0.3.0/lib/cucumber/rails/rspec.rb:1 /Library/Ruby/Site/1.8/rubygems/custom_require.rb:31:in gem_original_require' /Library/Ruby/Site/1.8/rubygems/custom_require.rb:31:in polyglot_original_require' /Library/Ruby/Gems/1.8/gems/polyglot-0.3.0/lib/polyglot.rb:65:in require' /Library/Ruby/Gems/1.8/gems/activesupport-2.3.4/lib/active_support/dependencies.rb:158:in require' /Users/yakovrabinovich/Starstreet/starstreet/features/support/env.rb:11 /Library/Ruby/Site/1.8/rubygems/custom_require.rb:31:in gem_original_require' /Library/Ruby/Site/1.8/rubygems/custom_require.rb:31:in polyglot_original_require' /Library/Ruby/Gems/1.8/gems/polyglot-0.3.0/lib/polyglot.rb:65:in require' /Library/Ruby/Gems/1.8/gems/cucumber-0.6.3/bin/../lib/cucumber/rb_support/rb_language.rb:124:in load_code_file' /Library/Ruby/Gems/1.8/gems/cucumber-0.6.3/bin/../lib/cucumber/step_mother.rb:85:in load_code_file' /Library/Ruby/Gems/1.8/gems/cucumber-0.6.3/bin/../lib/cucumber/step_mother.rb:77:in load_code_files' /Library/Ruby/Gems/1.8/gems/cucumber-0.6.3/bin/../lib/cucumber/step_mother.rb:76:in each' /Library/Ruby/Gems/1.8/gems/cucumber-0.6.3/bin/../lib/cucumber/step_mother.rb:76:in load_code_files' /Library/Ruby/Gems/1.8/gems/cucumber-0.6.3/bin/../lib/cucumber/cli/main.rb:48:in execute!' 
/Library/Ruby/Gems/1.8/gems/cucumber-0.6.3/bin/../lib/cucumber/cli/main.rb:20:in execute' /Library/Ruby/Gems/1.8/gems/cucumber-0.6.3/bin/cucumber:8 /usr/bin/cucumber:19:in `load' /usr/bin/cucumber:19 Here are my gems: Yakov-Rabinovichs-MacBook:1.8 yakovrabinovich$ gem list * LOCAL GEMS * aasm (2.1.3) acl9 (0.11.0) actionmailer (2.3.4, 2.2.2, 1.3.6) actionpack (2.3.4, 2.2.2, 1.13.6) actionwebservice (1.2.6) activerecord (2.3.4, 2.2.2, 1.15.6) activeresource (2.3.4, 2.2.2) activesupport (2.3.4, 2.2.2, 1.4.4) acts_as_ferret (0.4.3) authlogic (2.1.3) bgetting-hominid (1.2.0) builder (2.1.2) capistrano (2.5.2) capistrano-ext (1.2.1) cgi_multipart_eof_fix (2.5.0) chronic (0.2.3) columnize (0.3.1) configatron (2.5.1) cucumber (0.6.3) cucumber-rails (0.3.0) daemons (1.0.10) database_cleaner (0.5.0) diff-lcs (1.1.2) dnssd (0.6.0) factory_girl (1.2.3) fastthread (1.0.1) fcgi (0.8.7) ferret (0.11.6) gem_plugin (0.2.3) gemcutter (0.4.1) highline (1.5.0) hoe (2.5.0) hominid (2.1.0) hpricot (0.6.164) json (1.2.0) json_pure (1.2.0) libxml-ruby (1.1.2) linecache (0.43) mocha (0.9.8) mongrel (1.1.5) needle (1.3.0) net-scp (1.0.1) net-sftp (2.0.1, 1.1.1) net-ssh (2.0.16, 2.0.4, 1.1.4) net-ssh-gateway (1.0.0) nokogiri (1.4.1) oauth (0.3.6) pg (0.8.0) polyglot (0.3.0) rack (1.0.1) rack-test (0.5.3) rails (2.3.4, 2.2.2, 1.2.6) rake (0.8.7, 0.8.3) RedCloth (4.1.1) rspec (1.3.0) rspec-rails (1.3.2) ruby-debug (0.10.3) ruby-debug-base (0.10.3) ruby-hmac (0.4.0) ruby-openid (2.1.2) ruby-yadis (0.3.4) rubyforge (2.0.3) rubygems-update (1.3.5) rubynode (0.1.5) sqlite3-ruby (1.2.4) term-ansicolor (1.0.4) termios (0.9.4) test-unit (1.2.3) thoughtbot-factory_girl (1.2.2) thoughtbot-shoulda (2.10.2) treetop (1.4.4) whenever (0.4.1) will_paginate (2.3.11) xmpp4r (0.4) yamler (0.1.0) Any help would be greatly appreciated!

    Read the article

  • DataMapper Dates

    - by Ethan Turkeltaub
    Forgive me if this has a simple answer, but how do you get a Date from a DataMapper property? For example:

        require 'rubygems'
        require 'sinatra'
        require 'datamapper'

        class Test
          include DataMapper::Resource
          property :id,         Serial
          property :created_at, Date
        end

        get '/:id' do
          test = Test.get(1)
          test.created_at = ?
        end
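    Assuming the property really is declared as Date, DataMapper hands back a Ruby Date object, so it mostly comes down to formatting. A small sketch of that route handler (the strftime format is just an example):

        get '/:id' do
          test = Test.get(params[:id])
          halt 404 if test.nil?
          # created_at is a Date, so any Date method works:
          test.created_at.strftime('%B %d, %Y')   # e.g. "March 15, 2010"
        end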

    Read the article

  • Deploying Sinatra app on Dreamhost/Passenger with custom gems

    - by darkism
    I've got a Sinatra app that I'm trying to run on Dreamhost that makes use of pony to send email. In order to get the application up and running at the very beginning (before adding pony), I had to gem unpack rack and gem unpack sinatra into the vendor/ directory, so this was my config.ru:

        require 'vendor/rack/lib/rack'
        require 'vendor/sinatra/lib/sinatra'

        set :run, false
        set :environment, :production
        set :views, "views"

        require 'public/myapp.rb'
        run Sinatra::Application

    I have already done gem install pony and gem unpack pony (into vendor/). Afterwards, I tried adding require 'vendor/sinatra/lib/pony' to config.ru only to have Passenger complain about pony's dependencies (mime-types, tmail) not being found either! There has to be a better way to use other gems and tone down those long, ugly, redundant requires. Any thoughts?
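    One way to avoid a vendor/ directory full of hand-unpacked gems is to give the app its own gem directory and point RubyGems at it. A sketch of a config.ru along those lines (the vendor/gems path is an assumption; it would be populated with something like `gem install pony -i vendor/gems`):

        ENV['GEM_HOME'] = File.expand_path('vendor/gems', File.dirname(__FILE__))
        ENV['GEM_PATH'] = ENV['GEM_HOME']
        require 'rubygems'
        Gem.clear_paths   # re-read GEM_HOME/GEM_PATH

        require 'sinatra'
        require 'pony'    # mime-types and tmail resolve from vendor/gems as well

        set :run, false
        set :environment, :production
        set :views, "views"

        require 'public/myapp.rb'
        run Sinatra::Application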

    Read the article

  • gmaps4rails version 2 build method

    - by BrainLikeADullPencil
    I installed gmaps4rails for my Rails app, ran the generator, and required the two files like this in my application.js file, along with underscore.js //= require underscore //= require gmaps4rails/gmaps4rails.base //= require gmaps4rails/gmaps4rails.googlemaps adding, as instructed on github https://github.com/apneadiving/Google-Maps-for-Rails, these dependencies in layout.html.erb When I tried to create the demo map from the github page with this code, I got an error that the object doesn't have a build method. Uncaught TypeError: Object # has no method 'build' handler = Gmaps.build('Google'); handler.buildMap({ provider: {}, internal: {id: 'map'}}, function(){ markers = handler.addMarkers([ { "lat": 0, "lng": 0, "picture": { "url": "https://addons.cdn.mozilla.net/img/uploads/addon_icons/13/13028-64.png", "width": 36, "height": 36 }, "infowindow": "hello!" } ]); handler.bounds.extendWith(markers); handler.fitMapToBounds(); }); Indeed, when I look inside the base file that I require in the manifest file there is no build method for that object. How does one create a map in the new version for gmaps4rails?

    Read the article

  • Force ruby to use dbi Gem instead of dbi in site_ruby

    - by sutch
    I'm using:

        Windows 7
        Ruby 1.8.6 (One-Click Installer)
        DBI version 0.4.3, installed using RubyGems

    What I see when executing these commands:

        C:\>ruby -v
        ruby 1.8.6 (2008-08-11 patchlevel 287) [i386-mswin32]
        C:\>gem -v
        1.3.1
        C:\>ruby -r rubygems -r dbi -e "puts DBI::VERSION"
        0.2.2
        C:\>gem list dbi
        *** LOCAL GEMS ***
        dbi (0.4.3)

    Why do Ruby scripts use the DBI installed in site_ruby rather than the DBI installed with RubyGems?

    Updated to respond to Luis Lavena's answer... Here's what happened when I attempted what you suggested:

        C:\>ruby -r rubygems -e "require 'rubygems'; puts DBI::VERSION"
        -e:1: uninitialized constant DBI (NameError)

    And when I updated it to require DBI:

        C:\>ruby -r rubygems -e "require 'rubygems' ; require 'dbi' ; puts DBI::VERSION"
        0.2.2

    Why wouldn't RubyGems override the built-in library?
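    Given the output above, the copy of DBI under site_ruby is simply being found first on the load path, so the 0.2.2 files shadow the 0.4.3 gem no matter which version is activated. The clean fix is to delete or rename the dbi.rb and dbi/ entries under site_ruby; a blunter workaround is to push the gem's lib directory to the front of the load path. A sketch of the latter (the gem path is an assumption -- `gem which dbi` shows the real location):

        gem 'dbi', '>= 0.4.3'      # activate the gem version
        $LOAD_PATH.unshift 'C:/ruby/lib/ruby/gems/1.8/gems/dbi-0.4.3/lib'
        require 'dbi'
        puts DBI::VERSION          # expect 0.4.3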

    Read the article

  • "no such file to load -- treetop/runtime" running "rake jobs:work"

    - by Ryan Marshall
    When I try to run "rails server" or "rake jobs:work" I get the error: "no such file to load -- treetop/runtime". Full trace:

        macbook-pro-2:domain ryan$ rake jobs:work --trace
        (in /Applications/htdocs/domain)
        rake aborted!
        no such file to load -- treetop/runtime
        /opt/local/lib/ruby/gems/1.8/gems/mail-2.2.14/lib/mail.rb:68:in `require'
        /opt/local/lib/ruby/gems/1.8/gems/mail-2.2.14/lib/mail.rb:68
        /opt/local/lib/ruby/gems/1.8/gems/mail-2.2.14/lib/mail.rb:61:in `each'
        /opt/local/lib/ruby/gems/1.8/gems/mail-2.2.14/lib/mail.rb:61
        /opt/local/lib/ruby/gems/1.8/gems/delayed_job-2.1.2/lib/delayed/performable_mailer.rb:1:in `require'
        /opt/local/lib/ruby/gems/1.8/gems/delayed_job-2.1.2/lib/delayed/performable_mailer.rb:1
        /opt/local/lib/ruby/gems/1.8/gems/delayed_job-2.1.2/lib/delayed_job.rb:5:in `require'
        /opt/local/lib/ruby/gems/1.8/gems/delayed_job-2.1.2/lib/delayed_job.rb:5
        /opt/local/lib/ruby/gems/1.8/gems/bundler-1.0.7/lib/bundler/runtime.rb:64:in `require'
        /opt/local/lib/ruby/gems/1.8/gems/bundler-1.0.7/lib/bundler/runtime.rb:64:in `require'
        /opt/local/lib/ruby/gems/1.8/gems/bundler-1.0.7/lib/bundler/runtime.rb:62:in `each'
        /opt/local/lib/ruby/gems/1.8/gems/bundler-1.0.7/lib/bundler/runtime.rb:62:in `require'
        /opt/local/lib/ruby/gems/1.8/gems/bundler-1.0.7/lib/bundler/runtime.rb:51:in `each'
        /opt/local/lib/ruby/gems/1.8/gems/bundler-1.0.7/lib/bundler/runtime.rb:51:in `require'
        /opt/local/lib/ruby/gems/1.8/gems/bundler-1.0.7/lib/bundler.rb:112:in `require'
        /Applications/htdocs/domain/config/application.rb:7
        /opt/local/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:31:in `gem_original_require'
        /opt/local/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:31:in `require'
        /Applications/htdocs/domain/Rakefile:4
        /opt/local/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:2383:in `load'
        /opt/local/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:2383:in `raw_load_rakefile'
        /opt/local/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:2017:in `load_rakefile'
        /opt/local/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:2068:in `standard_exception_handling'
        /opt/local/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:2016:in `load_rakefile'
        /opt/local/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:2000:in `run'
        /opt/local/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:2068:in `standard_exception_handling'
        /opt/local/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:1998:in `run'
        /opt/local/lib/ruby/gems/1.8/gems/rake-0.8.7/bin/rake:31
        /opt/local/bin/rake:19:in `load'
        /opt/local/bin/rake:19

    In my Gemfile I have: gem 'delayed_job'
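    Since the failure happens inside mail (which delayed_job pulls in) and mail loads treetop at runtime, one low-risk first step is to make sure treetop is installed and declared where Bundler can see it. A sketch of the Gemfile lines, with the version constraint being an assumption:

        gem 'delayed_job'
        gem 'mail'
        gem 'treetop', '~> 1.4'

    followed by bundle install and another run of rake jobs:work --trace.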

    Read the article

  • Why do I get "undefined method `destroy'" when including 'svn/repos'?

    - by Chad Johnson
    I get the following when running script/server require 'svn/repos' script/console Loading development environment (Rails 2.3.5) require 'svn/repos' /Library/Ruby/Site/1.8/svn/core.rb:88: warning: already initialized constant Stream /Library/Ruby/Site/1.8/svn/core.rb:138: warning: already initialized constant AuthBaton NoMethodError: undefined method destroy' for #<Svn::Ext::Core::Apr_pool_wrapper_t:0x10150ae68> from /Library/Ruby/Site/1.8/svn/util.rb:60:insvn_fs_initialize' from /Library/Ruby/Site/1.8/svn/util.rb:60:in call' from /Library/Ruby/Site/1.8/svn/util.rb:60:ininitialize' from /Library/Ruby/Site/1.8/svn/fs.rb:14 from /Library/Ruby/Site/1.8/rubygems/custom_require.rb:31:in gem_original_require' from /Library/Ruby/Site/1.8/rubygems/custom_require.rb:31:inrequire' from /Library/Ruby/Gems/1.8/gems/activesupport-2.3.5/lib/active_support/dependencies.rb:156:in require' from /Library/Ruby/Gems/1.8/gems/activesupport-2.3.5/lib/active_support/dependencies.rb:521:innew_constants_in' from /Library/Ruby/Gems/1.8/gems/activesupport-2.3.5/lib/active_support/dependencies.rb:156:in require' from /Library/Ruby/Site/1.8/svn/repos.rb:5 from /Library/Ruby/Site/1.8/rubygems/custom_require.rb:31:ingem_original_require' from /Library/Ruby/Site/1.8/rubygems/custom_require.rb:31:in require' from /Library/Ruby/Gems/1.8/gems/activesupport-2.3.5/lib/active_support/dependencies.rb:156:inrequire' from /Library/Ruby/Gems/1.8/gems/activesupport-2.3.5/lib/active_support/dependencies.rb:521:in new_constants_in' from /Library/Ruby/Gems/1.8/gems/activesupport-2.3.5/lib/active_support/dependencies.rb:156:inrequire' from (irb):1 Any idea why?

    Read the article

  • How do I start IRB console from a rake task?

    - by Michael Lang
    I'm trying to write a rake task that will set up an environment mirroring my project:

        task :environment do
          require 'rubygems'
          require 'sequel'
          # require 'my_projects_special_files'
        end

        task :foo => [:environment] do
          require 'irb'
          IRB.start
        end

    This leads to irb complaining that "foo" (the name of the task) doesn't exist:

        10:28:01:irb_test rake foo --trace
        (in /Users/mwlang/projects/personal/rake/irb_test)
        ** Invoke foo (first_time)
        ** Invoke environment (first_time)
        ** Execute environment
        ** Execute foo
        rake aborted!
        No such file or directory - foo
        /opt/local/lib/ruby/1.8/irb/input-method.rb:68:in `initialize'
        /opt/local/lib/ruby/1.8/irb/input-method.rb:68:in `open'
        /opt/local/lib/ruby/1.8/irb/input-method.rb:68:in `initialize'
        /opt/local/lib/ruby/1.8/irb/context.rb:80:in `new'
        /opt/local/lib/ruby/1.8/irb/context.rb:80:in `initialize'
        /opt/local/lib/ruby/1.8/irb.rb:92:in `new'
        /opt/local/lib/ruby/1.8/irb.rb:92:in `initialize'
        /opt/local/lib/ruby/1.8/irb.rb:57:in `new'
        /opt/local/lib/ruby/1.8/irb.rb:57:in `start'
        /Users/mwlang/projects/personal/rake/irb_test/Rakefile:9
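    The "No such file or directory - foo" comes from IRB itself: IRB parses ARGV when it starts, rake leaves the task name there, and IRB then tries to open a script named "foo". Clearing ARGV before starting IRB is the usual workaround -- a sketch:

        task :foo => [:environment] do
          require 'irb'
          ARGV.clear   # otherwise IRB treats the leftover rake arguments as a script filename
          IRB.start
        end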

    Read the article

  • How can I capture Rake output when invoked from with a Ruby script?

    - by Adrian O'Connor
    I am writing a web-based dev console for Rails development. In one of my controller actions I am calling Rake, but I am unable to capture any of the output that Rake generates. For example, here is some sample code from the controller:

        require 'rake'
        require 'rake/rdoctask'
        require 'rake/testtask'
        require 'tasks/rails'
        require 'stringio'
        ...
        def show_routes
          @results = capture_stdout { Rake.tasks['routes'].invoke }
          # @results is nil -- capture_stdout doesn't capture anything that Rake generates
        end

        def capture_stdout
          s = StringIO.new
          $stdout = s
          yield
          s.string
        ensure
          $stdout = STDOUT
        end

    Does anybody know why I can't capture the Rake output? I've tried going through the Rake source, and I can't see where it fires a new process or anything, so I think I ought to be able to do this. Many thanks! Adrian

    I have since discovered the correct way to call Rake from inside Ruby, which works much better:

        Rake.application['db:migrate:redo'].reenable
        Rake.application['db:migrate:redo'].invoke

    Strangely, some rake tasks now work perfectly (routes), some capture the output the first time they run and after that are always blank (db:migrate:redo), and some don't seem to ever capture output (test). Odd.

    Read the article

  • Sqs vs SqsGen2 using RightScale right_aws GEM

    - by Fitter Man
    I'm trying to use the right_aws (1.10.0) gem with Rails, and I've reduced my problem to a 3-line irb session. The following works:

        require 'rubygems'
        require 'right_aws'
        sqs = RightAws::Sqs.new("xxxxxxxxxxxxxxx", "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx")

    while this fails:

        require 'rubygems'
        require 'right_aws'
        sqs = RightAws::SqsGen2.new("xxxxxxxxxxxxxxx", "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx")

    with NameError: uninitialized constant RightAws::SqsGen2. I see the class definition in the gem source, and the documentation is old but seems accurate, but I can't figure out what I'm doing wrong. And while you're at it, is there any reason, if I'm building something new, that I'd want to use the older interface?

    Read the article

  • rubyCAS-client serviceValidation uri questions.

    - by ted-gehling
    PHP code works with this URL, but the rubyCAS-client gem's `validate_service_ticket()` seems to perform SSL validation against it, which returns an error:

        OpenSSL::SSL::SSLError in CassersController#index
        SSL_connect returned=1 errno=0 state=SSLv3 read server certificate B: certificate verify failed

    --- environment.rb ---

        require 'casclient'
        require 'casclient/frameworks/rails/filter'

        CASClient::Frameworks::Rails::Filter.configure(
          :cas_base_url => "https://auth.foo.com",
          :validate_url => "https://auth.foo.com/serviceValidate"
        )

    --- casser_controller.rb ---

        class CassersController < ApplicationController
          require 'casclient'
          require 'casclient/frameworks/rails/filter'
          before_filter CASClient::Frameworks::Rails::Filter

          def index
            @username = session[:cas_user]
          end
        end

    It's possibly just another requirement I need to add or a config file that needs changing, but any help with this error would be appreciated.

    Read the article

  • Sinatra and Markaby: "Template Engine Not Found"

    - by marienbad
    Using the standard gem for using Markaby with Sinatra, listed at http://www.sinatrarb.com/extensions-wild.html:

        sudo gem install markaby
        Password:
        Successfully installed builder-2.1.2
        Successfully installed markaby-0.5
        2 gems installed

        sudo gem install sbfaulkner-sinatra-markaby -s http://gems.github.com
        Password:
        Successfully installed sbfaulkner-sinatra-markaby-0.9.2.2
        1 gem installed

        require 'rubygems'
        require 'sinatra'
        require 'sinatra/markaby'

        get '/' do
          markaby :template
        end

    This raises: RuntimeError at / -- Template engine not found: mab

    Read the article

  • How can I tell if AUCTeX is available?

    - by Simon Wright
    I have a package which has various features that depend on AUCTeX. As it stands, it requires hand configuration:

        (defvar AucTeX-used nil)
        (if AucTeX-used
            (progn
              (require 'tex-site)
              (require 'latex))
          (require 'latex-mode)
          (setq TeX-command-list nil))

    Is there a way to find out whether AUCTeX is available on the machine, to avoid having to set AucTeX-used by hand? (I'm using GNU Emacs 23.1.1 on Mac OS X.)

    Read the article

  • How to create a new widget for dojox.grid.cells.dijit?

    - by the_drow
    I am trying to create a button widget for dojox.grid. My problems are: 1) The button is only shown when I double click the grid. 2) I can't figure out how to set attributes through declarative markup. It seems that the markupFactory function is responsible for it but it doesn't set the widget's label. The following code demonstrates what I've got so far: dojo.require("dojox.grid.DataGrid"); dojo.require("dojo.data.ItemFileWriteStore"); dojo.require("dijit.form.Button"); dojo.require("dojox.grid.cells.dijit"); dojo.require("dojo.parser"); dojo.declare("dojox.grid.cells.Button", dojox.grid.cells._Widget, { widgetClass: dijit.form.Button, alwaysEditing: true, constructor: function(inCell) { this.inherited(arguments); this.widget = new dijit.form.Button; }, setValue: function(inRowIndex, inValue){ if (this.widget) { this.widget.attr('value', inValue); } else { this.inherited(arguments); } } }); dojox.grid.cells.Button.markupFactory = function(node, cell) { dojox.grid.cells._Widget.markupFactory(node, cell); }

    Read the article

  • Orchard - Can't find the Resource defined in ResourceManifest.cs

    - by mlang
    I have a custom theme where I try to require some of my CSS and JS files via the ResourceManifest.cs file. I keep running into a quite weird issue, though. I get the following error:

        A 'script' named 'FoundationScript' could not be found.

    This is my ResourceManifest.cs:

        using Orchard.UI.Resources;

        namespace Themes.TestTheme
        {
            public class ResourceManifest : IResourceManifestProvider
            {
                public void BuildManifest(ResourceManifestBuilder builder)
                {
                    var manifest = builder.Add();
                    manifest.DefineStyle("Foundation").SetUrl("foundation.min.css");
                    manifest.DefineScript("FoundationScript").SetUrl("foundation.min.js");
                }
            }
        }

    In the Layout.cshtml I have the following:

        @{
            Script.Require("ShapesBase");
            Script.Require("FoundationScript");
            Style.Include("site.css");
            Style.Require("Foundation");
        }

    What am I missing here?

    Read the article
