Search Results

Search found 27151 results on 1087 pages for 'end'.


  • Mechanize form submit not going to the correct response page, with no errors as to why. Is it something I'm doing?

    - by Zack Shapiro
    I threw this all in one controller for testing purposes. My code fills out the form correctly for adding a new address to your Amazon account. There are two buttons that submit this form: one takes you to add a new address (which is what I don't want), and the other is just a Save & Continue input/image. When I submit the form via that button, as I do below, the form is still on the page, filled out just as my code left it. Inspecting the page titles, they're the same. There are no discernible errors that Mechanize or Amazon spit back. Any ideas?

      class AmazonCrawler
        def initialize
          @agent = Mechanize.new do |agent|
            agent.user_agent_alias = 'Mac Safari'
            agent.follow_meta_refresh = true
            agent.redirect_ok = true
          end
        end

        def login
          # login_url = "https://www.amazon.com/ap/signin?_encoding=UTF8&openid.assoc_handle=usflex&openid.claimed_id=http%3A%2F%2Fspecs.openid.net%2Fauth%2F2.0%2Fidentifier_select&openid.identity=http%3A%2F%2Fspecs.openid.net%2Fauth%2F2.0%2Fidentifier_select&openid.mode=checkid_setup&openid.ns=http%3A%2F%2Fspecs.openid.net%2Fauth%2F2.0&openid.ns.pape=http%3A%2F%2Fspecs.openid.net%2Fextensions%2Fpape%2F1.0&openid.pape.max_auth_age=0&openid.return_to=https%3A%2F%2Fwww.amazon.com%2Fgp%2Fcss%2Fhomepage.html%3Fie%3DUTF8%26ref_%3Dgno_yam_ya"
          login_url = "https://www.amazon.com/gp/css/account/address/view.html?ie=UTF8&ref_=ya_add_address&viewID=newAddress"
          @agent.get(login_url)

          form = @agent.page.forms.first
          form.email = "[email protected]"
          form.radiobuttons.first.value = "0"
          form.radiobuttons.last.check
          form.password = "my_password"
          dashboard = @agent.submit(form)
        end
      end

      class UsersController < ApplicationController
        def index
          response = AmazonCrawler.new.login
          form = response.forms[1]

          # fill out form
          form.enterAddressFullName == "Your Name"
          form.enterAddressAddressLine1 = "123 Main Street"
          form.enterAddressAddressLine2 = "Apartment 34"
          form.enterAddressCity = "San Francisco"
          form.enterAddressStateOrRegion = "CA"
          form.enterAddressPostalCode = "94101"
          form.enterAddressPhoneNumber = "415-555-1212"
          form.AddressType = "RES"

          new_response = form.submit( form.button_with(value: /Save.*Continue/) )
        end
      end
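
    One detail in the controller code above may be the culprit: the first address field uses == (a comparison) rather than = (an assignment), so enterAddressFullName is never actually filled in before the form is submitted. A minimal sketch of the submit step with that fixed, reusing the field names and the button_with lookup from the question, plus a couple of checks on where Mechanize actually landed:

      # Sketch only: assign (don't compare) the field, then submit via the
      # Save & Continue button and inspect the page Mechanize returns.
      form.enterAddressFullName = "Your Name"   # '=' instead of '=='
      save_button  = form.button_with(value: /Save.*Continue/)
      new_response = form.submit(save_button)

      puts new_response.uri    # did the URL change?
      puts new_response.title  # does the title still match the form page?

    If the response is still the address form, its body may also contain a validation message that a title comparison alone will not reveal.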

    Read the article

  • Help with HTTP Intercepting Proxy in Ruby?

    - by Philip
    I have the beginnings of an HTTP Intercepting Proxy written in Ruby: require 'socket' # Get sockets from stdlib server = TCPServer.open(8080) # Socket to listen on port 8080 loop { # Servers run forever Thread.start(server.accept) do |client| puts "** Got connection!" @output = "" @host = "" @port = 80 while line = client.gets line.chomp! if (line =~ /^(GET|CONNECT) .*(\.com|\.net):(.*) (HTTP\/1.1|HTTP\/1.0)$/) @port = $3 elsif (line =~ /^Host: (.*)$/ && @host == "") @host = $1 end print line + "\n" @output += line + "\n" # This *may* cause problems with not getting full requests, # but without this, the loop never returns. break if line == "" end if (@host != "") puts "** Got host! (#{@host}:#{@port})" out = TCPSocket.open(@host, @port) puts "** Got destination!" out.print(@output) while line = out.gets line.chomp! if (line =~ /^<proxyinfo>.*<\/proxyinfo>$/) # Logic is done here. end print line + "\n" client.print(line + "\n") end out.close end client.close end } This simple proxy that I made parses the destination out of the HTTP request, then reads the HTTP response and performs logic based on special HTML tags. The proxy works for the most part, but seems to have trouble dealing with binary data and HTTPS connections. How can I fix these problems?
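
    On the HTTPS side, the usual approach is not to parse the tunnelled traffic at all: when the client sends a CONNECT request, reply with a 200 and then relay raw bytes in both directions, which also sidesteps the gets/chomp problems with binary data. A rough sketch of that tunnelling step, assuming the host and port have already been pulled out of the CONNECT line as in the code above:

      # Sketch only: blind tunnel for CONNECT (HTTPS) requests.
      # 'client' is the accepted socket; host and port come from the CONNECT line.
      def tunnel(client, host, port)
        upstream = TCPSocket.open(host, port)
        client.write("HTTP/1.1 200 Connection Established\r\n\r\n")
        # Relay raw bytes both ways until either side closes.
        [
          Thread.new { IO.copy_stream(client, upstream) rescue nil },
          Thread.new { IO.copy_stream(upstream, client) rescue nil }
        ].each(&:join)
      ensure
        upstream.close if upstream && !upstream.closed?
      end

    For plain HTTP responses that may carry binary bodies, reading with out.read (or copying the stream) instead of line-by-line out.gets avoids corrupting the data; the <proxyinfo> tag check would then only need to run on responses whose Content-Type is textual.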

    Read the article

  • Rails: Duplicate functionality across controllers? A humble plea.

    - by Alex
    So I'm working with authlogic, and I'm trying to duplicate the login functionality on the welcome page, so that you can log in by RESTful URL or by just going to the main page. No, I don't know if we'll keep that feature, but I want to test it out anyway. Here's the error message:

      RuntimeError in Welcome#index
      Called id for nil, which would mistakenly be 4 -- if you really wanted the id of nil, use object_id

    The code is below. Basically, what's happening is the index view (the first code snippet) is sending the information from the form to the create method of the user_sessions controller. At this point, in theory, create should just pick up, but it doesn't. PLEASE help. Please. I've been doing this for about 8 hours. I checked Google. I checked IRC. I checked every book I could find. You don't even have to answer, I can do the grunt work if you just point me in the right direction.

      <% form_for @user_session, :url => user_sessions_path do |f| %>
        <%= f.text_field :email %><br />
        <%= f.password_field :password %>
        <%= submit_tag 'Login' %>
      <% end %>

      class ApplicationController < ActionController::Base
        helper :all # include all helpers, all the time
        protect_from_forgery # See ActionController::RequestForgeryProtection for details

        # Scrub sensitive parameters from your log
        # filter_parameter_logging :password

        helper_method :current_user_session, :current_user
        before_filter :new_session_object

        protected

        def new_session_object
          unless current_user
            @user_session = UserSession.new(params[:user_session])
          end
        end

        private

        def current_user_session
          return @current_user_session if defined?(@current_user_session)
          @current_user_session = UserSession.find
        end

        def current_user
          return @current_user if defined?(@current_user)
          @current_user = current_user_session && current_user_session.record
        end
      end
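
    One thing worth checking, based on the code above: new_session_object skips building @user_session whenever current_user is truthy, so the welcome form can be handed a nil (or stale) object in some states. A minimal sketch that makes the welcome action responsible for its own form object (the WelcomeController name here is an assumption about how the app is laid out):

      # Sketch only: build the form object in the action that renders the form,
      # so the page no longer depends on the before_filter having run a certain way.
      class WelcomeController < ApplicationController
        def index
          @user_session ||= UserSession.new
        end
      end

    The form itself (form_for @user_session, :url => user_sessions_path) can stay exactly as it is; only the object handed to it needs to be guaranteed non-nil.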

    Read the article

  • Function to extract data as INSERT INTO statements for a table.

    - by user269484
    Hi, I'm using this function to extract the data, but I am unable to extract the LONG datatype. Can anyone help me?

      create or replace Function ExtractData(v_table_name varchar2) return varchar2 As
        b_found boolean:=false;
        v_tempa varchar2(8000);
        v_tempb varchar2(8000);
        v_tempc varchar2(255);
      begin
        for tab_rec in (select table_name from user_tables where table_name=upper(v_table_name)) loop
          b_found:=true;
          v_tempa:='select ''insert into '||tab_rec.table_name||' (';
          for col_rec in (select * from user_tab_columns where table_name=tab_rec.table_name order by column_id) loop
            if col_rec.column_id=1 then
              v_tempa:=v_tempa||'''||chr(10)||''';
            else
              v_tempa:=v_tempa||',''||chr(10)||''';
              v_tempb:=v_tempb||',''||chr(10)||''';
            end if;
            v_tempa:=v_tempa||col_rec.column_name;
            if instr(col_rec.data_type,'CHAR') > 0 then
              v_tempc:='''''''''||'||col_rec.column_name||''''''''';
            elsif instr(col_rec.data_type,'DATE') > 0 then
              v_tempc:='''to_date(''''''||to_char('||col_rec.column_name||',''mm/dd/yyyy hh24:mi'')||'''''',''''mm/dd/yyyy hh24:mi'''')''';
            else
              v_tempc:=col_rec.column_name;
            end if;
            v_tempb:=v_tempb||'''||decode('||col_rec.column_name||',Null,''Null'','||v_tempc||')||''';
          end loop;
          v_tempa:=v_tempa||') values ('||v_tempb||');'' from '||tab_rec.table_name||';';
        end loop;
        if Not b_found then
          v_tempa:='-- Table '||v_table_name||' not found';
        else
          v_tempa:=v_tempa||chr(10)||'select ''-- commit;'' from dual;';
        end if;
        return v_tempa;
      end;
      /

    Read the article

  • C++: Trouble with dependent types in templates

    - by Rosarch
    I'm having trouble with templates and dependent types: namespace Utils { void PrintLine(const string& line, int tabLevel = 0); string getTabs(int tabLevel); template<class result_t, class Predicate> set<result_t> findAll_if(typename set<result_t>::iterator begin, set<result_t>::iterator end, Predicate pred) // warning C4346 { set<result_t> result; return findAll_if_rec(begin, end, pred, result); } } namespace detail { template<class result_t, class Predicate> set<result_t> findAll_if_rec(set<result_t>::iterator begin, set<result_t>::iterator end, Predicate pred, set<result_t> result) { typename set<result_t>::iterator nextResultElem = find_if(begin, end, pred); if (nextResultElem == end) { return result; } result.add(*nextResultElem); return findAll_if_rec(++nextResultElem, end, pred, result); } } Compiler complaints, from the location noted above: warning C4346: 'std::set<result_t>::iterator' : dependent name is not a type. prefix with 'typename' to indicate a type error C2061: syntax error : identifier 'iterator' What am I doing wrong?

    Read the article

  • How do you create a generic method in a class?

    - by Seth Spearman
    Hello, I am really trying to follow the DRY principle. I have a sub that looks like this? Private Sub DoSupplyModel OutputLine("ITEM SUMMARIES") Dim ItemSumms As New SupplyModel.ItemSummaries(_currentSupplyModel, _excelRows) ItemSumms.FillRows() OutputLine("") OutputLine("NUMBERED INVENTORIES") Dim numInvs As New SupplyModel.NumberedInventories(_currentSupplyModel, _excelRows) numInvs.FillRows() OutputLine("") End Sub I would like to collapse these into a single method using generics. For the record, ItemSummaries and NumberedInventories are both derived from the same base class DataBuilderBase. I can't figure out the syntax that will allow me to do ItemSumms.FillRows and numInvs.FillRows in the method. FillRows is declared as Public Overridable Sub FillRows in the base class. Thanks in advance. EDIT Here is my end result Private Sub DoSupplyModels() DoSupplyModelType("ITEM SUMMARIES",New DataBlocks(_currentSupplyModel,_excelRows) DoSupplyModelType("DATA BLOCKS",New DataBlocks(_currentSupplyModel,_excelRows) End Sub Private Sub DoSupplyModelType(ByVal outputDescription As String, ByVal type As DataBuilderBase) OutputLine(outputDescription) type.FillRows() OutputLine("") End Sub But to answer my own question...I could have done this... Private Sub DoSupplyModels() DoSupplyModelType(Of Projections)("ITEM SUMMARIES") DoSupplyModelType(Of DataBlocks)("DATA BLOCKS") End Sub Private Sub DoSupplyModelType(Of T as DataBuilderBase)(ByVal outputDescription As String, ByVal type As T) OutputLine(outputDescription) dim type as New T(_currentSupplyModel,_excelRows) type.FillRows() OutputLine("") End Sub Is that right? Does the New T() work? Seth

    Read the article

  • Properly obsoleting old members in an XML Serializable class in C# VB .NET

    - by George
    Hi! Some time ago I defined a class that was serialized using XML. That class contained a serializable propertyA of integer type. Now I have extended and updated this class: a new propertyB was added, whose type is another class that also has several serializable properties. The new propertyB is now supposed to play the role of propertyA; that is, since the type of propertyB is another class, one of its members would contain the value that propertyA previously contained, thus making propertyA obsolete. What I am trying to figure out is how to make sure that when I deserialize the OLD version of this class (without propertyB in it), the deserializer takes the value of propertyA from the old class and sets it as the value of one of the members of propertyB in the new class.

      Private WithEvents _Position As Position = New Position(Alignment.MiddleMiddle, 0, True, 0, True)
      Public Property Position() As Position 'NEW composite property that holds the value of the obsoleted property, i.e. Alignment
        Get
          Return _Position
        End Get
        Set(ByVal value As Position)
          _Position = value
        End Set
      End Property

      Private _Alignment As Alignment = Alignment.MiddleMiddle
      <Xml.Serialization.XmlIgnore(), Obsolete("Use Position property instead.")> _
      Public Property Alignment() As Alignment 'The old, obsoleted property that I guess must be left for compliance with deserializing the old version of this class
        Get
          Return _Alignment
        End Get
        Set(ByVal value As Alignment)
          _Alignment = value
        End Set
      End Property

    Can you help me, please?

    Read the article

  • Delphi static method of a class returning property value

    - by mitko.berbatov
    I'm making a Delphi VCL application. There is a class TStudent where I have two static functions: one which returns last name from an array of TStudent and another one which returns the first name of the student. Their code is something like: class function TStudent.FirstNameOf(aLastName: string): string; var i : integer; begin for i := 0 to Length(studentsArray) - 1 do begin if studentsArray[i].LastName = aLastName then begin result := studentsArray[i].FirstName; Exit; end; end; result := 'no match was found'; end; class function TStudent.LastNameOf(aFirstName: string): string; var i : integer; begin for i := 0 to Length(studentsArray) - 1 do begin if studentsArray[i].FirstName = aFirstName then begin result := studentsArray[i].LastName; Exit; end; end; result := 'no match was found'; end; My question is how can I avoid writing almost same code twice. Is there any way to pass the property as parameter of the functions.

    Read the article

  • Comparing date range quarters sql server

    - by CR41G14
    I have a policies in a system PolRef Start End POL123 22/11/2012 23/12/2014 POL212 24/09/2012 23/10/2012 POL214 23/08/2012 29/09/2012 I am asking a user for a reporting date, the user enters 24/10/2012 this becomes @StartDate From this I derive what the quarter is by the month: set @currentMonth = Month(@StartDate) if @currentMonth = 1 or @currentMonth = 2 or @currentMonth = 3 begin set @startmonth = 1 set @endmonth = 3 end if @currentMonth = 4 or @currentMonth = 5 or @currentMonth = 6 begin set @startmonth = 4 set @endmonth = 6 end if @currentMonth = 7 or @currentMonth = 8 or @currentMonth = 9 begin set @startmonth = 7 set @endmonth = 9 end if @currentMonth = 10 or @currentMonth = 11 or @currentMonth = 12 begin set @startmonth = 10 set @endmonth = 12 end I then get a date range: @quarterStartDate = CAST(CAST(YEAR(@StartDate) AS varchar) + '-' + CAST(@startMonth AS varchar) + '-' + '01') AS Date) @quarterEndDate = CAST(CAST(YEAR(@EcdDate) AS varchar) + '-' + CAST(@endMonth AS varchar) + '-' + '31') AS Date) This will give me 01-10-2012 and 31-12-2012. Basically I need a script to only bring back the policies that are in this quarter. The policy doesn't have to span the entire quarter date range, just exist in the quarter date range. The results expected would be PolRef Start End POL123 22/11/2012 23/12/2014 POL212 24/09/2012 23/10/2012 Pol123 appears because it spans over the quarterly date range. Pol212 is there because it expires in that quarter date range. Pol214 does not appear because it neither spans, expires or starts in this quarter. Any help would be greatly appreciated

    Read the article

  • undefined method `model_name' for NilClass:Class - Rails application

    - by user1270259
    So I have seen other articles here on Stack about this, and a lot of the time people are not doing @post = Post.new. I read somewhere to use the plural...?? Anyway, I am getting this error in my discussion code:

    Discussion Controller

      class DiscussionsController < ApplicationController
        def index
          @discussion = Discussion.new
          @discussions = Discussion.all
        end

        def create
          @discussion = Discussion.create(params[:discussion])
          if @discussion.save
            redirect_to tasks_path, :flash => {:success => 'Created a new discussion'}
          else
            redirect_to tasks_path, :flash => {:error => 'Failed to create a discussion'}
          end
        end
      end

    Discussion Form

      <%= form_for @discussion do |f| %>
        <p><%= f.label :title %>
        <%= f.text_field :title %></p>
        <p><%= f.label :content %>
        <%= f.text_area :content %></p>
      <% end %>

    Discussion Routes

      resources :discussions do
        resources :comments
      end

    Now as far as I know I am doing this right, because I have a task form set up essentially the same way, but I have looked at my code for hours, have googled and tried other examples, and now I see this:

      undefined method `model_name' for NilClass:Class

      Extracted source (around line #1):

      1: <%= form_for @discussion do |f| %>
      2:
      3: <p><%= f.label :title %>
      4: <%= f.text_field :title %></p>

    Which should mean that I am missing something from my controller... is it as silly as a spelling mistake?
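
    The error means @discussion was nil at the moment form_for ran, so the form is most likely being rendered by an action that never assigns it, rather than by the DiscussionsController#index shown above; given that create redirects to tasks_path, the form is probably embedded on the tasks page. A small sketch of covering that case (TasksController here is an assumption based on that redirect):

      # Sketch only: every action that renders the view (or partial) containing the
      # form needs to build the object first, or form_for sees nil.
      class TasksController < ApplicationController
        def index
          @discussion = Discussion.new   # without this, form_for @discussion raises on nil
          @tasks = Task.all
        end
      end

    Alternatively, the view can be made independent of the instance variable by writing <%= form_for Discussion.new do |f| %>, at the cost of building the object inside the view.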

    Read the article

  • Rails Nested Attributes Don't Insert the ID Correctly

    - by MunkiPhD
    I'm attempting to edit a model's nested attributes, much as outline here, replicated here: <%= form_for @person do |person_form| %> <%= person_form.text_field :name %> <% for address in @person.addresses %> <%= person_form.fields_for address, :index => address do |address_form|%> <%= address_form.text_field :city %> <% end %> <% end %> <% end %> In my code, I have the following: <%= form_for(@meal) do |f| %> <!-- some other stuff that's irrelevant... --> <% for subitem in @meal.meal_line_items %> <%= f.fields_for subitem, :index => subitem do |line_item_form| %> <%= line_item_form.label :servings %><br/> <%= line_item_form.text_field :servings %><br/> <%= line_item_form.label :food_id %><br/> <%= line_item_form.text_field :food_id %><br/> <% end %> <% end %> <%= f.submit %> <% end %> This works great, except, when I look at the HTML, it's creating the inputs that look like the following, failing to input the correct id and instead placing the memory representation(?) of the model: <input type="text" value="2" size="30" name="meal[meal_line_item][#<MealLineItem:0x00000005c5d618>][servings]" id="meal_meal_line_item_#<MealLineItem:0x00000005c5d618>_servings">
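
    In the snippet above, :index => subitem hands the whole model object to the :index option, which is why the record's default string representation ends up inside the generated field names. A minimal sketch of letting fields_for drive the association instead, under the assumption that Meal declares accepts_nested_attributes_for :meal_line_items:

      <%= form_for(@meal) do |f| %>
        <%# Rails names these meal[meal_line_items_attributes][0][servings], and so on %>
        <%= f.fields_for :meal_line_items do |line_item_form| %>
          <%= line_item_form.label :servings %><br/>
          <%= line_item_form.text_field :servings %><br/>
          <%= line_item_form.label :food_id %><br/>
          <%= line_item_form.text_field :food_id %><br/>
        <% end %>
        <%= f.submit %>
      <% end %>

    If the explicit loop over @meal.meal_line_items is kept, passing :index => subitem.id (an integer) rather than the record itself should at least keep the object reference out of the names.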

    Read the article

  • How to get correct children ids using fields_for "parents[]", parent do |f| using f.fields_for :children, child ?

    - by Anatortoise House
    I'm editing multiple instances of a parent model in an index view in one form, as in Railscasts #198. Each parent has_many :children and accepts_nested_attributes_for :children, as in Railscasts #196 and #197 <%= form_tag %> <% for parent in @parents %> <%= fields_for "parents[]", parent do |f| <%= f.text_field :job %> <%= f.fields_for :children do |cf| %> <% cf.text_field :chore %> <% end %> <% end %> <% end %> <% end %> Given parent.id==1 f.text_field :job correctly generates <input id="parents_1_job" type="text" value="coding" size="30" name="parents[1][job]"> But cf.text_field :chore generates ids and names that don't have the parent index. id="parents_children_attributes_0_chore" name="parents[children_attributes][0][chore]" If I try passing the specific child object to f.fields_for like this: <% for child in parent.children %> <%= f.fields_for :children, child do |cf| %> <%= cf.text_field :chore %> <% end %> <% end %> I get the same. If I change the method from :children to "[]children" I get id="parents_1___children_chore" which gets the right parent_index but doesn't provide an array slot for the child index. "[]children[]" isn't right either: id="parents_1__children_3_chore" as I was expecting attributes_0_chore instead of 3_chore. Do I need to directly modify an attribute of the FormBuilder object, or subclass FormBuilder to make this work, or is there a syntax that fits this situation? Thanks for any thoughts.
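
    One workaround that may be worth trying, under the assumption that a nested fields_for simply appends to whatever object name the outer builder was given: put the parent index into the outer fields_for name yourself, so the children's fields inherit it. This is only a sketch, and the generated names should be checked in the rendered HTML:

      <% for parent in @parents %>
        <%= fields_for "parents[#{parent.id}]", parent do |f| %>
          <%= f.text_field :job %>
          <%= f.fields_for :children do |cf| %>
            <%= cf.text_field :chore %>
          <% end %>
        <% end %>
      <% end %>

    The intent is to end up with names like parents[1][children_attributes][0][chore], i.e. both a parent index and a child index, which is what the parents[] and []children variants in the question each only half achieve.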

    Read the article

  • JAVA : How to get the positions of all matches in a String?

    - by user692704
    I have a text document and a query (the query could be more than one word). I want to find the position of all occurrences of the query in the document. I thought of the documentText.indexOf(query) and using regular expression but I could not make it work. I end up with the following method: First, I have create a dataType called QueryOccurrence public class QueryOccurrence implements Serializable{ public QueryOccurrence(){} private int start; private int end; public QueryOccurrence(int nameStart,int nameEnd,String nameText){ start=nameStart; end=nameEnd; } public int getStart(){ return start; } public int getEnd(){ return end; } public void SetStart(int i){ start=i; } public void SetEnd(int i){ end=i; } } Then, I have used this datatype in the following method: public static List<QueryOccurrence>FindQueryPositions(String documentText, String query){ // Normalize do the following: lower case, trim, and remove punctuation String normalizedQuery = Normalize.Normalize(query); String normalizedDocument = Normalize.Normalize(documentText); String[] documentWords = normalizedDocument.split(" ");; String[] queryArray = normalizedQuery.split(" "); List<QueryOccurrence> foundQueries = new ArrayList(); QueryOccurrence foundQuery = new QueryOccurrence(); int index = 0; for (String word : documentWords) { if (word.equals(queryArray[0])){ foundQuery.SetStart(index); } if (word.equals(queryArray[queryArray.length-1])){ foundQuery.SetEnd(index); if((foundQuery.End()-foundQuery.Start())+1==queryArray.length){ //add the found query to the list foundQueries.add(foundQuery); //flush the foundQuery variable to use it again foundQuery= new QueryOccurrence(); } } index++; } return foundQueries; } This method return a list of all occurrence of the query in the document each one with its position. Could you suggest any easer and faster way to accomplish this task. Thanks

    Read the article

  • Why is my index view not working when I implement the datepicker in my Rails app?

    - by user3736107
    Hi, I am new to Rails and would really appreciate some help. I am using the jQuery datepicker; the show view works, however the index view doesn't: I get "undefined method `start_date' for nil:NilClass" in my index.html.erb file. Can you please let me know what is wrong with my index view and how I can fix it? The following are included in my app:

    meetups_controller.rb

      def show
        @meetup = Meetup.find(params[:id])
      end

      def index
        @meetups = Meetup.where('user_id = ?', current_user.id).order('created_at DESC')
      end

    show.html.erb

      <h3>Title: <%= @meetup.title %></h3>
      <p>Start date: <%= @meetup.start_date.strftime("%B %e, %Y") %></p>
      <p>Start time: <%= @meetup.start_time.strftime("%l:%M %P") %></p>
      <p>End date: <%= @meetup.end_date.strftime("%B %e, %Y") %></p>
      <p>End time: <%= @meetup.end_time.strftime("%l:%M %P") %></p>

    index.html.erb

      <% if @meetups.any? %>
        <% @meetups.each do |meetup| %>
          <h3><%= link_to meetup.title, meetup_path(meetup) %></h3>
          <p>Start date: <%= meetup.start_date.strftime("%B %e, %Y") %></p>
          <p>Start time: <%= meetup.start_time.strftime("%l:%M %P") %></p>
          <p>End date: <%= @meetup.end_date.strftime("%B %e, %Y") %></p>
          <p>End time: <%= @meetup.end_time.strftime("%l:%M %P") %></p>
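
    One concrete difference between the two views above: inside the @meetups.each loop the first fields use the block variable meetup, but the End date and End time lines (and, judging by the error message, possibly the Start lines in the actual file) still reference @meetup, which the index action never sets, so calling .start_date or .end_date on it raises the nil error. A sketch of the loop using only the block variable:

      <% if @meetups.any? %>
        <% @meetups.each do |meetup| %>
          <h3><%= link_to meetup.title, meetup_path(meetup) %></h3>
          <p>Start date: <%= meetup.start_date.strftime("%B %e, %Y") %></p>
          <p>Start time: <%= meetup.start_time.strftime("%l:%M %P") %></p>
          <p>End date: <%= meetup.end_date.strftime("%B %e, %Y") %></p>
          <p>End time: <%= meetup.end_time.strftime("%l:%M %P") %></p>
        <% end %>
      <% end %>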

    Read the article

  • How to split column values using a stored procedure

    - by user1444281
    I have two tables table 1 is SELECT * FROM dbo.TBL_WD_WEB_DECK WD_ID WD_TITLE ------------------ 1 2 and data in 2nd table is WS_ID WS_WEBPAGE_ID WS_SPONSORS_ID WS_STATUS ----------------------------------------------------- 1 1 1,2,3,4 Y I wrote the following stored procedure to insert the data into both the tables catching the identity of main table dbo.TBL_WD_WEB_DECK. WD_ID is related with WS_WEBPAGE_ID. I wrote the cursor in update action to split the WS_SPONSORS_ID column calling the split function in the cursor. But it is not working. The stored procedure is: ALTER procedure [dbo].[SP_Example_SPLIT] ( @action char(1), @wd_id int out, @wd_title varchar(50), @ws_webpage_id int, @ws_sponsors_id varchar(250), @ws_status char(1) ) as begin set nocount on IF (@action = 'A') BEGIN INSERT INTO dbo.TBL_WD_WEB_DECK(WD_TITLE)VALUES(@WD_TITLE) DECLARE @X INT SET @X = @@IDENTITY INSERT INTO dbo.TBL_WD_SPONSORS(WS_WEBPAGE_ID,WS_SPONSORS_ID,WS_STATUS) VALUES (@ws_webpage_id,@ws_sponsors_id,@ws_status) END ELSE IF (@action = 'U') BEGIN UPDATE dbo.TBL_WD_WEB_DECK SET WD_TITLE = @wd_title WHERE WD_ID = @WD_ID UPDATE dbo.TBL_WD_SPONSORS SET WS_STATUS = 'N' where WS_SPONSORS_ID not in (@ws_sponsors_id) and ws_webpage_id = @wd_id BEGIN /* Declaring Cursor to split the value in ws_sponsors_id column */ Declare @var int Declare splt cursor for /* used the split function and calling the parameter in that split function */ select * from iter_simple_intlist_to_tbl(@WS_SPONSORS_ID) OPEN splt FETCH NEXT FROM splt INTO @var WHILE (@@FETCH_STATUS = 0) begin if not exists(select * from dbo.TBL_WD_SPONSORS where WS_WEBPAGE_ID = @wd_id and WS_SPONSORS_ID = @var) begin insert into dbo.TBL_WD_SPONSORS (WS_WEBPAGE_ID,WS_SPONSORS_ID) values(@wd_id,@var) end end CLOSE SPONSOR DEALLOCATE SPONSOR END END END The result I want is if I insert the data in WD_ID and IN WS_SPONSORS_ID column the data in the WS_SPONSORS_ID column should split and I need to compare it with WD_ID. The result I need is: WD_ID WD_TITLE --------------------- 1 TEST 2 TEST1 3 WS_ID WS_WEBPAGE_ID WS_SPONSORS_ID WS_STATUS -------------------------------------------------------- 1 1 1 Y 2 1 2 N 3 1 3 Y If I pass the string in WS_SPONSORS_ID as 1,2,3 it has to split like the above. Can you help?

    Read the article

  • Getting Started With Sinatra

    - by Liam McLennan
    Sinatra is a Ruby DSL for building web applications. It is distinguished from its peers by its minimalism. Here is hello world in Sinatra:

      require 'rubygems'
      require 'sinatra'

      get '/hi' do
        "Hello World!"
      end

    A haml view is rendered by:

      get '/' do
        haml :name_of_your_view
      end

    Haml is also new to me. It is a Ruby-based view engine that uses significant white space to avoid having to close tags. A hello world web page in haml might look like:

      %html
        %head
          %title Hello World
        %body
          %div Hello World

    You see how the structure is communicated using indentation instead of opening and closing tags. It makes views more concise and easier to read. Based on my syntax highlighter for Gherkin I have started to build a Sinatra web application that publishes syntax-highlighted Gherkin feature files. I have found that there is a need to have features online so that customers can access them, and so that they can be linked to project management tools like Jira, Mingle, Trac, etc. The first thing I want my application to be able to do is display a list of the features that it knows about. This will happen when a user requests the root of the application. Here is my Sinatra handler:

      get '/' do
        feature_service = Finding::FeatureService.new(Finding::FeatureFileFinder.new, Finding::FeatureReader.new)
        @features = feature_service.features(settings.feature_path, settings.feature_extensions)
        haml :index
      end

    The handler and the view are in the same scope, so the @features variable will be available in the view. This is the same way that Rails passes data between actions and views. The view to render the result is:

      %h2 Features
      %ul
        - @features.each do |feature|
          %li
            %a{:href => "/feature/#{feature.name}"}= feature.name

    Clearly this is not a complete web page. I am using a layout to provide the basic html page structure. This view renders an <li> for each feature, with a link to /feature/#{feature.name}. Here is what the page looks like:

    When the user clicks on one of the links I want to display the contents of that feature file. The required handler is:

      get '/feature/:feature' do
        @feature_name = params[:feature]
        feature_service = Finding::FeatureService.new(Finding::FeatureFileFinder.new, Finding::FeatureReader.new)
        # TODO replace with feature_service.feature(name)
        @feature = feature_service.features(settings.feature_path, settings.feature_extensions).find do |feature|
          feature.name == @feature_name
        end
        haml :feature
      end

    and the view:

      %h2= @feature.name
      %pre{:class => "brush: gherkin"}= @feature.description
      %div= partial :_back_to_index
      %script{:type => "text/javascript", :src => "/scripts/shCore.js"}
      %script{:type => "text/javascript", :src => "/scripts/shBrushGherkin.js"}
      %script{:type => "text/javascript" } SyntaxHighlighter.all();

    Now when I click on the Search link I get a nicely formatted feature file. If you would like to see the full source it is available on bitbucket.
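
    The feature view above calls partial :_back_to_index, which is not a helper Sinatra provides out of the box, so the application presumably defines one. A minimal sketch of such a helper, assuming the partial is a Haml template in the views directory (the method name and options here are an assumption, not part of the original post):

      helpers do
        # Render a template without the surrounding layout so it can be embedded
        # inside another view, e.g. partial :_back_to_index.
        def partial(template, locals = {})
          haml template, :layout => false, :locals => locals
        end
      end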

    Read the article

  • Issue 15: Oracle Exadata Marketing Campaigns

    - by rituchhibber
         PARTNER FOCUS Oracle ExadataMarketing Campaign Steve McNickleVP Europe, cVidya Steve McNickle is VP Europe for cVidya, an innovative provider of revenue intelligence solutions for telecom, media and entertainment service providers including AT&T, BT, Deutsche Telecom and Vodafone. The company's product portfolio helps operators and service providers maximise margins, improve customer experience and optimise ecosystem relationships through revenue assurance, fraud and security management, sales performance management, pricing analytics, and inter-carrier services. cVidya has partnered with Oracle for more than a decade. RESOURCES -- Oracle PartnerNetwork (OPN) Oracle Exastack Program Oracle Exastack Optimized Oracle Exastack Labs and Enablement Resources Oracle Engineered Systems Oracle Communications cVidya SUBSCRIBE FEEDBACK PREVIOUS ISSUES Are you ready for Oracle OpenWorld this October? -- -- Please could you tell us a little about cVidya's partnering history with Oracle, and expand on your Oracle Exastack accreditations? "cVidya was established just over ten years ago and we've had a strong relationship with Oracle almost since the very beginning. Through our Revenue Intelligence work with some of the world's largest service providers we collect tremendous amounts of information, amounting to billions of records per day. We help our clients to collect, store and analyse that data to ensure that their end customers are getting the best levels of service, are billed correctly, and are happy that they are on the correct price plan. We have been an Oracle Gold level partner for seven years, and crucially just two months ago we were also accredited as Oracle Exastack Optimized for MoneyMap, our core Revenue Assurance solution. Very soon we also expect to be Oracle Exastack Optimized DRMap, our Data Retention solution." What unique capabilities and customer benefits does Oracle Exastack add to your applications? "Oracle Exastack enables us to deliver radical benefits to our customers. A typical mobile operator in the UK might handle between 500 million and two billion call data record details daily. Each transaction needs to be validated, billed correctly and fraud checked. Because of the enormous volumes involved, our clients demand scalable infrastructure that allows them to efficiently acquire, store and process all that data within controlled cost, space and environmental constraints. We have proved that the Oracle Exadata system can process data up to seven times faster and load it as much as 20 times faster than other standard best-of-breed server approaches. With the Oracle Exadata Database Machine they can reduce their datacentre equipment from say, the six or seven cabinets that they needed in the past, down to just one. This dramatic simplification delivers incredible value to the customer by cutting down enormously on all of their significant cost, space, energy, cooling and maintenance overheads." "The Oracle Exastack Program has given our clients the ability to switch their focus from reactive to proactive. Traditionally they may have spent 80 percent of their day processing, and just 20 percent enabling end customers to see advanced analytics, and avoiding issues before they occur. With our solutions and Oracle Exadata they can now switch that balance around entirely, resulting not only in reduced revenue leakage, but a far higher focus on proactive leakage prevention. How has the Oracle Exastack Program transformed your customer business? 
"We can already see the impact. Oracle solutions allow our delivery teams to achieve successful deployments, happy customers and self-satisfaction, and the power of Oracle's Exa solutions is easy to measure in terms of their transformational ability. We gained our first sale into a major European telco by demonstrating the major performance gains that would transform their business. Clients can measure the ease of organisational change, the early prevention of business issues, the reduction in manpower required to provide protection and coverage across all their products and services, plus of course end customer satisfaction. If customers know that that service is provided accurately and that their bills are calculated correctly, then over time this satisfaction can be attributed to revenue intelligence and the underlying systems which provide it. Combine this with the further integration we have with the other layers of the Oracle stack, including the telecommunications offerings such as NCC, OCDM and BRM, and the result is even greater customer value—not to mention the increased speed to market and the reduced project risk." What does the Oracle Exastack community bring to cVidya, both in terms of general benefits, and also tangible new opportunities and partnerships? "A great deal. We have participated in the Oracle Exastack community heavily over the past year, and have had lots of meetings with Oracle and our peers around the globe. It brings us into contact with like-minded, innovative partners, who like us are not happy to just stand still and want to take fresh technology to their customer base in order to gain enhanced value. We identified three new partnerships in each of two recent meetings, and hope these will open up new opportunities, not only in areas that exactly match where we operate today, but also in some new associative areas that will expand our reach into new business sectors. Notably, thanks to the Exastack community we were invited on stage at last year's Oracle OpenWorld conference. Appearing so publically with Oracle senior VP Judson Althoff elevated awareness and visibility of cVidya and has enabled us to participate in a number of other events with Oracle over the past eight months. We've been involved in speaking opportunities, forums and exhibitions, providing us with invaluable opportunities that we wouldn't otherwise have got close to." How has Exastack differentiated cVidya as an ISV, and helped you to evolve your business to the next level? "When we are selling to our core customer base of Tier 1 telecommunications providers, we know that they want more than just software. They want an enduring partnership that will last many years, they want innovation, and a forward thinking partner who knows how to guide them on where they need to be to meet market demand three, five or seven years down the line. Membership of respected global bodies, such as the Telemanagement Forum enables us to lead standard adherence in our area of business, giving us a lot of credibility, but Oracle is also involved in this forum with its own telecommunications portfolio, strengthening our position still further. When we approach CEOs, CTOs and CIOs at the very largest Tier 1 operators, not only can we easily show them that our technology is fantastic, we can also talk about our strong partnership with Oracle, and our joint embracing of today's standards and tomorrow's innovation." Where would you like cVidya to be in one year's time? 
"We want to get all of our relevant products Oracle Exastack Optimized. Our MoneyMap Revenue Assurance solution is already Exastack Optimised, our DRMAP Data Retention Solution should be Exastack Optimised within the next month, and our FraudView Fraud Management solution within the next two to three months. We'd then like to extend our Oracle accreditation out to include other members of the Oracle Engineered Systems family. We are moving into the 'Big Data' space, and so we're obviously very keen to work closely with Oracle to conduct pilots, map new technologies onto Oracle Big Data platforms, and embrace and measure the benefits of other Oracle systems, namely Oracle Exalogic Elastic Cloud, the Oracle Exalytics In-Memory Machine and the Oracle SPARC SuperCluster. We would also like to examine how the Oracle Database Appliance might benefit our Tier 2 service provider customers. Finally, we'd also like to continue working with the Oracle Communications Global Business Unit (CGBU), furthering our integration with Oracle billing products so that we are able to quickly deploy fraud solutions into Oracle's Engineered System stack, give operational benefits to our clients that are pre-integrated, more cost-effective, and can be rapidly deployed rapidly and producing benefits in three months, not nine months." Chris Baker ,Senior Vice President, Oracle Worldwide ISV-OEM-Java Sales Chris Baker is the Global Head of ISV/OEM Sales responsible for working with ISV/OEM partners to maximise Oracle's business through those partners, whilst maximising those partners' business to their end users. Chris works with partners, customers, innovators, investors and employees to develop innovative business solutions using Oracle products, services and skills. Firstly, could you please explain Oracle's current strategy for ISV partners, globally and in EMEA? "Oracle customers use independent software vendor (ISV) applications to run their businesses. They use them to generate revenue and to fulfil obligations to their own customers. Our strategy is very straight-forward. We want all of our ISV partners and OEMs to concentrate on the things that they do the best – building applications to meet the unique industry and functional requirements of their customer. We want to ensure that we deliver a best in class application platform so the ISV is free to concentrate their effort on their application functionality and user experience We invest over four billion dollars in research and development every year, and we want our ISVs to benefit from all of that investment in operating systems, virtualisation, databases, middleware, engineered systems, and other hardware. By doing this, we help them to reduce their costs, gain more consistency and agility for quicker implementations, and also rapidly differentiate themselves from other application vendors. It's all about simplification because we believe that around 25 to 30 percent of the development costs incurred by many ISVs are caused by customising infrastructure and have nothing to do with their applications. Our strategy is to enable our ISV partners to standardise their application platform using engineered architecture, so they can write once to the Oracle stack and deploy seamlessly in the cloud, on-premise, or in hybrid deployments. 
It's really important that architecture is the same in order to keep cost and time overheads at a minimum, so we provide standardisation and an environment that enables our ISVs to concentrate on the core business that makes them the most money and brings them success." How do you believe this strategy is helping the ISVs to work hand-in-hand with Oracle to ensure that end customers get the industry-leading solutions that they need? "We work with our ISVs not just to help them be successful, but also to help them market themselves. We have something called the 'Oracle Exastack Ready Program', which enables ISVs to publicise themselves as 'Ready' to run the core software platforms that run on Oracle's engineered systems including Exadata and Exalogic. So, for example, they can become 'Database Ready' which means that they use the latest version of Oracle Database and therefore can run their application without modification on Exadata or the Oracle Database Appliance. Alternatively, they can become WebLogic Ready, Oracle Linux Ready and Oracle Solaris Ready which means they run on the latest release and therefore can run their application, with no new porting work, on Oracle Exalogic. Those 'Ready' logos are important in helping ISVs advertise to their customers that they are using the latest technologies which have been fully tested. We now also have Exadata Ready and Exalogic Ready programmes which allow ISVs to promote the certification of their applications on these platforms. This highlights these partners to Oracle customers as having solutions that run fluently on the Oracle Exadata Database Machine, the Oracle Exalogic Elastic Cloud or one of our other engineered systems. This makes it easy for customers to identify solutions and provides ISVs with an avenue to connect with Oracle customers who are rapidly adopting engineered systems. We have also taken this programme to the next level in the shape of 'Oracle Exastack Optimized' for partners whose applications run best on the Oracle stack and have invested the time to fully optimise application performance. We ensure that Exastack Optimized partner status is promoted and supported by press releases, and we help our ISVs go to market and differentiate themselves through the use our technology and the standardisation it delivers. To date we have had several hundred organisations successfully work through our Exastack Optimized programme." How does Oracle's strategy of offering pre-integrated open platform software and hardware allow ISVs to bring their products to market more quickly? "One of the problems for many ISVs is that they have to think very carefully about the technology on which their solutions will be deployed, particularly in the cloud or hosted environments. They have to think hard about how they secure these environments, whether the concern is, for example, middleware, identity management, or securing personal data. If they don't use the technology that we build-in to our products to help them to fulfil these roles, they then have to build it themselves. This takes time, requires testing, and must be maintained. By taking advantage of our technology, partners will now know that they have a standard platform. They will know that they can confidently talk about implementation being the same every time they do it. Very large ISV applications could once take a year or two to be implemented at an on-premise environment. 
But it wasn't just the configuration of the application that took the time, it was actually the infrastructure - the different hardware configurations, operating systems and configurations of databases and middleware. Now we strongly believe that it's all about standardisation and repeatability. It's about making sure that our partners can do it once and are then able to roll it out many different times using standard componentry." What actions would you recommend for existing ISV partners that are looking to do more business with Oracle and its customer base, not only to maximise benefits, but also to maximise partner relationships? "My team, around the world and in the EMEA region, is available and ready to talk to any of our ISVs and to explore the possibilities together. We run programmes like 'Excite' and 'Insight' to help us to understand how we can help ISVs with architecture and widen their environments. But we also want to work with, and look at, new opportunities - for example, the Machine-to-Machine (M2M) market or 'The Internet of Things'. Over the next few years, many millions, indeed billions of devices will be collecting massive amounts of data and communicating it back to the central systems where ISVs will be running their applications. The only way that our partners will be able to provide a single vendor 'end-to-end' solution is to use Oracle integrated systems at the back end and Java on the 'smart' devices collecting the data – a complete solution from device to data centre. So there are huge opportunities to work closely with our ISVs, using Oracle's complete M2M platform, to provide the infrastructure that enables them to extract maximum value from the data collected. If any partners don't know where to start or who to contact, then they can contact me directly at [email protected] or indeed any of our teams across the EMEA region. We want to work with ISVs to help them to be as successful as they possibly can through simplification and speed to market, and we also want all of the top ISVs in the world based on Oracle." What opportunities are immediately opened to new ISV partners joining the OPN? "As you know OPN is very, very important. New members will discover a huge amount of content that instantly becomes accessible to them. They can access a wealth of no-cost training and enablement materials to build their expertise in Oracle technology. They can download Oracle software and use it for development projects. They can help themselves become more competent by becoming part of a true community and uncovering new opportunities by working with Oracle and their peers in the Oracle Partner Network. As well as publishing massive amounts of information on OPN, we also hold our global Oracle OpenWorld event, at which partners play a huge role. This takes place at the end of September and the beginning of October in San Francisco. Attending ISV partners have an unrivalled opportunity to contribute to elements such as the OpenWorld / OPN Exchange, at which they can talk to other partners and really begin thinking about how they can move their businesses on and play key roles in a very large ecosystem which revolves around technology and standardisation." Finally, are there any other messages that you would like to share with the Oracle ISV community? "The crucial message that I always like to reinforce is architecture, architecture and architecture! 
The key opportunities that ISVs have today revolve around standardising their architectures so that they can confidently think: “I will I be able to do exactly the same thing whenever a customer is looking to deploy on-premise, hosted or in the cloud”. The right architecture is critical to being competitive and to really start changing the game. We want to help our ISV partners to do just that; to establish standard architecture and to seize the opportunities it opens up for them. New market opportunities like M2M are enormous - just look at how many devices are all around you right now. We can help our partners to interface with these devices more effectively while thinking about their entire ecosystem, rather than just the piece that they have traditionally focused upon. With standardised architecture, we can help people dramatically improve their speed, reach, agility and delivery of enhanced customer satisfaction and value all the way from the Java side to their centralised systems. All Oracle ISV partners must take advantage of these opportunities, which is why Oracle will continue to invest in and support them." -- Gergely Strbik is Oracle Hardware and Software Product Manager for Avnet in Hungary. Avnet Technology Solutions is an OracleValue Added Distributor focused on the development of the existing Oracle channel. This includes the recruitment and enablement of Oracle partners as well as driving deeper adoption of Oracle's technology and application products within the IT channel. "The main business benefits of ODA for our customers and partners are scalability, flexibility, a great price point for the high performance delivered, and the easily configurable embedded Linux operating system. People welcome a lower point of entry and the ability to grow capacity on demand as their business expands." "Marketing and selling the ODA requires another way of thinking because it is an appliance. We have to transform the ways in which our partners and customers think from buying hardware and software independently to buying complete solutions. Successful early adopters and satisfied customer reactions will certainly help us to sell the ODA. We will have more experience with the product after the first deliveries and installations—end users need to see the power and benefits for themselves." "Our typical ODA customers will be those looking for complete solutions from a single reseller partner who is also able to manage the appliance. They will have enjoyed using Oracle Database but now want a new product that is able to unlock new levels of performance. A higher proportion of potential customers will come from our existing Oracle base, with around 30% from new business, but we intend to evangelise the ODA on the market to see how we can change this balance as all our customers adjust to the concept of 'Hardware and Software, Engineered to Work Together'. -- Back to the welcome page

    Read the article

  • Draw a never-ending line in XNA

    - by user2236165
    I am drawing a line in XNA which I want to never end. I also have a tool that moves forward in X-direction and a camera which is centered at this tool. However, when I reach the end of the viewport the lines are not drawn anymore. Here are some pictures to illustrate my problem: At the start the line goes across the whole screen, but as my tool moves forward, we reach the end of the line. Here are the method which draws the lines: private void DrawEvenlySpacedSprites (Texture2D texture, Vector2 point1, Vector2 point2, float increment) { var distance = Vector2.Distance (point1, point2); // the distance between two points var iterations = (int)(distance / increment); // how many sprites with be drawn var normalizedIncrement = 1.0f / iterations; // the Lerp method needs values between 0.0 and 1.0 var amount = 0.0f; if (iterations == 0) iterations = 1; for (int i = 0; i < iterations; i++) { var drawPoint = Vector2.Lerp (point1, point2, amount); spriteBatch.Draw (texture, drawPoint, Color.White); amount += normalizedIncrement; } } Here are the draw method in Game. The dots are my lines: protected override void Draw (GameTime gameTime) { graphics.GraphicsDevice.Clear(Color.Black); nyVector = nextVector (gammelVector); GraphicsDevice.SetRenderTarget (renderTarget); spriteBatch.Begin (); DrawEvenlySpacedSprites (dot, gammelVector, nyVector, 0.9F); spriteBatch.End (); GraphicsDevice.SetRenderTarget (null); spriteBatch.Begin (SpriteSortMode.Deferred, BlendState.AlphaBlend, null, null, null, null, camera.transform); spriteBatch.Draw (renderTarget, new Vector2 (), Color.White); spriteBatch.Draw (tool, new Vector2(toolPos.X - (tool.Width/2), toolPos.Y - (tool.Height/2)), Color.White); spriteBatch.End (); gammelVector = new Vector2 (nyVector.X, nyVector.Y); base.Draw (gameTime); } Here's the next vector-method, It just finds me a new point where the line should be drawn with a new X-coordinate between 100 and 200 pixels and a random Y-coordinate between the old vector Y-coordinate and the height of the viewport: Vector2 nextVector (Vector2 vector) { return new Vector2 (vector.X + r.Next(100, 200), r.Next ((int)(vector.Y - 100), viewport.Height)); } Can anyone point me in the right direction here? I'm guessing it has to do with the viewport.width, but I'm not quite sure how to solve it. Thank you for reading!

    Read the article

  • Robust line of sight test on the inside of a polygon with tolerance

    - by David Gouveia
    Foreword This is a followup to this question and the main problem I'm trying to solve. My current solution is an hack which involves inflating the polygon, and doing most calculations on the inflated polygon instead. My goal is to remove this step completely, and correctly solve the problem with calculations only. Problem Given a concave polygon and treating all of its edges as if they were walls in a level, determine whether two points A and B are in line of sight of each other, while accounting for some degree of floating point errors. I'm currently basing my solution on a series of line-segment interection tests. In other words: If any of the end points are outside the polygon, they are not in line of sight. If both end points are inside the polygon, and the line segment from A to B crosses any of the edges from the polygon, then they are not in line of sight. If both end points are inside the polygon, and the line segment from A to B does not cross any of the edges from the polygon, then they are in line of sight. But the problem is dealing correctly with all the edge cases. In particular, it must be able to deal with all the situations depicted below, where red lines are examples that should be rejected, and green lines are examples that should be accepted. I probably missed a few other situations, such as when the line segment from A to B is colinear with an edge, but one of the end points is outside the polygon. One point of particular interest is the difference between 1 and 9. In both cases, both end points are vertices of the polygon, and there are no edges being intersected, but 1 should be rejected while 9 should be accepted. How to distinguish these two? I could check some middle point within the segment to see if it falls inside or not, but it's easy to come up with situations in which it would fail. Point 7 was also pretty tricky and I had to to treat it as a special case, which checks if two points are adjacent vertices of the polygon directly. But there are also other chances of line segments being col linear with the edges of the polygon, and I'm still not entirely sure how I should handle those cases. Is there any well known solution to this problem?

    Read the article

  • Perl syntax error [closed]

    - by Linny
    I am a beginner taking a Perl programming course. We are trying to write a basic program for counting nucleotides in a DNA string. I'm getting syntax errors on the lines that have a single bracket on lines 28 & 70 and don't know why. It also reads that I have compilation errors. I have no idea where to start figuring that out. # The purpose of this program is to count the number of nucleotides in a strand. Each protein is counted separately # print "/n NOTE: Nucleotide counting /n"; # use strict; # enforce variable declarations use warnings; # enable compiler warnings # Display number of A,a,T,t,G,g,C,c, nucleotides in a word or sequence of letters. # my ($base) = ''; # an extracted letter from a string my ($nuceotide_count) = 0 ; # the current position within the word my ($position) = 0 ; # number of vowels in user-supplied word my ($word) = ''; # word to be processed my ($A_count) = 0 ; # of A nucleotides in the user-supplied sequence my ($a_count) = 0 ; # of A nucleotides in the user-supplied sequence my ($C_count) = 0 ; # of C nucleotides in the user-supplied sequence my ($c_count) = 0 ; # of C nucleotides in the user-supplied sequence my ($G_count) = 0 ; # of G nucleotides in the user-supplied sequence my ($g_count) = 0 ; # of G nucleotides in the user-supplied sequence my ($T_count) = 0 ; # of T nucleotides in the user-supplied sequence my ($t_count) = 0 ; # of T nucleotides in the user-supplied sequence word = (STDIN) for ($position = 0);($position if (($base eq 'a') or ($base eq 'A')) { ++$A_count; } # end if ++$position; if (($base eq 'T') or ($base eq 't')) { ++$T_count; } end if ++$position; if (($base eq 'G') or ($base eq 'g')) { ++$G_count; } # end if ++$position; if (($base eq 'C') or ($base eq 'c')) { ++$C_count; } # end if ++$position; } # end for # Display final results. # print " \n The number of A or a neucleotides is: $A_count"; print " \n The number of T or t neucleotides is: $T_count"; print " \n The number of G or g neucleotides is: $G_count"; print " \n The number of C or c neucleotides is: $C_count"; print " \n\n Program completed successfully. \n" ; exit ;

    Read the article

  • Power Distribution amongst connected nodes

    - by Perky
    In my game the map is represented by connected nodes, each node has a number of connected nodes. The nodes represent a system in which players can build structures and move units about. If you're familiar with Sins of a Solar Empire the game map is very similar. I want each node to be able to produce power and share it with all connected nodes. For example if A, B, C & D are all connected and produce 100 power units, then each system should have 400 power units available. If node B builds a structure that consumes 100 power units then A, B, C & D should then have 300 power units available. I've been working on this system all day and haven't been able to get it working quite the way I want. My current implementation is to first recurse through each nodes's connected node adding up the power, I keep a list of closed nodes so it doesn't loop, it's quite similar to A* actually. Pseudo code: All nodes start with the properties node.power = 0 node.basePower = 100 // could be different for each node. node.initialPower = node.basePower - function propagatePower( node, initialPower, closedNodes ) node.power += initialPower add( closedNodes, node ) connectedNodes = connected_nodes_except_from( closedNodes ) foreach node in connectedNodes do propagatePower( node, initialPower, closedNodes ) end end After this I iterate through all power consumers. foreach consumer in consumers do node = consumer.parentNode if node.power >= consumer.powerConsumption then consumer.powerConsumed += consumer.powerConsumption node.producedPower -= consumer.powerConsumption end end Then I adjust the initial power for the next propagation cycle. foreach node in nodes do node.initialPower = node.basePower - node.producedPower node.displayPower = node.power // for rendering the power. node.power = 0 end This seemed to work at first but then I came into a problem. Say two nodes A & B produce 100Pu each, it's shared so both A & B have 200Pu. I then make two structures that consume 80Pu each on A (160Pu). Then the nodes power is adjusted to basePower - producedPower (100-160 = -60). Nodes are propagated, both nodes now have 40Pu (A: -60 + B: 100 = 40). Which is correct because they started with 200Pu - 160Pu = 40Pu. However now node.power >= consumer.powerConsumption is false. Whats worse is it's false for any structure that uses more that 40Pu, so the whole system goes down. I could deduct from consumer.powerConsumption but what do I do if power is reduced elsewhere? I don't have the correct data to perform the necessary checks. It's late so I'm probably not thinking straight but I thought to ask on here to see if anyone has any other implementations, better or worse I'd be interested to know.

    Read the article

  • Start Time & Calculated Column Wonkiness in a SharePoint Event Calendar

    - by _zekeMouseOver
    I was creating some custom rollups on some of our event calendars and came across a very odd bug when trying to grab only the date component of the built-in Start Time field. One's first inclination will be to create a calculated column, give it the formula =[Start Time], and then assign its output type to be "Date Only." This works well until a user adds an All Day Event. For reasons unexplainable, the All Day Event flag causes your =[Start Time] to display the date minus one day.

    Here is an example of this in action: Start Date and Time, Duration, Start Date Value and Start Day are all calculated fields. Notice how the Start Date and Time (=[Start Time]) is reporting 6:00 PM of the previous day. The Start Date Value (=[Start Time] - Output Type: Number) confirms this (.75 = 6:00 PM). Curiously enough, the Duration (=[End Time]-[Start Time]) is properly reporting the duration between 12:00 AM and 11:59 PM. Why? I don't know. Perhaps it's somehow bound to the regional settings on the site, but I'm not interested in changing a global site setting for the sake of one calculated field.

    With this information at our disposal, our calculated column to display the date part of the start date needs to be modified to add one day to the [Start Time] field if an All Day Event is selected. To determine this, we use the Duration above to assume the item is an all-day event and change our formula to be:

    =IF(TEXT(([End Time]-[Start Time])-TRUNC(([End Time]-[Start Time]),0),"0.000000000")="0.999305556",[Start Time] + 1, [Start Time])

    This will work, but what happens when the user de-selects the "All Day Event" checkbox? The duration stays the same, but all other values begin reporting the correct time. Since our formula above is strictly based on an expected duration, it will add one to the correct date, causing the date 5/11/2010 to appear. Notice, though, that the raw value of the start time (in this case) is a non-fractional number (40,308), whereas the all-day event was being represented as 6:00 PM (.75) of the previous day. We can use this to add one more nested branch of logic to our calculation:

    =IF(TEXT(([End Time]-[Start Time])-TRUNC(([End Time]-[Start Time]),0),"0.000000000")="0.999305556",IF([Start Time]=ROUND([Start Time],0),[Start Time],[Start Time]+1),[Start Time])

    I feel somewhat... dirty about having to resort to this kind of calculation in what SHOULD have been a simple =[Start Time] to extract the date part of the Start Time field, but there you have it. Make sure to shower extra long after having used it.
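
    The nested IF is easier to follow when the date arithmetic is spelled out in ordinary code. Here is a rough Ruby sketch of the same branching logic; the serial-number representation and the 0.999305556 constant come from the post, while the method name and example values are invented for illustration.

    # SharePoint stores date/time values as serial numbers: the integer part is
    # the day and the fractional part is the time of day (0.75 = 6:00 PM).
    # An all-day event spans 23:59, i.e. a duration whose fractional part is 0.999305556.
    ALL_DAY_FRACTION = 0.999305556

    def display_start_date(start_serial, end_serial)
      duration = end_serial - start_serial
      fraction = duration - duration.truncate

      if (fraction - ALL_DAY_FRACTION).abs < 1e-9
        # Looks like an all-day event. If the start already falls on midnight
        # (a whole number), keep it; otherwise it was shifted back a day, so add one.
        start_serial == start_serial.round ? start_serial : start_serial + 1
      else
        start_serial
      end
    end

    puts display_start_date(40_308.75, 40_309.749305556)  # all-day event stored as 6 PM of the previous day => 40309.75
    puts display_start_date(40_308.0, 40_308.5)           # ordinary event, start value unchanged => 40308.0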

    Read the article

  • Issues with LVM partition size in Server 13.04

    - by Michael
    I am new to Ubuntu and a little confused about how hard drive partitions and LVM work. I remember setting up Ubuntu Server 13.04 and telling it to use 1TB of a 3TB server. Well, I have maxed that out with Blu-ray rips and want the rest of the drive for space. On log-in it says:

    System load: 2.24
    Processes: 179
    Usage of /: 88.7% of 912.89GB
    Users logged in: 0
    Memory usage: 6%
    IP address for p5p1: 192.168.0.100
    Swap usage: 0%

    => / is using 88.7% of 912.89GB

    lvdisplay outputs:

    --- Logical volume ---
    LV Path                /dev/DeathStar-vg/root
    LV Name                root
    VG Name                DeathStar-vg
    LV Write Access        read/write
    LV Creation host, time DeathStar, 2013-05-18 22:21:11 -0400
    LV Status              available
    # open                 1
    LV Size                2.70 TiB
    Current LE             707789
    Segments               2
    Allocation             inherit
    Read ahead sectors     auto
    - currently set to     256
    Block device           252:0

    --- Logical volume ---
    LV Path                /dev/DeathStar-vg/swap_1
    LV Name                swap_1
    VG Name                DeathStar-vg
    LV Write Access        read/write
    LV Creation host, time DeathStar, 2013-05-18 22:21:11 -0400
    LV Status              available
    # open                 2
    LV Size                3.75 GiB
    Current LE             959
    Segments               1
    Allocation             inherit
    Read ahead sectors     auto
    - currently set to     256
    Block device           252:1

    vgdisplay outputs:

    VG Name               DeathStar-vg
    System ID
    Format                lvm2
    Metadata Areas        1
    Metadata Sequence No  4
    VG Access             read/write
    VG Status             resizable
    MAX LV                0
    Cur LV                2
    Open LV               2
    Max PV                0
    Cur PV                1
    Act PV                1
    VG Size               2.73 TiB
    PE Size               4.00 MiB
    Total PE              715335
    Alloc PE / Size       708748 / 2.70 TiB
    Free  PE / Size       6587 / 25.73 GiB

    df outputs:

    Filesystem                      1K-blocks       Used  Available  Use%  Mounted on
    /dev/mapper/DeathStar--vg-root  957238932  848972636   59634696   94%  /
    none                                    4          0          4    0%  /sys/fs/cgroup
    udev                              1864716          4    1864712    1%  /dev
    tmpfs                              374968       1060     373908    1%  /run
    none                                 5120          4       5116    1%  /run/lock
    none                              1874824        148    1874676    1%  /run/shm
    none                               102400         24     102376    1%  /run/user
    /dev/sda2                          234153      56477     165184   26%  /boot

    And fdisk /dev/sda -l outputs:

    Disk /dev/sda: 3000.6 GB, 3000592982016 bytes
    255 heads, 63 sectors/track, 364801 cylinders, total 5860533168 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 4096 bytes
    I/O size (minimum/optimal): 4096 bytes / 4096 bytes
    Disk identifier: 0x00000000

    Device Boot  Start         End          Blocks       Id  System
    /dev/sda1    1             4294967295   2147483647+  ee  GPT

    Partition 1 does not start on physical sector boundary.

    I just don't know what to make of all this and am not sure how I can make it use all 2.73 TB. Thanks in advance for any help.

    EDIT: Yes, I did make changes to the LVM config, but it didn't do anything. As requested, the output of parted -l /dev/sda:

    Model: ATA WDC WD30EFRX-68A (scsi)
    Disk /dev/sda: 3001GB
    Sector size (logical/physical): 512B/4096B
    Partition Table: gpt

    Number  Start   End     Size    File system  Name  Flags
     1      1049kB  2097kB  1049kB                     bios_grub
     2      2097kB  258MB   256MB   ext2
     3      258MB   3001GB  3000GB                     lvm

    Model: ATA WDC WD30EFRX-68A (scsi)
    Disk /dev/sdb: 3001GB
    Sector size (logical/physical): 512B/4096B
    Partition Table: msdos

    Number  Start  End  Size  Type  File system  Flags

    Model: Linux device-mapper (linear) (dm)
    Disk /dev/mapper/DeathStar--vg-swap_1: 4022MB
    Sector size (logical/physical): 512B/4096B
    Partition Table: loop

    Number  Start  End     Size    File system     Flags
     1      0.00B  4022MB  4022MB  linux-swap(v1)

    Model: Linux device-mapper (linear) (dm)
    Disk /dev/mapper/DeathStar--vg-root: 2969GB
    Sector size (logical/physical): 512B/4096B
    Partition Table: loop

    Number  Start  End     Size    File system  Flags
     1      0.00B  2969GB  2969GB  ext4

    Read the article

  • Plans for Java 7 and E-Business Suite Certification

    - by Steven Chan (Oracle Development)
    As of June 2012, Java 7 has not been certified yet with Oracle E-Business Suite. EBS customers should continue to run JRE 6 on their Windows end-user desktops, and JDK 6 on their EBS servers. If a search engine has brought you to this article, please check the Certifications summary for our latest certified Java release.

    Our plans for certifying Java 7 for the E-Business Suite

    We plan on releasing the Java 7 certification for E-Business Suite customers in two phases:
    Phase 1: Certify JRE 7 for Windows end-user desktops
    Phase 2: Certify JDK 7 for server-based components

    When will Java 7 be certified with EBS?

    We're working on the first phase now. As usual, I cannot discuss release dates here, but you can monitor or subscribe to this blog for updates.

    Current known issues with JRE 7 in EBS environments

    Our current testing shows that there are known incompatibilities between JRE 7 and the Forms-invocation process in EBS environments. We have been working directly with the Java division on this for a while now. In the meantime, EBS customers should not deploy JRE 7 to their end-user Windows desktop clients. You should stick with JRE 1.6 for now.

    But wait, you previously said...

    Older JRE certification announcements stated:

    Our standard policy is that all E-Business Suite customers can apply all JRE updates to end-user desktops from JRE 1.6.0_03 and higher. We test all new JRE releases in parallel with the JRE development process, so all JRE releases are considered certified with the E-Business Suite on the same day that they're released by our Java team. You do not need to wait for a certification announcement before applying new JRE releases to your EBS users' desktops.

    Yes, this is true. This standard boilerplate text was written before JRE 7 was released, so there was no possibility of misunderstanding. With the availability of JRE 7, that boilerplate needs to be revised to read:

    Our standard policy is that all E-Business Suite customers can apply all JRE updates to end-user desktops from JRE 1.6.0_03 and later updates on the 1.6 codeline. We test all new JRE 1.6 releases in parallel with the JRE development process, so all new JRE 1.6 releases are considered certified with the E-Business Suite on the same day that they're released by our Java team. You do not need to wait for a certification announcement before applying new JRE 1.6 releases to your EBS users' desktops.

    References
    Recommended Browsers for Oracle Applications 11i (Metalink Note 285218.1)
    Upgrading Sun JRE (Native Plug-in) with Oracle Applications 11i for Windows Clients (Metalink Note 290807.1)
    Recommended Browsers for Oracle Applications 12 (MetaLink Note 389422.1)
    Upgrading JRE Plugin with Oracle Applications R12 (MetaLink Note 393931.1)

    Related Articles
    Mismanaged Session Cookie Issue Fixed for EBS in JRE 1.6.0_23
    Roundup: Oracle JInitiator 1.3 Desupported for EBS Customers in July 2009

    Read the article

  • Inherit one instance variable from the global scope

    - by Julian
    I'm using Curses to create a command line GUI with Ruby. Everything's going well, but I have hit a slight snag. I don't think Curses knowledge (esoteric to be fair) is required to answer this question, just Ruby concepts such as objects and inheritance. I'm going to explain my problem now, but if I'm banging on, just look at the example below.

    Basically, every Window instance needs to have .close called on it in order to close it. Some Window instances have other Windows associated with them. When closing a Window instance, I want to be able to close all of the other Window instances associated with it at the same time. Because associated Windows are generated in a logical fashion (I append the name with a number: instance_variable_set(self + integer, Window.new(10,10,10,10)) ), it's easy to target generated windows, because methods can anticipate what associated windows will be called (I can recreate the instance variable name from scratch and almost query it: instance_variable_get(self + integer) ).

    I have a delete method that handles this. If the delete method is just a normal, global method (called like this: delete_window(@win543) ), then everything works perfectly. However, if the delete method is an instance method, which it needs to be in order to use the self keyword, it doesn't work for a very clear reason: it can 'query' the correct instance variable perfectly well (instance_variable_get(self + integer)), but because it's an instance method, the global instances aren't scoped to it!

    Now, one way around this would obviously be to simply make a global method like this: delete_window(@win543). But I have attributes associated with my window instances, and it all works very elegantly. This is very simplified, but it literally translates the problem exactly:

    class Dog
      def speak
        woof
      end
    end

    def woof
      if @dog_generic == nil
        puts "@dog_generic isn't scoped when .woof is called from a class method!\n"
      else
        puts "@dog_generic is scoped when .woof is called from the global scope. See:\n" + @dog_generic
      end
    end

    @dog_generic = "Woof!"
    lassie = Dog.new

    lassie.speak #=> @dog_generic isn't scoped when .woof is called from an instance method!\n
    woof         #=> @dog_generic is scoped when .woof is called from the global scope. See:\nWoof!

    TL/DR: I need lassie.speak to return this string: "@dog_generic is scoped when .woof is called from the global scope. See:\nWoof!" @dog_generic must remain an instance variable. The use of Globals or Constants is not acceptable. Could woof inherit from the global scope? Maybe some sort of keyword:

    def woof < global
      # This 'code' is just to conceptualise what I want to do, don't take offence!
    end

    Is there some way the .woof method could 'pull in' @dog_generic from the global scope? Will @dog_generic have to be passed in as a parameter?
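
    For what it's worth, here is a minimal sketch of the parameter-passing idea the question ends on. It is only one possible shape, mirroring the toy Dog example rather than the actual Window/Curses code: the top-level value is handed to the method explicitly, so no special scoping keyword is needed.

    # Minimal sketch: pass the top-level value in rather than trying to
    # reach the global scope's instance variable from inside the class.
    class Dog
      def speak(sound)
        woof(sound)
      end
    end

    # Top-level method; it only sees what is passed to it.
    def woof(sound)
      if sound.nil?
        puts "Nothing was passed in, so there is nothing to bark."
      else
        puts "woof received: " + sound
      end
    end

    @dog_generic = "Woof!"
    lassie = Dog.new
    lassie.speak(@dog_generic)  #=> woof received: Woof!
    woof(@dog_generic)          #=> woof received: Woof!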

    Read the article

< Previous Page | 82 83 84 85 86 87 88 89 90 91 92 93  | Next Page >