Search Results

Search found 3646 results on 146 pages for 'osb validate xquery xpath'.


  • Play! Framework - Can my view template be localised when rendering it as an AsyncResult?

    - by avik
    I've recently started using the Play! framework (v2.0.4) for writing a Java web application. In the majority of my controllers I'm following the paradigm of suspending the HTTP request until the promise of a web service response has been fulfilled. Once the promise has been fulfilled, I return an AsyncResult. This is what most of my actions look like (with a bunch of code omitted): public static Result myActionMethod() { Promise<MyWSResponse> wsResponse; // Perform a web service call that will return the promise of a MyWSResponse... return async(wsResponse.map(new Function<MyWSResponse, Result>() { @Override public Result apply(MyWSResponse response) { // Validate response... return ok(myScalaViewTemplate.render(response.data())); } })); } I'm now trying to internationalise my app, but hit the following error when I try to render a template from an async method: [error] play - Waiting for a promise, but got an error: There is no HTTP Context available from here. java.lang.RuntimeException: There is no HTTP Context available from here. at play.mvc.Http$Context.current(Http.java:27) ~[play_2.9.1.jar:2.0.4] at play.mvc.Http$Context$Implicit.lang(Http.java:124) ~[play_2.9.1.jar:2.0.4] at play.i18n.Messages.get(Messages.java:38) ~[play_2.9.1.jar:2.0.4] at views.html.myScalaViewTemplate$.apply(myScalaViewTemplate.template.scala:40) ~[classes/:na] at views.html.myScalaViewTemplate$.render(myScalaViewTemplate.template.scala:87) ~[classes/:na] at views.html.myScalaViewTemplate.render(myScalaViewTemplate.template.scala) ~[classes/:na] In short, where I've got a message bundle lookup in my view template, some Play! code is attempting to access the original HTTP request and retrieve the accept-languages header, in order to know which message bundle to use. But it seems that the HTTP request is inaccessible from the async method. I can see a couple of (unsatisfactory) ways to work around this: Go back to the 'one thread per request' paradigm and have threads block waiting for responses. Figure out which language to use at Controller level, and feed that choice into my template. I also suspect this might not be an issue on trunk. I know that there is a similar issue in 2.0.4 with regards to not being able to access or modify the Session object which has recently been fixed. However I'm stuck on 2.0.4 for the time being, so is there a better way that I can resolve this problem?
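    A hedged workaround sketch for 2.0.4 (not an official fix): capture the Http.Context on the request thread before going async, then restore it inside the map callback so the template's Messages lookup can find the request again. The web-service call is elided exactly as in the question.

        public static Result myActionMethod() {
            final Http.Context ctx = Http.Context.current(); // captured on the request thread
            Promise<MyWSResponse> wsResponse;
            // Perform a web service call that will return the promise of a MyWSResponse...
            return async(wsResponse.map(new Function<MyWSResponse, Result>() {
                @Override
                public Result apply(MyWSResponse response) {
                    Http.Context.current.set(ctx); // make the captured context current on this thread
                    // Validate response...
                    return ok(myScalaViewTemplate.render(response.data()));
                }
            }));
        }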

    Read the article

  • VBScript cannot select XML nodes

    - by urbanMethod
    I am trying to Select nodes from some webservice response XML to no avail. For some reason I am able to select the root node ("xmldata") however, when I try to drill deeper("xmldata/customers") everything is returned empty! Below is the a sample of the XML that is returned by the webservice. <xmldata> <customers> <customerid>22506</customerid> <firstname>Jim</firstname> <issuperadmin>N</issuperadmin> <lastname>Jones</lastname> </customers> </xmldata> and here is the code I am trying to select customerid, firstname, and lastname; ' Send the Xml oXMLHttp.send Xml_to_Send ' Validate the Xml dim xmlDoc set xmlDoc = Server.CreateObject("Msxml2.DOMDocument") xmlDoc.load (oXMLHttp.ResponseXML.text) if(len(xmlDoc.text) = 0) then Xml_Returned = "<B>ERROR in Response xml:<BR>ERROR DETAILS:</B><BR><HR><BR>" end if dim nodeList Set nodeList = xmlDoc.SelectNodes("xmldata/customers") For Each itemAttrib In nodeList dim custID, custLname, custFname custID =itemAttrib.selectSingleNode("customerid").text custLname =itemAttrib.selectSingleNode("lastname").text custFname =itemAttrib.selectSingleNode("firstname").text response.write("News Subject: " & custID) response.write("<br />News Subject: " & custLname) response.write("<br />News Date: " & custFname) Next The result of the code above is zilch! nothing is written to the page. One strange thing is if I select the root element and get its length as follows; Set nodeList = xmlDoc.SelectNodes("xmldata") Response.Write(nodeList.length) '1 is written to page It correctly determines the length of 1. However when I try the same with the next node down as follows; Set nodeList2 = xmlDoc.SelectNodes("xmldata/customers") Response.Write(nodeList.length) '0 is written to page It returns a length of 0. WHY! Please note that this isn't the only way I have attempted to access the values of these nodes. I just can not work out what I am doing wrong. Could someone please help me out. Cheers.
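    One thing worth checking (a hedged sketch, not a confirmed diagnosis): DOMDocument.load() expects a URL or file path, so passing the response text to it typically leaves an empty document; loadXML() plus a parseError check makes any failure visible. Note also that the second length test above prints nodeList.length rather than nodeList2.length.

        ' Parse the returned XML string instead of treating it as a URL
        dim xmlDoc
        set xmlDoc = Server.CreateObject("Msxml2.DOMDocument")
        xmlDoc.async = false
        if not xmlDoc.loadXML(oXMLHttp.responseText) then
            Response.Write("Parse error: " & xmlDoc.parseError.reason)
        end if

        dim nodeList
        set nodeList = xmlDoc.SelectNodes("/xmldata/customers")
        Response.Write(nodeList.length)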

    Read the article

  • JavaScript multidimensional array?

    - by megapool020
    Hi there, I hope I can make myself clear in English and in what I want to create. I first start with what I want. I want to make a IBANcalculator that can generate 1-n IBANnumbers and also validate a given IBANnumber. IBANnumbers are used in many countries for payment and the tool I want to make can be used to generate the numbers for testing purpose. On wikipedia (Dutch site) I found a list with countries and their way of defining the IBANnumber. What I want to do it to make a kind of an array that holds all the countries with their name, the code, there IBANlength, the bankingformat and the account format. The array needs to be used to: Generate a select list (for selecting a country) used to check part for generating numbers used to check part for validating number I don't know if an array is the best way, but that's so far the most knowledge I have. I allready made a table like this that holds the info (this table isn't used anyware, but it was a good way for me to show you what I have in mind about how the structure is): <table> <tr> <td>countryname</td> <td>country code</td> <td>valid IBAN length</td> <td>Bank/Branch Code (check1, bank, branch)</td> <td>Account Number (check2, number, check3)</td> <tr> <tr> <td>Andorra</td> <td>AD</td> <td>24</td> <td>0 4n 4n</td> <td>0 12 0 </td> <tr> <tr> <td>België</td> <td>BE</td> <td>16</td> <td>0 3n 0 </td> <td>0 7n 2n</td> <tr> <tr> <td>Bosnië-Herzegovina</td> <td>BA</td> <td>20</td> <td>0 3n 3n</td> <td>0 8n 2n</td> <tr> </table> and many more ;-) I hope someone can help me with this. Tnx in advance
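    A plain object keyed by country code is usually enough for this (a hedged sketch; the field names and the three sample entries are illustrative, taken from the table above, and would need to be completed from the Wikipedia list):

        // One entry per country, keyed by ISO country code.
        var ibanFormats = {
            AD: { name: "Andorra",            length: 24, bank: "4n", branch: "4n", account: "12"   },
            BE: { name: "België",             length: 16, bank: "3n", branch: "",   account: "7n2n" },
            BA: { name: "Bosnië-Herzegovina", length: 20, bank: "3n", branch: "3n", account: "8n2n" }
        };

        // Populate a <select id="country"> from the same structure.
        var select = document.getElementById("country");
        for (var code in ibanFormats) {
            if (ibanFormats.hasOwnProperty(code)) {
                var option = document.createElement("option");
                option.value = code;
                option.text = ibanFormats[code].name;
                select.appendChild(option);
            }
        }

        // The same entry can then drive generation and validation, e.g. a length check:
        function hasValidLength(iban) {
            var entry = ibanFormats[iban.substring(0, 2).toUpperCase()];
            return !!entry && iban.replace(/\s+/g, "").length === entry.length;
        }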

    Read the article

  • Validation functions in Google Apps Script are not working properly

    - by chocka
    I create a i/p form in google site using Apps Script and i did the validation using the Apps Script coding. Validation functions available in Apps script is not satisfying all the possibility of checking the error. function validate(e) { var app = UiApp.getActiveApplication(); var flag=0; var text = app.getElementById('name'); var textrequired = app.getElementById('namerequired'); var number = app.getElementById('number'); var numberrequired = app.getElementById('numberrequired'); var email = app.getElementById('email'); var emailrequired = app.getElementById('emailrequired'); var submit = app.getElementById('submit_button'); var valid = app.createClientHandler() .validateNumber(number) .validateNotInteger(text) .validateEmail(email) .forTargets(submit).setEnabled(true) .forTargets(number,text,email).setStyleAttribute("color","black") .forTargets(numberrequired,textrequired,emailrequired).setText('*').setStyleAttribute("color", "red").setVisible(true); var invalidno = app.createClientHandler().validateNotNumber(number).validateMatches(number, '').forTargets(number).setStyleAttribute("color", "red").forTargets(submit).setEnabled(false).forTargets(numberrequired).setText('Please Enter a Valid No.').setStyleAttribute("color", "red").setVisible(true); var validno = app.createClientHandler().validateNumber(number).forTargets(number).setStyleAttribute("color","black").forTargets(numberrequired).setText('*').setStyleAttribute("color", "red").setVisible(true); var invalidText=app.createClientHandler().validateNumber(text).validateMatches(text, '').forTargets(text).setStyleAttribute("color", "red").forTargets(submit).setEnabled(false).forTargets(textrequired).setText('Please Enter a Valid Name.').setStyleAttribute("color", "red").setVisible(true); var validText=app.createClientHandler().validateNotNumber(text).forTargets(text).setStyleAttribute("color","black").forTargets(textrequired).setText('*').setStyleAttribute("color", "red").setVisible(true); var invalidemail=app.createClientHandler().validateNotEmail(email).validateMatches(email, '').forTargets(email).setStyleAttribute("color", "red").forTargets(submit).setEnabled(false).forTargets(emailrequired).setText('Please Enter a Valid Mail-Id.').setStyleAttribute("color", "red").setVisible(true); var validemail=app.createClientHandler().validateEmail(email).forTargets(email).setStyleAttribute("color","black").forTargets(emailrequired).setText('*').setStyleAttribute("color", "red").setVisible(true); number.addKeyPressHandler(invalidno).addKeyPressHandler(validno).addKeyPressHandler(valid).addKeyPressHandler(invalidText).addKeyPressHandler(invalidemail); text.addKeyPressHandler(invalidText).addKeyPressHandler(validText).addKeyPressHandler(valid).addKeyPressHandler(invalidno).addKeyPressHandler(invalidemail); email.addKeyPressHandler(invalidemail).addKeyPressHandler(validemail).addKeyPressHandler(valid).addKeyPressHandler(invalidno).addKeyPressHandler(invalidText); if (text == ''){flag = 1;} if (email == ''){flag = 1;} if (number == ''){flag = 1;} if(flag == 1){submit.setEnabled(false);} return app; } I just placed my Validation function using Apps Script. I don't know why its not satisfying all the possibilities of the validation. And also i have to do is to enable the submit button after all the fields satisfy the validation. After once it enabled, if i make any error in any field it will not get disable correctly. I wrote the coding correctly i think so. 
Please take a look at my validation function and give me some suggestions to make it work. Please guide me. Thanks & Regards, chocka.
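    A hedged sketch of one way to make the final enable/disable decision reliable: keep the client handlers for instant feedback, but run one server-side check when Submit is clicked. It assumes the three inputs were created with .setName('name' / 'number' / 'email') and added to a panel passed to addCallbackElement; those names and the 'formPanel' id are assumptions, not part of the original code.

        // Where the form is built (e.g. in doGet):
        //   var submitHandler = app.createServerHandler('onSubmit')
        //                          .addCallbackElement(app.getElementById('formPanel'));
        //   app.getElementById('submit_button').addClickHandler(submitHandler);

        function onSubmit(e) {
          var app = UiApp.getActiveApplication();
          var name   = e.parameter.name   || '';
          var number = e.parameter.number || '';
          var email  = e.parameter.email  || '';

          var ok = /^[A-Za-z ]+$/.test(name) &&
                   number !== '' && !isNaN(Number(number)) &&
                   /^[^@\s]+@[^@\s]+\.[^@\s]+$/.test(email);

          app.getElementById('submit_button').setEnabled(ok);
          return app;
        }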

    Read the article

  • Boxy Submit Form

    - by jornbjorndalen
    I am using the boxy jQuery plugin in my page to display a form on a clickEvent for the fullCalendar plugin. It is working all right , the only problem I have is that the form in boxy brings up the confirmation dialog the first time the dialog is opened and when the user clicks "Ok" it submits the form a second time which generates 2 events on my calendar and 2 entries in my database. My code looks like this inside fullCalendar: dayClick: function(date, allDay, jsEvent, view) { var day=""+date.getDate(); if(day.length==1){ day="0"+day; } var year=date.getFullYear(); var month=date.getMonth()+1; var month=""+month; if(month.length==1){ month="0"+month; } var defaultdate=""+year+"-"+month+"-"+day+" 00:00:00"; var ele = document.getElementById("myform"); new Boxy(ele,{title: "Add Task", modal: true}); document.getElementById("title").value=""; document.getElementById("description").value=""; document.getElementById("startdate").value=""+defaultdate; document.getElementById("enddate").value=""+defaultdate; } I also use validators on the forms fields: $.validator.addMethod( "datetime", function(value, element) { // put your own logic here, this is just a (crappy) example return value.match(/^([0-9]{4})-([0-1][0-9])-([0-3][0-9])\s([0-1][0-9]|[2][0-3]):([0-5][0-9]):([0-5][0-9])$/); }, "Please enter a date in the format YYYY-mm-dd hh:MM:ss" ); var validator=$("#myform").validate({ onsubmit:true, rules: { title: { required: true }, startdate: { required: true, datetime: true }, enddate: { required: true, datetime: true } }, submitHandler: function(form) { //this function renders a new event and makes a call to a php script that inserts it into the db addTask(form); } }); And the form looks like this: <form id ='myform'> <table border='1' width='100%'> <tr><td align='right'>Title:</td><td align='left'><input id='title' name='title' size='30'/></td></tr> <tr><td align='right'>Description:</td><td align='left'><textarea id='description' name='description' rows='4' cols='30'></textarea></td></tr> <tr><td align='right'>Start Date:</td><td align='left'><input id='startdate' name='startdate' size='30'/></td></tr> <tr><td align='right'>End Date:</td><td align='left'><input id='enddate' name='enddate' size='30' /></td></tr> <tr><td colspan='2' align='right'><input type='submit' value='Add' /></td></tr> </table> </form>
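    One common cause of doubled submits with dialog plugins is constructing the dialog, and thereby re-inserting the form into the page, on every click. A hedged sketch that creates the Boxy instance once and reuses it (the show option is assumed to be supported by the Boxy build in use):

        var taskDialog = null;

        // inside the fullCalendar options:
        dayClick: function(date, allDay, jsEvent, view) {
            // ... build defaultdate exactly as before ...
            if (taskDialog === null) {
                taskDialog = new Boxy($("#myform"), { title: "Add Task", modal: true, show: false });
            }
            $("#title").val("");
            $("#description").val("");
            $("#startdate").val(defaultdate);
            $("#enddate").val(defaultdate);
            taskDialog.show();
        }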

    Read the article

  • Using PHP5's SOAP client to send data to an ASP.NET-based SOAP server.

    - by user325143
    I am trying to write a snippet of PHP to connect to a third party's API via SOAP to enter some data into their database. The API requires me to pass several mandatory fields for every call (username, password, companyid, entitytype) in addition to the mandatory data fields. It also requires me to call the "ValidateEntity" funciton before calling the "CreateEntity" function. Documentation can be found here: http://wiki.agemni.com/Getting_Started/APIs/Database_API I have never worked with SOAP before, so I am very new to this. Here is what I have so far: error_reporting(E_ALL); ini_set('display_errors', '1'); $client = new SoapClient("http://agemni.com/AgemniWebservices/service1.asmx?WSDL", array('trace'=> true)); $options = array( 'username' => "myuser", 'password' => "mypassword", 'companyid' => myID, 'entitytype' => 2 ); $params = array( 'fname' => "John", 'lname' => "Doe", 'phone' => "859-333-3333", 'zip' => "40332", 'area id' => "12345", 'lead id' => "28222", 'contactdate' => "4/10/2010" ); $validate = $client->__soapCall("ValidateEntity", array($params), array($options)); $client->__soapCall("CreateEntity", array($params), array($options)); echo "<pre>"; var_dump($client-> __getLastRequestHeaders()); var_dump($client-> __getLastRequest()); var_dump($client-> __getLastResponseHeaders()); var_dump($client-> __getLastResponse()); var_dump($result); echo "</pre>"; Upon executing this code, I get the following error: Fatal error: Uncaught SoapFault exception: [Client] SOAP-ERROR: Encoding: object hasn't 'objecttype' property in /www/tmp/index-soap.php:24 Stack trace: #0 /www/tmp/index-soap.php(24): SoapClient->__soapCall('ValidateEntity', Array) #1 {main} thrown in /www/stealth/tmp/index-soap.php on line 24 I guess my question is.. am I even going about doing this the right way? I know this is a very broad question, but I appreciate any advice you can give me about making this work. Please let me know if you require more detail. Thanks!
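    A hedged sketch of a different call shape. The parameter names, including objecttype, are guesses taken from the question and the error message; the WSDL is the authority, so inspect __getFunctions() and __getTypes() first. ASP.NET document/literal services generally expect a single array whose keys match the WSDL element names, rather than separate options and params arrays:

        <?php
        $client = new SoapClient(
            "http://agemni.com/AgemniWebservices/service1.asmx?WSDL",
            array('trace' => true)
        );

        // See exactly what the service expects before calling it:
        var_dump($client->__getFunctions());
        var_dump($client->__getTypes());

        // With a WSDL, the generated wrapper methods are usually easier than __soapCall().
        $request = array(
            'username'   => 'myuser',
            'password'   => 'mypassword',
            'companyid'  => 123,
            'entitytype' => 2,
            'objecttype' => 2,       // the error message says this property is missing
            // ... the entity data fields go here ...
        );

        $result = $client->ValidateEntity($request);
        var_dump($result);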

    Read the article

  • When an ActiveRecord object is saved, is it saved before or after its associated object(s)?

    - by SeeBees
    In rails, when saving an active_record object, its associated objects will be saved as well. But has_one and has_many association have different order in saving objects. I have three simplified models: class Team < ActiveRecord::Base has_many :players has_one :coach end class Player < ActiveRecord::Base belongs_to :team validates_presence_of :team_id end class Coach < ActiveRecord::Base belongs_to :team validates_presence_of :team_id end I use the following code to test these models: t = Team.new team.coach = Coach.new team.save! team.save! returns true. But in another test: t = Team.new team.players << Player.new team.save! team.save! gives the following error: > ActiveRecord::RecordInvalid: > Validation failed: Players is invalid I figured out that when team.save! is called, it first calls player.save!. player needs to validate the presence of the id of the associated team. But at the time player.save! is called, team hasn't been saved yet, and therefore, team_id doesn't yet exist for player. This fails the player's validation, so the error occurs. But on the other hand, team is saved before coach.save!, otherwise the first example will get the same error as the second one. So I've concluded that when a has_many bs, a.save! will save bs prior to a. When a has_one b, a.save! will save a prior to b. If I am right, why is this the case? It doesn't seem logical to me. Why do has_one and has_many association have different order in saving? Any ideas? And is there any way I can change the order? Say I want to have the same saving order for both has_one and has_many. Thanks.
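    Whatever the ordering rationale, the usual way to make the has_many case pass is to validate the associated object instead of the raw foreign key, so the check succeeds when parent and children are saved in the same transaction. A hedged sketch against the models above:

        class Player < ActiveRecord::Base
          belongs_to :team, :inverse_of => :players
          validates_presence_of :team      # instead of :team_id
        end

        class Team < ActiveRecord::Base
          has_many :players, :inverse_of => :team
          has_one  :coach
        end

        team = Team.new
        team.players << Player.new
        team.save!   # parent and children saved together; the presence check sees the team object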

    Read the article

  • RSpec and Rails 3 - Problem Validating Nested Attribute Collection Size

    - by MunkiPhD
    When I create my Rspec tests, I keep getting a validation of false as opposed to true for the following tests. I've tried everything and the following is the measly code that I have now - so if it's waaaaay wrong, that's why. class Master < ActiveRecord::Base attr_accessible :name, :specific_size # Associations ---------------------- has_many :line_items accepts_nested_attributes_for :line_items, :allow_destroy => true, :reject_if => lambda { |a| a[:item_id].blank? } # Validations ----------------------- validates :name, :presence => true, :length => {:minimum => 3, :maximum => 30} validates :specific_size, :presence => true, :length => {:minimum => 4, :maximum => 30} validate :verify_items_count def verify_items_count if self.line_items.size < 2 errors.add(:base, "Not enough items to create a master") end end end And here it the items model: class LineItem < ActiveRecord::Base attr_accessible :specific_size, :other_item_type_id # Validations -------------------- validates :other_item_type_id, :presence => true validates :master_id, :presence => true validates :specific_size, :presence => true # Associations --------------------- belongs_to :other_item_type belongs_to :master end The RSpec Tests: before(:each) do @master_lines = [] @master_lines << LineItem.new(:other_item_type_id => 1, :master_id => 2, :specific_size => 1) @master_lines << LineItem.new(:other_item_type_id => 2, :master_id => 2, :specific_size => 1) @attr = {:name => "Some Master", :specific_size => "1 giga"} end it "should create a new instance given a valid name and specific size" do @master = Master.create(@attr) line_item_one = @master.line_items.build(:other_item_type_id => 1, :specific_size => 1) line_item_two = @master.line_items.build(:other_item_type_id => 2, :specific_size => 2) @master.line_items.size === 2 @master.should be_valid end it "should have at least two items to be valid" do master = Master.new(:name => "test name", :specific_size => "1 mega") master_item_one = LineItem.new(:other_item_type_id => 1, :specific_size => 2) master_item_two = LineItem.new(:other_item_type_id => 2, :specific_size => 1) master.line_items << master_item_one master.should_not be_valid master.line_items << master_item_two master.line_items.size.should === 2 master.should be_valid end I'm very new to Rspec and Rails - and I've been failing at this for the past couple of hours. Thanks for any help in advance.
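    When be_valid fails unexpectedly, dumping errors.full_messages inside the example usually points straight at the failing validation (a hedged debugging sketch; note that LineItem also requires master_id, which is blank until the master has been saved):

        it "should have at least two items to be valid" do
          master = Master.new(:name => "test name", :specific_size => "1 mega")
          master.line_items.build(:other_item_type_id => 1, :specific_size => 2)
          master.line_items.build(:other_item_type_id => 2, :specific_size => 1)
          master.valid?
          puts master.errors.full_messages.inspect  # shows which validation actually failed
          master.should be_valid
        end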

    Read the article

  • Getting jQuery to recognise .change() in IE

    - by Philip Morton
    I'm using jQuery to hide and show elements when a radio button group is altered/clicked. It works fine in browsers like Firefox, but in IE 6 and 7, the action only occurs when the user then clicks somewhere else on the page. To elaborate, when you load the page, everything looks fine. In Firefox, if you click a radio button, one table row is hidden and the other one is shown immediately. However, in IE 6 and 7, you click the radio button and nothing will happen until you click somewhere on the page. Only then does IE redraw the page, hiding and showing the relevant elements. Here's the jQuery I'm using: $(document).ready(function(){ $(".hiddenOnLoad").hide(); $("#viewByOrg").change(function () { $(".visibleOnLoad").show(); $(".hiddenOnLoad").hide(); }); $("#viewByProduct").change(function () { $(".visibleOnLoad").hide(); $(".hiddenOnLoad").show(); }); }); Here's the part of the XHTML that it affects. Apologies if it's not very clean, but the whole page does validate as XHTML 1.0 Strict: <tr> <td>View by:</td> <td> <p> <input type="radio" name="viewBy" id="viewByOrg" value="organisation" checked="checked"/> Organisation </p> <p> <input type="radio" name="viewBy" id="viewByProduct" value="product"/> Product </p> </td> </tr> <tr class="visibleOnLoad"> <td>Organisation:</td> <td> <select name="organisation" id="organisation" multiple="multiple" size="10"> <option value="1">Option 1</option> <option value="2">Option 2</option> </select> </td> </tr> <tr class="hiddenOnLoad"> <td>Product:</td> <td> <select name="product" id="product" multiple="multiple" size="10"> <option value="1">Option 1</option> <option value="2">Option 2</option> </select> </td> </tr> If anyone has any ideas why this is happening and how to fix it, they would be very much appreciated!
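    IE 6 and 7 only fire change on radio buttons after focus leaves them, so a common workaround is to bind click as well; a hedged sketch covering both behaviours:

        $(document).ready(function () {
            $(".hiddenOnLoad").hide();

            $("#viewByOrg").bind("click change", function () {
                $(".visibleOnLoad").show();
                $(".hiddenOnLoad").hide();
            });

            $("#viewByProduct").bind("click change", function () {
                $(".visibleOnLoad").hide();
                $(".hiddenOnLoad").show();
            });
        });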

    Read the article

  • Authlogic: passwords saved in the DB are not working as expected.

    - by user570459
    Hello everyone, Im having trouble with authlogic on my production server. Im able to update passwords in the database but when i try to validate a user using the new password, the validation fails. Please check the below console output. Notice how the salt and crypted_password fields get update before and after the new password is saved. The issue is only on my production server (running passenger). Everything works fine on my development machine. => #<User id: 3, login: "saravk", email: "[email protected]", crypted_password: "9bc86247105e940bb748ab680c0e77d9c44a82ea", salt: "WdVpQIdwl68k8lJWOU"> irb(main):003:0> u.password = "kettik123" => "kettik123" irb(main):004:0> u.password_confirmation = "kettik123" => "kettik123" irb(main):005:0> u.save! => true irb(main):006:0> u.valid_password?("kettik123") => true irb(main):007:0> u.reload => #<User id: 3, login: "saravk", email: "[email protected]", crypted_password: "f059007c56f498a12c63209c849c1e65bb151174", salt: "lVmmczhyGE0gxsbV421A"> irb(main):008:0> u.valid_password?("kettik123") => false The authlogic configuration in my User model.. class User < ActiveRecord::Base acts_as_authentic do |c| c.login_field :email c.validate_login_field false c.validate_email_field false c.perishable_token_valid_for = 1.day c.disable_perishable_token_maintenance = true end I use the email field as the main key for the user. Also the email field is allowed to be blank in some cases (eg a facebook user) Also i belive that my schema is proper (in terms of the length of the salt & crypted password fields) create_table "users", :force => true do |t| t.string "login" t.string "email" t.string "crypted_password", :limit => 128, :default => "" t.string "salt", Any help on this would be highly appreciated. Thanks.
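    A hedged debugging sketch for the production console (it assumes the Authlogic class-level crypto_provider reader is available on the model): confirm that the stored hash really matches what the configured provider computes from the password and the stored salt, which separates a hashing mismatch from a save problem.

        u = User.find(3)
        provider = User.crypto_provider                      # whatever acts_as_authentic is using
        puts provider
        puts provider.matches?(u.crypted_password, "kettik123", u.salt)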

    Read the article

  • How do I defer execution of some Ruby code until later and run it on demand in this scenario?

    - by Kyle Kaitan
    I've got some code that looks like the following. First, there's a simple Parser class for parsing command-line arguments with options. class Parser def initialize(&b); ...; end # Create new parser. def parse(args = ARGV); ...; end # Consume command-line args. def opt(...); ...; end # Declare supported option. def die(...); ...; end # Validation handler. end Then I have my own Parsers module which holds some metadata about parsers that I want to track. module Parsers ParserMap = {} def self.make_parser(kind, desc, &b) b ||= lambda {} module_eval { ParserMap[kind] = {:desc => "", :validation => lambda {} } ParserMap[kind][:desc] = desc # Create new parser identified by `<Kind>Parser`. Making a Parser is very # expensive, so we defer its creation until it's actually needed later # by wrapping it in a lambda and calling it when we actually need it. const_set(name_for_parser(kind), lambda { Parser.new(&b) }) } end # ... end Now when you want to add a new parser, you can call make_parser like so: make_parser :db, "login to database" do # Options that this parser knows how to parse. opt :verbose, "be verbose with output messages" opt :uid, "user id" opt :pwd, "password" end Cool. But there's a problem. We want to optionally associate validation with each parser, so that we can write something like: validation = lambda { |parser, opts| parser.die unless opts[:uid] && opts[:pwd] # Must provide login. } The interface contract with Parser says that we can't do any validation until after Parser#parse has been called. So, we want to do the following: Associate an optional block with every Parser we make with make_parser. We also want to be able to run this block, ideally as a new method called Parser#validate. But any on-demand method is equally suitable. How do we do that?
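    One way to get there (a hedged sketch that reuses names from the snippet above, ParserMap and name_for_parser, and adds a hypothetical Parsers.run helper): store the optional validation lambda next to the deferred parser and invoke it only after parse has run.

        module Parsers
          ParserMap = {}

          def self.make_parser(kind, desc, validation = nil, &b)
            b ||= lambda {}
            ParserMap[kind] = { :desc => desc, :validation => validation }
            const_set(name_for_parser(kind), lambda { Parser.new(&b) })
          end

          # Build the expensive Parser on demand, parse, then validate if a lambda was given.
          def self.run(kind, args = ARGV)
            parser = const_get(name_for_parser(kind)).call
            opts   = parser.parse(args)
            check  = ParserMap[kind][:validation]
            check.call(parser, opts) if check
            opts
          end
        end

        Parsers.make_parser(:db, "login to database",
          lambda { |parser, opts| parser.die unless opts[:uid] && opts[:pwd] }) do
          opt :verbose, "be verbose with output messages"
          opt :uid, "user id"
          opt :pwd, "password"
        end

        opts = Parsers.run(:db)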

    Read the article

  • Extending the Struts 2 "property" data tag

    - by John B.
    Hi, We are currently developing a project with Struts2. We have a module on which we display a large amount of data on read-only fields from a group of beans via the "property" Struts 2 data tag (i.e. <s:property value="aBeanProperty" /) on a jsp file. In some cases most of the fields might come empty, so a blank is being displayed on screen. Our customer is now requesting us to display default string (i.e. "N/A") whenever a property comes empty, so that it is displayed in place of the blank spaces currently shown. We are looking for a way to achieve this in a clean and maintainable way. The 'property' tag comes with a 'default' attribute on which one can define a default value in cases when the accessed property comes as null. However, most of our properties are empty strings, therefore it does not work in our case. Another solution we are thinking of is to define a base class for all of our beans, and define a util method which will validate if a string is null or empty and then return the default value. Then we would call this method from each bean getter. And yes this would be tiresome and kind of ugly :), therefore we are holding out on this one in case of a better solution. Now, we have in mind a solution which we think would be the best but have not had luck on how implement it. We are planning on extending the 'property' tag some way, defining a new 'default' attribute so that besides working on null properties, it also do so on empty strings ("", " ", etc). Therefore we would only need to replace the original s:property tag with our new custom tag, and the desired result would be achieved without touching java code. Do you have an idea on how to do this? Also, any other clever solution (maybe some sort of design pattern?) on how to default the values of a large amount of property beans are welcome too! (Or maybe, even there might be some tag that does this already in Struts2??) Thanks in advance.
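    Before writing a custom tag, an inline OGNL conditional may be worth trying, since it keeps the beans untouched (a hedged sketch; aBeanProperty is the example property name from the question, and this gets verbose if repeated on every field, which is where a custom tag or a small JSP tag file earns its keep):

        <s:property value="aBeanProperty == null || aBeanProperty.trim().length() == 0 ? 'N/A' : aBeanProperty" />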

    Read the article

  • Custom toString in Java not giving the desired output and throwing an error

    - by user2972048
    I am writing a program in Java to accept and validate dates according to the Gregorian Calendar. My public boolean setDate(String aDate) function for an incorrect entry is suppose to change the boolean goodDate variable to false. That variable is suppose tell the toString function, when called, to output "Invalid Entry" but it does not. My public boolean setDate(int d, int m, int y) function works fine though. I've only included the problem parts as its a long piece of code. Thanks public boolean setDate(int day, int month, int year){ // If 1 <= day <= 31, 1 <= month <= 12, and 0 <= year <= 9999 & the day match with the month // then set object to this date and return true // Otherwise,return false (and do nothing) boolean correct = isTrueDate(day, month, year); if(correct){ this.day = day; this.month = month; this.year = year; return true; }else{ goodDate = false; return false; } //return false; } public boolean setDate(String aDate){ // If aDate is of the form "dd/mm/yyyy" or "d/mm/yyyy" // Then set the object to this date and return true. // Otherwise, return false (and do nothing) Date d = new Date(aDate); boolean correct = isTrueDate(d.day, d.month, d.year); if(correct){ this.day = d.day; this.month = d.month; this.year = d.year; return true; }else{ goodDate = false; return false; } } public String toString(){ // outputs a String of the form "dd/mm/yyyy" // where dd must be 2 digits (with leading zero if needed) // mm must be 2 digits (with leading zero if needed) // yyyy must be 4 digits (with leading zeros if needed) String day1; String month1; String year1; if(day<10){ day1 = "0" + Integer.toString(this.day); } else{ day1 = Integer.toString(this.day); } if(month<10){ month1 = "0" + Integer.toString(this.month); } else{ month1 = Integer.toString(this.month); } if(year<10){ year1 = "00" + Integer.toString(this.year); } else{ year1 = Integer.toString(this.year); } if(goodDate){ return day1 +"/" +month1 +"/" + year1; }else{ goodDate = true; return "Invalid Entry"; } } Thank you
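    For the string overload, one option (a hedged sketch) is to validate the text shape first and then delegate to the int overload, so goodDate is set in exactly one place:

        public boolean setDate(String aDate) {
            // Accept only "d/mm/yyyy" or "dd/mm/yyyy"
            if (aDate == null || !aDate.matches("\\d{1,2}/\\d{2}/\\d{4}")) {
                goodDate = false;
                return false;
            }
            String[] parts = aDate.split("/");
            return setDate(Integer.parseInt(parts[0]),
                           Integer.parseInt(parts[1]),
                           Integer.parseInt(parts[2]));   // reuses the int overload and its checks
        }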

    Read the article

  • Registration form validation not validating

    - by jgray
    I am a noob when it comes to web development. I am trying to validate a registration form and to me it looks right but it will not validate.. This is what i have so far and i am validating through a repository or database. Any help would be greatly appreciated. thanks <?php session_start(); $title = "User Registration"; $keywords = "Name, contact, phone, e-mail, registration"; $description = "user registration becoming a member."; require "partials/_html_header.php"; //require "partials/_header.php"; require "partials/_menu.php"; require "DataRepository.php"; // if all validation passed save user $db = new DataRepository(); // form validation goes here $first_nameErr = $emailErr = $passwordErr = $passwordConfirmErr = ""; $first_name = $last_name = $email = $password = $passwordConfirm = ""; if(isset($_POST['submit'])) { $valid = TRUE; // check if all fields are valid { if ($_SERVER["REQUEST_METHOD"] == "POST") { if (empty($_POST["first_name"])) {$first_nameErr = "Name is required";} else { // $first_name = test_input($_POST["first_name"]); // check if name only contains letters and whitespace if (!preg_match("/^[a-zA-Z ]*$/",$first_name)) { $first_nameErr = "Only letters and white space allowed"; } } if (empty($_POST["email"])) {$emailErr = "Email is required";} else { // $email = test_input($_POST["email"]); // check if e-mail address syntax is valid if (!preg_match("/([\w\-]+\@[\w\-]+\.[\w\-]+)/",$email)) { $emailErr = "Invalid email format"; } } if (!preg_match("/(......)/",$password)) { $passwordErr = "Subject must contain THREE or more characters!"; } if ($_POST['password']!= $_POST['passwordConfirm']) { echo("Oops! Password did not match! Try again. "); } function test_input($data) { $data = trim($data); $data = stripslashes($data); $data = htmlspecialchars($data); return $data; } } } if(!$db->isEmailUnique($_POST['email'])) { $valid = FALSE; //display errors in the correct places } // if still valid save the user if($valid) { $new_user = array( 'first_name' => $_POST['first_name'], 'last_name' => $_POST['last_name'], 'email' => $_POST['email'], 'password' => $_POST['password'] ); $results = $db->saveUser($new_user); if($results == TRUE) { header("Location: login.php"); } else { echo "WTF!"; exit; } } } ?> <head> <style> .error {color: #FF0000;} </style> </head> <h1 class="center"> World Wide Web Creations' User Registration </h1> <p><span class="error"></span><p> <form method="POST" action="<?php echo htmlspecialchars($_SERVER["PHP_SELF"]);?>" onsubmit="return validate_form()" > First Name: <input type="text" name="first_name" id="first_name" value="<?php echo $first_name;?>" /> <span class="error"> <?php echo $first_nameErr;?></span> <br /> <br /> Last Name(Optional): <input type="text" name="last_name" id="last_name" value="<?php echo $last_name;?>" /> <br /> <br /> E-mail: <input type="email" name="email" id="email" value="<?php echo $email;?>" /> <span class="error"> <?php echo $emailErr;?></span> <br /> <br /> Password: <input type="password" name="password" id="password" value="" /> <span class="error"> <?php echo $passwordErr;?></span> <br /> <br /> Confirmation Password: <input type="password" name="passwordConfirm" id="passwordConfirm" value="" /> <span class="error"> <?php echo $passwordConfirmErr;?></span> <br /> <br /> <br /> <br /> <input type="submit" name="submit" id="submit" value="Submit Data" /> <input type="reset" name="reset" id="reset" value="Reset Form" /> </form> </body> </html> <?php require "partials/_footer.php"; require "partials/_html_footer.php"; ?> 
class DataRepository { // version number private $version = "1.0.3"; // turn on and off debugging private static $debug = FALSE; // flag to (re)initialize db on each call private static $initialize_db = FALSE; // insert test data on initialization private static $load_default_data = TRUE; const DATAFILE = "203data.txt"; private $data = NULL; private $errors = array(); private $user_fields = array( 'id' => array('required' => 0), 'created_at' => array('required' => 0), 'updated_at' => array('required' => 0), 'first_name' => array('required' => 1), 'last_name' => array('required' => 0), 'email' => array('required' => 1), 'password' => array('required' => 1), 'level' => array('required' => 0, 'default' => 2), ); private $post_fields = array( 'id' => array('required' => 0), 'created_at' => array('required' => 0), 'updated_at' => array('required' => 0), 'user_id' => array('required' => 1), 'title' => array('required' => 1), 'message' => array('required' => 1), 'private' => array('required' => 0, 'default' => 0), ); private $default_user = array( 'id' => 1, 'created_at' => '2013-01-01 00:00:00', 'updated_at' => '2013-01-01 00:00:00', 'first_name' => 'Admin Joe', 'last_name' => 'Tester', 'email' => '[email protected]', 'password' => 'a94a8fe5ccb19ba61c4c0873d391e987982fbbd3', 'level' => 1, ); private $default_post = array( 'id' => 1, 'created_at' => '2013-01-01 00:00:00', 'updated_at' => '2013-01-01 00:00:00', 'user_id' => 1, 'title' => 'My First Post', 'message' => 'This is the message of the first post.', 'private' => 0, ); // constructor will load existing data into memory // if it does not exist it will create it and initialize if desired public function __construct() { // check if need to reset if(DataRepository::$initialize_db AND file_exists(DataRepository::DATAFILE)) { unlink(DataRepository::DATAFILE); } // if file doesn't exist, create the initial datafile if(!file_exists(DataRepository::DATAFILE)) { $this->log("Data file does not exist. Attempting to create it... (".__FUNCTION__.":".__LINE__.")"); // create initial file $this->data = array( 'users' => array( ), 'posts' => array() ); // load default data if needed if(DataRepository::$load_default_data) { $this->data['users'][1] = $this->default_user; $this->data['posts'][1] = $this->default_post; } $this->writeTheData(); } // load the data into memory for use $this->loadTheData(); } private function showErrors($break = TRUE, $type = NULL) { if(count($this->errors) > 0) { echo "<div style=\"color:red;font-weight: bold;font-size: 1.3em\":<h3>$type Errors</h3><ol>"; foreach($this->errors AS $error) { echo "<li>$error</li>"; } echo "</ol></div>"; if($break) { "</br></br></br>Exiting because of errors!"; exit; } } } private function writeTheData() { $this->log("Attempting to write the datafile: ".DataRepository::DATAFILE." (".__FUNCTION__.":".__LINE__.")"); file_put_contents(DataRepository::DATAFILE, json_encode($this->data)); $this->log("Datafile written: ".DataRepository::DATAFILE." (line: ".__LINE__.")"); } private function loadTheData() { $this->log("Attempting to load the datafile: ".DataRepository::DATAFILE." (".__FUNCTION__.":".__LINE__.")"); $this->data = json_decode(file_get_contents(DataRepository::DATAFILE), true); $this->log("Datafile loaded: ".DataRepository::DATAFILE." 
(".__FUNCTION__.":".__LINE__.")", $this->data); } private function validateFields(&$info, $fields, $pre_errors = NULL) { // merge in any pre_errors if($pre_errors != NULL) { $this->errors = array_merge($this->errors, $pre_errors); } // check all required fields foreach($fields AS $field => $reqs) { if(isset($reqs['required']) AND $reqs['required'] == 1) { if(!isset($info[$field]) OR strlen($info[$field]) == 0) { $this->errors[] = "$field is a REQUIRED field"; } } // set any default values if not present if(isset($reqs['default']) AND (!isset($info[$field]) OR $info[$field] == "")) { $info[$field] = $reqs['default']; } } $this->showErrors(); if(count($this->errors) == 0) { return TRUE; } else { return FALSE; } } private function validateUser(&$user_info) { // check if the email is already in use $this->log("About to check pre_errors: ".DataRepository::DATAFILE." (".__FUNCTION__.":".__LINE__.")", $user_info); $pre_errors = NULL; if(isset($user_info['email'])) { if(!$this->isEmailUnique($user_info['email'])) { $pre_errors = array('The email: '.$user_info['email'].' is already used in our system'); } } $this->log("After pre_error check: ".DataRepository::DATAFILE." (".__FUNCTION__.":".__LINE__.")", $pre_errors); return $this->validateFields($user_info, $this->user_fields, $pre_errors); } private function validatePost(&$post_info) { // check if the user_id in the post actually exists $this->log("About to check pre_errors: ".DataRepository::DATAFILE." (".__FUNCTION__.":".__LINE__.")", $post_info); $pre_errors = NULL; if(isset($post_info['user_id'])) { if(!isset($this->data['users'][$post_info['user_id']])) { $pre_errors = array('The posts must belong to a valid user. (User '.$post_info['user_id'].' does not exist in the data'); } } $this->log("After pre_error check: ".DataRepository::DATAFILE." (".__FUNCTION__.":".__LINE__.")", $pre_errors); return $this->validateFields($post_info, $this->post_fields, $pre_errors); } private function log($message, $data = NULL) { $style = "background-color: #F8F8F8; border: 1px solid #DDDDDD; border-radius: 3px; font-size: 13px; line-height: 19px; overflow: auto; padding: 6px 10px;"; if(DataRepository::$debug) { if($data != NULL) { $dump = "<div style=\"$style\"><pre>".json_encode($data, JSON_PRETTY_PRINT)."</pre></div>"; } else { $dump = NULL; } echo "<code><b>Debug:</b> $message</code>$dump<br />"; } } public function saveUser($user_info) { $this->log("Entering saveUser: (".__FUNCTION__.":".__LINE__.")", $user_info); $mydata = array(); $update = FALSE; // check for existing data if(isset($user_info['id']) AND $this->data['users'][$user_info['id']]) { $mydata = $this->data['users'][$user_info['id']]; $this->log("Loaded prior user: ".print_r($mydata, TRUE)." (".__FUNCTION__.":".__LINE__.")"); } // copy over existing values $this->log("Before copying over existing values: (".__FUNCTION__.":".__LINE__.")", $mydata); foreach($user_info AS $k => $v) { $mydata[$k] = $user_info[$k]; } $this->log("After copying over existing values: (".__FUNCTION__.":".__LINE__.")", $mydata); // check required fields if($this->validateUser($mydata)) { // hash password if new if(isset($mydata['password'])) { $mydata['password'] = sha1($mydata['password']); } // if no id, add the next available one if(!isset($mydata['id']) OR (int)$mydata['id'] < 1) { $this->log("No id set: ".DataRepository::DATAFILE." (".__FUNCTION__.":".__LINE__.")"); if(count($this->data['users']) == 0) { $mydata['id'] = 1; $this->log("Setting id to 1: ".DataRepository::DATAFILE." 
(".__FUNCTION__.":".__LINE__.")"); } else { $mydata['id'] = max(array_keys($this->data['users']))+1; $this->log("Found max id and added 1 [".$mydata['id']."]: ".DataRepository::DATAFILE." (".__FUNCTION__.":".__LINE__.")"); } } // set created date if null if(!isset($mydata['created_at'])) { $mydata['created_at'] = date ("Y-m-d H:i:s", time()); } // update modified time $mydata['modified_at'] = date ("Y-m-d H:i:s", time()); // copy into data and save $this->log("Before data save: (".__FUNCTION__.":".__LINE__.")", $this->data); $this->data['users'][$mydata['id']] = $mydata; $this->writeTheData(); } return TRUE; } public function getUserById($id) { if(isset($this->data['users'][$id])) { return $this->data['users'][$id]; } else { return array(); } } public function isEmailUnique($email) { // find the user that has the right username/password foreach($this->data['users'] AS $k => $v) { $this->log("Checking unique email: {$v['email']} == $email (".__FUNCTION__.":".__LINE__.")", NULL); if($v['email'] == $email) { $this->log("FOUND NOT unique email: {$v['email']} == $email (".__FUNCTION__.":".__LINE__.")", NULL); return FALSE; break; } } $this->log("Email IS unique: $email (".__FUNCTION__.":".__LINE__.")", NULL); return TRUE; } public function login($username, $password) { // hash password for validation $password = sha1($password); $this->log("Attempting to login with $username / $password: (".__FUNCTION__.":".__LINE__.")", NULL); $user = NULL; // find the user that has the right username/password foreach($this->data['users'] AS $k => $v) { if($v['email'] == $username AND $v['password'] == $password) { $user = $v; break; } } $this->log("Exiting login: (".__FUNCTION__.":".__LINE__.")", $user); return $user; } public function savePost($post_info) { $this->log("Entering savePost: (".__FUNCTION__.":".__LINE__.")", $post_info); $mydata = array(); // check for existing data if(isset($post_info['id']) AND $this->data['posts'][$post_info['id']]) { $mydata = $this->data['posts'][$post_info['id']]; $this->log("Loaded prior posts: ".print_r($mydata, TRUE)." (".__FUNCTION__.":".__LINE__.")"); } $this->log("Before copying over existing values: (".__FUNCTION__.":".__LINE__.")", $mydata); foreach($post_info AS $k => $v) { $mydata[$k] = $post_info[$k]; } $this->log("After copying over existing values: (".__FUNCTION__.":".__LINE__.")", $mydata); // check required fields if($this->validatePost($mydata)) { // if no id, add the next available one if(!isset($mydata['id']) OR (int)$mydata['id'] < 1) { $this->log("No id set: ".DataRepository::DATAFILE." (".__FUNCTION__.":".__LINE__.")"); if(count($this->data['posts']) == 0) { $mydata['id'] = 1; $this->log("Setting id to 1: ".DataRepository::DATAFILE." (".__FUNCTION__.":".__LINE__.")"); } else { $mydata['id'] = max(array_keys($this->data['posts']))+1; $this->log("Found max id and added 1 [".$mydata['id']."]: ".DataRepository::DATAFILE." 
(".__FUNCTION__.":".__LINE__.")"); } } // set created date if null if(!isset($mydata['created_at'])) { $mydata['created_at'] = date ("Y-m-d H:i:s", time()); } // update modified time $mydata['modified_at'] = date ("Y-m-d H:i:s", time()); // copy into data and save $this->data['posts'][$mydata['id']] = $mydata; $this->log("Before data save: (".__FUNCTION__.":".__LINE__.")", $this->data); $this->writeTheData(); } return TRUE; } public function getAllPosts() { return $this->loadPostsUsers($this->data['posts']); } public function loadPostsUsers($posts) { foreach($posts AS $id => $post) { $posts[$id]['user'] = $this->getUserById($post['user_id']); } return $posts; } public function dump($line_number, $temp = 'NO') { // if(DataRepository::$debug) { if($temp == 'NO') { $temp = $this->data; } echo "<pre>Dumping from line: $line_number\n"; echo json_encode($temp, JSON_PRETTY_PRINT); echo "</pre>"; } } } /* * Change Log * * 1.0.0 * - first version * 1.0.1 * - Added isEmailUnique() function for form validation and precheck on user save * 1.0.2 * - Fixed getAllPosts() to include the post's user info * - Added loadPostsUsers() to load one or more posts with their user info * 1.0.3 * - Added autoload to always add admin Joe. */

    Read the article

  • How to detect an invalid image URL with Java?

    - by Cataclysm
    I have a method to download image from URL. As like below.. public static byte[] downloadImageFromURL(final String strUrl) { InputStream in; ByteArrayOutputStream out = new ByteArrayOutputStream(); try { URL url = new URL(strUrl); in = new BufferedInputStream(url.openStream()); byte[] buf = new byte[2048]; int n = 0; while (-1 != (n = in.read(buf))) { out.write(buf, 0, n); } out.close(); in.close(); } catch (IOException e) { return null; } return out.toByteArray(); } I have an image url and it is valid. for example. https://encrypted-tbn1.gstatic.com/images?q=tbn:ANd9GcTxfYM-hnD-Z80tgWdIgQKchKe-MXVUfTpCw1R5KkfJlbRbgr3Zcg My problem is I don't want to download if image is really not exists.Like .... https://encrypted-tbn1.gstatic.com/images?q=tbn:ANd9GcTxfYM-hnD-Z80tgWdIgQKchKe-MXVUfTpCw1R5KkfJlbRbgr3Zcgaaaaabbbbdddddddddddddddddddddddddddd This image shouldn't be download by my method. So , how can I know the giving image URL is not really exists. I don't want to validate my URL (I think that may not my solution ). So, I googled for that. From this article ... How to check if a URL exists or returns 404 with Java? and Java check if file exists on remote server using its url But this con.getResponseCode() will always return status code "200". This mean my method will also download invalid image urls. So , I output my bufferStream as like... System.out.println(in.read(buf)); Invalid image URL produces "43". So , I add these lines of codes in my method. if (in.read(buf) == 43) { return null; } It is ok. But I don't think that will always satisfy. Has another way to get it ? am I right? I would really appreciate any suggestions. This problem may struct my head. Thanks for reading my question.
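    Two checks tend to be more robust than counting bytes (a hedged sketch): look at the response Content-Type, and try to decode the body with ImageIO, which returns null when the bytes are not an image.

        import java.awt.image.BufferedImage;
        import java.net.HttpURLConnection;
        import java.net.URL;
        import javax.imageio.ImageIO;

        public static boolean looksLikeImage(String strUrl) {
            try {
                HttpURLConnection con = (HttpURLConnection) new URL(strUrl).openConnection();
                String type = con.getContentType();        // e.g. "image/jpeg" vs "text/html"
                if (type == null || !type.startsWith("image/")) {
                    return false;
                }
                BufferedImage img = ImageIO.read(con.getInputStream());
                return img != null;                        // null when the bytes are not a decodable image
            } catch (Exception e) {
                return false;
            }
        }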

    Read the article

  • bind9 DNS on Ubuntu: names pingable on the server, but not from Windows machines?

    - by leeand00
    I setup a DNS server today on Ubuntu, following this tutorial. My intent was to setup my network for dns-name resolving on the private LAN within a single zone (nothing fancy I just want name resolution). I've tested the setup on the DNS server machine itself, and I can ping all the machines listed in the configuration file. I've also configured the Windows Machines on my network, and for some reason they are incapable of pinging by names as was possible on the DNS Server itself. I've tried running nslookup on the Windows DNS clients and I receive and error mentioning the address of the DNS server. DNS forwarding works fine, I'm not having any trouble accessing the internet, the problem only lies within accessing names within the private LAN. Here are my configuration files: options { directory "/var/cache/bind"; // If there is a firewall between you and nameservers you want // to talk to, you may need to fix the firewall to allow multiple // ports to talk. See http://www.kb.cert.org/vuls/id/800113 // If your ISP provided one or more IP addresses for stable // nameservers, you probably want to use them as forwarders. // Uncomment the following block, and insert the addresses replacing // the all-0's placeholder. // forwarders { // 0.0.0.0; // }; forwarders { 8.8.8.8; 8.8.8.4; 74.242.0.12; //68.87.76.178; }; auth-nxdomain no; # conform to RFC1035 listen-on-v6 { any; }; }; /etc/bind/named.conf.options zone "leerdomain.local" { type master; file "/etc/bind/zones/leerdomain.local.db"; notify no; }; zone "2.168.192.in-addr.arpa" { type master; file "/etc/bind/zones/rev.2.168.192.in-addr.arpa"; notify no; }; /etc/bind/named.conf.local Lookup: $TTL 3D @ IN SOA ns.leerdomain.local. admin.leerdomain.local. ( 2010011001 28800 3600 604800 38400 ); leerdomain.local. IN NS ns.leerdomain.local. ns IN A 192.168.2.9 asus IN A 192.168.2.254 www IN CNAME asus vaio IN A 192.168.2.253 iptouch IN A 192.168.2.252 toshiba IN A 192.168.2.251 gw IN A 192.168.2.1 TXT "Network Gateway" /etc/bind/zones/leerdomain.local.db (Validates fine with named-checkzone when validating zone leerdomain.local) Reverse Lookup: $TTL 3D @ IN SOA ns.leerdomain.local. admin.leerdomain.local. ( 201001101 28800 604800 604800 86400 ) IN NS ns.leerdomain.local. 1 IN PTR gw.leerdomain.local. 254 IN PTR asus.leerdomain.local. 253 IN PTR vaio.leerdomain.local. 252 IN PTR iptouch.leerdomain.local. 251 IN PTR toshiba.leerdomain.local. /etc/bind/zones/rev.2.168.192.in-addr.arpa *(Does not validate with named-checkzone when validating zone leerdomain.local gives an error of: zone leerdomain.local/IN: NS 'ns.leerdomain.local' has no address records (A or AAAA) zone leerdomain.local/IN: not loaded due to errors. * Despite not validating bind9 starts without errors in /var/log/syslog I've also configured a few of the windows machines on my network to have the static ip as specified in the lookup and reverse lookup config files. i.e. Using nslookup yields the following results: C:\Users\leeand00>nslookup ns Server: UnKnown Address: 192.168.2.9 *** UnKnown can't find ns: Non-existent domain C:\Users\leeand00>nslookup gw Server: UnKnown Address: 192.168.2.9 Name: gw. Additionally trying to ping by name also fails on machines that are not the DNS Server. Is there something wrong with my configuration of either the nameserver or the Windows Boxes that is keeping me from accessing other machines using names?
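    Two quick checks worth running (a hedged sketch): verify each zone file against its own zone name, since the error quoted above reads as if the reverse zone file was checked against leerdomain.local, and query the server explicitly from a Windows client with fully qualified names, because short names only resolve once the clients carry leerdomain.local as a DNS suffix.

        # On the Ubuntu server:
        named-checkzone leerdomain.local /etc/bind/zones/leerdomain.local.db
        named-checkzone 2.168.192.in-addr.arpa /etc/bind/zones/rev.2.168.192.in-addr.arpa
        sudo rndc reload

        # On a Windows client (cmd.exe):
        nslookup asus.leerdomain.local 192.168.2.9
        ping asus.leerdomain.local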

    Read the article

  • Installing .NET Framework 4 Client Profile breaks Windows Update

    - by Richard
    I have a Samsung NC-10 netbook with a fresh install of Windows 7 Home Premium 32-bit (it only had 2GB). If Microsoft .NET Framework 4 Client Profile is installed on it, Windows Update will always return error code 8024402F ("Windows Update encountered an unknown error"). As soon as I uninstall it, Windows Update works just fine again. Out of the four computers in this house, only this netbook has the problem. My question is: How can I get the .NET Framework 4 Client Profile installed on my netbook and continue to have a functioning Windows Update? ----- More information ----- The hard-drive recently died on my netbook so I replaced it with a nice new SSD and did a fresh installation of Windows 7 Home Premium (SP1) - along with all the updates. At some point I found that, when I ran Windows Update, I was greeted with error code 8024402F ("Windows Update encountered an unknown error"). Looking in C:\Windows\WindowsUpdate.log, I saw the following issue: WARNING: ECP: Failed to validate cab file digest downloaded from http://download.windowsupdate.com/msdownload/update/software/dflt/2012/02/4913552_4a5c9563d1f58c77f30d0d5c9999e4b8bff3ab21.cab with error 0x80091007 WARNING: ECP: This roundtrip contained some optimized updates which failed. New Update count = 0, Old Count = 3 FATAL: ProcessCoreMetadata did not return any update to be committed WARNING: Sync of Updates: 0x8024402f WARNING: SyncServerUpdatesInternal failed: 0x8024402f When I downloaded the CAB from the URL listed and opened it, it contained a file called 4913552.txt. A search on Google suggested that it's related to Microsoft .NET Framework 4 Client Profile. Other people had reported problems with it breaking Windows Update, but they were running Windows XP. I tried the steps outlined on the Microsoft site for this error code, but it reported that there was nothing wrong. I also found this superuser question, I tried all the answers listed but none of them made any difference. My router doesn't block ActiveX, changing my internet settings in IE made no difference, assuming it was a corrupted update repository didn't do anything (except wipe my update history), my date and time was correct, switching to Google's DNS didn't work and neither did disabling IPv6. Figuring that this update was corrupted, I repaired it and nothing changed. In desperation I un-installed it and Windows Update started working again! Brilliant! I then downloaded the full version from the Microsoft website, installed it and, thankfully, Windows Update continued to work just fine. A week later I turn on my netbook and Windows Update is broken again with exactly the same error message and log entries as before. Repairing .NET Framework 4 Client Profile did nothing, removing it entirely solved the problem again. Thinking this might be some odd Windows installation issue, I formatted the hard-drive and re-installed Windows. Same problem as before - as soon as .NET Framework 4 Client Profile ended up on the netbook, Windows Update stopped working and reported error 8024402F. As soon as it was un-installed, everything worked just fine again. There are three other machines in this house and all of them have working Windows Update and this Client Profile. Does anyone know why it causes this netbook to break and, more importantly, how I can fix it?

    Read the article

  • Specifying a Postfix Instance to send outbound email

    - by Catherine Jefferson
    I have a CentOS 6.5 server running Postfix 2.6x (the default distribution) with five public IPv4 IPs bound to it. Each IP has DNS and rDNS set separately. Each uses a different hostname at a different domain. I have five Postfix instances, one bound to each IP, like this example: 192.168.34.104 red.example.com /etc/postfix 192.168.36.48 green.example.net /etc/postfix-green 192.168.36.49 pink.example.org /etc/postfix-pink 192.168.36.50 orange.example.info /etc/postfix-orange 192.168.36.51 blue.example.us /etc/postfix-blue I've tested each IP by telneting to port 25. Postfix answers and banners properly with the correct hostname. Email is received on all of these instances with no problems and is routed to the correct place. This setup, minus the final instance, has existed for a couple of years and works. I never bothered to set up outbound email to go through any but the main instance, however; there was no need. Now I need to send email from blue.example.us that actually leaves from that interface and IP, such that the Received headers show blue.example.us as the sending mailhost, so that SPF and DKIM validate, etc etc. The email that will be sent from blue.example.com is a feedback loop sent by a single shell account on the server (account5), an account that is dedicated to sending this email. The account receives the feedback loop emails from servers on other networks, saves the bodies of those emails, and then generates a new outbound email header, appends the saved body, and sends the email. It's sending by piping each email to sendmail -oi -t. We're doing it this way to mask the identities of the initial servers. The procmail script that processes these emails works correctly. However, I cannot configure this account to send email through the proper Postfix instance/IP/interface. The exact same account and script sends email through the main Postfix instance /etc/postfix without any issues. When I change MAIL_CONFIG to point to /etc/postfix-blue in either .bash_profile or the Procmail script that handles this email, though, I get this error: sendmail: fatal: User account5(###) is not allowed to submit mail I've read the manuals on Postfix.org, searched Google, and tried the suggestions in three previous answers here on ServerFault.com: Postfix - specify interface to deliver outbound mail on Postfix user is not allowed to submit mail Postfix rejects php mails I have been careful to stop and restart Postfix after each configuration change, and tested the results. Nothing has worked. The main postfix instance happily accepts outbound email from account5. The postfix-blue instance continues to reject email from account5 with the sendmail error above. As tempting as it is to blame machine hostility, I know that I must be missing something or doing something wrong. Does anybody have any suggestions as to what it might be? Please feel free to ask for further information about my setup if you need it. 
=-=-=-=-=-=-=-=-=-= At the request of the responder, here are main.cf and master.cf for a) the main postfix instance ("red.example.com") and b) the FBL instance ("blue.example.us") [NOTE: All parameters not specified below were left at the default Postfix 2.6 settings] MAIN: master.cf smtp inet n - n - - smtpd main.cf myhostname = red.example.com mydomain = example.com inet_interfaces = $myhostname, localhost inet_protocols = all lmtp_host_lookup = native smtp_host_lookup = native ignore_mx_lookup_error = yes mydestination = $myhostname, localhost.$mydomain, localhost local_recipient_maps = mynetworks = 192.168.34.104/32 relay_domains = example.com, example.info, example.net, example.org, example.us relayhost = [192.168.34.102] # Separate physical server, main mailserver. relay_recipient_maps = hash:/etc/postfix/relay_recipients alias_maps = hash:/etc/aliases alias_database = hash:/etc/aliases smtpd_banner = $myhostname ESMTP $mail_name multi_instance_wrapper = ${command_directory}/postmulti -p -- multi_instance_enable = yes multi_instance_directories = /etc/postfix-green /etc/postfix-pink /etc/postfix-orange /etc/postfix-blue FBL: master.cf 184.173.119.103:25 inet n - n - - smtpd main.cf myhostname = blue.example.us mydomain = blue.example.us <= Deliberately set to subdomain only. myorigin = $mydomain inet_interfaces = $myhostname lmtp_host_lookup = native smtp_host_lookup = native ignore_mx_lookup_error = yes mydestination = $myhostname local_recipient_maps = unix:passwd.byname $alias_maps $virtual_alias_maps mynetworks = 192.168.36.51/32, 192.168.35.20/31 <= Second IP is backup MX servers relay_domains = $mydestination recipient_canonical_maps = hash:/etc/postfix-blue/canonical virtual_alias_maps = hash:/etc/postfix-fbl/virtual alias_maps = hash:/etc/aliases, hash:/etc/postfix-blue/canonical alias_maps = hash:/etc/aliases, hash:/etc/postfix-blue/canonical mailbox_command = /usr/bin/procmail -a "$EXTENSION" DEFAULT=$HOME/Mail/ MAILDIR=$HOME/Mail smtpd_banner = $myhostname ESMTP $mail_name authorized_submit_users = multi_instance_name = postfix-blue multi_instance_enable = yes
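    One line in the FBL instance stands out (a hedged observation, not a confirmed diagnosis): main.cf for blue sets authorized_submit_users to an empty value, and an empty list is exactly what makes that instance's sendmail refuse submissions with "User ... is not allowed to submit mail". A sketch of the change and of pointing account5 at the right instance:

        # /etc/postfix-blue/main.cf
        # Either delete the empty "authorized_submit_users =" line (the default permits everyone)
        # or name the accounts that may submit through this instance:
        authorized_submit_users = root, account5

        # Reload just this instance:
        postmulti -i postfix-blue -p reload

        # In account5's script, select the instance when piping to sendmail:
        MAIL_CONFIG=/etc/postfix-blue /usr/sbin/sendmail -oi -t < message.txt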

    Read the article

  • Print directly to CUPS server from non-local clients (Ubuntu 14.04)

    - by OEP
    I set up a CUPS server with a few queues and printing from local clients (the CUPS test page and Samba) seems to work just fine. It seems like the CUPS server is denying non-local clients though: 130.127.48.70 - - [03/Jun/2014:14:29:19 -0400] "POST /printers/m137 HTTP/1.1" 200 390 Validate-Job successful-ok 130.127.48.70 - - [03/Jun/2014:14:29:19 -0400] "POST /printers/m137 HTTP/1.1" 200 339 Create-Job client-error-not-authorized localhost - - [03/Jun/2014:14:40:50 -0400] "POST /printers/m137 HTTP/1.1" 200 410869 Print-Job successful-ok This makes me think I have some sort of host-based restriction in my configuration file, but I can't find it. I've even set my default policy to Allow all only to get the same log message. I'm working from a configuration file which had previously worked on an older version of CUPS, which looks quite similar to the example cupsd.conf. I could be wrong but it looks like that final <Limit All> block ought to allow the actions the logs complain about. MaxLogSize 2000000000 # Log general information in error_log - change "info" to "debug" for # troubleshooting... LogLevel info #AccessLog syslog #ErrorLog syslog #PageLog syslog # Administrator user group... SystemGroup sys root lp # Only listen for connections from the local machine. Listen 0.0.0.0:631 Listen :::631 Listen /var/run/cups/cups.sock ServerName <snipped> # Show shared printers on the local network. Browsing Off BrowseOrder allow,deny # (Change '@LOCAL' to 'ALL' if using directed broadcasts from another subnet.) BrowseAllow @LOCAL # Default authentication type, when authentication is required... DefaultAuthType Basic # Restrict access to the server... <Location /> Order allow,deny Allow all </Location> # Restrict access to the admin pages... <Location /admin> AuthType Default Require user @SYSTEM Encryption Required Order allow,deny Allow all </Location> # Restrict access to configuration files... <Location /admin/conf> AuthType Default Require user @SYSTEM Encryption Required Order allow,deny Allow all </Location> # Set the default printer/job policies... <Policy default> # Job-related operations must be done by the owner or an administrator... <Limit Send-Document Send-URI Hold-Job Release-Job Restart-Job Purge-Jobs Set-Job-Attributes Create-Job-Subscription Renew-Subscription Cancel-Subscription Get-Notifications Reprocess-Job Cancel-Current-Job Suspend-Current-Job Resume-Job CUPS-Move-Job> Require user @OWNER @SYSTEM Order deny,allow </Limit> # All administration operations require an administrator to authenticate... <Limit CUPS-Add-Modify-Printer CUPS-Delete-Printer CUPS-Add-Modify-Class CUPS-Delete-Class CUPS-Set-Default> AuthType Default Require user @SYSTEM Order deny,allow </Limit> # All printer operations require a printer operator to authenticate... <Limit Pause-Printer Resume-Printer Enable-Printer Disable-Printer Pause-Printer-After-Current-Job Hold-New-Jobs Release-Held-New-Jobs Deactivate-Printer Activate-Printer Restart-Printer Shutdown-Printer Startup-Printer Promote-Job Schedule-Job-After CUPS-Accept-Jobs CUPS-Reject-Jobs> AuthType Default Require user @SYSTEM Order deny,allow </Limit> # Only the owner or an administrator can cancel or authenticate a job... <Limit Cancel-Job CUPS-Authenticate-Job> Require user @OWNER @SYSTEM Order deny,allow </Limit> <Limit All> Order allow,deny </Limit> </Policy>
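    For reference, this is the kind of change I plan to test next. It is only a sketch: it assumes the remote Create-Job requests are falling through to the final <Limit All> block of the default policy, and that @LOCAL covers the clients' subnet (the operation names and the Allow syntax are from the stock cupsd.conf):

      <Policy default>
        ...
        # explicitly allow job submission operations from the local network
        <Limit Create-Job Print-Job Validate-Job Send-Document>
          Order allow,deny
          Allow from @LOCAL
        </Limit>
        <Limit All>
          Order allow,deny
          Allow from @LOCAL
        </Limit>
      </Policy>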

    Read the article

  • ASP.NET MVC 3 Hosting :: New Features in ASP.NET MVC 3

    - by mbridge
    Razor View Engine The Razor view engine is a new view engine option for ASP.NET MVC that supports the Razor templating syntax. The Razor syntax is a streamlined approach to HTML templating designed with the goal of being a code-driven, minimalist templating approach that builds on existing C#, VB.NET and HTML knowledge. The result of this approach is that Razor views are very lean and do not contain unnecessary constructs that get in the way of you and your code. ASP.NET MVC 3 Preview 1 only supports C# Razor views, which use the .cshtml file extension. VB.NET support will be enabled in later releases of ASP.NET MVC 3. For more information and examples, see Introducing “Razor” – a new view engine for ASP.NET on Scott Guthrie’s blog. Dynamic View and ViewModel Properties A new dynamic View property is available in views, which provides access to the ViewData object using a simpler syntax. For example, imagine two items are added to the ViewData dictionary in the Index controller action using code like the following: public ActionResult Index() {     ViewData["Title"] = "The Title";     ViewData["Message"] = "Hello World!";     return View(); } Those properties can be accessed in the Index view using code like this: <h2>@View.Title</h2> <p>@View.Message</p> There is also a new dynamic ViewModel property in the Controller class that lets you add items to the ViewData dictionary using a simpler syntax. Using the previous controller example, the two values added to the ViewData dictionary can be rewritten using the following code: public ActionResult Index() {     ViewModel.Title = "The Title";     ViewModel.Message = "Hello World!";     return View(); } “Add View” Dialog Box Supports Multiple View Engines The Add View dialog box in Visual Studio includes extensibility hooks that allow it to support multiple view engines. Service Location and Dependency Injection Support ASP.NET MVC 3 introduces improved support for applying Dependency Injection (DI) via Inversion of Control (IoC) containers. ASP.NET MVC 3 Preview 1 provides the following hooks for locating services and injecting dependencies: - Creating controller factories. - Creating controllers and setting dependencies. - Setting dependencies on view pages for both the Web Form view engine and the Razor view engine (for types that derive from ViewPage, ViewUserControl, ViewMasterPage, WebViewPage). - Setting dependencies on action filters. Using a Dependency Injection container is not required in order for ASP.NET MVC 3 to function properly. Global Filters ASP.NET MVC 3 allows you to register filters that apply globally to all controller action methods. Adding a filter to the global filters collection ensures that the filter runs for all controller requests. To register an action filter globally, you can make the following call in the Application_Start method in the Global.asax file: GlobalFilters.Filters.Add(new MyActionFilter()); The source of global action filters is abstracted by the new IFilterProvider interface, which can be registered manually or by using Dependency Injection. This allows you to provide your own source of action filters and choose at run time whether to apply a filter to an action in a particular request. New JsonValueProviderFactory Class The new JsonValueProviderFactory class allows action methods to receive JSON-encoded data and model-bind it to an action-method parameter. This is useful in scenarios such as client templating. 
Client templates enable you to format and display a single data item or set of data items by using a fragment of HTML. ASP.NET MVC 3 lets you connect client templates easily with an action method that both returns and receives JSON data. Support for .NET Framework 4 Validation Attributes and IValidatableObject The ValidationAttribute class was improved in the .NET Framework 4 to enable richer support for validation. When you write a custom validation attribute, you can use a new IsValid overload that provides a ValidationContext instance. This instance provides information about the current validation context, such as what object is being validated. This change enables scenarios such as validating the current value based on another property of the model. The following example shows a sample custom attribute that ensures that the value of PropertyOne is always larger than the value of PropertyTwo: public class CompareValidationAttribute : ValidationAttribute {     protected override ValidationResult IsValid(object value,              ValidationContext validationContext) {         var model = validationContext.ObjectInstance as SomeModel;         if (model.PropertyOne > model.PropertyTwo) {            return ValidationResult.Success;         }         return new ValidationResult("PropertyOne must be larger than PropertyTwo");     } } Validation in ASP.NET MVC also supports the .NET Framework 4 IValidatableObject interface. This interface allows your model to perform model-level validation, as in the following example: public class SomeModel : IValidatableObject {     public int PropertyOne { get; set; }     public int PropertyTwo { get; set; }     public IEnumerable<ValidationResult> Validate(ValidationContext validationContext) {         if (PropertyOne <= PropertyTwo) {            yield return new ValidationResult(                "PropertyOne must be larger than PropertyTwo");         }     } } New IClientValidatable Interface The new IClientValidatable interface allows the validation framework to discover at run time whether a validator has support for client validation. This interface is designed to be independent of the underlying implementation; therefore, where you implement the interface depends on the validation framework in use. For example, for the default data annotations-based validator, the interface would be applied on the validation attribute. Support for .NET Framework 4 Metadata Attributes ASP.NET MVC 3 now supports .NET Framework 4 metadata attributes such as DisplayAttribute. New IMetadataAware Interface The new IMetadataAware interface allows you to write attributes that simplify how you can contribute to the ModelMetadata creation process. Before this interface was available, you needed to write a custom metadata provider in order to have an attribute provide extra metadata. This interface is consumed by the AssociatedMetadataProvider class, so support for the IMetadataAware interface is automatically inherited by all classes that derive from that class (notably, the DataAnnotationsModelMetadataProvider class). New Action Result Types In ASP.NET MVC 3, the Controller class includes two new action result types and corresponding helper methods. HttpNotFoundResult Action The new HttpNotFoundResult action result is used to indicate that a resource requested by the current URL was not found. The status code is 404. This class derives from HttpStatusCodeResult. 
The Controller class includes an HttpNotFound method that returns an instance of this action result type, as shown in the following example: public ActionResult List(int id) {     if (id < 0) {         return HttpNotFound();     }     return View(); } HttpStatusCodeResult Action The new HttpStatusCodeResult action result is used to set the response status code and description. Permanent Redirect The HttpRedirectResult class has a new Boolean Permanent property that is used to indicate whether a permanent redirect should occur. A permanent redirect uses the HTTP 301 status code. Corresponding to this change, the Controller class now has several methods for performing permanent redirects: - RedirectPermanent - RedirectToRoutePermanent - RedirectToActionPermanent These methods return an instance of HttpRedirectResult with the Permanent property set to true. Breaking Changes The order of execution for exception filters has changed for exception filters that have the same Order value. In ASP.NET MVC 2 and earlier, exception filters on the controller with the same Order as those on an action method were executed before the exception filters on the action method. This would typically be the case when exception filters were applied without a specified Order value. In MVC 3, this order has been reversed so that the most specific exception handler executes first. As in earlier versions, if the Order property is explicitly specified, the filters are run in the specified order. Known Issues When you are editing a Razor view (CSHTML file), the Go To Controller menu item in Visual Studio will not be available, and there are no code snippets.
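The new result types and permanent-redirect helpers compose naturally in a single controller. The following is a minimal sketch (the controller name, action names and messages are illustrative, not taken from the release notes):

    using System.Web.Mvc;

    public class CatalogController : Controller {
        // 404 via the new HttpNotFound helper (returns an HttpNotFoundResult)
        public ActionResult Item(int id) {
            if (id < 0) {
                return HttpNotFound();
            }
            return View();
        }

        // arbitrary status code and description via HttpStatusCodeResult
        public ActionResult Legacy() {
            return new HttpStatusCodeResult(410, "This endpoint has been retired.");
        }

        // HTTP 301 via the new permanent-redirect helpers
        public ActionResult OldItemUrl(int id) {
            return RedirectToActionPermanent("Item", new { id });
        }
    }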

    Read the article

  • UPDATE FOR BI PUBLISHER ENTERPRISE 10.1.3.4.1 MARCH 2010

    - by Tim Dexter
    Latest roll up patch for 10.1.3.4.1 is now out in the wild. Yep, there are bug fixes but the guys have implemented some great enhancements. I'll be covering some of them over the coming weeks, from collapsing bookmarks in your PDFs to better MS AD support to 'true' Excel templates, yes you read that correctly! Patch is available from Oracle's support site. Just search for patch 9546699. Here's the contents and readme, apologies for the big list but at least you can search against it for a particular fix. This patch contains backports of following bugs for BI Publisher Enterprise 10.1.3.4.0 and 10.1.3.4.1. 6193342 - REG:SAMPLE DATA FILE FOR PDF FORM MAPPING IS NOT VALIDATED 6261875 - ERRONEOUS PRECISION VALIDATION ON ONLINE ANALYZER 6439437 - NULL POINTER EXCEPTION WHEN PROCESSING TABLE OF CONTENT 6460974 - BACS EFT PAYMENT INSTRUCTION OUTPUT FILE IS EMPTY 6939721 - BIP: REPORT BUSTING DELIVERY KEY VALUES CANNOT CONTAIN SEVERAL SPECIAL CHARACTER 6996069 - USING XML DB FOR BI REPOSITORY FAILS WITH RESOURCENOTFOUNDEXCEPTION 7207434 - TIMEZONE:SHOULD NOT DO TIMEZONE CONVERSION AGAINST CANONICAL DATE YYYY-MM-DD 7371531 - SUPPORT FOR CSV OUTPUT FOR STRUCTURED XML AND NON SQL DATA SOURCES 7596148 - ER: LDAP FOR MS AD TO SEARCH FROM AD ROOT 7646139 - WEBSERVICES ERROR 7829516 - BIP STANDALONE FAILS TO BURST USING XSL-FO TEMPLATES 8219848 - PDF TEMPLATE REPORT NOT PERFORMING PAGE BREAK 8232116 - PARAMETER VALUE IS PASSED AS NULL,IF IT CONTAINS 'AND' WITHIN THE STRING 8250690 - NOT ABLE TO UPLOAD TEMPLATE VIA BIP API 8288459 - ER: QUERY BUILDER OPTION TO NOT INCLUDE TABLENAME. PREFIX IN SQL 8289600 - REPORT TITLE AND DESCRIPTION CAN'T SUPPORT MULTIPLE LANGUAGES 8327080 - CAN NOT CONFIGURE ORACLE EBUSINESS SUITE SECURITY MODEL WITH ORACLE RAC 8332164 - AN XDO PROPERTY TO ENABLE DEBUG LOGGING 8333289 - WEB SERVICE JOBS FAIL AFTER BIP STARTED UP 8340239 - HTTP NOTIFY IS MISSING IN SCHEDULEREPORTREQUEST 8360933 - UNABLE TO USE LOGGED IN BI USER AS THE WSSECURITY USERNAME IN A VARIABLE FORMAT 8400744 - ADMINISTRATOR USER DOES NOT HAVE FULL ADMINISTRATOR RIGHTS 8402436 - CRASH CAUSED BY UNDETERMINED ATTRKEY ERROR IN MULTI-THREADED 8403779 - IMPOSSIBLE TO CONFIGURE PARAMETER FOR A REPORT 8412259 - PDF, RTF OUTPUT NOT HANDLING THE TABLE BORDER AND CONTENT OVERFLOWS TO NEXT PAGE 8483919 - DYNAMIC DATASOURCE WEBSERVICE SHOULD WORK WITH SERVERSIDE CONNECTIONS 8444382 - ID ATTRIBUTE IN TITLE-PAGE DOES NOT WORK WITH SELECTACTION PROPERTY 8446681 - UI LANGUAGE IS NOT REFLECTED AT THE FIRST LOG IN 8449884 - PUBLICREPORTSERVICE FAILS ON EMAIL DELIVERY USING BIP 10.1.3.4.0D+ - NPE 8454858 - DB: XMLP_ADMIN CAN SEE ALL THE FOLDERS BUT ONLY HAS VIEW PERMISSIONS 8458818 - PDFBOOKBINDER FAILS WITH OUTOFMEMORY ERROR WHEN TRYING TO BIND > 1500 PDFS 8463992 - INCORRECT IMPLEMENTATION OF XLIFF SPECIFICATION 8468777 - BI PUBLISHER QUERY BUILDER NOT LOADING SCHEMA OBJECTS 8477310 - QUERY BUILDER NOT WORK WITH SSL ON STANDALONE OC4J 8506701 - POSITIVE PAY FILE WITH OPTIONS NOT CREATING FILE CHECKS OVER 2500 8506761 - PERFORMANCE: PDFBOOKBINDER CLASS TAKES 4 HOURS TO BIND 4000 PAGES 8535604 - NPE WHEN CLICKING "ANALYZER FOR EXCEL" BUTTON IN ALL_* REPORTS 8536246 - REMOVE-PDF-FIELDS DOES NOT WORK WITH CHECKBOXES WITH OPT ARRAY 8541792 - NULLPOINTER EXCEPTION WHILE USING SFTP PROTOCOL 8554443 - LOGGING TIME STAMP IN 10G: THE HOUR PART IS WRONG 8558007 - UNABLE TO LOGIN BIP WITH UNPRIVILEGED USER WHEN XDB IS USED FOR REPORSITORY STOR 8565758 - NEED TO CONNECT IMPERSONATION TO DATA SOURCE WITH PL/SQL FUNCTION 8567235 - 
EFTPROCESSOR AND XDO DEBUG ENABLED CAUSES ORG.XML.SAX.SAXPARSEEXCEPTION 8572216 - EFTPROCESSOR NOT THREAD SAFE - CAUSING CORRUPTED REPORTS TO BE GENERATED 8575776 - LANDSCAPE REPORT ORIENTAION NOT SELECTED WHEN REPORT IS PRINTED WITH PS 8588330 - XLIFF GENERATING WITH WRONG MAXWIDTH ATTRIBUTE IN SOME TRANS-UNITS 8584446 - EFTGENERATOR DOES NOT USE XSLT SCALABILITY - JAVA.LANG.OUTOFMEMORY EXCEPTION 8594954 - ENG: BIP NOTIFY MESSAGE BECOMES ENGLISH 8599646 - ER:EXTRA SPACE ADDED BELOW IMAGE IN A TABLE CELL OF TEMPLATE IN FIREFOX 8605110 - PDFSIGNATURE API ENCOUNTERS JAVA.LANG.NULLPOINTEREXCEPTION ON PDF WITH WATERMARK 8660915 - BURSTING WITH DATA TEMPLATE NOT WORKING WITH OPTION: VALUE=FALSE 8660920 - ER: EXTRACT XHTML DATA USING XDODTEXE IN XHTML FORMAT 8667150 - PROBLEM WITH 3RD APPLICATION ABOUT PDF GENERATED WITH BI PUBLISHER 8683547 - "CLICK VIEW REPORT BUTTON TO GENERATE THE REPORT" MESSAGE IS DISPLAYED 8713080 - SEARCH" PARAMETER IS NOT SHOWING NON ENGLISH DATA IN INTERNET EXPLORER 8724778 - EXCEL ANALYZER PARAMETERS DO NOT WORK WITH EXCEL 2007 8725450 - UIX 2.3.6.6 UPTAKE FOR 10.1.3.4.1 8728807 - DYNAMIC JDBC DATA SOURCE WITH PRE-PROCESS FUNCTION BASED ON EXISTING DATA SOURCE 8759558 - XDO TEMPLATE SHOWS CURRENCY IN WRONG FORMAT FOR DUNNING 8792894 - EFTPROCESSOR DOES NOT SUPPORT XSL TEMPLATE AS INPUTSTREAM 8793550 - BIP GENERATES CSV REPORTS OUTPUT FORMAT WITH EXTENTION .OUT NOT .CSV IN EMAIL 8819869 - PERIOD CLOSE VALUE SUMMARY REPORT (XML) RUNNING INTO WARNING 8825732 - MY FOLDERS LINK BROKEN WITH USER NAME THAT INCLUDES A SLASH (/) OBIEE SECURITY 8831948 - TRYING TO GENERATE A SCATTER PLOT USING THE CHART WIZARD 8842299 - SEEDED QUERY ALWAYS RETURNS RESULTS BASED ON FIRST COLUMN 8858027 - NODE.GETTEXTCONTEXT() NOT AVAILABLE IN 10G UNDER OC4J 8859957 - REPORT TITLE ALIGNMENT GOES BAD FOR REPORTS WITH XLIFF FILE ATTACHED 8860957 - ER: IMPROVE PERFORMANCE OF ANSWERS PARAMETERS 8891537 - GETREPORTPARAMETERS WEB SERVICE API ISSUES WITH OAAM REPORTS 8891558 - GETTING SQLEXCEPTION IN GENERATEREPORT WEB SERVICE API ON OAAM REPORTS 8927796 - ER: DYANAMIC DATA SOURCE SUPPORT BY DATA SOURCE NAME 8969898 - BI PUBLISHER WEB SERVICE GETREPORTPARAMETERS DOES NOT TRANSLATE PARAMETER LABEL 8998967 - MULTIPLE XSL PREDICATES ELEMENT[A='A'] [B='B'] CAUSES XML-22019 ERROR 9012511 - SCALABLE MODE IS NOT WORKING IN XMLPUBLISHER 10.1.3.4 9016976 - ER: PRINT XSL-T AND FOPROCESSING TIMING INFORMATION 9018580 - WEB SERVICE CALL FAILS WHEN REPORT INCLUDES SEARCH TYPE 9018657 - JOB FAILS WHEN LOV QUERY CONTAINS BIND VARIABLES :XDO_USER_UI_LOCALE 9021224 - PERFORMANCE ISSUE TO VIEW DASHBOARD PAGE WITH BIP REPORT LINKS 9022440 - ER: SUPPORT "COMB OF N CHARACTERS" FEATURE PDF FORM TEXT FIELDS 9026236 - XPATH DOES NOT WORK CORRECTLY IN 10.1.3.4.1 9051652 - FILE EXTENSION OF CSV OUTPUT IS TXT WHEN IT IS EXPORTED FROM REPORT VIEWER 9053770 - WHEN SENDING CSV REPORT OUTPUT BY EMAIL SOMETIMES IT IS SENT WITHOUT EXTENSION 9066483 - PDFBOOKBINDER LEAVE SOME TEMPORARY FILES AFTER MERGING TITLE PAGE OR TOC 9102420 - USE RELATIVE PATHS IN HYPERLINKS 9127185 - CHECKBOX NOT WORK ON SUB TEMPLATE 9149679 - BASE URL IS NOT PASSED CORRECTLY 9149691 - PROVIDE A WAY TO DISABLE THE ABILITY TO CREATE SCHEDULED REPORT JOB "PUBLIC" 9167822 - NOTIFICATION URL BREAKS ON FOLDER NAMES WITH SPACES 9167913 - CHARTS ARE MISSING IN PDF OUTPUTS WHEN THE DEFAULT OUTPUT FORMAT IS NOT A PDF 9217965 - REPORT HISTORY TAKES LONG TIME TO RENDER THE PAGE 9236674 - BI PUBLISHER PARAMETERS DO NOT CASCADE REFRESH AFTER SECOND PARAMETER 9283933 - OPTION 
TO COLLAPSE PDF OUTPUT BOOKMARKS BY DEFAULT 9287245 - SAVE COMPLETED SCHEDULED REPORTS IN ITS REPORT NAME AND NOT IN A GENERIC NAME 9348862 - ADD FEATURE TO DISABLE THE XSLT1.0-COMPATIBILITY IN RTF TEMPLATE 9355897 - ER: NEED A SAFE DIVIDE FUNCTION 9364169 - UIX 2.3.6.6 PATCH UPTAKE FOR 10.1.3.4.1 9365153 - LEADING WHITESPACE CHARACTERS IN A FIELD TRIMMED WHEN RUN VIEW OR EXPORT TO .CSV 9389039 - LONG TEXT IS NOT WRAPPED PROPERLY IN THE AUTOSHAPE ON RTF TEMPLATE 9475697 - ENH: SUB-TEMPLATE:DYNAMIC VARIABLE WITH PARAMETER VALUE IN CALL-TEMPLATE CLAUSE 9484549 - CHANGE DEFAULT FOR "XSLT1.0-COMPATIBILITY" TO FALSE FOR 10G 9508499 - UNABLE TO READ EXCEL FILE IF MORE THAN 1800 ROWS GENERATED 9546078 - EMAIL DELIVERY INFORMATION SHOULD NOT BE SAVED AND AUTO-FED IN JOB SUBMISSION 9546101 - EXCEPTION OCCURS WHEN SFTP/FTP REMOTE FILENAME DOSE NOT CONTAIN A SLASH '/' 9546117 - SFTP REPORT DELIVERY FAILS WITH NO CLASS DEF FOUND EXCEPTION ON WEBLOGIC 9.2 Following bugs are included in 10.1.3.4.1 and they are only applied to 10.1.3.4.0. 4612604 - FROM EDGE ATTRIBUTE OF HEADER AND FOOTER IS NOT PRESERVED 6621006 - PARAMNAMEVALUE ELEMENT DEFINITION SHOULD HAVE PARAMETER TYPE 6811967 - DATE PARAMETER NOT HANDLING DATE OFFSET WHEN PASSED UPPERCASE Z FOR OFFSET 6864451 - WHEN BIP REPORTS TIMEOUT, THE PROCESS TO LOG BACK IN IS NOT USER FRIENDLY 6869887 - FUSION CURRENCY BRD:4.1.4/4.1.6 OVERRIDINDG MASK /W XSLT._XDOCURMASKS /W SYMBOL 6959078 - "TEXT FIELD CONTAINS COMMA-SEPARATED VALUES" DOESN'T WORK IN CASE OF STRING 6994647 - GETTING ERROR MESSAGE SAYING JOB FAILED EVEN THOUGH WORKS OK IN BI PUBLISHER 7133143 - ENABLE USER TO ENTER 'TODAY' AS VALUE TO DATE PARAMETER IN SCHEDULE REPORT UI 7165117 - QA_BIP_FUNC:-CLOSED LIFE TIME REPORT ERROR MESSAGE IN CMD 7167068 - LEADER-LENGTH OR RULE-THICKNESS PROPRTY IS TOO LARGE 7219517 - NEED EXTENSION FUNCTIONS TO URL ENCODE TEXT STRING. 7269228 - TEMPLATEHELPER PRODUCES A GARBLED OUTPUT WHEN INVOKED BY MULTIPLE THREADS 7276813 - GETREPORTPARAMETERSRETURN ELEMENT SHOULD HAVE DEFAULT VALUE 7279046 - SCHDEULER:UNABLE TO DELETE A JOB USING API 7280336 - ER: BI PUBLISHER - SITEMINDER SUPPORT - GENERIC NON-ORACLE SSO SUPPORT 7281468 - MODIFY SQL SERVER PROPERTIES TO USE HYP DATA DIRECT IN JDBCDEFAULTS.XML 7281495 - PLEASE ADD SUPPORTED DBS TO JDBCDEFAULT.XML AND LIST EACH DB VERSION SEPARATELY 7282456 - FUSION CURRENCY BRD 4.1.9.2: CURRENCY AMOUTS SHOULD NOT BE WRAPPED. 
MINUS SIGN 7282507 - FUSION CURRENCY BRD4.1.2.5:DISPLAY CURRENCY AND LOCALE DERIVED CURRENCY SYMBOL 7284780 - FUSION CURRENCY BRD 4.1.12.4 CORRECTLY ALIGN NEGATIVE CURRENCY AMOUNTS 7306874 - OPP ERROR - JAVA.LANG.OUTOFMEMORYERROR: ZIP002:OUTOFMEMORYERROR, MEM_ERROR 7309596 - SIEBELCRM: BIP ENHANCEMENT REQUEST FOR SIEBEL PARAMETERIZATION 7337173 - UI LOCALE IS ALWAYS REWRITTEN TO EN WHEN MOVE FROM DASHBOARD 7338349 - REG:ANALYZER REPORT WITH AVERAGE FUNCTION FAIL TO RUN FOR NON INTERACTIVE FORMAT 7343757 - OUTPUT FORMAT OF TEMPLATES IS NOT SAVING 7345989 - SET XDK REPLACEILLEGALCHARS AND ENHANCE XSLTWRAPPER WARNING 7354775 - UNEXPECTED BEHAVIOR OF LAYOUT TEMPLATE PARAMETER OF RUNREPORT WEBSERVICES API 7354798 - SEQUENCE ORDER OF PARAMETERS FOR THE RUNREPORT WEBSERVICES API 7358973 - PARALLEL SFTP DELIVERY FAILS DUE TO SSHEXCEPTION: CORRUPT MAC ON INPUT 7370110 - REGN:FAIL WHEN USE JNDI TO XMLDB REPORT REPOSITORY 7375859 - NEW WEBSERVICE REQUIRED FOR RUNREPORT 7375892 - REQUIRE NEW WEBSERVICE TO CHECK IF REPORTFOLDER EXISTS 7377686 - TEXT-ALIGN NOT APPLIED IN PDF IN HEBREW LOCALE 7413722 - RUNREPORT API DOES NOT PASS BACK ANY GENERATED EXCEPTIONS TO SCHEDULEREPORT 7435420 - FUSION CURRENCY: SUPPORT MICROSOFT(JAVA) FORMAT MASK WITH CURRENCY 7441486 - ER: ADD PARAMETER FOR SFTP TO BURSTING QUERY 7458169 - SSO WITH OID LDAP COULD NOT FETCH OID ROLES 7461161 - EMAIL DELIVERY FAILS - DELIVERYEXCEPTION: 0 BYTE AVAILABLE IN THE GIVEN INPU 7580715 - INCORRECT FORMATTING OF DATES IN TIMEZONE GMT+13 7582694 - INVALID MAXWIDTH VALUE CAUSES NLS FAILURES 7583693 - JAVA.LANG.NULLPOINTEREXCEPTION RAISED WHEN GENERATING HRMS BENEFITS PDF REPORT 7587998 - NEWLY CREATED USERS IN OID CANT ACCESS REPORTS UNTILL BI PUBLISHER IS RESTARTED 7588317 - TABLE OF CONTENT ALWAYS IN THE SAME FONT 7590084 - REMOVING THE BIP ENTERPRISE BANNER BUT KEEPING THE REPORTS & SCHEDULES TAB 7590112 - SOMEONE NOT PRIVILEGED ACCESS BIP DIRECTLY SHOULD GET A CUSTOM PAGE 7590125 - AUTOMATING CREATION OF USERS AND ROLES 7597902 - TIMEZONE SUPPORT IN RUNREPORT WEBSERVICE API 7599031 - XML PUBLISHER SUM(CURRENT-GROUP()) FAILS 7609178 - ISSUE WITH TAGS EXTRACTED FROM RTF TEMPLATE 7613024 - HEADER/FOOTER SETTINGS OF RTF TEMPLATE ARE NOT RETAINING IN THE RTF OUTPUT 7623988 - ADD XSLT FUNCTION TO PRINT XDO PROPERTIES 7625975 - RETRIEVING PARAMETER LOV FROM RTF TEMPLATE 7629445 - SPELL OUT A NUMBER INTO WORDS 7641827 - ANALYTICS FROZED AFTER PAGE TAB WHICH INCLUDES [BI PUBLISHER REPORT] WERE CLICKE 7645504 - BIP REPORT FROZED AFTER THE SAME DASHBOARD BIP REPORTS WERE CLICKED SIMULTANEOUS 7649561 - RECEIVE 'TO MANY OPEN FILE HANDLES' ERROR CAUSING BI TO CRASH 7654155 - BIP REMOVES THE FIRST FILE SEPARATOR WHEN RE-ENTER REPOSITORY LOCATION IN ADMIN 7656834 - NEED AN OPTION TO NOT APPEND SCHEMA NAME IN GENERATED QUERY 7660292 - ER: XDOPARSER UPGRADE TO XDK 11G 7687862 - BIP DATA EXTRACTING ENHANCEMENT FOR SIEBEL BIP INTEGRATION 7694875 - ADMINISTRATOR IS SUPER USER WHETHER CONFIGURED MANDATORY_USER_ROLE OR NOT 7697592 - BI PUBLISHER STRINGINDEXOUTOFBOUNDSEXCEPTION WHEN PRINTING LABEL FROM SIM 7702372 - ARABIC/ENGLISH NUMBER/DATE PROBLEM, TOTAL PAGE NUMBER NOT RENDERED IN ENGLISH 7707987 - OUTOFMEMORY BURSTING A BI PUBLISHER REPORT BI SERVER DATA SOURCE 7712026 - ER: CHANGE CHART OUTPUT FORMAT TO PNG IN HTML OUTPUT 7833732 - THE 'SEARCH' PARAMETER TYPE CANNOT BE USED IN IE6 UNDER WINDOWS 8214839 - ER: INCREASE COLUMN SIZE IN SCHEDULER TABLE XMLP_SCHED_JOB 8218271 - ISSUES WHILE CONVERTING EXCEL TO XML 8218452 - BI PUBLISHER STANDALONE : GRAPHICS 
WITHOUT COLORS IF MORE THAN 33 PAGES 8250980 - USER WITH XMLP_ADMIN RESPONSIBILITY IS NOT ABLE TO EDIT REPORT IN BIP 8262410 - IMPOSSIBLE TO PRINT PDF CREATED BY BI PUBLISHER VIA 3RD PARTY PDF APPLICATION 8274369 - QA: CANNOT DELETE EMAIL SERVER UNDER DELIVERY CONFIGURATION 8284173 - FO:VISIBILITY="HIDDEN" DOESN'T WORK WITH FO:PAGE-NUMBER-CITATION 8288421 - THE VALUE OF VIEW BY GO BACK TO MY HISTORY IN SCHEDULES TAB 8299212 - REG: THE SPECIFICAL BI USER DIDN'T GET THE CORRECT REPORT HISTORY 8301767 - ORA-01795 ERROR OCCURED AFTER ACCESSING DASHBOARD PAGE WHICH INCLUDES BIP 8304944 - ADD SIEBEL SECURITY MODEL IN BI PUBLISHER 10.1.3.4.1 8312814 - QA:HOT:OBI SERVER JDBC DRIVER BIJDBC14.JAR IN XMLPSERVER.WAR IS INCORRECT 8323679 - BI PUBLISHER SENDS HTML REPORT TO OUTLOOK CLIENT AS ATTACHMENT NOT INLINE 8370794 - HISTORY OF COMPLETED SCHEDULER JOBS STILL SHOW ONE AS RUNNING ON CLUSTER ENV 8390970 - OUT OF MEMORY EXCEPTION RAISED, WHILE SAVING THE DATA 8393681 - CHECKBOX IS SHOWING UP AS CHECKED WHEN DATA IS NOT CHECKED VALUE 8725450 - UIX 2.3.6.6 UPTAKE FOR 10.1.3.4.1 UIX fixes: 6866363 - SUPPORT FOR JAVA DATE FORMAT AS PER JDK 1.4 AND ABOVE 6829124 - DATE PARAMETER NOT HANDLING DATE OFFSET AS PER JAVA STANDARDS ---------------------------- INSTALLATION FOR ENTERPRISE ---------------------------- Upgrade from 10.1.3.4.0d (patch 8284524, 8398280) and 10.1.3.4.1 does not require step 8 and step 9. 1 - Make a backup copy of the xmlp-server-config.xml file located in <application installation>/WEB-INF/ directory, where your application server unpacked the BI Publisher war or ear file. Example: In an Oracle AS/OC4J 10.1.3 deployment, the location is <ORACLE_HOME>/j2ee/home/applications/xmlpserver/xmlpserver/WEB-INF/xmlp-server-config.xml 2 - Back up all the directories under the BI Publisher repository (for example: {Oracle_Home}/xmlp/XMLP). 3 - If you are using Scheduling, back up your existing BI Publisher Scheduler schema. 4 - Shut down BI Publisher. 5 - Undeploy the BI Publisher application ("xmlpserver") from your J2EE application server. See your application server documentation for instructions how to undeploy an application. 6 - Deploy the 10.1.3.4 xmlpserver.ear or xmlpserver.war to your application server. See "Manually Installing BI Publisher to Your J2EE Application Server" secition of BI Publisher Installation Guide for guidelines for your application server type. 7 - Copy the saved backup copy of the xmlp-server-config.xml file from step 1 to the newly created BI Publisher <application installation>/WEB-INF/ directory, where your application server unpacked the BI Publisher war or ear file. Example: In an Oracle AS/OC4J 10.1.3 deployment, the location is <ORACLE_HOME>/j2ee/home/applications/xmlpserver/xmlpserver/WEB-INF/xmlp-server-config.xml 8 - Copy ssodefaults.xml to the following directory. And replace [host]:[port] with your server's information. Default values for other properties can be updated depending on your configuration. <Existing Repository>\XMLP\Admin\Security 9 - Copy database-config.xml to the following directory. <Existing Repository>\XMLP\Admin\Scheduler 10 - Restart xmlpserver application or Application Server ---------------------------------- IBM WEBSPHERE 6.1 DEPLOYMENT NOTE ---------------------------------- When users fail to log on to BI Publisher with "HTTP 500 Internal Server Error" on WebSphere 6.1, you must change Class Loader configuration to avoid the error. 
(bug7506253 - XMLPSERVER WON'T START AFTER DEPLOYMENT TO WEBSPHERE 6.1) SystemErr.log: java.lang.VerifyError: class loading constraint violated (class: oracle/xml/parser/v2/XMLNode method: xdkSetQxName(Loracle/xml/util/QxName;)V) at pc: 0 .... Class Loader Configuration Steps: 1 - Login to WebSphere Admin console. Click Enterprise Applications under Applications menu 2 - Click xmlpserver application name from the list 3 - Select "Class loading and update detection" 4 - Update class loader configuration as follows in Class Loader -> General Properties * Polling interval for updated files: [0] Seconds * Class loader order: [x] Classes loaded with application class loader first * WAR class loader policy: [x] Single class loader for application 5 - Apply this change and save the new configuration. 6 - Restart xmlpserver application Please refer to WebSphere 6.1 documentation for more details. "http://publib.boulder.ibm.com/infocenter/wasinfo/v6r1/index.jsp?topic=/com.ibm.websphere.base.doc/info/aes/ae/trun_classload_entapp.html"> http://publib.boulder.ibm.com/infocenter/wasinfo/v6r1/index.jsp?topic=/com.ibm.websphere.base.doc/info/aes/ae/trun_classload_entapp.html ------------------------------------------------------- Oracle WebLogic Server 11g R1 (10.3.1) Deployment NOTE ------------------------------------------------------- If you are deploying BI Publisher to WebLogic Server 10.3.1, you must add the following setting at startup for the domain that contains the BI Publisher server in the /weblogic_home/user_projects/domains/base_domain/bin/startWebLogic.sh script : -Dtoplink.xml.platform=oracle.toplink.platform.xml.jaxp.JAXPPlatform This setting is required to enable BI Publisher to find the TopLink JAR files to create the Scheduler tables.
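The backup stage of the enterprise upgrade (steps 1 to 3 above) is plain file copying. A rough shell sketch for an Oracle AS/OC4J deployment might look like the following; the source paths are the defaults quoted above, while the backup directory and the scheduler schema owner are made-up examples, so adjust them to your installation:

    # 1. back up the BI Publisher server configuration file
    cp $ORACLE_HOME/j2ee/home/applications/xmlpserver/xmlpserver/WEB-INF/xmlp-server-config.xml ~/bip_backup/

    # 2. back up the repository directories
    tar czf ~/bip_backup/xmlp_repository_$(date +%Y%m%d).tar.gz $ORACLE_HOME/xmlp/XMLP

    # 3. export the Scheduler schema (example schema owner "bip_sched")
    expdp bip_sched/password schemas=BIP_SCHED dumpfile=bip_scheduler.dmp logfile=bip_scheduler.log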

    Read the article

  • How To Activate Your Free Office 2007 to 2010 Tech Guarantee Upgrade

    - by Matthew Guay
    Have you purchased Office 2007 since March 5th, 2010?  If so, here’s how you can activate and download your free upgrade to Office 2010! Microsoft Office 2010 has just been released, and today you can purchase upgrades from most retail stores or directly from Microsoft via download.  But if you’ve purchased a new copy of Office 2007 or a new computer that came with Office 2007 since March 5th, 2010, then you’re entitled to an absolutely free upgrade to Office 2010.  You’ll need to enter information about your Office 2007 and then download the upgrade, so we’ll step you through the process. Getting Started First, if you’ve recently purchased Office 2007 but haven’t installed it, you’ll need to go ahead and install it before you can get your free Office 2010 upgrade.  Install it as normal.   Once Office 2007 is installed, run any of the Office programs.  You’ll be prompted to activate Office.  Make sure you’re connected to the internet, and then click Next to activate. Get your Free Upgrade to Office 2010 Now you’re ready to download your upgrade to Office 2010.  Head to the Office Tech Guarantee site (link below), and click Upgrade now. You’ll need to enter some information about your Office 2007.  Check that you purchased your copy of Office 2007 after March 5th, select your computer manufacturer, and check that you agree to the terms. Now you’re going to need the Product ID number from Office 2007.  To find this, open Word or any other Office 2007 application.  Click the Office Orb, and select Options on the bottom. Select the Resources button on the left, and then click About. Near the bottom of this dialog, you’ll see your Product ID.  This should be a number like: 12345-123-1234567-12345   Go back to the Office Tech Guarantee signup page in your browser, and enter this Product ID.  Select the language of your edition of Office 2007, enter the verification code, and then click Submit. It may take a few moments to validate your Product ID. When it is finished, you’ll be taken to an order page that shows the edition of Office 2010 you’re eligible to receive.  The upgrade download is free, but if you’d like to purchase a backup DVD of Office 2010, you can add it to your order for $13.99.  Otherwise, simply click Continue to accept. Do note that the edition of Office 2010 you receive may be different than the edition of Office 2007 you purchased, as the number of editions has been streamlined in the Office 2010 release.  Here’s a chart you can check to see what edition you’ll receive.  Note that you’ll still be allowed to install Office on the same number of computers; for example, Office 2007 Home and Student allows you to install it on up to 3 computers in the same house, and your Office 2010 upgrade will allow the same.
    Office 2007 Edition -> Office 2010 Upgrade You’ll Receive
    Office 2007 Home and Student -> Office Home and Student 2010
    Office Basic 2007 or Office Standard 2007 -> Office Home and Business 2010
    Office Small Business 2007, Office Professional 2007, or Office Ultimate 2007 -> Office Professional 2010
    Office Professional 2007 Academic or Office Ultimate 2007 Academic -> Office Professional Academic 2010
    Sign in with your Windows Live ID, or create a new one if you don’t already have one. Enter your name, select your country, and click Create My Account.  Note that Office will send Office 2010 tips to your email address; if you don’t wish to receive them, you can unsubscribe from the emails later.   Finally, you’re ready to download Office 2010!  
Click the Download Now link to start downloading Office 2010.  Your Product Key will appear directly above the Download link, so you can copy it and then paste it in the installer when your download is finished.  You will additionally receive an email with the download links and product key, so if your download fails you can always restart it from that link. If your edition of Office 2007 included the Office Business Contact Manager, you will be able to download it from the second Download link.  And, of course, even if you didn’t order a backup DVD, you can always burn the installers to a DVD for a backup.   Install Office 2010 Once you’re finished downloading Office 2010, run the installer to get it installed on your computer.  Enter your Product Key from the Tech Guarantee website as above, and click Continue. Accept the license agreement, and then click Upgrade to upgrade to the latest version of Office.   The installer will remove all of your Office 2007 applications, and then install their 2010 counterparts.  If you wish to keep some of your Office 2007 applications instead, click Customize and then select to either keep all previous versions or simply keep specific applications. By default, Office 2010 will try to activate online automatically.  If it doesn’t activate during the install, you’ll need to activate it when you first run any of the Office 2010 apps.   Conclusion The Tech Guarantee makes it easy to get the latest version of Office if you recently purchased Office 2007.  The Tech Guarantee program is open through the end of September, so make sure to grab your upgrade during this time.  Actually, if you find a great deal on Office 2007 from a major retailer between now and then, you could also take advantage of this program to get Office 2010 cheaper. And if you need help getting started with Office 2010, check out our articles that can help you get situated in your new version of Office! Link: Activate and Download Your free Office 2010 Tech Guarantee Upgrade

    Read the article

  • Oracle Data Mining a Star Schema: Telco Churn Case Study

    - by charlie.berger
    There is a complete and detailed Telco Churn case study "How to" Blog Series just posted by Ari Mozes, ODM Dev. Manager.  In it, Ari provides detailed guidance in how to leverage various strengths of Oracle Data Mining including the ability to: mine Star Schemas and join tables and views together to obtain a complete 360 degree view of a customer combine transactional data e.g. call record detail (CDR) data, etc. define complex data transformation, model build and model deploy analytical methodologies inside the Database  His blog is posted in a multi-part series.  Below are some opening excerpts for the first 3 blog entries.  This is an excellent resource for any novice to skilled data miner who wants to gain competitive advantage by mining their data inside the Oracle Database.  Many thanks Ari! Mining a Star Schema: Telco Churn Case Study (1 of 3) One of the strengths of Oracle Data Mining is the ability to mine star schemas with minimal effort.  Star schemas are commonly used in relational databases, and they often contain rich data with interesting patterns.  While dimension tables may contain interesting demographics, fact tables will often contain user behavior, such as phone usage or purchase patterns.  Both of these aspects - demographics and usage patterns - can provide insight into behavior.Churn is a critical problem in the telecommunications industry, and companies go to great lengths to reduce the churn of their customer base.  One case study1 describes a telecommunications scenario involving understanding, and identification of, churn, where the underlying data is present in a star schema.  That case study is a good example for demonstrating just how natural it is for Oracle Data Mining to analyze a star schema, so it will be used as the basis for this series of posts...... Mining a Star Schema: Telco Churn Case Study (2 of 3) This post will follow the transformation steps as described in the case study, but will use Oracle SQL as the means for preparing data.  Please see the previous post for background material, including links to the case study and to scripts that can be used to replicate the stages in these posts.1) Handling missing values for call data recordsThe CDR_T table records the number of phone minutes used by a customer per month and per call type (tariff).  For example, the table may contain one record corresponding to the number of peak (call type) minutes in January for a specific customer, and another record associated with international calls in March for the same customer.  This table is likely to be fairly dense (most type-month combinations for a given customer will be present) due to the coarse level of aggregation, but there may be some missing values.  Missing entries may occur for a number of reasons: the customer made no calls of a particular type in a particular month, the customer switched providers during the timeframe, or perhaps there is a data entry problem.  In the first situation, the correct interpretation of a missing entry would be to assume that the number of minutes for the type-month combination is zero.  In the other situations, it is not appropriate to assume zero, but rather derive some representative value to replace the missing entries.  The referenced case study takes the latter approach.  
The data is segmented by customer and call type, and within a given customer-call type combination, an average number of minutes is computed and used as a replacement value. In SQL, we need to generate additional rows for the missing entries and populate those rows with appropriate values.  To generate the missing rows, Oracle's partition outer join feature is a perfect fit.
select cust_id, cdre.tariff, cdre.month, mins
  from cdr_t cdr partition by (cust_id)
       right outer join
       (select distinct tariff, month from cdr_t) cdre
       on (cdr.month = cdre.month and cdr.tariff = cdre.tariff);
....... Mining a Star Schema: Telco Churn Case Study (3 of 3) Now that the "difficult" work is complete - preparing the data - we can move to building a predictive model to help identify and understand churn. The case study suggests that separate models be built for different customer segments (high, medium, low, and very low value customer groups).  To reduce the data to a single segment, a filter can be applied:
create or replace view churn_data_high as
select * from churn_prep where value_band = 'HIGH';
It is simple to take a quick look at the predictive aspects of the data on a univariate basis.  While this does not capture the more complex multi-variate effects as would occur with the full-blown data mining algorithms, it can give a quick feel as to the predictive aspects of the data as well as validate the data preparation steps.  Oracle Data Mining includes a predictive analytics package which enables quick analysis.
begin
  dbms_predictive_analytics.explain(
    'churn_data_high','churn_m6','expl_churn_tab');
end;
/
select * from expl_churn_tab where rank <= 5 order by rank;
ATTRIBUTE_NAME       ATTRIBUTE_SUBNAME EXPLANATORY_VALUE RANK
-------------------- ----------------- ----------------- ----------
LOS_BAND                                      .069167052          1
MINS_PER_TARIFF_MON  PEAK-5                   .034881648          2
REV_PER_MON          REV-5                    .034527798          3
DROPPED_CALLS                                 .028110322          4
MINS_PER_TARIFF_MON  PEAK-4                   .024698149          5
From the above results, it is clear that some predictors do contain information to help identify churn (explanatory value > 0).  The strongest uni-variate predictor of churn appears to be the customer's (binned) length of service.  The second strongest churn indicator appears to be the number of peak minutes used in the most recent month.  The subname column contains the interior piece of the DM_NESTED_NUMERICALS column described in the previous post.  By using the object relational approach, many related predictors are included within a single top-level column. .....   NOTE:  These are just EXCERPTS.  Click here to start reading the Oracle Data Mining a Star Schema: Telco Churn Case Study from the beginning.    
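Going from the univariate EXPLAIN check to an actual classification model stays inside the database as well. The following is a rough sketch only: it assumes the CHURN_DATA_HIGH view and its CUST_ID and CHURN_M6 columns from the excerpt above, while the model name, settings table and algorithm choice are illustrative rather than taken from the case study:

    -- settings table selecting an algorithm (Naive Bayes here, purely as an example)
    create table churn_settings (setting_name varchar2(30), setting_value varchar2(4000));
    insert into churn_settings values ('ALGO_NAME', 'ALGO_NAIVE_BAYES');
    commit;

    begin
      dbms_data_mining.create_model(
        model_name          => 'CHURN_HIGH_MODEL',
        mining_function     => dbms_data_mining.classification,
        data_table_name     => 'CHURN_DATA_HIGH',
        case_id_column_name => 'CUST_ID',
        target_column_name  => 'CHURN_M6',
        settings_table_name => 'CHURN_SETTINGS');
    end;
    /

    -- apply the model back to the view to eyeball some predictions
    select cust_id, prediction(churn_high_model using *) as churn_pred
      from churn_data_high
     where rownum <= 10;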

    Read the article

  • SQLAuthority News – Memories at Anniversary of SQL Wait Stats Book

    - by pinaldave
    SQL Wait Stats About a year ago, I had very proud moment. I had published my second book SQL Server Wait Stats with me as a primary author. It has been a long journey since then. The book got great response and it was widely accepted in the community. It was first of its kind of book written specifically on Wait Stats and Performance. The book was based on my earlier month long series written on the same subject SQL Server Wait Stats. Today, on the anniversary of the book, lots of things come to my mind let me share a few here. Idea behind Blog Series A very common question I often receive is why I wrote a 30 day series on Wait Stats. There were two reasons for it. 1) I have been working with SQL Server for a long time and have troubleshoot more than hundreds of SQL Server which are related to performance tuning. It was a great experience and it taught me a lot of new things. I always documented my experience. After a while I found that I was able to completely rely on my own notes when I was troubleshooting any servers. It is right then I decided to document my experience for the community. 2) While working with wait stats there were a few things, which I thought I knew it well as they were working. However, there was always a fear in the back of mind that what happens if what I believed was incorrect and I was on the wrong path all the time. There was only one way to get it validated. Put it out in front community with my understanding and request further help to improve my understanding. It worked, it worked beautifully. I received plenty of conversations, emails and comments. I refined my content based on various conversations and make it more relevant and near accurate. I guess above two are the major reasons for beginning my journey on writing Wait Stats blog series. Idea behind Book After writing a blog series there was a good amount of request I keep on receiving that I should convert it to eBook or proper book as reading blog posts is great but it goes not give a comprehensive understanding of the subject. The very common feedback from users who were beginning the subject that they will prefer to read it in a structured method. After hearing the feedback for more than 4 months, I decided to write a book based on the blog posts. When I envisioned book, I wanted to make sure this book addresses the wait stats concepts from the fundamentals and fill the gaps of blogs I wrote earlier. Rick Morelan and Joes 2 Pros Team I must acknowledge my co-author Rick Morelan for his unconditional support in writing this book. I had already authored one book before I published this book. The experience to write the book was out of the world. Writing blog posts are much much easier than writing books. The efforts it takes to write a book is 100 times more even though the content is ready. I could have not done it myself if there was not tremendous support of my co-author and editor’s team. We spend days and days researching and discussing various concepts covered in the book. When we were in doubt we reached out to experts as well did a practical reproduction of the scenarios to validate the concepts and claims. After continuous 3 months of hard work we were able to get this book out in the community. September 1st – the lucky day Well, we had to select any day to publish the books. When book was completed in August last week we felt very glad. We all had worked hard and having a sample draft book in hand was feeling like having a newborn baby in our hand. 
Every time my books are published I feel the same joy which I had when my daughter was born. The feeling of holding a new book in hand is the (almost) same feeling as holding newborn baby. I am excited. For me September 1st has been the luckiest day in mind life. My daughter Shaivi was born on September 1st. Since then every September first has been excellent day and have taken me to the next step in life. I believe anything and everything I do on September 1st it is turning out to be successful and blessed. Rick and I had finished a book in the last week of August. We sent it to the publisher (printer) and asked him to take the book live as soon as possible. We did not decide on any date as we wanted the book to get out as fast as it can. Interesting enough, the publisher/printer selected September 1st for publishing the book. He published the book on 1st September and I knew it at the same time that this book will go next level. Book Model – The Most Beautiful Girl We were done with book. We had no budget left for marketing. Rick and I had a long conversation regarding how to spread the words for the book so it can reach to many people. While we were talking about marketing Rick come up with the idea that we should hire a most beautiful girl around who acknowledge our book and genuinely care for book. It was a difficult task and Rick asked me to find a more beautiful girl. I am a father and the most beautiful girl for me my daughter. This was not a difficult task for me. Rick had given me task to find the most beautiful girl and I just could not think of anyone else than my own daughter. I still do not know what Rick thought about this idea but I had already made up my mind. You can see the detailed blog post here. The Fun Experiments Book Signing Event We had lots of fun moments along this book. We have given away more books to people for free than we have sold them actually. We had done book signing events, contests, and just plain give away when we found people can be benefited from this book. There was never an intention to make money and get rich. We just wanted that more and more people know about this new concept and learn from it. Today when I look back to the earnings there is nothing much we have earned if you talk about dollars. However the best reward which we have received is the satisfaction and love of community. The amount of emails, conversations we have so far received for this book is over thousands. We had fun writing this book, it was indeed a very satisfying journey. I have earned lots of friends while learning and exploring. Availability The book is one year old but still very relevant when it is about performance tuning. It is available at various online book stores. If you have read the book, do let me know what you think of it. Amazon | Kindle | Flipkart | Indiaplaza Reference:  Pinal Dave (http://blog.SQLAuthority.com) Filed under: About Me, Joes 2 Pros, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority, SQLAuthority Book Review, T SQL, Technology

    Read the article

  • Quick guide to Oracle IRM 11g: Creating your first sealed document

    - by Simon Thorpe
    Quick guide to Oracle IRM 11g indexThe previous articles in this guide have detailed how to install, configure and secure your Oracle IRM 11g service. This article walks you through the process of now creating your first context and securing a document against it. I should mention that it would be worth reviewing the following to ensure your installation is ready for that all important first document. Ensure you have correctly configured the keystore for the IRM wrapper keys. If this is not correctly configured, creating the context below will fail. Make sure the IRM server URL correctly resolves and uses the right protocol (HTTP or HTTPS) ContentsCreate the first contextInstall the Oracle IRM Desktop Seal your first document Create the first contextIn Oracle 11g there is a built in classification and rights system called the "standard rights model" which is based on 10 years of customer use cases and innovation. It is a system which enables IRM to scale massively whilst retaining the ability to balance security and usability and also separate duties by allowing contacts in the business to own classifications. The final article in this guide goes into detail on this inbuilt classification model, but for the purposes of this current article all we need to do is create at least one context to test our system out.With a new IRM server there are a set of predefined context templates and roles which again are setup in a way which reflects the most common use we've learned from our customers. We will use these out of the box configurations as they are to create the first context against which we will seal some content.First login to your Oracle IRM Management Website located at https://irm.company.com/irm_rights/. Currently the system is only configured to use the built in LDAP for users, so use the only account we have at the moment, which by default is weblogic. Once logged in switch to the Contexts tab. Click on the New Context icon () in the menu bar on the left. In the resulting dialog select the Standard context template and enter in a name for the context. Then just hit finish, the weblogic account will automatically be made the manager. You'll now see your brand new context ready for users to be assigned. Now click on the Assign Role icon () in the menu bar and in the resulting dialog search for your only user account, weblogic, and add to the list on the right. Now select a role for this user. Because we need to create a document with this user we must select contributor, as this is the only role which allows for the ability to seal. Finally hit next and then finish. We now have a context with a user that has the rights to create a document. The next step is to configure the IRM Desktop to get these rights from the server. Install the Oracle IRM Desktop Before we can seal a document we need the client software installed. Oracle IRM has a very small, lightweight client called the Oracle IRM Desktop which can be freely downloaded in 27 languages from here. Double click on the installer and click on next... Next again... And finally on install... Very easy. You may get a warning about closing Outlook, Word or another application and most of the time no reboots are required. Once it is installed you will see the IRM Desktop icon running in your tool tray, bottom right of the desktop. Seal your first document Finally the prize is within reach, creating your first sealed document. 
The server is running, we've got a context ready, a user assigned a role in the context but there is the simple and obvious hoop left to jump through. To seal a document we need to have the users rights cached to the local machine. For this to take place, the IRM Desktop needs to know where the Oracle IRM server is on the network so we can synchronize these rights and then be able to seal a document. The usual way for the IRM Desktop to know about the IRM server is it learns automatically when you open an existing piece of content that someone has sent you... ack. Bit of a chicken or the egg dilemma. The solution is to manually tell the IRM Desktop the location of the IRM Server and then force a synchronization of rights. Right click on the Oracle IRM Desktop icon in the system tray and select Options.... Then switch to the Servers tab in the resulting dialog. There are no servers in the list because you've never opened any content. This list is usually populated automatically but we are going to add a server manually, so click on New.... Into the dialog enter in the full URL to the IRM server. Note that this time you use the path /irm_desktop/ and not /irm_rights/. You can see an example from the image below. Click on the validate button and you'll be asked to authenticate. Enter in your weblogic username and password and also check the Remember my password check box. Click OK and the IRM Desktop will confirm a successful connection to the server. OK all the dialogs and we are ready to Synchronize this users rights to the desktop. Right click once more on the Oracle IRM Desktop icon in the system tray. Now the Synchronize menu option is available. Select this and the IRM Desktop will now talk to the IRM server, authenticate using your weblogic account and get your rights to the context we created. Because this is the first time this users has communicated with the IRM server the IRM Desktop presents a privacy policy dialog. This is a chance for the business to ask users to agree to any policy about the use of IRM before opening secured documents. In our guide we've not bothered to setup this URL so just click on the check box and hit Accept. The IRM Desktop will then talk to the server, get your rights and display a success dialog. Lets protect a documentNow we are ready to seal a piece of content. In my guide i'm going to protect a Microsoft Word document. This mean's I have to have copy of Office installed, in this guide i'm using Microsoft Office 2007. You could also seal a PDF document, you'll need to download and install Adobe Acrobat Reader. A very simple test could be to seal a GIF/JPG/PNG or piece of HTML because this is rendered using Internet Explorer. But as I say, i'm going to protect a Word document. The following example demonstrates choosing a file in Windows Explorer, there are many ways to seal a file and you can watch a few in this video.Open a copy of Windows Explorer and locate the file you wish to seal. Right click on the document and select Seal To -> Context You are now presented with the Select Context dialog. You'll now have a sealed copy of the document sat in the same location. Double click on this document and it will open, again using the credentials you've already provided. That is it, now you just need to add more users, more documents, more classifications and start exploring the different roles and experiment with different offline periods etc. 
You may wish to setup the server against an existing LDAP or Active Directory environment instead of using the built in WebLogic LDAP store. You can read how to use your corporate directory here. But before we finish this guide, there is one more article and arguably the most important article of all. Next I discuss the all important decision making surrounding the actually implementation of Oracle IRM inside your business. Who has rights to what? How do you map contexts to your existing business practices? It is the next article which actually ensures you deploy a successful IRM solution by looking at the business and understanding how they use your sensitive information and then configuring Oracle IRM to reflect their use.

    Read the article

< Previous Page | 133 134 135 136 137 138 139 140 141 142 143 144  | Next Page >