Search Results

Search found 40429 results on 1618 pages for 'change password'.

  • Form Not Submitting

    - by John
    Hello, When I try to click on the "submit" button for the form below, nothing happens. Any ideas why not? Thanks in advance, John submit.php: <?php require_once "header.php"; $u = $_SESSION['username']; if (!isLoggedIn()) { // user is not logged in. if (isset($_POST['cmdlogin'])) { // retrieve the username and password sent from login form & check the login. if (checkLogin($_POST['username'], $_POST['password'])) { show_userbox2(); } else { echo "Incorrect Login information !"; show_loginform(); } } else { show_loginform(); } } else { . show_userbox2(); } echo '<div class="submittitle">Submit an item.</div>'; echo '<form action="http://www...com/.../submit2.php" method="post"> <input type="hidden" value="'.$_SESSION['loginid'].'" name="uid"> <div class="submissiontitle"><label for="title">Story Title:</label></div> <div class="submissionfield"><input name="title" type="title" id="title" maxlength="1000"></div> <div class="urltitle"><label for="url">Link:</label></div> <div class="urlfield"><input name="url" type="url" id="url" maxlength="500"></div> <div class="submissionbutton"><input name="submit" type="submit" value="Submit"></div> </form> '; ?> submit2.php: <?php //if($_SERVER['REQUEST_METHOD'] == "POST"){header('Location: http://www...com/.../submit2.php');} require_once "header.php"; if (isLoggedIn() == true) { $remove_array = array('http://www.', 'http://', 'https://', 'https://www.', 'www.'); $cleanurl = str_replace($remove_array, "", $_POST['url']); $cleanurl = strtolower($cleanurl); $cleanurl = preg_replace('/\/$/','',$cleanurl); $title = $_POST['title']; //$url = $_POST['url']; $uid = $_POST['uid']; $title = mysql_real_escape_string($title); $cleanurl = mysql_real_escape_string($cleanurl); $site1 = 'http://' . $cleanurl; $displayurl = parse_url($site1, PHP_URL_HOST); function isURL($url1 = NULL) { if($url1==NULL) return false; $protocol = '(http://|https://)'; $allowed = '[-a-z0-9]{1,63}'; $regex = "^". $protocol . // must include the protocol '(' . $allowed . '\.)'. // 1 or several sub domains with a max of 63 chars '[a-z]' . '{2,6}'; // followed by a TLD if(eregi($regex, $url1)==true) return true; else return false; } if(isURL($site1)==true) mysql_query("INSERT INTO submission VALUES (NULL, '$uid', '$title', '$cleanurl', '$displayurl', NULL)"); else echo "<p class=\"topicu\">Not a valid URL.</p>\n"; } else { // user is not loggedin show_loginform(); } if (!isLoggedIn()) { // user is not logged in. if (isset($_POST['cmdlogin'])) { // retrieve the username and password sent from login form & check the login. if (checkLogin($_POST['username'], $_POST['password'])) { show_userbox(); } else { echo "Incorrect Login information !"; show_loginform(); } } else { // User is not logged in and has not pressed the login button // so we show him the loginform show_loginform(); } } else { // The user is already loggedin, so we show the userbox. show_userbox(); } require_once "footer.php"; ?>
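
    A likely culprit, for what it's worth: the stray period in the final else branch ("{ . show_userbox2(); }") is a PHP parse error, so submit.php would fail before the form ever renders or posts. A minimal sketch of the corrected branch, assuming the helper functions come from header.php as shown:

        if (!isLoggedIn()) {
            if (isset($_POST['cmdlogin'])) {
                if (checkLogin($_POST['username'], $_POST['password'])) {
                    show_userbox2();
                } else {
                    echo "Incorrect login information!";
                    show_loginform();
                }
            } else {
                show_loginform();
            }
        } else {
            show_userbox2(); // no leading "." here
        }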

  • Having a problem inserting foreign key data into a table using a PHP form

    - by Gideon
    I am a newbie with PHP and MySQL and need help... I have two tables (Staff and Position) in the database. The Staff table (StaffId, Name, PositionID (fk)). The Position table is populated with different positions (Manager, Supervisor, and so on). The two tables are linked with a PositionID foreign key in the Staff table. I have a staff registration form with textfields asking for the relevant attributes and a dynamically populated drop-down list to choose the position. I need to insert the user's entry into the staff table along with the selected position. However, when inserting the data, I get the following error (Cannot add or update a child row: a foreign key constraint fails). How do I insert the position selected by the user into the staff table? Here is some of my code... ... echo "<tr>"; echo "<td>"; echo "*Position:"; echo "</td>"; echo "<td>"; //dynamically populate the staff position drop down list from the position table $position="SELECT PositionId, PositionName FROM Position ORDER BY PositionId"; $exeposition = mysql_query ($position) or die (mysql_error()); echo "<select name=position value=''>Select Position</option>"; while($positionarray=mysql_fetch_array($exeposition)) { echo "<option value=$positionarray[PositionId]>$positionarray[PositionName]</option>"; } echo "</select>"; echo "</td>"; echo "</tr>" //the form is processed with the code below $FirstName = $_POST['firstname']; $LastName = $_POST['lastname']; $Address = $_POST['address']; $City = $_POST['city']; $PostCode = $_POST['postcode']; $Country = $_POST['country']; $Email = $_POST['email']; $Password = $_POST['password']; $ConfirmPass = $_POST['confirmpass']; $Mobile = $_POST['mobile']; $NI = $_POST['nationalinsurance']; $PositionId = $_POST[$positionarray['PositionId']]; //format the dob for the database $dobpart = explode("/", $_POST['dob']); $formatdob = $dobpart[2]."-".$dobpart[1]."-".$dobpart[0]; $DOB = date("Y-m-d", strtotime($formatdob)); $newReg = "INSERT INTO Staff (FirstName, LastName, Address, City, PostCode, Country, Email, Password, Mobile, DOB, NI, PositionId) VALUES ('".$FirstName."', '".$LastName."', '".$Address."', '".$City."', '".$PostCode."', '".$Country."', '".$Email."', '".$Password."', ".$Mobile.", '".$DOB."', '".$NI."', '".$PostionId."')"; Your time and help are much appreciated.
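
    One hedged observation: $_POST[$positionarray['PositionId']] looks up a POST key named after the last row of the drop-down loop rather than the <select name="position"> field, and the INSERT then interpolates the misspelled $PostionId, so the foreign key column receives an empty value and the constraint fires. A sketch of the two corrected lines:

        // Hypothetical fix: read the submitted <select name="position"> value
        // directly and spell the variable consistently in the INSERT.
        $PositionId = mysql_real_escape_string($_POST['position']);

        $newReg = "INSERT INTO Staff (FirstName, LastName, Address, City, PostCode,
                   Country, Email, Password, Mobile, DOB, NI, PositionId)
                   VALUES ('$FirstName', '$LastName', '$Address', '$City', '$PostCode',
                   '$Country', '$Email', '$Password', '$Mobile', '$DOB', '$NI', '$PositionId')";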

  • The type or namespace cannot be found (are you missing a using directive or an assembly reference?)

    - by Kumu
    I get the following error when I try to compile my C# program: The type or namespace name 'Login' could not be found (are you missing a using directive or an assembly reference?) using System; using System.Collections.Generic; using System.ComponentModel; using System.Data; using System.Drawing; using System.Linq; using System.Text; using System.Windows.Forms; namespace FootballLeague { public partial class MainMenu : Form { FootballLeagueDatabase footballLeagueDatabase; Game game; Team team; Login login; //Error here public MainMenu() { InitializeComponent(); changePanel(1); } public MainMenu(FootballLeagueDatabase footballLeagueDatabaseIn) { InitializeComponent(); footballLeagueDatabase = footballLeagueDatabaseIn; } private void Form_Loaded(object sender, EventArgs e) { } private void gameButton_Click(object sender, EventArgs e) { int option = 0; changePanel(option); } private void scoreboardButton_Click(object sender, EventArgs e) { int option = 1; changePanel(option); } private void changePanel(int optionIn) { gamePanel.Hide(); scoreboardPanel.Hide(); string title = "Football League System"; switch (optionIn) { case 0: gamePanel.Show(); this.Text = title + " - Game Menu"; break; case 1: scoreboardPanel.Show(); this.Text = title + " - Display Menu"; break; } } private void logoutButton_Click(object sender, EventArgs e) { login = new Login(); login.Show(); this.Hide(); } Login.cs class: using System; using System.Collections.Generic; using System.ComponentModel; using System.Data; using System.Drawing; using System.Linq; using System.Text; using System.Windows.Forms; namespace FootballLeagueSystem { public partial class Login : Form { MainMenu menu; public Login() { InitializeComponent(); } private void administratorLoginButton_Click(object sender, EventArgs e) { string username1 = "08247739"; string password1 = "08247739"; if ((userNameTxt.Text.Length) == 0) MessageBox.Show("Please enter your username!"); else if ((passwordTxt.Text.Length) == 0) MessageBox.Show("Please enter your password!"); else if (userNameTxt.Text.Equals("") || passwordTxt.Text.Equals("")) MessageBox.Show("Invalid Username or Password!"); else { if (this.userNameTxt.Text == username1 && this.passwordTxt.Text == password1) MessageBox.Show("Welcome Administrator!", "Administrator Login"); menu = new MainMenu(); menu.Show(); this.Hide(); } } private void managerLoginButton_Click(object sender, EventArgs e) { { string username2 = "1111"; string password2 = "1111"; if ((userNameTxt.Text.Length) == 0) MessageBox.Show("Please enter your username!"); else if ((passwordTxt.Text.Length) == 0) MessageBox.Show("Please enter your password!"); else if (userNameTxt.Text.Equals("") && passwordTxt.Text.Equals("")) MessageBox.Show("Invalid Username or Password!"); else { if (this.userNameTxt.Text == username2 && this.passwordTxt.Text == password2) MessageBox.Show("Welcome Manager!", "Manager Login"); menu = new MainMenu(); menu.Show(); this.Hide(); } } } private void cancelButton_Click(object sender, EventArgs e) { this.Close(); } } } Where is the error? What am I doing wrong?
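
    The two files sit in different namespaces: MainMenu is in FootballLeague, while Login is declared in FootballLeagueSystem, so the name Login is simply not visible where the field is declared. A sketch of the two equivalent fixes (note the designer half of the partial class must use the same namespace):

        // Option 1: put Login in the same namespace as the rest of the app.
        namespace FootballLeague            // was FootballLeagueSystem
        {
            public partial class Login : Form { /* ... */ }
        }

        // Option 2: keep both namespaces and qualify the field in MainMenu
        // (or add "using FootballLeagueSystem;" at the top of MainMenu.cs):
        FootballLeagueSystem.Login login;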

  • Can't mass-assign protected attributes -- unsolved issue

    - by nfriend21
    I have read about 10 different posts here about this problem, and I have tried every single one and the error will not go away. So here goes: I am trying to have a nested form on my users/new page, where it accepts user-attributes and also company-attributes. When you submit the form: Here's what my error message reads: ActiveModel::MassAssignmentSecurity::Error in UsersController#create Can't mass-assign protected attributes: companies app/controllers/users_controller.rb:12:in `create' Here's the code for my form: <%= form_for @user do |f| %> <%= render 'shared/error_messages', object: f.object %> <%= f.fields_for :companies do |c| %> <%= c.label :name, "Company Name"%> <%= c.text_field :name %> <% end %> <%= f.label :name %> <%= f.text_field :name %> <%= f.label :email %> <%= f.text_field :email %> <%= f.label :password %> <%= f.password_field :password %> <%= f.label :password_confirmation %> <%= f.password_field :password_confirmation %> <br> <% if current_page?(signup_path) %> <%= f.submit "Sign Up", class: "btn btn-large btn-primary" %> Or, <%= link_to "Login", login_path %> <% else %> <%= f.submit "Update User", class: "btn btn-large btn-primary" %> <% end %> <% end %> Users Controller: class UsersController < ApplicationController def index @user = User.all end def new @user = User.new end def create @user = User.create(params[:user]) if @user.save session[:user_id] = @user.id #once user account has been created, a session is not automatically created. This fixes that by setting their session id. This could be put into Controller action to clean up duplication. flash[:success] = "Your account has been created!" redirect_to tasks_path else render 'new' end end def show @user = User.find(params[:id]) @tasks = @user.tasks end def edit @user = User.find(params[:id]) end def update @user = User.find(params[:id]) if @user.update_attributes(params[:user]) flash[:success] = @user.name.possessive + " profile has been updated" redirect_to @user else render 'edit' end #if @task.update_attributes params[:task] #redirect_to users_path #flash[:success] = "User was successfully updated." #end end def destroy @user = User.find(params[:id]) unless current_user == @user @user.destroy flash[:success] = "The User has been deleted." end redirect_to users_path flash[:error] = "Error. You can't delete yourself!" end end Company Controller class CompaniesController < ApplicationController def index @companies = Company.all end def new @company = Company.new end def edit @company = Company.find(params[:id]) end def create @company = Company.create(params[:company]) #if @company.save #session[:user_id] = @user.id #once user account has been created, a session is not automatically created. This fixes that by setting their session id. This could be put into Controller action to clean up duplication. #flash[:success] = "Your account has been created!" 
#redirect_to tasks_path #else #render 'new' #end end def show @comnpany = Company.find(params[:id]) end end User model class User < ActiveRecord::Base has_secure_password attr_accessible :name, :email, :password, :password_confirmation has_many :tasks, dependent: :destroy belongs_to :company accepts_nested_attributes_for :company validates :name, presence: true, length: { maximum: 50 } VALID_EMAIL_REGEX = /\A[\w+\-.]+@[a-z\d\-.]+\.[a-z]+\z/i validates :email, presence: true, format: { with: VALID_EMAIL_REGEX }, uniqueness: { case_sensitive: false } validates :password, length: { minimum: 6 } #below not needed anymore, due to has_secure_password #validates :password_confirmation, presence: true end Company Model class Company < ActiveRecord::Base attr_accessible :name has_and_belongs_to_many :users end Thanks for your help!!
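
    A hedged reading of the error: the model declares belongs_to :company with accepts_nested_attributes_for :company (singular), but the form renders fields_for :companies (plural), so params[:user] arrives with a :companies key that no attr_accessible entry allows. A sketch of the matching pieces:

        # Model: whitelist the nested-attributes key that fields_for will post.
        class User < ActiveRecord::Base
          attr_accessible :name, :email, :password, :password_confirmation,
                          :company_attributes
          belongs_to :company
          accepts_nested_attributes_for :company
        end

        # Controller (new action), so fields_for has something to render:
        #   @user = User.new
        #   @user.build_company

        # View: use the singular association name.
        # <%= f.fields_for :company do |c| %>
        #   <%= c.label :name, "Company Name" %>
        #   <%= c.text_field :name %>
        # <% end %>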

  • Inconsistent accessibility

    - by user1312412
    I am getting the following error Inconsistent accessibility: parameter type 'Db.Form1.ConnectionString' is less accessible than method 'Db.Form1.BuildConnectionString(Db.Form1.ConnectionString)' //Name spaces using System; using System.Collections.Generic; using System.ComponentModel; using System.Data; using System.Drawing; using System.Linq; using System.Text; using System.Windows.Forms; using Microsoft.VisualBasic; using System.Collections; using System.Diagnostics; using System.Data.OleDb; using System.IO; using System.Drawing.Printing; // namespace Db { public partial class Form1 : Form { public Form1() { InitializeComponent(); } public void SetBusy() { this.Cursor = Cursors.WaitCursor; Application.DoEvents(); } public void SetFree() { this.Cursor = Cursors.Default; Application.DoEvents(); } //connection string into parts struct ConnectionString { public string Provider; public string DataSource; public string UserId; public string Password; public string Database; } //Declare public string BuildConnectionString(ConnectionString connStr) ------> getting error here { string[] parts = new string[5]; parts[0] = "Provider=" + connStr.Provider; parts[1] = "Data Source=" + connStr.DataSource; parts[2] = "User Id=" + connStr.UserId; parts[3] = "Password=" + connStr.Password; parts[4] = "Initial Catalog=" + connStr.Database; return string.Join(";", parts); } // settings public bool IsValidConnectionForPrinting() { SetBusy(); ConnectionString connStr = new ConnectionString(); connStr.Provider = cboProvider.Text; connStr.DataSource = cboDataSource.Text; connStr.UserId = txtUserId.Text; connStr.Password = txtPassword.Text; connStr.Database = cboDatabase.Text; //connection string to database string connectionString = BuildConnectionString(connStr); OleDbConnection conn = new OleDbConnection(connectionString); try { conn.Open(); OleDbCommand cmd = conn.CreateCommand; cmd.CommandType = CommandType.TableDirect; cmd.CommandText = "vw_pr_DL"; cmd.ExecuteScalar(); cmd.CommandText = "vw_pr_VR"; cmd.ExecuteScalar(); //cmd.CommandText = "vw_pr_VR" //cmd.ExecuteScalar() conn.Close(); } //Exception messages catch (Exception ex) { SetFree(); if (ex.Message.StartsWith("Invalid object name")) { MessageBox.Show(ex.Message.Replace("Invalid object name", "Table or view not found"), "Connection Test"); } else { MessageBox.Show(ex.GetBaseException().Message, "Connection Test"); } return false; } SetFree(); return true; } // when user click testbutton private void btnConnTest_Click(object sender, EventArgs e) { if (IsValidConnectionForPrinting()) { MessageBox.Show("Connection succeeded", "Connection Test"); } }
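
    The compiler is pointing at the nested struct: a public method cannot take a parameter of a less accessible type, and a nested type with no modifier defaults to private. A minimal sketch of the fix (either widen the struct or narrow the method):

        // Option 1: make the struct as accessible as the method that uses it.
        public struct ConnectionString
        {
            public string Provider;
            public string DataSource;
            public string UserId;
            public string Password;
            public string Database;
        }

        // Option 2: keep the struct private and make the method private too.
        private string BuildConnectionString(ConnectionString connStr) { /* ... */ }

    Unrelated to the accessibility error, but CreateCommand is a method in C#, so the later line needs parentheses: OleDbCommand cmd = conn.CreateCommand();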

  • Using LINQ to create a simple login

    - by JDWebs
    I'm an C# ASP.NET beginner, so please excuse anything that's... not quite right! In short, I want to create a really basic login system: one which runs through a database and uses sessions so that only logged in users can access certain pages. I know how to do most of that, but I'm stuck with querying data with LINQ on the login page. On the login page, I have a DropDownList to select a username, a Textbox to type in a password and a button to login (I also have a literal for errors). The DropDownList is databound to a datatable called DT_Test. DT_Test contains three columns: UsernameID (int), Username (nchar(30)) and Password (nchar(30)). UsernameID is the primary key. I want to make the button's click event query data from the database with the DropDownList and Textbox, in order to check if the username and password match. But I don't know how to do this... Current Code (not a lot!): Front End: <%@ Page Language="C#" AutoEventWireup="true" CodeFile="Login_Test.aspx.cs" Inherits="Login_Login_Test" %> <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head runat="server"> <title>Login Test</title> </head> <body> <form id="LoginTest" runat="server"> <div> <asp:DropDownList ID="DDL_Username" runat="server" Height="20px" DataTextField="txt"> </asp:DropDownList> <br /> <asp:TextBox ID="TB_Password" runat="server" TextMode="Password"></asp:TextBox> <br /> <asp:Button ID="B_Login" runat="server" onclick="B_Login_Click" Text="Login" /> <br /> <asp:Literal ID="LI_Result" runat="server"></asp:Literal> </div> </form> </body> </html> Back End: using System; using System.Collections.Generic; using System.Linq; using System.Web; using System.Web.UI; using System.Web.UI.WebControls; public partial class Login_Login_Test : System.Web.UI.Page { protected void Page_Load(object sender, EventArgs e) { if (!Page.IsPostBack) { Binder(); } } private void Binder() { using (DataClassesDataContext db = new DataClassesDataContext()) { DDL_Username.DataSource = from x in db.DT_Honeys select new { x.UsernameID, txt = x.Username }; DDL_Username.DataBind(); } } protected void B_Login_Click(object sender, EventArgs e) { if (TB_Password.Text != "") { using (DataClassesDataContext db = new DataClassesDataContext()) { } } } } I have spent hours searching and trying different code, but none of it seems to fit in for this context. Anyway, help and tips appreciated, thank you very much! I am aware of security risks etc. but this is not a live website or anything, it is simply for testing purposes as a beginner. *
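
    A sketch of one way the click handler could look, assuming DDL_Username also sets DataValueField="UsernameID" when binding (only DataTextField is set above) and that the nchar(30) columns need trimming before comparison:

        protected void B_Login_Click(object sender, EventArgs e)
        {
            if (TB_Password.Text == "") return;

            using (DataClassesDataContext db = new DataClassesDataContext())
            {
                int id = int.Parse(DDL_Username.SelectedValue);
                var user = (from x in db.DT_Honeys
                            where x.UsernameID == id &&
                                  x.Password.Trim() == TB_Password.Text
                            select x).FirstOrDefault();

                if (user != null)
                {
                    Session["UsernameID"] = user.UsernameID;
                    Response.Redirect("Default.aspx"); // hypothetical landing page
                }
                else
                {
                    LI_Result.Text = "Wrong username or password.";
                }
            }
        }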

  • Why does Perl's Net::Msmgr hang when I try to authenticate?

    - by codeholic
    There's a Net::Msmgr module on CPAN. It's cleanly written and the code looks trustworthy at first glance. However, the module seems to be beta quality, and there is little documentation and no test suite :-/ Has anyone used this module in production? I haven't managed to make it run so far, because it requires all event-loop processing to be done in the application, and as I've already said, there is little documentation and there are no working examples to study. Here's how far I've gotten: #!/usr/bin/perl use strict; use warnings; use Event; use Net::Msmgr::Object; use Net::Msmgr::Session; use Net::Msmgr::User; use constant DEBUG => 511; use constant EVENT_TIMEOUT => 5; # seconds my ($username, $password) = qw/[email protected] my.password/; my $buddy = '[email protected]'; my $user = Net::Msmgr::User->new(user => $username, password => $password); my $session = Net::Msmgr::Session->new; $session->debug(DEBUG); $session->login_handler(\&login_handler); $session->user($user); my $conv; sub login_handler { my $self = shift; print "LOGIN\n"; $self->ui_state_nln; $conv = $session->ui_new_conversation; $conv->invite($buddy); } our %watcher; sub ConnectHandler { my ($connection) = @_; warn "CONNECT\n"; my $socket = $connection->socket; $watcher{$connection} = Event->io(fd => $socket, cb => [ $connection, '_recv_message' ], poll => 're', desc => 'recv_watcher', repeat => 1); } sub DisconnectHandler { my $connection = shift; print "DISCONNECT\n"; $watcher{$connection}->cancel; } $session->connect_handler(\&ConnectHandler); $session->disconnect_handler(\&DisconnectHandler); $session->Login; Event::loop(); Here's what it outputs: Dispatch Server connecting to: messenger.hotmail.com:1863 Dispatch Server connected CONNECT Dispatch Server >>>VER 1 MSNP2 CVR0 --> VER 1 MSNP2 CVR0 Dispatch Server >>>USR 2 MD5 I [email protected] --> USR 2 MD5 I [email protected] Dispatch Server <<<VER 1 CVR0 <-- VER 1 CVR0 And that's all; it hangs here. The login handler is never triggered. What am I doing wrong?
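
    For what it's worth, the trace itself is suggestive: the client offers MSNP2 and the server answers "VER 1 CVR0" with no protocol number, which is how a dispatch server declines every offered dialect, so the login can never proceed. Separately, the script declares EVENT_TIMEOUT but never uses it; a watchdog timer, declared before Event::loop() is entered, at least turns the silent hang into a visible failure:

        # A sketch using the Event module the script already loads.
        my $watchdog = Event->timer(
            after => EVENT_TIMEOUT,
            cb    => sub {
                warn "no protocol agreement after " . EVENT_TIMEOUT . "s, giving up\n";
                Event::unloop_all();
            },
        );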

  • How to prevent PHP variables from being arrays or objects?

    - by MJB
    I think that (the title) is the problem I am having. I set up a MySQL connection, I read an XML file, and then I insert those values into a table by looping through the elements. The problem is, instead of inserting only 1 record, sometimes I insert 2 or 3 or 4. It seems to depend on the previous values I have read. I think I am reinitializing the variables, but I guess I am missing something -- hopefully something simple. Here is my code. I originally had about 20 columns, but I shortened the included version to make it easier to read. $ctr = 0; $sql = "insert into csd (id,type,nickname,hostname,username,password) ". "values (?,?,?,?,?,?)"; $cur = $db->prepare($sql); for ($ctr = 0; $ctr < $expected_count; $ctr++) { unset($bind_vars,$dat); $lbl = "csd_{$ctr}"; $dat['type'] = (string) $ref->itm->csds->$lbl->type; $dat['nickname'] = (string) $ref->itm->csds->$lbl->nickname; $dat['hostname'] = (string) $ref->itm->csds->$lbl->hostname; $dat['username'] = (string) $ref->itm->csds->$lbl->username; $dat['password'] = (string) $ref->itm->csds->$lbl->password; $bind_vars = array( $id,$dat['$type'], $dat['$nickname'], $dat['$hostname'], $dat['$username'], $dat['$password']); print_r ($bind_vars); $res = $db->execute($cur, $bind_vars); } P.S. I also tagged this SimpleXML because that is how I am reading the file, though that code is not included above. It looks like this: $ref = simplexml_load_file($file); UPDATE: I've changed the code around as per suggestions, and now it is not always the same pattern, but it is equally broken. When I display the bind array before inserting, it looks like this. Note that I also count the rows before and after, so there are 0 rows, then I insert 1, then there are 2: 0 CSDs on that ITEM now. Array ( [0] => 2 [1] => 0 [2] => [3] => X [4] => XYZ [5] => [6] => [7] => [8] => audio [9] => [10] => 192.168.0.50 [11] => 192.168.0.3 [12] => 255.255.255.0 [13] => 255.255.255.0 [14] => [15] => [16] => [17] => 21 [18] => 5 [19] => Y [20] => /dir ) 2 CSDs on that ITEM now.
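
    One hedged observation about the code as posted: '$type', '$nickname' and the rest are single-quoted, so they are literal keys that never exist in $dat; the matching entries of $bind_vars end up empty, which fits the half-blank array dump in the update. A sketch of the corrected binding:

        // Hypothetical fix: use the real keys, without the "$".
        $bind_vars = array(
            $id,
            $dat['type'],
            $dat['nickname'],
            $dat['hostname'],
            $dat['username'],
            $dat['password'],
        );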

  • Rails: (Devise) Two different methods for new users?

    - by neezer
    I have a Rails 3 app with authentication setup using Devise with the registerable module enabled. I want to have new users who sign up using our outside register form to use the full Devise registerable module, which is happening now. However, I also want the admin user to be able to create new users directly, bypassing (I think) Devise's registerable module. With registerable disabled, my standard UsersController works as I want it to for the admin user, just like any other Rail scaffold. However, now new users can't register on their own. With registerable enabled, my standard UsersController is never called for the new user action (calling Devise::RegistrationsController instead), and my CRUD actions don't seem to work at all (I get dumped back onto my root page with no new user created and no flash message). Here's the log from the request: Started POST "/users" for 127.0.0.1 at 2010-12-20 11:49:31 -0500 Processing by Devise::RegistrationsController#create as HTML Parameters: {"utf8"=>"?", "authenticity_token"=>"18697r4syNNWHfMTkDCwcDYphjos+68rPFsaYKVjo8Y=", "user"=>{"email"=>"[email protected]", "password"=>"[FILTERED]", "password_confirmation"=>"[FILTERED]", "role"=>"manager"}, "commit"=>"Create User"} SQL (0.9ms) ... User Load (0.6ms) SELECT "users".* FROM "users" WHERE ("users"."id" = 2) LIMIT 1 SQL (0.9ms) ... Redirected to http://test-app.local/ Completed 302 Found in 192ms ... but I am able to register new users through the outside form. How can I get both of these methods to work together, such that my admin user can manually create new users and guest users can register on their own? I have my Users controller setup for standard CRUD: class UsersController < ApplicationController load_and_authorize_resource def index @users = User.where("id NOT IN (?)", current_user.id) # don't display the current user in the users list; go to account management to edit current user details end def new @user = User.new end def create @user = User.new(params[:user]) if @user.save flash[:notice] = "#{ @user.email } created." redirect_to users_path else render :action => 'new' end end def edit end def update params[:user].delete(:password) if params[:user][:password].blank? params[:user].delete(:password_confirmation) if params[:user][:password].blank? and params[:user][:password_confirmation].blank? if @user.update_attributes(params[:user]) flash[:notice] = "Successfully updated User." redirect_to users_path else render :action => 'edit' end end def delete end def destroy redirect_to users_path and return if params[:cancel] if @user.destroy flash[:notice] = "#{ @user.email } deleted." redirect_to users_path end end end And my routes setup as follows: TestApp::Application.routes.draw do devise_for :users devise_scope :user do get "/login", :to => "devise/sessions#new", :as => :new_user_session get "/logout", :to => "devise/sessions#destroy", :as => :destroy_user_session end resources :users do get :delete, :on => :member end authenticate :user do root :to => "application#index" end root :to => "devise/session#new" end
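
    One common arrangement (a sketch, not the only option): leave registerable enabled for the public form, and move the admin-managed CRUD under its own path so POST /users stops colliding with the route that Devise::RegistrationsController claims first:

        # config/routes.rb
        devise_for :users

        # Hypothetical admin path; UsersController#create then handles these.
        resources :users, :path => "admin/users" do
          get :delete, :on => :member
        end

    The log above shows exactly that collision: the form posts to /users, and Devise's registrations controller, not UsersController, picks it up.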

  • Rogue PropertyChanged notifications from ViewModel

    - by user1886323
    The following simple program is causing me a Databinding headache. I'm new to this which is why I suspect it has a simple answer. Basically, I have two text boxes bound to the same property myString. I have not set up the ViewModel (simply a class with one property, myString) to provide any notifications to the View for when myString is changed, so even although both text boxes operate a two way binding there should be no way that the text boxes update when myString is changed, am I right? Except... In most circumstances this is true - I use the 'change value' button at the bottom of the window to change the value of myString to whatever the user types into the adjacent text box, and the two text boxes at the top, even although they are bound to myString, do not change. Fine. However, if I edit the text in TextBox1, thus changing the value of myString (although only when the text box loses focus due to the default UpdateSourceTrigger property, see reference), TextBox2 should NOT update as it shouldn't receive any updates that myString has changed. However, as soon as TextBox1 loses focus (say click inside TextBox2) TextBox2 is updated with the new value of myString. My best guess so far is that because the TextBoxes are bound to the same property, something to do with TextBox1 updating myString gives TextBox2 a notification that it has changed. Very confusing as I haven't used INotifyPropertyChanged or anything like that. To clarify, I am not asking how to fix this. I know I could just change the binding mode to a oneway option. I am wondering if anyone can come up with an explanation for this strange behaviour? ViewModel: namespace WpfApplication1 { class ViewModel { public ViewModel() { _myString = "initial message"; } private string _myString; public string myString { get { return _myString; } set { if (_myString != value) { _myString = value; } } } } } View: <Window x:Class="WpfApplication1.MainWindow" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" xmlns:local="clr-namespace:WpfApplication1" Title="MainWindow" Height="350" Width="525"> <Window.DataContext> <local:ViewModel /> </Window.DataContext> <Grid> <!-- The culprit text boxes --> <TextBox Height="23" HorizontalAlignment="Left" Margin="166,70,0,0" Name="textBox1" VerticalAlignment="Top" Width="120" Text="{Binding Path=myString, Mode=TwoWay}" /> <TextBox Height="23" HorizontalAlignment="Left" Margin="166,120,0,0" Name="textBox2" VerticalAlignment="Top" Width="120" Text="{Binding Path=myString, Mode=TwoWay}"/> <!--The buttons allowing manual change of myString--> <Button Name="changevaluebutton" Content="change value" Click="ButtonUpdateArtist_Click" Margin="12,245,416,43" Width="75" /> <Button Content="Show value" Height="23" HorizontalAlignment="Left" Margin="12,216,0,0" Name="showvaluebutton" VerticalAlignment="Top" Width="75" Click="showvaluebutton_Click" /> <Label Content="" Height="23" HorizontalAlignment="Left" Margin="116,216,0,0" Name="showvaluebox" VerticalAlignment="Top" Width="128" /> <TextBox Height="23" HorizontalAlignment="Left" Margin="116,245,0,0" Name="changevaluebox" VerticalAlignment="Top" Width="128" /> <!--simply some text--> <Label Content="TexBox1" Height="23" HorizontalAlignment="Left" Margin="99,70,0,0" Name="label1" VerticalAlignment="Top" Width="61" /> <Label Content="TexBox2" Height="23" HorizontalAlignment="Left" Margin="99,118,0,0" Name="label2" VerticalAlignment="Top" Width="61" /> </Grid> </Window> Code behind for view: 
namespace WpfApplication1 { /// <summary> /// Interaction logic for MainWindow.xaml /// </summary> public partial class MainWindow : Window { ViewModel viewModel; public MainWindow() { InitializeComponent(); viewModel = (ViewModel)this.DataContext; } private void showvaluebutton_Click(object sender, RoutedEventArgs e) { showvaluebox.Content = viewModel.myString; } private void ButtonUpdateArtist_Click(object sender, RoutedEventArgs e) { viewModel.myString = changevaluebox.Text; } } }
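
    For context, this behavior is documented WPF plumbing: when a binding source is a plain CLR object without INotifyPropertyChanged, WPF subscribes through a TypeDescriptor-based PropertyDescriptor and hears about writes that go through the binding engine, which is why TextBox2 learns of TextBox1's edit but not of the button's direct property assignment. A sketch of the deterministic version:

        // With INotifyPropertyChanged, every write (binding or code-behind)
        // notifies all bound controls, and none of the hidden plumbing applies.
        class ViewModel : INotifyPropertyChanged
        {
            public event PropertyChangedEventHandler PropertyChanged;

            private string _myString = "initial message";
            public string myString
            {
                get { return _myString; }
                set
                {
                    if (_myString == value) return;
                    _myString = value;
                    var handler = PropertyChanged;
                    if (handler != null)
                        handler(this, new PropertyChangedEventArgs("myString"));
                }
            }
        }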

  • Form submission info showing up in URL and not working

    - by kcurtin
    I am making a Rails 3.1 app and have a signup form that was working fine, but I seemed to have changed something to break it.. I'm using Twitter bootstrap and twitter_bootstrap_form_for gem. I made some change that messed with the formatting of the form fields, but more importantly, when I submit the Sign Up form to create a new User, the information is showing up in the URL and looks like this: EDIT: This is happening in the latest versions of Chrome and Firefox http://localhost:3000/?utf8=%E2%9C%93&authenticity_token=UaKG5Y8fuPul2Klx7e2LtdPLTRepBxDM3Zdy8S%2F52W4%3D&user%5Bemail%5D=kevinc%40example.com&user%5Bpassword%5D=testing&user%5Bpassword_confirmation%5D=testing&commit=Sign+Up Here is the code for the form: <div class="span7"> <h3 class="center" id="more">Sign Up Now!</h3> <%= twitter_bootstrap_form_for @user do |user| %> <%= user.email_field :email, :placeholder => '[email protected]' %> <%= user.password_field :password %> <%= user.password_field :password_confirmation, 'Confirm Password' %> <%= user.actions do %> <%= user.submit 'Sign Up' %> <% end %> <% end %> </div> Here is the code for the UsersController: class UsersController < ApplicationController def new @user = User.new end def create @user = User.new(params[:user]) if @user.save redirect_to about_path, :notice => "Signed up!" else render 'new' end end end Not sure if there is more you need but if so let me know! Thank you! Edit: For debugging I tried specifying :post and also using a plain form_for <%= form_for(@user, :method => :post) do |f| %> <div class="field"> <%= f.label :email %> <%= f.email_field :email %> </div> <div class="field"> <%= f.label :password %> <%= f.password_field :password %> </div> <div class="field"> <%= f.label :password_confirmation %> <%= f.password_field :password_confirmation %> </div> <div class="actions"><%= f.submit "Sign Up" %></div> <% end %> This gives me the same problem as above. Adding routes.rb: Auth31::Application.routes.draw do get "home" => "pages#home" get "about" => "pages#about" get "contact" => "pages#contact" get "help" => "pages#help" get "login" => "sessions#new", :as => "login" get "logout" => "sessions#destroy", :as => "logout" get "signup" => "users#new", :as => "signup" root :to => "pages#home" resources :pages resources :users resources :sessions resources :password_resets end
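
    Parameters in the URL mean the browser sent a GET, which usually means the submit button did not belong to the form the helper generated (for instance, the <form> tag was dropped or invalidly nested in the rendered page). A diagnostic sketch that bypasses the bootstrap gem and pins the target explicitly; if this still produces a GET, the problem is in the surrounding markup rather than the helper:

        <%= form_for @user, :url => users_path, :html => { :method => :post } do |f| %>
          <%= f.email_field :email %>
          <%= f.password_field :password %>
          <%= f.password_field :password_confirmation %>
          <%= f.submit "Sign Up" %>
        <% end %>

    Viewing the page source and confirming the opening tag reads <form action="/users" method="post"> is the quickest check.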

  • Implementation of ZipCrypto / Zip 2.0 encryption in Java

    - by gomesla
    I'm trying o implement the zipcrypto / zip 2.0 encryption algoritm to deal with encrypted zip files as discussed in http://www.pkware.com/documents/casestudies/APPNOTE.TXT I believe I've followed the specs but just can't seem to get it working. I'm fairly sure the issue has to do with my interpretation of the crc algorithm. The documentation states CRC-32: (4 bytes) The CRC-32 algorithm was generously contributed by David Schwaderer and can be found in his excellent book "C Programmers Guide to NetBIOS" published by Howard W. Sams & Co. Inc. The 'magic number' for the CRC is 0xdebb20e3. The proper CRC pre and post conditioning is used, meaning that the CRC register is pre-conditioned with all ones (a starting value of 0xffffffff) and the value is post-conditioned by taking the one's complement of the CRC residual. Here is the snippet that I'm using for the crc32 public class PKZIPCRC32 { private static final int CRC32_POLYNOMIAL = 0xdebb20e3; private int crc = 0xffffffff; private int CRCTable[]; public PKZIPCRC32() { buildCRCTable(); } private void buildCRCTable() { int i, j; CRCTable = new int[256]; for (i = 0; i <= 255; i++) { crc = i; for (j = 8; j > 0; j--) if ((crc & 1) == 1) crc = (crc >>> 1) ^ CRC32_POLYNOMIAL; else crc >>>= 1; CRCTable[i] = crc; } } private int crc32(byte buffer[], int start, int count, int lastcrc) { int temp1, temp2; int i = start; crc = lastcrc; while (count-- != 0) { temp1 = crc >>> 8; temp2 = CRCTable[(crc ^ buffer[i++]) & 0xFF]; crc = temp1 ^ temp2; } return crc; } public int crc32(int crc, byte buffer) { return crc32(new byte[] { buffer }, 0, 1, crc); } } Below is my complete code. Can anyone see what I'm doing wrong. package org.apache.commons.compress.archivers.zip; import java.io.IOException; import java.io.InputStream; public class ZipCryptoInputStream extends InputStream { public class PKZIPCRC32 { private static final int CRC32_POLYNOMIAL = 0xdebb20e3; private int crc = 0xffffffff; private int CRCTable[]; public PKZIPCRC32() { buildCRCTable(); } private void buildCRCTable() { int i, j; CRCTable = new int[256]; for (i = 0; i <= 255; i++) { crc = i; for (j = 8; j > 0; j--) if ((crc & 1) == 1) crc = (crc >>> 1) ^ CRC32_POLYNOMIAL; else crc >>>= 1; CRCTable[i] = crc; } } private int crc32(byte buffer[], int start, int count, int lastcrc) { int temp1, temp2; int i = start; crc = lastcrc; while (count-- != 0) { temp1 = crc >>> 8; temp2 = CRCTable[(crc ^ buffer[i++]) & 0xFF]; crc = temp1 ^ temp2; } return crc; } public int crc32(int crc, byte buffer) { return crc32(new byte[] { buffer }, 0, 1, crc); } } private static final long ENCRYPTION_KEY_1 = 0x12345678; private static final long ENCRYPTION_KEY_2 = 0x23456789; private static final long ENCRYPTION_KEY_3 = 0x34567890; private InputStream baseInputStream = null; private final PKZIPCRC32 checksumEngine = new PKZIPCRC32(); private long[] keys = null; public ZipCryptoInputStream(ZipArchiveEntry zipEntry, InputStream inputStream, String passwd) throws Exception { baseInputStream = inputStream; // Decryption // ---------- // PKZIP encrypts the compressed data stream. Encrypted files must // be decrypted before they can be extracted. // // Each encrypted file has an extra 12 bytes stored at the start of // the data area defining the encryption header for that file. The // encryption header is originally set to random values, and then // itself encrypted, using three, 32-bit keys. The key values are // initialized using the supplied encryption password. 
After each byte // is encrypted, the keys are then updated using pseudo-random number // generation techniques in combination with the same CRC-32 algorithm // used in PKZIP and described elsewhere in this document. // // The following is the basic steps required to decrypt a file: // // 1) Initialize the three 32-bit keys with the password. // 2) Read and decrypt the 12-byte encryption header, further // initializing the encryption keys. // 3) Read and decrypt the compressed data stream using the // encryption keys. // Step 1 - Initializing the encryption keys // ----------------------------------------- // // Key(0) <- 305419896 // Key(1) <- 591751049 // Key(2) <- 878082192 // // loop for i <- 0 to length(password)-1 // update_keys(password(i)) // end loop // // Where update_keys() is defined as: // // update_keys(char): // Key(0) <- crc32(key(0),char) // Key(1) <- Key(1) + (Key(0) & 000000ffH) // Key(1) <- Key(1) * 134775813 + 1 // Key(2) <- crc32(key(2),key(1) >> 24) // end update_keys // // Where crc32(old_crc,char) is a routine that given a CRC value and a // character, returns an updated CRC value after applying the CRC-32 // algorithm described elsewhere in this document. keys = new long[] { ENCRYPTION_KEY_1, ENCRYPTION_KEY_2, ENCRYPTION_KEY_3 }; for (int i = 0; i < passwd.length(); ++i) { update_keys((byte) passwd.charAt(i)); } // Step 2 - Decrypting the encryption header // ----------------------------------------- // // The purpose of this step is to further initialize the encryption // keys, based on random data, to render a plaintext attack on the // data ineffective. // // Read the 12-byte encryption header into Buffer, in locations // Buffer(0) thru Buffer(11). // // loop for i <- 0 to 11 // C <- buffer(i) ^ decrypt_byte() // update_keys(C) // buffer(i) <- C // end loop // // Where decrypt_byte() is defined as: // // unsigned char decrypt_byte() // local unsigned short temp // temp <- Key(2) | 2 // decrypt_byte <- (temp * (temp ^ 1)) >> 8 // end decrypt_byte // // After the header is decrypted, the last 1 or 2 bytes in Buffer // should be the high-order word/byte of the CRC for the file being // decrypted, stored in Intel low-byte/high-byte order. Versions of // PKZIP prior to 2.0 used a 2 byte CRC check; a 1 byte CRC check is // used on versions after 2.0. This can be used to test if the password // supplied is correct or not. byte[] encryptionHeader = new byte[12]; baseInputStream.read(encryptionHeader); for (int i = 0; i < encryptionHeader.length; i++) { encryptionHeader[i] ^= decrypt_byte(); update_keys(encryptionHeader[i]); } } protected byte decrypt_byte() { byte temp = (byte) (keys[2] | 2); return (byte) ((temp * (temp ^ 1)) >> 8); } @Override public int read() throws IOException { // // Step 3 - Decrypting the compressed data stream // ---------------------------------------------- // // The compressed data stream can be decrypted as follows: // // loop until done // read a character into C // Temp <- C ^ decrypt_byte() // update_keys(temp) // output Temp // end loop int read = baseInputStream.read(); read ^= decrypt_byte(); update_keys((byte) read); return read; } private final void update_keys(byte ch) { keys[0] = checksumEngine.crc32((int) keys[0], ch); keys[1] = keys[1] + (byte) keys[0]; keys[1] = keys[1] * 134775813 + 1; keys[2] = checksumEngine.crc32((int) keys[2], (byte) (keys[1] >> 24)); } }
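
    Two hedged suspicions about the snippet, both rooted in Java having no unsigned types. First, the table-driven CRC-32 wants the bit-reversed polynomial 0xEDB88320; 0xdebb20e3 is APPNOTE's "magic number", not the table constant (java.util.zip.CRC32 produces the same checksum without hand-rolling it). Second, decrypt_byte narrows a 16-bit quantity to a signed byte. A sketch of those spots reworked:

        // Standard reflected CRC-32 table constant:
        private static final int CRC32_POLYNOMIAL = 0xEDB88320; // not 0xdebb20e3

        protected int decrypt_byte() {
            // "temp" is an unsigned 16-bit value in the APPNOTE pseudocode;
            // the (byte) casts in the original sign-extend and truncate it.
            // (Return type widened to int to keep the result unsigned.)
            int temp = (int) ((keys[2] | 2) & 0xFFFF);
            return ((temp * (temp ^ 1)) >>> 8) & 0xFF;
        }

        private final void update_keys(byte ch) {
            keys[0] = checksumEngine.crc32((int) keys[0], ch) & 0xFFFFFFFFL;
            keys[1] = (keys[1] + (keys[0] & 0xFF)) & 0xFFFFFFFFL;   // unsigned low byte
            keys[1] = (keys[1] * 134775813L + 1) & 0xFFFFFFFFL;
            keys[2] = checksumEngine.crc32((int) keys[2],
                                           (byte) (keys[1] >>> 24)) & 0xFFFFFFFFL;
        }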

  • Automatically extracting inline XSD from WSDL into XSD file(s)

    - by Steven Geens
    I am using a third party Web Service whose definition and implementation are beyond my control. This web service will change in the future. The Web Service should be used to generate an XML file which contains some of the same data (represented by the same XSD types) as the Web Service plus some extra information generated by the program. My approach: create my own XSD referring to the XSD definitions of the WSDL of the called web service (This XSD also includes XSD types for the extra information obviously.) use a Java XML databinding framework (like ADB or JiXB) to generate the databinding classes from my own XSD file from step 1 use a Java SOAP framework (like Axis2 or CXF) with the same databinding framework to generate the databinding classes from the WSDL (This would enable me to use the objects retrieved by the web service directly in the generation of the XML.) The XSD types I am going to use in my own XSD file, but are defined in the WSDL, are subject to change. Whenever they change, I would like to automatically process the XSD and WSDL databinding again. (If the change is significant enough, this might trigger some development effort.(But usually not.)) My problem: In step 1 I need an XSD referring to the same types as used by the Web Service. The WSDL is referring to another WSDL, which is referring to another WSDL etc. Eventually there is an WSDL with the needed inline XSD types. As far as I know there is no way to directly reference the inline XSD types of a WSDL from an XSD. The approach I would think most viable, is to include an extra step in the automatic processing (before the databinding) that extracts the inline XSD from the WSDL into other XSD file(s). These other XSD file(s) can then be referred to by my own XSD file. Things I'd like to avoid: Manually copy pasting the inline XSD into an XSD file (I am looking for an automatic process.) Any manual steps.(Like the determining the WSDL that contains the inline types manually.(The location of that WSDL does change as well.)) Using xsd:any in my own XSD. I would like my own XSD file to be correct. Using a non-Java technology(like .NET) Huge amounts of implementation (but hints on how you would implement such an extraction are welcome anyway) PS: I found some similar questions, but they all had responses like: WTH would you want to do that? That is the reason for my rather large background story.
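
    A minimal sketch of the extraction step, assuming the WSDL chain has already been resolved to a local file: pull every inline <xsd:schema> element out of the WSDL and serialize each to its own file. (Caveat: namespace prefixes declared on ancestor wsdl:definitions elements can be lost this way; a production version would copy those declarations down onto each schema element.)

        import java.io.File;
        import javax.xml.parsers.DocumentBuilderFactory;
        import javax.xml.transform.TransformerFactory;
        import javax.xml.transform.dom.DOMSource;
        import javax.xml.transform.stream.StreamResult;
        import org.w3c.dom.Document;
        import org.w3c.dom.NodeList;

        public class InlineXsdExtractor {
            public static void main(String[] args) throws Exception {
                DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
                dbf.setNamespaceAware(true);
                Document wsdl = dbf.newDocumentBuilder().parse(new File(args[0]));

                // Every inline schema, wherever it is nested in the WSDL.
                NodeList schemas = wsdl.getElementsByTagNameNS(
                        "http://www.w3.org/2001/XMLSchema", "schema");
                for (int i = 0; i < schemas.getLength(); i++) {
                    TransformerFactory.newInstance().newTransformer().transform(
                            new DOMSource(schemas.item(i)),
                            new StreamResult(new File("extracted-" + i + ".xsd")));
                }
            }
        }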

  • Question About Example In Robert C Martin's _Clean Code_

    - by Jonah
    This is a question about the concept of a function doing only one thing. It won't make sense without some relevant passages for context, so I'll quote them here. They appear on pages 37-38: To say this differently, we want to be able to read the program as though it were a set of TO paragraphs, each of which is describing the current level of abstraction and referencing subsequent TO paragraphs at the next level down. To include the setups and teardowns, we include setups, then we include the test page content, and then we include the teardowns. To include the setups, we include the suite setup if this is a suite, then we include the regular setup. It turns out to be very difficult for programmers to learn to follow this rule and write functions that stay at a single level of abstraction. But learning this trick is also very important. It is the key to keeping functions short and making sure they do “one thing.” Making the code read like a top-down set of TO paragraphs is an effective technique for keeping the abstraction level consistent. He then gives the following example of poor code: public Money calculatePay(Employee e) throws InvalidEmployeeType { switch (e.type) { case COMMISSIONED: return calculateCommissionedPay(e); case HOURLY: return calculateHourlyPay(e); case SALARIED: return calculateSalariedPay(e); default: throw new InvalidEmployeeType(e.type); } } and explains the problems with it as follows: There are several problems with this function. First, it’s large, and when new employee types are added, it will grow. Second, it very clearly does more than one thing. Third, it violates the Single Responsibility Principle (SRP) because there is more than one reason for it to change. Fourth, it violates the Open Closed Principle (OCP) because it must change whenever new types are added. Now my questions. To begin, it's clear to me how it violates the OCP, and it's clear to me that this alone makes it poor design. However, I am trying to understand each principle, and it's not clear to me how SRP applies. Specifically, the only reason I can imagine for this method to change is the addition of new employee types. There is only one "axis of change." If details of the calculation needed to change, this would only affect the submethods like "calculateHourlyPay()". Also, while in one sense it is obviously doing 3 things, those three things are all at the same level of abstraction, and can all be put into a TO paragraph no different from the example one: TO calculate pay for an employee, we calculate commissioned pay if the employee is commissioned, hourly pay if he is hourly, etc. So aside from its violation of the OCP, this code seems to conform to Martin's other requirements of clean code, even though he's arguing it does not. Can someone please explain what I am missing? Thanks.
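
    For reference, the book's own resolution a few pages later (paraphrased here) is to bury the switch in an abstract factory so it exists exactly once, with everything downstream going through polymorphism:

        public abstract class Employee {
            public abstract boolean isPayday();
            public abstract Money calculatePay();
            public abstract void deliverPay(Money pay);
        }

        public interface EmployeeFactory {
            Employee makeEmployee(EmployeeRecord r) throws InvalidEmployeeType;
        }

        public class EmployeeFactoryImpl implements EmployeeFactory {
            public Employee makeEmployee(EmployeeRecord r) throws InvalidEmployeeType {
                switch (r.type) {
                    case COMMISSIONED: return new CommissionedEmployee(r);
                    case HOURLY:       return new HourlyEmployee(r);
                    case SALARIED:     return new SalariedEmployee(r);
                    default:           throw new InvalidEmployeeType(r.type);
                }
            }
        }

    Under that refactor, calculatePay has a single reason to change and new types touch only the factory, which is the sense in which the original switch violates SRP and OCP at once.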

  • iPhone Key-Value Observer: observer not registering in UITableViewController

    - by Scott
    Hi Fellow iPhone Developers, I am an experienced software engineer but new to the iPhone platform. I have successfully implemented sub-classed view controllers and can push and pop parent/child views on the view controller stack. However, I have struck trouble while trying to update a view controller when an object is edited in a child view controller. After much failed experimentation, I discovered the key-value observer API which looked like the perfect way to do this. I then registered an observer in my main/parent view controller, and in the observer I intend to reload the view. The idea is that when the object is edited in the child view controller, this will be fired. However, I think that the observer is not being registered, because I know that the value is being updated in the editing view controller (I can see it in the debugger), but the observing method is never being called. Please help! Code snippets follow below. Object being observed. I believe that this is key-value compliant as the value is set when called with the setvalue message (see Child View Controller below). X.h: @interface X : NSObject <NSCoding> { NSString *name; ... @property (nonatomic, retain) NSString *name; X.m: @implementation X @synthesize name; ... Main View Controller.h: @class X; @interface XViewController : UITableViewController { X *x; ... Main View Controller.m: @implementation XViewController @synthesize x; ... - (void)viewDidLoad { ... [self.x addObserver:self forKeyPath: @"name" options: (NSKeyValueObservingOptionNew | NSKeyValueObservingOptionOld) context:nil]; [super viewDidLoad]; } ... - (void)observeValueForKeyPath:(NSString *)keyPath ofObject:(id)object change:(NSDictionary *)change context:(void *)context { if ([keyPath isEqual:@"name"]) { NSLog(@"Found change to X"); [self.tableView reloadData]; } [super observeValueForKeyPath:keyPath ofObject:object change:change context:context]; } Child View Controller.m: (this correctly sets the value in the object in the child view controller) [self.x setValue:[[tempValues objectForKey:key] text] forKey:@"name"];
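
    One hedged guess: if self.x is still nil when viewDidLoad runs (the object is often assigned only after the controller is pushed), the addObserver call is a message to nil and silently registers nothing. Registering in the property's setter ties the observation to whichever instance is actually displayed and edited, for example:

        // Sketch (manual retain/release era); replaces @synthesize for x.
        - (void)setX:(X *)newX
        {
            if (x != nil)
                [x removeObserver:self forKeyPath:@"name"];
            [newX addObserver:self
                   forKeyPath:@"name"
                      options:(NSKeyValueObservingOptionNew | NSKeyValueObservingOptionOld)
                      context:NULL];
            [newX retain];
            [x release];
            x = newX;
        }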

  • jQuery Address and the live method

    - by Jay
    //deep linking $.fn.ajaxAnim = function() { $(this).animW(); $(this).html('<div class="load-prog">loading...</div>'); } $("document").ready(function(){ contM = $('#main-content'); contS = $('#second-content'); $(contM).hide(); $(contM).addClass('hidden'); $(contS).hide(); $(contS).addClass('hidden'); function loadURL(URL) { //console.log("loadURL: " + URL); $.ajax({ url: URL, beforeSend: function(){$(contM).ajaxAnim();}, type: "POST", dataType: 'html', data: {post_loader: 1}, success: function(data){ $(contM).html(data); $('.post-content').initializeScroll(); } }); } // Event handlers $.address.init(function(event) { //console.log("init: " + $('[rel=address:' + event.value + ']').attr('href')); }).change(function(event) { evVal = event.value; if(evVal == '/'){return false;} else{ $.ajax({ url: $('[rel=address:' + evVal + ']').attr('href'), beforeSend: function(){$(contM).ajaxAnim();}, type: "POST", dataType: 'html', data: {post_loader: 1}, success: function(data){ $(contM).html(data); $('.post-content').initializeScroll(); }}); } //console.log("change"); }) $('.update-main a, a.update-main').live('click', function(){ loadURL($(this).attr('href')); return false; }); $(".update-second a, a.update-second").live('click', function() { var link = $(this); $.ajax({ url: link.attr("href"), beforeSend: function(){$(contS).ajaxAnim();}, type: "POST", dataType: 'html', data: {post_loader: 1}, success: function(data){ $(contS).html(data); $('.post-content').initializeScroll(); }}); return false; }); }); I'm using jquery addresses to update content while maintaining a useful url. When clicking on links in a main nav, the url is updated properly, but when links are loaded dynamically with ajax, the url address function breaks. I have made 'click' events live, allowing for content to be loaded via dynamically loaded links, but I can't seem to make the address event listener live, but this seems to be the only way to make this work. Is my syntax wrong if I change this : $.address.change(function(event) { to this: $.address.live('change', function(event) { or does the live method not work with this plugin?
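
    On the direct question: no, that syntax cannot work. .live() delegates DOM events that bubble to document; $.address.change is a plugin callback, not a DOM event, so there is nothing to delegate -- and nothing to gain, since it is bound once to the plugin rather than per element. The usual pattern with this plugin is to make every click just push the address and let the single change handler do all the loading, so dynamically inserted links behave exactly like the originals. A sketch:

        $('.update-main a, a.update-main').live('click', function () {
            $.address.value($(this).attr('href')); // fires the change handler
            return false;
        });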

  • iPhone OS: KVO: Why is my observer only getting notified in applicationDidFinishLaunching

    - by nickthedude
    I am basically trying to implement an achievement tracking setup in my app. I have a managedObjectModel class called StatTracker to keep track of all sorts of stats and I want my Achievement tracking class to be notified when those stats change so I can check them against a value and see if the user has earned an achievement. I've tried to impliment KVO and I think I'm pretty close to making it happen but the problem I'm running into is this: So in the appDelegate i have an Ivar for my Achievement tracker class, I attach it as an observer to a property value of my statTracker core data entity in the applicationDidFinishLaunching method. I know its making the connection because I've been able to trigger a UIAlert in my AchievementTracker instance, and I've put several log statements that should be triggered whenever the value on the StatTracker's property changes. the log statement appears only once at the application launch. I'm wondering if I'm missing something in the whole object lifecycle scheme of things, I just don't understand why the observer stops getting notified of changes after the applicationDidFinishLaunching method has run. Does it have something to do with the scope of the AchievementTracker reference or more likely the reference to my core data StatTracker is going away once that method finishes up. I guess I'm not sure the right place to place these if that is the case. Would love some help. Here is the code where I add the observer in my appDidFinishLaunching method: [[CoreDataSingleton sharedCoreDataSingleton] incrementStatTrackerStat:@"timesLaunched"]; achievementsObserver = [[AchievementTracker alloc] init]; StatTracker *object = nil; object = [[[CoreDataSingleton sharedCoreDataSingleton] getStatTracker] objectAtIndex:0]; NSLog(@"%@",[object description]); [[CoreDataSingleton sharedCoreDataSingleton] addObserver:achievementsObserver toStat:@"refreshCount"]; here is the code in my core data singleton: -(void) addObserver:(id)observer toStat:(NSString *) statToObserve { NSLog(@"observer added"); NSArray *array = [[NSArray alloc] init]; array = [self getStatTracker]; [[array objectAtIndex:0] addObserver:observer forKeyPath:statToObserve options:NSKeyValueObservingOptionNew | NSKeyValueObservingOptionOld context:NULL]; } and my AchievementTracker: - (void)observeValueForKeyPath:(NSString *)keyPath ofObject:(id)object change:(NSDictionary *)change context:(void *)context { NSLog(@"achievemnt hit"); //NSLog("%@", [change description]); if ([keyPath isEqual:@"refreshCount"] && ((NSInteger)[change valueForKey:@"NSKeyValueObservingOptionOld"] == 60) ) { NSLog(@"achievemnt hit inside"); UIAlertView *alert = [[UIAlertView alloc] initWithTitle:@"title" message:@"achievement unlocked" delegate:self cancelButtonTitle:@"cancel" otherButtonTitles:nil]; [alert show]; } }
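
    Independent of the registration question, the comparison in observeValueForKeyPath: can never be what was intended: KVO's change dictionary is keyed by NSKeyValueChangeOldKey / NSKeyValueChangeNewKey (not the option-name strings), and the values are boxed NSNumbers, so casting the pointer to NSInteger compares an address with 60. A sketch of that method reworked:

        - (void)observeValueForKeyPath:(NSString *)keyPath
                              ofObject:(id)object
                                change:(NSDictionary *)change
                               context:(void *)context
        {
            if ([keyPath isEqualToString:@"refreshCount"]) {
                NSInteger newCount =
                    [[change objectForKey:NSKeyValueChangeNewKey] integerValue];
                NSLog(@"refreshCount is now %ld", (long)newCount);
                if (newCount == 60) {
                    // achievement unlocked...
                }
            }
        }

    Worth checking too that later writes go through the same StatTracker instance that was registered: if getStatTracker returns a different managed object (or a different context's copy), the observed object never changes and no notification is due.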

  • Changing the ItemsSource of a TreeView makes its children invisible when they were already displayed

    - by Marnix Kraus
    I found some strange problem in WPF, using the itemsSource of a treeview. I hope I can make this specific problem clear for you. First; a story. There is a treeview. It has a list with treeviewitems as itemsSource. This list is called _roots. There is another list, called _leafs. For as in a treeview, the _roots contain the _leafs in some hierarchical way. For example: <TreeviewItem Header="Jungle"> <TreeviewItem> <SpecialTreeviewItem Header="Monkey"/> <SpecialTreeviewItem Header="Apple"/> </TreeviewItem> </TreeviewItem> Now I am trying to switch between these two lists as itemsSource. It seemed to work fine, but it doesn't: When the Jungle-item is un-expanded, and I change the itemsSource to _leafs, and change it back again to _roots, everything works fine and all items can be expanded and showed. But when the Jungle-item is expanded (and the special items are already visible) and I change it to the _leafs itemsSource, and then change the itemsSource back to _roots, all special items have disappeared!! Also, when I do the same as case 2, but first un-expand the Jungle-item again, the special items also disappear. I did a lot of debugging, before posting this question here and come to the following conclusion: Printing on the event: visibility changed, the visibility is set to false for all items that were already visible (that is, when _roots become visible, the special items become invisible (because they were already visible)) So, IsVisible is false for the items, but Visibility = Visible. Which is a bit strange. The problem seems to depend on the use of the _roots list, which in a certain way contain the _leafs. When I change the itemsSource to different lists with special items in it, everything works fine. The hierarchical structure of the _roots make this thing broken. I hope that this is a complete overview of my problem. Help would be appreciated.
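
    For background, TreeViewItem is a Visual, and a Visual can live in only one visual tree at a time; using the items as both data and UI means whichever list was realized last owns them, which matches the "IsVisible false, Visibility Visible" symptom. A sketch of the idiomatic alternative, assuming the jungle can be modeled as plain nodes:

        // Plain data nodes instead of TreeViewItems:
        public class Node
        {
            public string Header { get; set; }
            public ObservableCollection<Node> Children { get; private set; }
            public Node() { Children = new ObservableCollection<Node>(); }
        }

        <!-- XAML: let the TreeView generate its own containers. -->
        <TreeView ItemsSource="{Binding Roots}">
            <TreeView.ItemTemplate>
                <HierarchicalDataTemplate ItemsSource="{Binding Children}">
                    <TextBlock Text="{Binding Header}" />
                </HierarchicalDataTemplate>
            </TreeView.ItemTemplate>
        </TreeView>

    Swapping ItemsSource between two collections of Node objects is then safe, because the containers are regenerated rather than re-parented.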

  • Changing the title of a jQuery-UI dialog box within another dialog box's function...

    - by Brian Ojeda
    Why doesn't the second jQuery-UI dialog box's title change when it pops up? For the first dialog box, I change the title using .attr("title", "Confirm") -- it changed the title of the first box to 'Confirm', as it should have. Now when the second box pops up, it should change the title to 'Message', since I did the same thing for the second box -- .attr("title", "Message"). Right? But it doesn't. It keeps the title from before. However, the message changed as it should have. I have tested in IE8, Chrome, and FF3.6. <div id="dialog-confirm" title=""></div> <-- This is the HTML before the jQuery functions. Javascript / jQuery $('#userDelete').click(function() { $(function() { var dialogIcon = "<span class=\"ui-icon ui-icon-alert\"></span>"; var dialogMessage = dialogIcon + "Are you sure you want to delete?"; $("#dialog-confirm").attr("title", "Confirm").html(dialogMessage).dialog({ resizable: false, height: 125, width: 300, modal: true, buttons: { 'Delete': function() { $(this).dialog('close'); $.post('user_ajax.php', {action: 'delete', aId: $('[name=aId]').val() }, function(data) { if(data.success){ var dialogIcon = "<span class=\"ui-icon ui-icon-info\"></span>"; var dialogMessage = dialogIcon + data.message; $('#dialog-confirm').attr("title", "Message"); $('#dialog-confirm').html(dialogMessage); $('#dialog-confirm').dialog({ resizable: false, height: 125, width: 300, modal: true, buttons: { 'Okay': function() { $(this).dialog('close'); var url = $_httpaddress + "admin/index.php" $(location).attr('href',url); } // End of Okay Button Function } //--- End of Dialog Button Script });//--- End of Dialog Function } else { $_messageConsole.slideDown(); $_messageConsole.html(data.message); } }, 'json'); }, //--- End of Delete Button Function 'Cancel': function() { $(this).dialog('close'); } //--- End of Cancel Button Function } //--- End of Dialog Button Script }); //--- End of Dialog Script }); //--- End of Dialog Function return false; }); Thank you for your assistance, if you choose to help.
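
    The short answer: the dialog widget copies the div's title attribute only when the dialog is first created; once the widget exists, later .attr("title", ...) calls and re-invocations of .dialog({...}) are ignored for the title. Setting the option on the live widget works:

        // jQuery UI's documented way to retitle an existing dialog:
        $('#dialog-confirm')
            .html(dialogMessage)
            .dialog('option', 'title', 'Message')
            .dialog('open');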

    Read the article

  • unable to calculate textfield values

    - by user1726508
I am trying to update an input field when the user changes the quantity of an item in a text field. I am iterating over a list of items from my database to build an invoice for a customer. In my code, changing the quantity of a single item affects all the other items in the list; I want the change to apply only to the specific item whose quantity was changed. The code below is misbehaving: a single change to one item's quantity changes the value of every item.

My code:

    <script type="text/javascript">
        $(document).ready(function(){
            $(function() {
                $('input[name="quantity"]').change(function() {
                    var unitprice = $('input[name^="unitprice"]').val();
                    $(this).parents('tr').find('input[name^="price"]').val($(this).val() * unitprice);
                });
            });
        });
    </script>

    <tr>
        <td height="65%" valign="top" width="100%">
            <table width="100%" height="100%" border="0" cellpadding="0" cellspacing="0">
                <s:iterator value="#session.BOK" status="userStatus">
                    <tr style="height: 10px;">
                        <td width="65%" align="left"><s:property value="bookTitile"/></td>
                        <td width="10%" align="left"><s:textfield name="unitprice" value="%{price}" size="4"/></td>
                        <td width="10%" align="center"><s:textfield name="quantity" value="%{quantity}" size="2"/></td>
                        <td width="15%" align="center"><s:textfield name="price" size="6"></s:textfield></td>
                    </tr>
                </s:iterator>
            </table>
        </td>
    </tr>

The output looks like this image...
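A hedged sketch of the likely fix (an educated guess, not from the original post): the handler looks up the unit price with a document-wide selector, so it always reads the first unitprice field on the page. Scoping the lookup to the row that changed keeps the calculation per item:

    $(document).ready(function () {
        $('input[name="quantity"]').change(function () {
            // work only within the <tr> that contains the changed quantity field
            var row = $(this).closest('tr');
            var unitprice = parseFloat(row.find('input[name^="unitprice"]').val()) || 0;
            var quantity = parseFloat($(this).val()) || 0;
            row.find('input[name^="price"]').val(quantity * unitprice);
        });
    });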

    Read the article

  • .htaccess mod_rewrite URL query

    - by 1001001
I was hoping someone could help me out. I'm building a CRM application and need help modifying the .htaccess file to clean up the URLs. I've read every post regarding .htaccess and mod_rewrite, and I've even tried using http://www.generateit.net/mod-rewrite/ to obtain the results, with no success.

Here is what I am attempting to do. Let's call the base URL www.domain.com. We are using PHP with a MySQL back-end and some jQuery and JavaScript. In that "root" folder is my .htaccess file. I'm not sure if I need a .htaccess file in each subdirectory or if one in the root is enough. We have several actual directories of files, including "crm", "sales", "finance", etc.

First off, we want to strip off all the ".php" extensions, which I am able to do myself thanks to these posts. However, the querying of the company and contact IDs is where I am stuck. Right now, if I load www.domain.com/crm/companies.php it displays all the companies in a list. If I click on one of the companies, it uses JavaScript to call a "goto_company(x)" jQuery script that writes a form and submits that form based on the ID (x) of the company. This works fine and keeps the links clean, as all the end user sees is www.domain.com/crm/company.php. However, you can't navigate directly to a company. So we added a few lines of PHP to check whether the POST is null and try a GET instead, allowing us to do www.domain.com/crm/company.php?companyID=40, which displays company #40 out of the database. I need to rewrite this link, and all other associated links, to www.domain.com/crm/company/40. I've tried everything and nothing seems to work. Keep in mind that I need to do this for "contacts" and will also need to do something for "deals" on the sales portion of the app.

To summarize, here's what I am looking to do (see the sketch after this list):

- Change www.domain.com/crm/dash.php to www.domain.com/crm/dash
- Change www.domain.com/crm/company.php?companyID=40 to www.domain.com/crm/company/40
- Change www.domain.com/crm/contact.php?contactID=27 to www.domain.com/crm/contact/27
- Change www.domain.com/sales/dash.php to www.domain.com/sales/dash
- Change www.domain.com/sales/deal.php?dealID=6 to www.domain.com/sales/deal/6

(40, 27, and 6 are just arbitrary numbers as examples.)

Just for reference, when I used the generateit.net/mod-rewrite site with www.domain.com/crm/company.php?companyID=40 as an example, here is what it told me to put in my .htaccess file:

    Options +FollowSymLinks
    RewriteEngine On
    RewriteRule ^crm/company/([^/]*)$ /crm/company.php?companyID=$1 [L]

Needless to say, that didn't work.
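One way this could be wired up, as a hedged sketch: the directory names and query parameters come from the question, but the rule set itself is an untested suggestion, assuming a single .htaccess in the document root and mod_rewrite enabled:

    Options +FollowSymLinks
    RewriteEngine On
    RewriteBase /

    # specific ID routes first, e.g. /crm/company/40 -> /crm/company.php?companyID=40
    RewriteRule ^crm/company/([0-9]+)/?$ crm/company.php?companyID=$1 [L,QSA]
    RewriteRule ^crm/contact/([0-9]+)/?$ crm/contact.php?contactID=$1 [L,QSA]
    RewriteRule ^sales/deal/([0-9]+)/?$ sales/deal.php?dealID=$1 [L,QSA]

    # generic fallback: /crm/dash -> /crm/dash.php, only if that .php file exists
    RewriteCond %{REQUEST_FILENAME} !-f
    RewriteCond %{REQUEST_FILENAME} !-d
    RewriteCond %{REQUEST_FILENAME}.php -f
    RewriteRule ^(.+)$ $1.php [L]

Note the ID rules must come before the generic .php fallback; otherwise a URL like /crm/company/40 would never reach them.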

    Read the article

  • Microsoft and jQuery

    - by Rick Strahl
The jQuery JavaScript library has been steadily getting more popular and, with recent developments from Microsoft, jQuery is also getting ever more exposure on the ASP.NET platform – including, now, directly from Microsoft. jQuery is a lightweight, open source DOM manipulation library for JavaScript that has changed how many developers think about JavaScript. You can download it and find more information on jQuery on www.jquery.com.

For me jQuery has had a huge impact on how I develop Web applications and was probably the main reason I went from dreading JavaScript development to actually looking forward to implementing client-side JavaScript functionality. It has also had a profound impact on my JavaScript skill level by showing me how the library accomplishes things (often by reviewing the terse but excellent source code). jQuery made an uncomfortable development platform (JavaScript + DOM) a joy to work on. Although jQuery is by no means the only JavaScript library out there, its ease of use, small size, huge community of plug-ins and pure usefulness has made it easily the most popular JavaScript library available today.

As a long time jQuery user, I've been excited to see the developments from Microsoft that are bringing jQuery to more ASP.NET developers and providing more integration with jQuery for ASP.NET's core features rather than relying on the ASP.NET AJAX library.

Microsoft and jQuery – making Friends

jQuery is an open source project, but in the last couple of years Microsoft has really thrown its weight behind supporting this open source library as a supported component on the Microsoft platform. When I say supported I literally mean supported: Microsoft now offers actual tech support for jQuery as part of their Product Support Services (PSS), as jQuery integration has become part of several of the ASP.NET toolkits and ships in several of the default Web project templates in Visual Studio 2010. The ASP.NET MVC 3 framework (still in beta) also uses jQuery for a variety of client-side support features, including client-side validation, and we can look forward to more integration of client-side functionality via jQuery in both MVC and WebForms in the future. In other words, jQuery is becoming an optional but included component of the ASP.NET platform.

PSS support means that support staff will answer jQuery-related support questions as part of any support incidents related to ASP.NET, which provides some peace of mind to corporate development shops that require end-to-end support from Microsoft.

In addition to including and supporting jQuery, Microsoft has also been getting involved in providing development resources for extending jQuery's functionality via plug-ins. Microsoft's last version of the Microsoft Ajax Library – the successor to the native ASP.NET AJAX Library – included some really cool functionality for client templates, databinding and localization. As it turns out, Microsoft has rebuilt most of that functionality using jQuery as the base API and provided jQuery plug-ins of these components. Very recently these three plug-ins were submitted to and approved for inclusion in the official jQuery plug-in repository, and have been taken over by the jQuery team for further improvements and maintenance.
Even more surprising: the jQuery Templates component has actually been approved for inclusion in the next major update of the jQuery core in jQuery 1.5, which means it will become a native feature that doesn't require additional script files to be loaded. Imagine this – an open source contribution from Microsoft that has been accepted into a major open source project for a core feature improvement. Microsoft has come a long way indeed!

What the Microsoft Involvement with jQuery means to you

For Microsoft, jQuery support is a strategic decision that affects their direction in client-side development, but nothing stopped you from using jQuery in your applications prior to Microsoft's official backing – and in fact a large chunk of developers did so readily before Microsoft's announcement. Official support from Microsoft brings a few benefits to developers, however. jQuery support in Visual Studio 2010 means built-in support for jQuery IntelliSense, automatically added jQuery scripts in many project types, and a common base for client-side functionality that actually uses what most developers are already using.

If you have already been using jQuery and were worried about straying from the Microsoft line and their internal Microsoft Ajax Library – worry no more. With official support and the change in direction towards jQuery, Microsoft is now following along with what most in the ASP.NET community had already been doing, which is likely the reason for Microsoft's shift in direction in the first place.

ASP.NET AJAX and the Microsoft AJAX Library weren't bad technology – there was tons of useful functionality buried in these libraries. However, these libraries never got off the ground, mainly because early incarnations were squarely aimed at control/component developers rather than application developers. For all the functionality that these controls provided for control developers, they lacked useful and easily usable application developer functionality accessible in day-to-day client-side development. The result was that, even though Microsoft shipped support for these tools in the box (in .NET 3.5 and 4.0), other than the internal support in ASP.NET for things like the UpdatePanel and the ASP.NET AJAX Control Toolkit, as well as some third party vendors, the Microsoft client libraries were largely ignored by the developer community, opening the door for other client-side solutions.

Microsoft seems to be acknowledging developer choice in this case: many more developers were going down the jQuery path than using the Microsoft-built libraries, and there seems to be little sense in continuing development of a technology that largely goes unused by the majority of developers. Kudos to Microsoft for recognizing this and gracefully changing directions. Note that even though there will be no further development of the Microsoft client libraries, they will continue to be supported, so if you're using them in your applications there's no reason to start running for the exit in a panic and rewriting everything with jQuery. Although that might be a reasonable choice in some cases, jQuery and the Microsoft libraries work well side by side, so you can leave existing solutions untouched even as you enhance them with jQuery.
The Microsoft jQuery Plug-ins – Solid Core Features

One of the most interesting developments in Microsoft's embracing of jQuery is that Microsoft has started contributing to jQuery via the standard mechanism set up for jQuery developers: by submitting plug-ins. Microsoft took some of the nicest new features of the unpublished Microsoft Ajax Client Library, re-wrote these components for jQuery and then submitted them as plug-ins to the jQuery plug-in repository. Accepted plug-ins get taken over by the jQuery team, and that's exactly what happened with the three plug-ins submitted by Microsoft – with the templating plug-in even getting slated to be published as part of the jQuery core in the next major release (1.5). The following plug-ins are provided by Microsoft:

- jQuery Templates – a client-side template rendering engine
- jQuery Data Link – a client-side databinder that can synchronize changes without code
- jQuery Globalization – provides formatting and conversion features for dates and numbers

The first two are ports of functionality that was slated for the Microsoft Ajax Library, while the globalization library provides functionality that was already found in the original ASP.NET AJAX library. To me, all three plug-ins address a pressing need in client-side applications and provide functionality I've previously used in other incarnations, but with more complete implementations. Let's take a close look at these plug-ins.

jQuery Templates
http://api.jquery.com/category/plugins/templates/

Client-side templating is a key component for building rich JavaScript applications in the browser. Templating on the client lets you avoid manually creating markup by creating DOM nodes and injecting them individually into the document via code. Rather, you can create markup templates – similar to the way you create classic ASP server markup – and merge data into these templates to render HTML, which you can then inject into the document or replace existing content with. Output from templates is rendered as a jQuery matched set and can then be easily inserted into the document as needed.

Templating is key to minimizing client-side code and reducing repeated rendering logic. A single template can be used in many places for updating and adding content to existing pages. Further, if you build pure AJAX interfaces that rely entirely on client rendering of the initial page content, templates allow you to use a single markup template to handle all rendering of each specific HTML section or element.

I've used a number of different client rendering template engines with jQuery in the past, including jTemplates (a PHP-style templating engine) and a modified version of John Resig's MicroTemplating engine, which I built into my own set of libraries because templating is such a commonly used feature in my client-side applications. jQuery Templates adds a much richer templating model that allows for sub-templates and access to the data items. Like John Resig's original Micro Template engine, the core of the templating engine generates JavaScript code, which means that templates can include JavaScript code.

To give you a basic idea of how templates work, imagine I have an application that downloads a set of stock quotes based on a symbol list, then displays them in the document.
To do this you can create an 'item' template that describes how each of the quotes is rendered, as a template inside of the document:

    <script id="stockTemplate" type="text/x-jquery-tmpl">
        <div id="divStockQuote" class="errordisplay" style="width: 500px;">
            <div class="label">Company:</div><div><b>${Company}(${Symbol})</b></div>
            <div class="label">Last Price:</div><div>${LastPrice}</div>
            <div class="label">Net Change:</div><div>
                {{if NetChange > 0}}
                    <b style="color:green" >${NetChange}</b>
                {{else}}
                    <b style="color:red" >${NetChange}</b>
                {{/if}}
            </div>
            <div class="label">Last Update:</div><div>${LastQuoteTimeString}</div>
        </div>
    </script>

The 'template' is little more than HTML with some markup expressions inside of it that define the template language. Notice the embedded ${} expressions, which reference data from the quote objects returned from an AJAX call on the server. You can embed any JavaScript or value expression in these template expressions. There are also a number of structural commands like {{if}} and {{each}} that provide rudimentary logic inside of your templates, as well as commands ({{tmpl}} and {{wrap}}) for nesting templates. You can find more about the full set of markup expressions available in the documentation.

To load up this data you can use code like the following:

    <script type="text/javascript">
        //var Proxy = new ServiceProxy("../PageMethods/PageMethodsService.asmx/");
        $(document).ready(function () {
            $("#btnGetQuotes").click(GetQuotes);
        });

        function GetQuotes() {
            var symbols = $("#txtSymbols").val().split(",");
            $.ajax({
                url: "../PageMethods/PageMethodsService.asmx/GetStockQuotes",
                data: JSON.stringify({ symbols: symbols }), // parameter map
                type: "POST", // data has to be POSTed
                contentType: "application/json",
                timeout: 10000,
                dataType: "json",
                success: function (result) {
                    var quotes = result.d;
                    var jEl = $("#stockTemplate").tmpl(quotes);
                    $("#quoteDisplay").empty().append(jEl);
                },
                error: function (xhr, status) {
                    alert(status + "\r\n" + xhr.responseText);
                }
            });
        };
    </script>

In this case an ASMX AJAX service is called to retrieve the stock quotes. The service returns an array of quote objects. The result is returned as an object with a .d property (in Microsoft service style) that contains the actual array of quotes. The template is applied with:

    var jEl = $("#stockTemplate").tmpl(quotes);

which selects the template script tag and uses the .tmpl() function to apply the data to it. The result is a jQuery matched set of elements that can then be appended to the quote display element in the page.

The template is merged against an array in this example. When the data is an array, the template is automatically applied to each array item. If you pass a single data item – like, say, a single stock quote – the template works exactly the same way but is applied only once. Templates also have access to a $data item, which provides the current data item and information about the template that is currently executing. This makes it possible to keep context within the template itself and also to pass context from a parent template to a child template, which is very powerful.

Templates can be evaluated by using the template selector and calling the .tmpl() function on the jQuery matched set as shown above, or you can use the static $.tmpl() function to provide a template as a string. This allows you to dynamically create templates in code or – more likely – to load templates from the server via AJAX calls.
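To illustrate that last point, here is a hedged sketch of loading a template from the server and rendering it with the static $.tmpl() function. The template URL and file name are hypothetical, not from the article; the quotes array and #quoteDisplay element are from the example above:

    // fetch raw template markup from the server, then render it against the data
    $.get("templates/stockitem.tmpl", function (markup) {
        var jEl = $.tmpl(markup, quotes); // compile and render the string template
        $("#quoteDisplay").empty().append(jEl);
    });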
In short, there are options. The above shows off some of the basics, but there's much more functionality available in the template engine. Check the documentation link for more information and links to additional examples; the plug-in download also comes with a number of examples that demonstrate the functionality. jQuery Templates will become a native component in jQuery core 1.5, so it's definitely worthwhile checking out the engine today and getting familiar with this interface.

As much as I'm stoked about templating becoming part of the jQuery core – because it's such an integral part of many applications – there are also a couple of shortcomings in the current incarnation:

Lack of Error Handling
Currently if you embed an expression that is invalid, it's simply not rendered. There's no error rendered into the template, nor do the various template functions throw errors, which leaves finding bugs as a runtime exercise. I would like some mechanism – optional if possible – to get error info on what is failing in a template when it's rendered.

No String Output
Templates are always rendered into a jQuery matched set and there's no way that I can see to render directly to a string. String output can be useful for debugging, as well as for opening up templating to creating non-HTML string output.

Limited JavaScript Access
Unlike John Resig's original MicroTemplating engine, which was entirely based on JavaScript code generation, these templates are limited to a few structured commands that can 'execute'. There's no code execution inside of script code, which means you're limited to calling expressions available on global objects or the data item passed in. This may or may not be a big deal depending on the complexity of your template logic.

Error handling has been discussed quite a bit and it's likely there will be some solution to that particular issue by the time jQuery Templates ship. The others are relatively minor issues, but something to think about anyway.

jQuery Data Link
http://api.jquery.com/category/plugins/data-link/

jQuery Data Link provides the ability to do two-way data binding between input controls and an underlying object's properties. The typical scenario is linking a textbox to a property of an object, having the object updated when the text in the textbox changes, and having the textbox updated when the value in the object (or the entire object) changes. The plug-in also supports converter functions that provide the conversion logic from string to some other value – typically necessary for mapping textbox string input to, say, a number property – and can potentially apply additional formatting and calculations.

In theory this sounds great; however, in reality this plug-in has some serious usability issues. Using the plug-in you can do things like the following to bind data:

    person = { firstName: "rick", lastName: "strahl"};

    $(document).ready( function() {
        // provide for two-way linking of inputs
        $("form").link(person);

        // bind to non-input elements explicitly
        $("#objFirst").link(person, {
            firstName: {
                name: "objFirst",
                convertBack: function (value, source, target) {
                    $(target).text(value);
                }
            }
        });
        $("#objLast").link(person, {
            lastName: {
                name: "objLast",
                convertBack: function (value, source, target) {
                    $(target).text(value);
                }
            }
        });
    });

This code hooks up two-way linking between a couple of textboxes on the page and the person object.
The first line in the .ready() handler provides mapping of the object to form fields with the same field names as properties on the object. Note that .link() does NOT bind the current values into the textboxes when you call it – changes are mapped only when values change and you move out of the field. Strike one.

The two following commands allow manual binding of values to specific DOM elements, which is effectively a one-way bind. You specify the object and then an explicit mapping, where name is an ID in the document. The converter is required to explicitly assign the value to the element. Strike two.

You can also detect changes to the underlying object and cause updates to the bound input elements. Unfortunately the syntax to do this is not very natural, as you have to rely on the jQuery data object. Updating an object's properties so that change notification fires looks like this:

    function updateFirstName() {
        $(person).data("firstName", person.firstName + " (code updated)");
    }

This works fine in causing any linked fields to be updated – in the bindings above, both the firstName input field and the objFirst DOM element get updated. But the syntax requires you to use a jQuery .data() call for each property change to ensure that the changes are tracked properly. Really? Sure, you're binding through multiple layers of abstraction now, but how is that better than just manually assigning values? The code savings (if any) are going to be minimal.

As much as I would like to have a WPF/Silverlight/Observable-like binding mechanism in client script, this plug-in doesn't help much towards that goal in its current incarnation. While you can bind values, the 'binder' is too limited to be really useful. If initial values can't be assigned from the mappings, you're going to end up duplicating work by loading the data with some other mechanism. There's no easy way to re-bind data to a different object altogether, since updates trigger only through the .data members. Finally, any non-input elements have to be bound via code that's fairly verbose – frankly, it may be more voluminous than what you might write by hand for manual binding and unbinding.

Two-way binding can be very useful, but it has to be easy and, most importantly, natural. If it's more work to hook up a binding than to write a couple of lines that do the binding/unbinding, this sort of thing helps very little in most scenarios. In talking to some of the developers, the feature set for Data Link is not complete and they are still soliciting input for features and functionality. If you have ideas on how you want this feature to be more useful, get involved and post your recommendations. As it stands, it looks to me like this component needs a lot of love to become useful.

For this component to really provide value, bindings need to be refreshable easily and to work at the object level, not just the property level. It seems to me we would be much better served by a model binder object that can perform these binding/unbinding tasks in bulk, rather than a tool where each link has to be mapped first. I also find the choice of creating a jQuery plug-in questionable – it seems a standalone object (albeit one that relies on the jQuery library) would provide a more intuitive interface than the current forcing of options onto a plug-in style interface. Out of the three Microsoft-created components, this is by far the least useful and least polished implementation at this point.
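One small way to soften the per-property .data() requirement is to wrap it in a helper. This is a hedged sketch, not part of the plug-in; setLinked is a hypothetical name, and it relies only on the $(obj).data() notification mechanism shown above:

    // route each property change through jQuery's data() API so that any
    // elements linked to the object via the Data Link plug-in refresh
    function setLinked(obj, props) {
        $.each(props, function (name, value) {
            $(obj).data(name, value);
        });
    }

    // usage: update two bound properties in one call
    setLinked(person, { firstName: "Rick", lastName: "Strahl" });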
jQuery Globalization
http://github.com/jquery/jquery-global

Globalization in JavaScript applications often gets short shrift, partly because JavaScript natively has little support for formatting and parsing numbers and dates. There are a number of JavaScript libraries out there that provide some support for globalization, but most are limited to a particular portion of it. As .NET developers we're fairly spoiled by the richness of the APIs provided in the framework, and when dealing with client development one really notices the lack of these features.

Even if you don't need to localize your application, the globalization plug-in helps with some basic tasks: dealing with formatting and parsing of dates and time values. Dates in particular are problematic in JavaScript, as there are no formatters whatsoever except the .toString() method, which outputs a verbose and next to useless long string. With the globalization plug-in you get a good chunk of the formatting and parsing functionality that the .NET framework provides on the server. You can write code like the following, for example, to format numbers and dates:

    var date = new Date();
    var output = $.format(date, "MMM. dd, yy") + "\r\n" +
                 $.format(date, "d") + "\r\n" +    // 10/25/2010
                 $.format(1222.32213, "N2") + "\r\n" +
                 $.format(1222.33, "c") + "\r\n";
    alert(output);

This becomes even more useful if you combine it with templates, which can also include any JavaScript expressions. Assuming the globalization plug-in is loaded, you can create template expressions that use the $.format function. Here's the template I used earlier for the stock quote again, with a couple of formats applied:

    <script id="stockTemplate" type="text/x-jquery-tmpl">
        <div id="divStockQuote" class="errordisplay" style="width: 500px;">
            <div class="label">Company:</div><div><b>${Company}(${Symbol})</b></div>
            <div class="label">Last Price:</div>
            <div>${$.format(LastPrice,"N2")}</div>
            <div class="label">Net Change:</div><div>
                {{if NetChange > 0}}
                    <b style="color:green" >${NetChange}</b>
                {{else}}
                    <b style="color:red" >${NetChange}</b>
                {{/if}}
            </div>
            <div class="label">Last Update:</div>
            <div>${$.format(LastQuoteTime,"MMM dd, yyyy")}</div>
        </div>
    </script>

There are also parsing methods that can easily parse dates and numbers from strings:

    alert($.parseDate("25.10.2010"));
    alert($.parseInt("12.222")); // de-DE uses . for thousands separators

As you can see, culture-specific options are taken into account when parsing. The globalization plug-in provides rich support for a variety of locales:

- Get a list of all available cultures
- Query cultures for culture items (like currency symbol, separators, etc.)
- Localized string names for all calendar-related items (days of week, months)
- Generated off of .NET's supported locales

In short, you get much of the same functionality that you might already be using in .NET on the server side. The plug-in includes a huge number of locales: a Globalization.all.min.js file that contains the text defaults for every locale, plus small locale-specific script files that define each locale's settings. It's highly recommended that you NOT use the huge globalization file that includes all locales, but rather add script references only for those languages you explicitly care about. Overall this plug-in is a welcome helper.
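As a hedged sketch of the piece the parsing examples above imply: culture selection drives the de-DE behavior. This assumes the plug-in's preferCulture API and that the de-DE culture script has been referenced (both assumptions on my part, not shown in the article):

    // tell the plug-in which culture subsequent format/parse calls should use
    $.preferCulture("de-DE");

    alert($.format(1222.33, "c"));     // currency formatted with German conventions
    alert($.parseDate("25.10.2010"));  // parsed with day-first ordering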
Even if you use it with a single locale (like en-US) and do no other localization, you'll gain solid support for number and date formatting, which is a vital feature of many applications.

Changes for Microsoft

It's good to see Microsoft coming out of its shell and away from the 'not-built-here' mentality that has been so pervasive in the past. It's especially good to see it applied to jQuery – a technology that has stood in drastic contrast to Microsoft's own internal efforts in terms of design, usage model and… popularity. It's great to see that Microsoft is paying attention to what customers prefer to use and supporting the customer sentiment – even if it meant drastically changing course of policy and moving into a more open and sharing environment in the process.

The additional jQuery support that has been introduced in the last two years has certainly made life easier for many developers on the ASP.NET platform. It's also nice to see Microsoft submitting proposals through the standard jQuery process of plug-ins and getting accepted for various very useful projects. Certainly the jQuery Templates plug-in is going to be very useful to many, especially since it will be baked into the jQuery core in jQuery 1.5. I hope we see more of this type of involvement from Microsoft in the future. Kudos!

© Rick Strahl, West Wind Technologies, 2005-2010. Posted in jQuery, ASP.NET

    Read the article

  • Toorcon14

    - by danx
Toorcon 2012 Information Security Conference, San Diego, CA, http://www.toorcon.org/
Dan Anderson, October 2012

It's almost Halloween, and we all know what that means—yes, of course, it's time for another Toorcon Conference! Toorcon is an annual conference for people interested in computer security. This includes the whole range of hackers, computer hobbyists, professionals, security consultants, press, law enforcement, prosecutors, FBI, etc. We're at Toorcon 14—see earlier blogs for some of the previous Toorcons I've attended (back to 2003). This year's "con" was held at the Westin on Broadway in downtown San Diego, California.

The following are not necessarily my views—I'm just the messenger—although I could have misquoted or misparaphrased the speakers. Also, I only reviewed some of the talks below, the ones I attended and that interested me:

- MalAndroid—the Crux of Android Infections, Aditya K. Sood
- Programming Weird Machines with ELF Metadata, Rebecca "bx" Shapiro
- Privacy at the Handset: New FCC Rules?, Valkyrie
- Hacking Measured Boot and UEFI, Dan Griffin
- You Can't Buy Security: Building the Open Source InfoSec Program, Boris Sverdlik
- What Journalists Want: The Investigative Reporters' Perspective on Hacking, Dave Maas & Jason Leopold
- Accessibility and Security, Anna Shubina
- Stop Patching, for Stronger PCI Compliance, Adam Brand
- McAfee Secure & Trustmarks — a Hacker's Best Friend, Jay James & Shane MacDougall

MalAndroid—the Crux of Android Infections
Aditya K. Sood, IOActive, Michigan State PhD candidate

Aditya talked about Android smartphone malware. There's a lot of old Android software out there—over 50% Gingerbread (2.3.x)—and most of it has unpatched vulnerabilities. Of 9 Android vulnerabilities, 8 have known exploits (such as the old Gingerbread Global Object Table exploit). Android protection includes sandboxing, a security scanner, app permissions, and a screened Android app market. The Android permission checker has fine-grained resource control and policy enforcement. Android static analysis also includes a static analysis app checker ("bouncer") and a vulnerability checker.

What security problems does Android have?

- User-centric security, which depends on the user to grant permissions and make smart decisions. But users don't care or think about malware (they're not aware, not paranoid). All they want is functionality, extensibility, mobility
- Android had no "proper" encryption before Android 3.0
- No built-in protection against social engineering and web tricks
- Alternative Android app markets are unsafe. Simply visiting some markets can infect Android

Aditya classified Android malware types as:

- Type A—Apps. These interact with the Android app framework. For example, a fake Netflix app, or Android Gold Dream (a game), which stealthily uploads user files to a remote location.
- Type K—Kernel. Exploits underlying Linux libraries or the kernel
- Type H—Hybrid. These use multiple layers (app framework, libraries, kernel). These are most commonly used by Android botnets, which are popular with Chinese botnet authors

What are the threats from Android malware? These include leaking info (contacts), banking fraud, corporate network attacks, malware advertising, and malware "hacktivism" (the promotion of social causes—for example, promoting specific leaders of the Tunisian or Iranian revolutions). Android malware is frequently "masqueraded", that is, repackaged inside a legit app along with the malware. To avoid detection, the hidden malware is not unwrapped until runtime.
The malware payload can be hidden in, for example, PNG files. Less common are Android bootkits—there aren't many around. What they do is hijack the Android init framework—altering system programs and daemons—and then delete themselves. For example, the DKF Bootkit (China).

Android app problems:

- no code signing! all self-signed
- native code execution
- permission sandbox — all or none
- alternate market places
- no robust Android malware detection at the network level
- delayed patch process

Programming Weird Machines with ELF Metadata
Rebecca "bx" Shapiro, Dartmouth College, NH
https://github.com/bx/elf-bf-tools
@bxsays on twitter

Definitions: "ELF" is an executable file format used in linking and loading executables (on UNIX/Linux-class machines). A "weird machine" uses undocumented computation sources (I think of them as unintended virtual machines). Some examples of "weird machines" are those that return to a weird location, do SQL injection, or corrupt the heap.

Bx then talked about using ELF metadata as (an unintended) "weird machine". Some ELF background: a compiler takes source code and generates an ELF object file (hello.o). A static linker makes an ELF executable from the object file. A runtime linker and loader takes the ELF executable and loads and relocates it in memory. The ELF file has symbols to relocate functions and variables. ELF has two relocation tables—one used at link time and another at loading time: .rela.dyn (link time) and .dynsym (dynamic table). GOT: Global Offset Table of addresses for dynamically-linked functions. PLT: Procedure Linkage Table—works with the GOT. The memory layout of a process (not the ELF file) is, in order: program (+ heap), dynamic libraries, libc, ld.so, stack (which includes the dynamic table loaded into memory).

For ELF, the "weird machine" is found and exploited in the loader. ELF can be crafted to execute viruses by tricking the runtime into executing interpreted "code" in the ELF symbol table. One can inject parasitic "code" without modifying the actual ELF code portions. Think of the ELF symbol table as an "assembly language" interpreter. It has these elements:

- instructions: add, move, jump if not 0 (jnz)
- think of symbol table entries as "registers"
- a symbol table value is the "contents"
- immediate values are constants
- direct values are addresses (e.g., 0xdeadbeef)
- move instruction: a relocation table entry
- add instruction: a relocation table "addend" entry
- jnz instruction: takes multiple relocation table entries

The ELF weird machine exploits the loader by relocating relocation table entries. The loader will go on forever until told to stop. It stores state on the stack at the "end" and uses IFUNC table entries (containing function pointer addresses). The ELF weird machine, called "Brainfu*k" (BF), has:

- 8 instructions: pointer inc, dec, inc indirect, dec indirect, jump forward, jump backward, print
- 3 registers

Bx showed example BF source code that implemented a Turing machine printing "hello, world". More interesting was the next demo, where bx modified ping. Ping runs suid as root, but quickly drops privilege. BF modified the loader to disable the library function call that drops privilege, so ping remained root. Then BF modified the ping -t argument to execute the -t filename as root. It's best to show what this modified ping does with an example:

    $ whoami
    bx
    $ ping localhost -t backdoor.sh    # executes backdoor
    $ whoami
    root
    $

The modified code increased from 285948 bytes to 290209 bytes.
A BF tool compiles an "executable" by modifying the symbol table in an existing ELF executable. The tool modifies the .dynsym and .rela.dyn tables, but not code or data.

Privacy at the Handset: New FCC Rules?
"Valkyrie" (Christie Dudley, Santa Clara Law JD candidate)

Valkyrie talked about mobile handset privacy. Some background: Senator Franken (also a comedian) became alarmed about CarrierIQ, through which the carriers track their customers. Franken asked the FCC to find out what obligations carriers think they have to protect privacy. The carriers' response was that they are doing just fine with self-regulation—no worries!

Carriers need to collect data, such as missed calls, to maintain network quality. But carriers also sell data for marketing. Verizon sells customer data and enables this with a narrow privacy policy (only 1 month to opt out, with difficulties). The data sold is not individually identifiable and is aggregated. But Verizon recommends, as an aggregation workaround, "recollating" the data against other databases to identify customers indirectly.

The FCC has regulated telephone privacy since 1934 and mobile network privacy since 2007. The carriers say mobile phone privacy is an FTC responsibility (not the FCC's). The FTC is trying to improve mobile app privacy, but the FTC has no authority over carrier/customer relationships. As a side note, Apple iPhones are unique in that carriers have extra control over iPhones that they don't have with other smartphones. As a result, iPhones may be more regulated.

Who are the consumer advocates? Everyone knows the EFF, but EPIC (Electronic Privacy Information Center), although more obscure, is more relevant here.

What to do? Carriers must be accountable. Opt-in and opt-out at any time. Carriers need an incentive to grant users control (for those who want it), by being held liable and responsible for breaches on their clock. Location information should be added to current CPNI privacy protection, and should require a "pen/trap" judicial order to obtain (still a lower standard than the 4th Amendment). Politics are on a pro-privacy swing now, with many senators and the White House. There will probably be new regulation soon; enforcement will be a problem, but consumers will still see some benefit.

Hacking Measured Boot and UEFI
Dan Griffin, JWSecure, Inc., Seattle, @JWSdan

Dan talked about hacking measured UEFI boot. First some terms:

- UEFI is a boot technology that is replacing BIOS (it has whitelisting and blacklisting)
- UEFI protects devices against rootkits
- TPM - a hardware security device that stores hashes and hardware-protected keys
- "secure boot" can control at the firmware level which boot images can boot
- "measured boot" is an OS feature that tracks hashes (from the BIOS, boot loader, kernel, early drivers)
- "remote attestation" allows remote validation and control, based on policy, via a remote attestation server

Microsoft is pushing TPM (required for Windows 8), but Google is not. Intel TianoCore is the only open source implementation of UEFI. Dan has a Measured Boot Tool at http://mbt.codeplex.com/ with a demo where you can also view TPM data. TPM support is already on enterprise-class machines.

UEFI Weaknesses
UEFI toolkits are evolving rapidly, but UEFI has weaknesses:

- it assumes the user is an ally
- it trusts the TPM implicitly, and the TPM is attached to the computer
- the hibernate file is unprotected (disk encryption protects against this)
- protection is migrating from hardware to firmware
- delays in patching and whitelist updates
- will UEFI really be adopted by the mainstream? (smartphone hardware support, bank support, apathetic consumer support)

You Can't Buy Security: Building the Open Source InfoSec Program
Boris Sverdlik, ISDPodcast.com co-host

Boris talked about the problems typical of current security audits. "IT Security" is an oxymoron—IT exists to enable business, uptime, utilization, reporting, but doesn't care about security—IT has a conflict of interest. There's no magic bullet ("blinky box"), no one-size-fits-all solution (e.g., Intrusion Detection Systems (IDSs)). Regulations don't make you secure. The cloud is not secure (because of shared data and admin access). Defense and pen testing is not sexy. Auditors are not the solution (security is not a checklist)—what's needed is experience and adaptability—soft skills.

Step 1: The first thing is to Google and learn the company end-to-end before you start. Get to know the management team (not the IT team); meet as many people as you can. Don't use arbitrary values such as CISSP scores. Quantitative risk assessment is a myth (e.g., AV*EF-SLE). Learn the different business units and their legal/regulatory obligations, learn the business and where the money is made, verify the company is protected from script kiddies (easy), learn what the sensitive information is (IP, internal use only), and start with the low-hanging fruit (customer service reps and social engineering).

Step 2: Policies. Keep policies short and relevant. Generic SANS "security" boilerplate policies don't make sense and are not followed. Focus on acceptable use, data usage, communications, and physical security.

Step 3: Implementation: keep it simple, stupid. Open source, although useful, is not free (implementation cost).

- Access controls: authentication & authorization for local and remote access. MS Windows has it; otherwise use OpenLDAP, OpenIAM, etc.
- Application security: everyone tries to reinvent the wheel—use existing static analysis tools. Review high-risk apps and major revisions. Don't run apps of different risk levels on the same system. Assume the host/client is compromised and use app-level security controls.
- Network security: VLAN != segregated, because there are too many workarounds. Use explicit firewall rules, active and passive network monitoring (snort is free), disallow end-user access to the production environment, and have a proxy instead of direct Internet access. Also, SSL certificates are not good two-factor auth, and SSL does not mean "safe."
- Operational controls: have change, patch, asset, & vulnerability management (OSSI is free). For change management, always review code before pushing to production. For logging, have centralized security logging for business-critical systems, separate security logging from administrative/IT logging, and lock down the logs (they have everything). Monitor with OSSIM (open source). Use intrusion detection, but not just to fulfill a checkbox: build rules from a whitelist perspective (snort). OSSEC has 95% of what you need. Vulnerability management is a QA function when done right: OpenVAS and Seccubus are free.
- Security awareness: the reality is users will always click everything. Build real awareness, not a compliance-driven checkbox, and have it integrated into the culture.
- Pen test by crowd-sourcing—test with logging
- COSSP http://www.cossp.org/ - Comprehensive Open Source Security Project

What Journalists Want: The Investigative Reporters' Perspective on Hacking
Dave Maas, San Diego CityBeat
Jason Leopold, Truthout.org

The difference between hackers and investigative journalists: for hackers, the motivation varies but the method is the same, with technological specialties. For investigative journalists, it's about one thing—The Story—and they need broad info-gathering skills.

J-School in 60 seconds. Generic formula: a person or issue of public interest, new info, or a new angle. Generic criteria: proximity, prominence, timeliness, human interest, oddity, or consequence.

Media awareness of hackers and trends: journalists are becoming extremely aware of hackers, with congressional debates (privacy, data breaches), demand for data-mining journalists, journalists' use of coding and web development for news gathering, and journalists busted for hacking (Murdoch).

Info gathering by investigative journalists includes public records laws. The federal Freedom of Information Act (FOIA) is good, but slow. The California Public Records Act is a lot stronger. FOIA takes forever because of foot-dragging—it helps to be specific. You often need to sue (especially the FBI). CPRA is faster, and requests can be vague. There are also dumps and leaks (a la Wikileaks).

Journalists want: leads, protecting ourselves and our sources, and adapting tools for news gathering (Google hacking). Anonymity is important to whistleblowers. They want no digital footprint left behind (e.g., email, web logs). They don't trust encryption; they want to feel safe and secure. Whistleblower laws are very weak—there's no upside for whistleblowers—they have to be very passionate to do it.

Accessibility and Security or: How I Learned to Stop Worrying and Love the Halting Problem
Anna Shubina, Dartmouth College

Anna talked about how accessibility and security are related. Accessibility of digital content (not real-world accessibility) mostly refers, for our purposes, to blind users and screen readers. Accessibility is about parsing documents, as are many security issues. "Rich" executable content causes accessibility to fail, and often causes security to fail. For example, MS Word is an executable format—it's not a document exchange format—and it's more dangerous than PDF or HTML. Accessibility is often the first, and maybe the only, sanity check of parsing. There's no choice in the matter, because someone may want to read what you write.

Google, for example, is very particular about which web browser you use and is bad at supporting other browsers, using JavaScript instead of links and often requiring mouseover to display content. PDF is a security nightmare: an executable format with embedded Flash, JavaScript, etc.—15 million lines of code. Google Chrome doesn't handle PDF correctly, causing several security bugs. PDF has an accessibility checker and PDF tagging to help with accessibility. But no PDF checker checks for incorrect tags or untagged content, or validates lists or tables. None check executable content at all.

The "Halting Problem" is: can one decide whether a program will ever stop? The answer, in general, is no (Rice's theorem). The same holds true for accessibility checkers. Language-theoretic security says complicated data formats are hard to parse—the problem cannot be solved in general due to the Halting Problem.
W3C Web Accessibility Guidelines: "Perceivable, Operable, Understandable, Robust". Not much help, except for "Robust", but here are some gems (paraphrasing):

- all information should be parsable
- if not parsable, it cannot be converted to alternate formats
- maximize compatibility in new document formats

Executable webpages are bad for security and accessibility. They say it's for a better web experience, but is it necessary to stuff web pages with JavaScript for a better experience? A good example is The Drudge Report—it has hand-written HTML with no JavaScript, yet drives a lot of web traffic due to good content. A bad example is Google News—hidden scrollbars, guessing at user input.

Solutions:

- Accessibility and security problems come from the same source
- Expose the "better user experience" myth
- Keep your corner of the Internet parsable
- Remember the "Halting Problem"—recognize false solutions (checking and verifying tools)

Stop Patching, for Stronger PCI Compliance
Adam Brand, protiviti, @adamrbrand, http://www.picfun.com/

Adam talked about PCI compliance for retail sales. Take an example: for PCI compliance, 50% of Brian's time (an IT guy), 960 hours/year, was spent patching POSs in 850 restaurants. Often applying a patch makes no sense (like fixing a browser vulnerability on a server). "Scanner worship" is the overuse of vulnerability scanners—it gives a warm fuzzy feeling and it's simple (red or green results—fix the reds). Scanners give a false sense of security. In reality, breaches from missing patches are uncommon—more common problems are default passwords, cleartext authentication, and misconfiguration (open firewall ports).

Patching myths:

- Myth 1: install within 30 days of patch release (but PCI §6.1 allows a "risk-based approach" instead).
- Myth 2: the vendor decides what's critical (also PCI §6.1). But §6.2 requires user ranking of vulnerabilities instead.
- Myth 3: scan and rescan until it passes. But PCI §11.2.1b says this applies only to high-risk vulnerabilities.

Adam says good recommendations come from NIST 800-40. Use sane patching instead and focus on what's really important. From NIST 800-40:

- Proactive: use a proactive vulnerability management process: change control, configuration management, file integrity monitoring.
- Monitor: start with the NVD and other vulnerability alerts, not scanner results.
- Evaluate: public-facing system? workstation? internal server? (risk rank)
- Decide: on action and timeline.
- Test: pre-test patches (stability, functionality, rollback) for change control.
- Install: notify, change control, tickets.

McAfee Secure & Trustmarks — a Hacker's Best Friend
Jay James, Shane MacDougall, Tactical Intelligence Inc., Canada

"McAfee Secure Trustmark" is a website seal marketed by McAfee. A website gets this badge if it passes their remote scanning. The problem is that the removal of a trustmark acts as a flag that you're vulnerable, and it's easy to spot the status change by watching the McAfee list on a website or on Google. "Secure TrustGuard" is similar to McAfee. Jay and Shane wrote Perl scripts to gather sites from McAfee and search engines. If a site's certification image changes to a 1x1 pixel image, the site is no longer certified. Their scripts take deltas of scans to see what changed daily. The bottom line is that a change in trustmark status is a flag for hackers to attack your site. The entire idea of seals is silly—you're raising a flag announcing whether you're vulnerable.

    Read the article

  • Towards Database Continuous Delivery – What Next after Continuous Integration? A Checklist

    - by Ben Rees
Database delivery patterns & practices
STAGE 4: AUTOMATED DEPLOYMENT

If you've been fortunate enough to get to the stage where you've implemented some sort of continuous integration process for your database updates, then hopefully you're seeing the benefits of that investment – constant feedback on changes your devs are making, advanced warning of data loss (prior to the production release on Saturday night!), a nice suite of automated tests to check business logic so you know it's going to work when it goes live, and so on.

But what next? What can you do to improve your delivery process further, moving towards a full continuous delivery process for your database? In this article I describe some of the issues you might need to tackle on the next stage of this journey, and how to plan to overcome those obstacles before they appear.

Our Database Delivery Learning Program consists of four stages – really three: source controlling a database, running continuous integration processes, then setting up automated deployment (the middle stage is split in two – basic and advanced continuous integration – making four stages in total). If you've managed to work through the first three of these stages – source control, basic CI, then advanced CI – then you should have a solid change management process set up where, every time one of your team checks in a change to your database (whether schema or static reference data), the change gets fully tested automatically by your CI server.

But this is only part of the story. Great – we know that our updates work, that the upgrade process works, that the upgrade isn't going to wipe our 4Tb of production data with a single DROP TABLE. But how do you get this (fully tested) release live? Continuous delivery means being always ready to release your software at any point in time. There's a significant gap between your latest version being tested and it being easily releasable.

Just a quick note on terminology – there's a nice piece here from Atlassian on the difference between continuous integration, continuous delivery and continuous deployment. This piece also gives a nice description of the benefits of continuous delivery. These benefits have been summed up by Jez Humble at Thoughtworks as:

"Continuous delivery is a set of principles and practices to reduce the cost, time, and risk of delivering incremental changes to users"

There's another really useful piece here on Simple-Talk about the need for continuous delivery and how it applies to the database, written by Phil Factor – specifically the extra needs and complexities of implementing a full CD solution for the database (compared to just implementing CD for, say, a web app).

So, hopefully you're convinced of moving on to the next stage! The next step after CI is to get some sort of automated deployment (or "release management") process set up. But what should I do next? What do I need to plan and think about for getting my automated database deployment process set up? Can't I just install one of the many release management tools available and, hey presto, I'm ready? If only it were that simple. Below I list some of the areas where it's worth spending a little time – where a little planning and prep could go a long way.
It's also worth pointing out that this should really be an evolving process. Depending on your starting point, of course, it can be a long journey from your current setup to a full continuous delivery pipeline. If you've got a CI mechanism in place, you're certainly a long way down that path. Nevertheless, we'd recommend evolving your process incrementally. Pages 157 and 129-141 of the book on Continuous Delivery (by Jez Humble and Dave Farley) have some great guidance on building up a pipeline incrementally: http://www.amazon.com/Continuous-Delivery-Deployment-Automation-Addison-Wesley/dp/0321601912

For now, in this post, we'll look at the following areas for your checklist:

- You and Your Team
- Environments
- The Deployment Process
- Rollback and Recovery
- Development Practices

You and Your Team

It's a cliché in the DevOps community that "It's not all about processes and tools, really it's all about a culture". As stated in this DevOps report from Puppet Labs: "DevOps processes and tooling contribute to high performance, but these practices alone aren't enough to achieve organizational success. The most common barriers to DevOps adoption are cultural: lack of manager or team buy-in, or the value of DevOps isn't understood outside of a specific group."

Like most clichés, there's truth in there – if you want to set up a database continuous delivery process, you need to get your boss, your department, your company (if relevant) onside. Why? Because it's an investment with the benefits coming way down the line. But the benefits are huge – for HP, in the book A Practical Approach to Large-Scale Agile Development: How HP Transformed LaserJet FutureSmart Firmware, these are summarized as:

- 2008 to present: overall development costs reduced by 40%
- Number of programs under development increased by 140%
- Development costs per program down 78%
- Firmware resources now driving innovation increased by a factor of 8 (from 5% working on new features to 40%)

But what does this mean? It means that, when moving to the next stage, to make that extra investment in automating your deployment process, it helps a lot if everyone is convinced that this is a good thing – that they understand the benefits of automated deployment and are willing to make the effort to transform to a new way of working. Incidentally, if you're ever struggling to convince someone of the value, I'd strongly recommend just buying them a copy of this book – a great read, and a very practical guide to how it can really work at a large org.

I've spoken to many customers who have implemented database CI who describe their deployment process as "the point where automation breaks down. Up to that point, the CI process runs, untouched by human hand, but as soon as that's finished we revert to manual." This deployment process can involve, for example, a DBA manually comparing an environment (say, QA) to production, creating the upgrade scripts, reading through them, checking them against an Excel document emailed to him/her the night before, turning to page 29 in his/her notebook to double-check how replication is switched off and on for deployments, and so on and so on. Painful, error-prone and lengthy.

But the point is, if this is something like your deployment process, telling your DBA "We're changing everything you do and your toolset next week, to automate most of your role – that's okay, isn't it?" isn't likely to go down well.
There's some work to do here to bring him/her onside – to explain what you're doing, why there will still be control of the deployment process, and so on. Or of course, if you're the DBA looking after this process, you have to do a similar job in reverse. You may have researched and worked out how you'd like to change your methodology to start automating your painful release process, but do the dev team know this? What if they have to start producing different artifacts for you? Will they be happy with this? It's worth talking to them to find out.

As well as talking to your DBA/dev team, the other group to get involved before implementation is your manager – and possibly your manager's manager too. As mentioned, unless there's buy-in "from the top", you're going to hit problems when the implementation starts to get rocky (and what tool/process implementations don't get rocky?!). You need support from someone senior in your organisation – someone you can turn to when you need help with a delayed implementation, lack of resources or lack of progress.

Actions:

- Get your DBA involved (or whoever looks after live deployments) and discuss what you're planning to do – or, if you're the DBA yourself, get the dev team up to speed with your plans.
- Get your boss involved too and make sure he/she is bought in to the investment.

Environments

Where are you going to deploy to? Really this question is: what environments do you want set up for your deployment pipeline? Assume everyone has "Production", but do you have a QA environment? Dedicated development environments for each dev? Proper pre-production? I've seen every setup under the sun, and there is often a big difference between "what we want, to do continuous delivery properly" and "what we're currently stuck with". Some of these differences are:

- What we want: each developer with their own dedicated database environment. What we've got: a single shared "development" environment, used by everyone at once.
- What we want: an integration box used to test the integration of all check-ins via the CI process, along with a full suite of unit tests running on that machine. What we've got: in fact, if you have a CI process running, you're likely to have some sort of integration server running (even if you don't call it that!). Whether you have a full suite of unit tests running is a different question…
- What we want: a separate QA environment used explicitly for manual testing prior to release. What we've got: "We just test on the dev environments, or maybe pre-production."
- What we want: a proper pre-production (or "staging") box that matches production as closely as possible. What we've got: hopefully a pre-production box of some sort. But does it match production closely!?
- What we want: a production environment reproducible from source control. What we've got: a production box which has drifted significantly from anything in source control.

The big question is: how much time and effort are you going to invest in fixing these issues? In reality this just involves figuring out which new databases you're going to create and where they'll be hosted – VMs? Cloud-based? What about size/data issues – what data are you going to include in dev environments? Does it need to be masked to protect access to production data? And often the amount of work here really depends on whether you're working on a new, greenfield project, or trying to update an existing, brownfield application.
And often the amount of work here really depends on whether you’re working on a new, greenfield project, or trying to update an existing, brownfield application. There’s a world of difference between starting from scratch with 4 or 5 clean environments (reproducible from source control, of course!), and trying to re-purpose and tweak a set of existing databases, with all of their surrounding processes and quirks. But for a proper release management process, ideally you have:

- Dedicated development databases.
- An integration server used for testing continuous integration and running unit tests. [NB: this is the point at which deployments are automatic, without human intervention. Each deployment after this point is a one-click (but human) action.]
- QA – QA engineers use a one-click deployment process to automatically* deploy chosen releases to QA for testing.
- Pre-production – the environment you use to test the production release process.
- Production.

* A note on the use of the word “automatic”: when carrying out automated deployments, this does not mean that the deployment is happening without human intervention (i.e. that something is just deploying over and over again). It means that the process of carrying out the deployment is automatic, in that it’s not a person manually running through a checklist or set of actions. The deployment still requires a single click from a user.

Actions:

- Get your environments set up and ready.
- Set access permissions appropriately.
- Make sure everyone understands what the environments will be used for (it’s not a “free-for-all”, with all environments to be accessed, played with and changed by development).

The Deployment Process

As described earlier, most existing database deployment processes are pretty manual. The following is a description of a process we hear very often when we ask customers “How do your database changes get live? How does your manual process work?”:

1. Check pre-production matches production (use a schema compare tool, like SQL Compare). Sometimes done by taking a backup from production and restoring into pre-prod.
2. Again using a schema compare tool, find the differences between the latest version of the database ready to go live (i.e. what the team have been developing) and pre-production. This generates an upgrade script.
3. A user (generally, the DBA) reviews the script. This often involves manually checking updates against a spreadsheet or similar.
4. Run the script on pre-production, and check there are no errors (i.e. it upgrades pre-production to what you hoped).
5. If all working, run the script on production.*

* This assumes there’s no problem with production drifting away from pre-production in the interim time period (i.e. someone has hacked something in to the production box without going through the proper change management process). This difference could undermine the validity of your pre-production deployment test. Red Gate is currently working on a free tool to detect this problem – sign up at www.sqllighthouse.com if you’re interested in testing early versions.

There are several variations on this process – some better, some much worse! How do you automate this? In particular, step 3 – surely you can’t automate a DBA checking through a script, that everything is in order!? The key point here is to plan what you want in your new deployment process. There are so many options.
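To make step 3 concrete: below is a minimal sketch, assuming SQL Server, of the kind of generated upgrade script a DBA would be reviewing (the object names are invented for illustration). Wrapping the whole change in a transaction, with XACT_ABORT on, means a failure part-way through the deployment leaves the database untouched:

    -- Hypothetical generated upgrade script (object names invented).
    SET XACT_ABORT ON;  -- any runtime error rolls the whole transaction back
    BEGIN TRANSACTION;

    -- Add a new column, nullable so that existing rows remain valid.
    ALTER TABLE dbo.Orders ADD DeliveryNote nvarchar(500) NULL;

    -- Widen an existing column.
    ALTER TABLE dbo.Customer ALTER COLUMN PhoneNumber varchar(30) NULL;

    COMMIT TRANSACTION;

The review in step 3 is essentially a human check that a script like this does only what the change record says it should – which is why it’s such a natural candidate for a manual approval gate in an otherwise automated pipeline.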
At one extreme, pure continuous deployment – whenever a dev checks something in to source control, the CI process runs (including extensive and thorough testing!), before the deployment process kicks in and automatically deploys that change to the live box. Not for the faint-hearted – and really not something we recommend.

At the other extreme, you might be more comfortable with a semi-automated process: the pre-production/production matching process is automated (with an error thrown if these environments don’t match), followed by a manual intervention, allowing for script approval by the DBA. Once he/she clicks “Okay, I’m happy for that to go live”, the latter stages automatically take the script through to live. And anything in between, of course – and other variations.

But we’d strongly recommend sitting down with a whiteboard and your team, and spending a couple of hours mapping out “What do we do now?”, “What do we actually want?”, “What will satisfy our needs for continuous delivery, while still maintaining some sort of continuous control over the process?”

NB: Most of what we’re discussing here is about production deployments. It’s important to note that you will also need to map out a deployment process for earlier environments (for example QA). However, these are likely to be less onerous, and many customers opt for a much more automated process for these boxes.

Actions:

- Sit down with your team and a whiteboard, and draw out the answers to the questions above for your production deployments – “What do we do now?”, “What do we actually want?”, “What will satisfy our needs for continuous delivery, while still maintaining some sort of continuous control over the process?”
- Repeat for earlier environments (QA and so on).

Rollback and Recovery

If only every deployment went according to plan! Unfortunately they don’t – and when things go wrong, you need a rollback or recovery plan for what you’re going to do in that situation. Once you move to a more automated database deployment process, you’re far more likely to be deploying more frequently than before. No longer once every 6 months, maybe now once per week, or even daily. Hence the need for a quick rollback or recovery process becomes paramount, and should be planned for.

NB: These are mainly scenarios for handling rollbacks after the transaction has been committed. If a failure is detected during the transaction, the whole transaction can just be rolled back, no problem.

There are various options, which we’ll explore in subsequent articles, such as:

- Immediately restore from backup.
- Have a pre-tested rollback script (remembering that really this is a “roll-forward” script – there’s not really such a thing as a rollback script for a database!). A sketch of such a script appears after this section.
- Have fallback environments – for example, using a blue-green deployment pattern.

Different options have pros and cons – some are easier to set up, some require more investment in infrastructure, and of course some work better than others (the key issue with using backups is loss of the interim transaction data that has been added between the failed deployment and the restore). The best mechanism will be primarily dependent on how your application works and how much you need a cast-iron failsafe mechanism.

Actions:

- Work out an appropriate rollback strategy based on how your application and business works, your appetite for investment and your requirements for a completely failsafe process.
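As an illustration of the second option – the pre-tested “roll-forward” script – here is a minimal sketch, again assuming SQL Server and using invented object names. It backs out a column addition while preserving any data written to that column since the deployment, which is exactly what a naive backup restore would lose:

    -- Hypothetical roll-forward script for backing out a new column.
    SET XACT_ABORT ON;
    BEGIN TRANSACTION;

    -- Preserve any values written since the deployment being undone.
    SELECT OrderId, DeliveryNote
    INTO   dbo.Orders_DeliveryNote_Backup
    FROM   dbo.Orders
    WHERE  DeliveryNote IS NOT NULL;

    -- The column can now be removed without losing interim data.
    ALTER TABLE dbo.Orders DROP COLUMN DeliveryNote;

    COMMIT TRANSACTION;

The point of “pre-tested” is that this script is written and rehearsed against pre-production at the same time as the deployment script – not improvised in the middle of an incident.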
Development Practices

This is perhaps the more difficult area for people to tackle. The process by which you can deploy database updates is intrinsically linked with the patterns and practices used to develop that database and the linked application. So you need to decide whether you want to make some changes to the way your developers actually develop the database (particularly schema changes) to make the deployment process easier.

A good example is the pattern “branch by abstraction”. Explained nicely here by Martin Fowler, this is a process that can be used to make significant database changes (e.g. splitting a table) in a step-wise manner, so that you can always roll back without data loss – by making incremental updates to the database backward compatible. Slides 103-108 of the following slide deck, from Niek Bartholomeus, explain the process: https://speakerdeck.com/niekbartho/orchestration-in-meatspace

As these slides show, by making a significant schema change in multiple steps – where each step can be rolled back without any loss of new data – this affords the release team the opportunity to have zero-downtime deployments with considerably less stress (because if an increment goes wrong, they can roll back easily). A sketch of the idea follows below.
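As a flavour of what “incremental and backward compatible” means in practice, here is a hedged sketch of the first steps of moving an Address column out of a Customer table into its own table, assuming SQL Server and invented names. Each step is a separate deployment, and because the old column survives until the final step, any intermediate deployment can be rolled back without losing new data:

    -- Step 1 (one deployment): create the new structure alongside the old.
    -- The application continues to read and write dbo.Customer.Address.
    CREATE TABLE dbo.CustomerAddress
    (
        CustomerId int           NOT NULL PRIMARY KEY,
        Address    nvarchar(500) NULL
    );

    -- Backfill the new table from the existing column.
    INSERT INTO dbo.CustomerAddress (CustomerId, Address)
    SELECT CustomerId, Address
    FROM   dbo.Customer;

    -- Step 2 (a later deployment): the application writes to both places
    -- (or a trigger keeps them in sync) and reads from the new table.
    -- Only once that has bedded in does a final deployment remove the
    -- old column:
    -- ALTER TABLE dbo.Customer DROP COLUMN Address;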
There are plenty more great patterns that can be implemented – the book Refactoring Databases, by Scott Ambler and Pramod Sadalage, is a great read if this is a direction you want to go in: http://www.amazon.com/Refactoring-Databases-Evolutionary-paperback-Addison-Wesley/dp/0321774515

But the question is – how much of this investment are you willing to make? How often are you making significant schema changes that would require these best practices? Again, there’s a difference here between migrating old projects and starting afresh – with the latter it’s much easier to instigate best practice from the start.

Actions:

- For your business, work out how far down the path you want to go, amending your database development patterns towards “best practice”. It’s a trade-off between implementing quality processes and the necessity to do so (depending on how often you make complex changes).
- Socialise these changes with your development group. No-one likes having “best practice” changes imposed on them, so it’s good to introduce these ideas, and the rationale behind them, early.

Summary

The next stages of implementing a continuous delivery pipeline for your database changes (once you have CI up and running) require a little pre-planning if you want to get the most out of the work, and for the implementation to go smoothly. We’ve covered some of the checklist of areas to consider – mainly “getting the team ready for the changes that are coming” and “planning out your pipeline, environments, patterns and practices for development” – though there will be more detail, depending on where you’re coming from and where you want to get to.

This article is part of our database delivery patterns & practices series on Simple Talk. Find more articles for version control, automated testing, continuous integration & deployment.