Search Results

Search found 11143 results on 446 pages for 'email publisher'.


  • Checking for Value in JS Object

    - by pagewil
    I have a JS Object like so:

        var ob = {
          1: { id: 101, name: 'john', email: '[email protected]' },
          2: { id: 102, name: 'jim', email: '[email protected]' },
          3: { id: 103, name: 'bob', email: '[email protected]' }
        };

    I want to check if this JS object already contains a record. If it doesn't, then I want to add it. For example:

        if (ob contains an id of 101) {
          // don't add john
        } else {
          // add john
        }

    Catch my drift? Pretty simple question. I just want to know the best way of doing it. Thanks guys! W.
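    One straightforward approach (a sketch, not the poster's accepted answer) is to scan the existing values for the id before adding. Keying the object by id instead of by an arbitrary index would make future checks a simple property lookup:

        // Scan the current entries for a matching id
        function containsId(ob, id) {
          for (var key in ob) {
            if (ob.hasOwnProperty(key) && ob[key].id === id) return true;
          }
          return false;
        }

        var john = { id: 101, name: 'john', email: '[email protected]' };
        if (!containsId(ob, john.id)) {
          ob[john.id] = john;   // keyed by id, a later check is just: if (ob[101]) ...
        }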

    Read the article

  • Ajax success function firing before java class responds

    - by user1899281
    I am creating a login function with ajax and am having an issue where the success function (successLogin) fires before getting an ajax response. I am running the code as a Google web app from Eclipse, and when debugging the Java class file I can see that the JavaScript throws an alert with the success response from the class being false before the debugger catches the breakpoint in the class file. I have only been writing code for a couple of months now, so I am sure it's a stupid little error on my part.

        $(document).ready(function() {
            sessionChecker()
            // sign in
            $('#signInForm').click(function () {
                $().button('loading')
                var email = $('#user_username').val();
                sessionStorage.email = $('#user_username').val();
                var password = $('#user_password').val();
                var SignInRequest = {
                    type: "UserLoginRequest",
                    email: email,
                    password: password
                }
                var data = JSON.stringify(SignInRequest);
                //disable all the text fields
                $('.text').attr('disabled','true');
                //start the ajax
                $.ajax({
                    url: "/resources/user/login",
                    type: "POST",
                    data: data,
                    cache: false,
                    success: successLogin(data)
                });
            });
            //if submit button is clicked
            $('#Register').click(function () {
                $().button('loading')
                var email = $('#email').val();
                if ($('#InputPassword').val() == $('#ConfirmPassword').val()) {
                    var password = $('input[id=InputPassword]').val();
                } else {
                    alert("Passwords do not match");
                    return;
                }
                var UserRegistrationRequest = {
                    type: "UserRegistrationRequest",
                    email: email,
                    password: password
                }
                var data = JSON.stringify(UserRegistrationRequest);
                //disable all the text fields
                $('.text').attr('disabled','true');
                //start the ajax
                $.ajax({
                    url: "/resources/user/register",
                    type: "POST",
                    data: data,
                    cache: false,
                    success: function (data) {
                        if (data.success == true) {
                            //hide the form
                            $('form').fadeOut('slow');
                            //show the success message
                            $('.done').fadeIn('slow');
                        } else alert('data.errorReason');
                    }
                });
                return false;
            });
        });

        function successLogin(data) {
            if (data.success) {
                sessionStorage.userID = data.userID
                var userID = data.userID
                sessionChecker(userID);
            } else alert(data.errorReason);
        }

        //session check
        function sessionChecker(uid) {
            if (sessionStorage.userID != null) {
                var userID = sessionStorage.userID
            };
            if (userID != null) {
                $('#user').append(userID)
                $('#fat-menu_1').fadeOut('slow')
                $('#fat-menu_2').append(sessionStorage.email).fadeIn('slow')
            };
        }
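    One likely culprit (an assumption from reading the code, not a confirmed answer): success: successLogin(data) invokes successLogin immediately, with the stringified request rather than the server's response, and hands its return value to $.ajax as the callback. Passing a function reference lets jQuery call it when the response actually arrives:

        $.ajax({
            url: "/resources/user/login",
            type: "POST",
            data: data,
            cache: false,
            dataType: "json",        // parse the response so data.success is usable
            success: successLogin    // a reference; jQuery calls successLogin(response)
        });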

    Read the article

  • Yahoo doesn't accept emails I send to it

    - by hd
    I am writing a sendmail module to email some things to my site's users. To test it I used my own Yahoo address to receive the email, but something woeful happened: about 1200 messages were sent to my Yahoo address at once, and Yahoo sent all of them to the spam box. Now I can't send any email to Yahoo addresses, and my server gives me this message in mailq: "delivery temporarily suspended: host g.mx.mail.yahoo.com[98.137.54.238] refused to talk to me..." How can I solve this problem? Many users of my site have Yahoo email addresses. My server uses Postfix. Thanks for helping.
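    Yahoo defers senders that connect too aggressively or deliver too fast, so besides requesting delisting through Yahoo's postmaster help form, a common mitigation is to throttle Postfix's deliveries to Yahoo. A sketch (the transport name and limits are assumptions to tune, not known-good values):

        # /etc/postfix/main.cf -- route yahoo.com through a throttled transport
        transport_maps = hash:/etc/postfix/transport
        slow_destination_concurrency_limit = 2
        slow_destination_rate_delay = 1s

        # /etc/postfix/transport (run postmap on it afterwards)
        yahoo.com    slow:

        # /etc/postfix/master.cf -- "slow" is a second smtp client
        slow      unix  -       -       n       -       -       smtp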

    Read the article

  • Is there a method I can use across controllers and if so, how do I use it?

    - by Angela
    I have several controllers that each take an instance of a different class (Email, Call, Letter, etc.), and they all have to go through this same substitution:

        @email.message.gsub!("{FirstName}", @contact.first_name)
        @email.message.gsub!("{Company}", @contact.company_name)
        @email.message.gsub!("{Colleagues}", @colleagues.to_sentence)
        @email.message.gsub!("{NextWeek}", (Date.today + 7.days).strftime("%A, %B %d"))
        @email.message.gsub!("{ContactTitle}", @contact.title)

    So, for example, @call.message for Call, @letter.message for Letter, etcetera. This isn't very DRY. I'd like to have something like

        def messagesub(asset)
          @asset.message.gsub....
        end

    or something like that, so I can just use the messagesub method in each controller.
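    One common approach (a sketch, assuming @contact and @colleagues are already set in each controller) is a shared private method in ApplicationController, which every controller inherits:

        # app/controllers/application_controller.rb
        class ApplicationController < ActionController::Base
          private

          # asset is any object with a mutable #message (an Email, Call, Letter, ...)
          def substitute_message!(asset)
            { "{FirstName}"    => @contact.first_name,
              "{Company}"      => @contact.company_name,
              "{Colleagues}"   => @colleagues.to_sentence,
              "{NextWeek}"     => (Date.today + 7.days).strftime("%A, %B %d"),
              "{ContactTitle}" => @contact.title
            }.each { |token, value| asset.message.gsub!(token, value.to_s) }
          end
        end

        # in any controller action:
        substitute_message!(@email)   # or @call, @letter, ...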

    Read the article

  • How do I create the Controller for a Single Table Inheritance in Rails?

    - by Angela
    I am setting up Single Table Inheritance, using ContactEvent as the model that ContactEmail, ContactLetter, and ContactCall will all inherit from. But I'm stumped on how to create the routing and the controller. For example, let's say I want to create a new ContactEvent with type Email. I would like a way to do the following:

        new_contact_event_path(contact, email)

    This would take the instance from the Contact model and from the Email model. Inside, I would imagine the contact_event_controller would need to know...

        @contact_event.type = (params[:email])     # get the type based on what was passed in?
        @contact_event.event_id = (params[:email]) # get the id for the correct class, in this case Email.id

    Just not sure how this works....
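    One common pattern (a sketch with assumed names, since STI stores the subclass name in the type column rather than a separate event_id): nest contact_events under contacts and pass the desired subclass as a parameter; the controller then instantiates the right class, and Rails fills in type automatically:

        # config/routes.rb (Rails 2.x style, to match the *_path helpers above)
        map.resources :contacts do |contact|
          contact.resources :contact_events
        end

        # app/controllers/contact_events_controller.rb
        class ContactEventsController < ApplicationController
          def new
            @contact = Contact.find(params[:contact_id])
            # params[:type] carries the STI subclass name, e.g. "ContactEmail";
            # whitelist it before constantizing so users can't instantiate arbitrary classes
            klass = %w(ContactEmail ContactLetter ContactCall).include?(params[:type]) ?
                    params[:type].constantize : ContactEvent
            @contact_event = klass.new(:contact_id => @contact.id)
          end
        end

        # linking to the form:
        # new_contact_contact_event_path(@contact, :type => 'ContactEmail')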

    Read the article

  • How should I organize my C# classes? [closed]

    - by oscar.fimbres
    I'm creating an email generator system. I'm creating some classes and trying to make things right. So far I have created 5 classes (the class diagram was attached to the original post). Let me explain each one.

    Person. It's not a big deal. It just has two constructors: Person(fname, lname1, lname2) and Person(token, fname, lname1, lname2). Note that the Email property stays without a value.

    StringGenerator. This is a static class and it has only one public function: Generate. The function receives a Person and returns a list of patterns for the email.

    MySql. It contains everything necessary to connect to a database.

    Database. This class inherits from the MySql class. It has functions particular to the database. It gets all the records from a table (function GetPeople) and returns a List; each person in the list contains all data except Email. It can also add records (a List, but each entry must contain an available email). An email is available when it doesn't already belong to another person; for that reason, I have a method named ExistsEmail.

    Container. This is the class which is causing me some problems. It's like a temporary container. It's supposed to hold a people list from GetPeople (in the Database class), and for each person it adds, it must generate a list of possible names (StringGenerator.Generate), then select one from the list and check whether it exists in the database or in the same container. As I said above, this is temporary; it may be that none of the possible emails is available, so the user can modify it or enter a custom available email and update the list in this container. When all the people's emails are available, it sends a list to be added to the database. It must have a Flush method to insert all the people into the database.

    I'm trying to design the classes correctly. I need a little help to improve or edit them, because I want to separate the logic and the visuals, and learn from you. I hope you've been able to understand me. Any question or doubt, please let me know. Anyway, I attached the solution here to better understand it: http://www.megaupload.com/?d=D94FH8GZ
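    Since the post asks for design feedback, here is one hedged sketch of how the Container's contract could look in C#. Person, Database, and StringGenerator are the poster's classes; AddPeople and the exact signatures are assumptions:

        using System.Collections.Generic;
        using System.Linq;

        // Hypothetical shape for the temporary Container described above
        public class Container
        {
            private readonly Database db;
            private readonly List<Person> pending = new List<Person>();

            public Container(Database db) { this.db = db; }

            // Add a person and return the candidate emails that are still available,
            // checking both the database and the people already waiting here.
            public IList<string> Add(Person person)
            {
                pending.Add(person);
                return StringGenerator.Generate(person)
                    .Where(email => !db.ExistsEmail(email)
                                 && !pending.Any(p => p.Email == email))
                    .ToList();
            }

            // Once every pending person has an available email, write them all at once.
            public void Flush()
            {
                db.AddPeople(pending);   // assumed method: inserts the list into the database
                pending.Clear();
            }
        }

    Keeping the availability check inside Container (and the SQL inside Database) is one way to get the logic/visual separation the poster is after: the UI only ever talks to Container.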

    Read the article

  • crashing on iPhone Address API

    - by phil swenson
    Any ideas on why this code would crash (crash location indicated below)? email is a valid NSString*... ([email protected])

        + (void)newContactFromEmail:(DetailViewController *)controller email:(NSString *)emailAddress {
            ABNewPersonViewController *npvc = [[ABNewPersonViewController alloc] init];
            ABRecordRef newPerson = ABPersonCreate();
            [self updateEmail:newPerson email:emailAddress];
            npvc.displayedPerson = newPerson;
            npvc.newPersonViewDelegate = controller;
            [controller.navigationController pushViewController:npvc animated:YES];
        }

        + (void)updateEmail:(ABRecordRef)person email:(NSString *)email {
            // **crashes here** ---->>
            ABMutableMultiValueRef multiEmail =
                ABMultiValueCreateMutableCopy(ABRecordCopyValue(person, kABPersonEmailProperty));
            ABMultiValueAddValueAndLabel(multiEmail, email, kABHomeLabel, NULL);
            ABRecordSetValue(person, kABPersonEmailProperty, multiEmail, nil);
            CFRelease(multiEmail);
        }
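    A likely cause (an assumption; the poster doesn't confirm it): a freshly created person has no email property yet, so ABRecordCopyValue returns NULL and ABMultiValueCreateMutableCopy(NULL) crashes. A sketch of a guard that creates a fresh multi-value when none exists:

        + (void)updateEmail:(ABRecordRef)person email:(NSString *)email {
            CFTypeRef existing = ABRecordCopyValue(person, kABPersonEmailProperty);
            // Copy the existing emails if there are any; otherwise start a new multi-value
            ABMutableMultiValueRef multiEmail = existing
                ? ABMultiValueCreateMutableCopy(existing)
                : ABMultiValueCreateMutable(kABMultiStringPropertyType);
            if (existing) CFRelease(existing);
            ABMultiValueAddValueAndLabel(multiEmail, (CFTypeRef)email, kABHomeLabel, NULL);
            ABRecordSetValue(person, kABPersonEmailProperty, multiEmail, NULL);
            CFRelease(multiEmail);
        }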

    Read the article

  • sendmail.php needs some php work

    - by Chris
    I am having the hardest time getting a simple sendmail.php to work. My form HTML is:

        <form action=sendmail.php id=contact-form method=post>
          <p>
            <label for=cf_name>Name *</label>
            <input id=cf_name name=cf_name placeholder='Enter your name...' required=required title=Name type=text />
          </p>
          <p>
            <label for=cf_email>Email *</label>
            <input id=cf_email name=cf_email placeholder='Email address...' required=required title='Email address' type=email />
          </p>
          <p>
            <label for=cf_subject>Subject *</label>
            <input id=cf_subject name=cf_subject placeholder='Specify subject...' required=required title=Subject type=text />
          </p>
          <p>
            <label for=cf_message>Message *</label>
            <textarea id=cf_message name=cf_message placeholder='Message text...' required=required rows=10 title='Message text'></textarea>
          </p>
          <p>
            <input type=submit value='Send message'/>
          </p>
        </form>

    And my mailer script is:

        <?
        $cf_email = $_POST['cf_email'];
        $cf_message = $_POST['cf_message'];
        $cf_subject = $_POST['cf_subject'];
        $cf_name = $_POST['cf_name'];
        mail( "[email protected]", $cf_subject, $cf_message, $cf_name, $cf_email );
        print "Congratulations your email has been sent";
        ?>

    I just want an email to go to my address. When it appears in the inbox, the subject they typed should be the subject I see, the "from" should be their name, the address it came from should be their email, and the message inside should be the message they wrote in the form. Please help.
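    PHP's mail() takes additional headers as its fourth argument, not separate name and email arguments, which is probably why nothing useful is arriving. A sketch of the mailer with the sender folded into headers (the recipient address is the poster's placeholder):

        <?php
        $cf_email   = $_POST['cf_email'];
        $cf_message = $_POST['cf_message'];
        $cf_subject = $_POST['cf_subject'];
        $cf_name    = $_POST['cf_name'];

        // From shows the sender's name and address; Reply-To lets you answer them directly
        $headers  = "From: $cf_name <$cf_email>\r\n";
        $headers .= "Reply-To: $cf_email\r\n";

        if (mail("[email protected]", $cf_subject, $cf_message, $headers)) {
            print "Congratulations, your email has been sent";
        }
        ?>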

    Read the article

  • managedQuery - trying to make 2 data points show when clicked

    - by fitz
    Here is what I have now:

        ListAdapter buildPhonesAdapter(Activity a) {
            String[] PROJECTION = new String[] {
                Contacts._ID,
                Contacts.DISPLAY_NAME,
                Phone.NUMBER,
                Email.DATA
            };
            Cursor c = a.managedQuery(Phone.CONTENT_URI, PROJECTION, null, null, null);
            return (new SimpleCursorAdapter(
                a,
                android.R.layout.simple_list_item_2,
                c,
                new String[] { Contacts.DISPLAY_NAME, Phone.NUMBER, Email.DATA },
                new int[] { android.R.id.text1, android.R.id.text2 }));
        }

        ListAdapter buildEmailAdapter(Activity a) {
            String[] PROJECTION = new String[] {
                Contacts._ID,
                Contacts.DISPLAY_NAME,
                Email.DATA
            };
            Cursor c = a.managedQuery(Email.CONTENT_URI, PROJECTION, null, null, null);
            return (new SimpleCursorAdapter(
                a,
                android.R.layout.simple_list_item_2,
                c,
                new String[] { Contacts.DISPLAY_NAME, Email.DATA },
                new int[] { android.R.id.text1, android.R.id.text2 }));
        }

    I need both data points to show when the cursor is picking this new option: I need the Email.CONTENT_URI and Phone.CONTENT_URI data to show when picked. I can get each one to show, but I need them both to show at the same time.
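    An untested sketch of one way to get both values: phone numbers and email addresses live in separate rows of the Data table, so query it once for both mimetypes and merge rows per contact (the merge helper at the end is hypothetical):

        // Pull phone and email rows from the Data table in one query
        ListAdapter buildPhoneAndEmailAdapter(Activity a) {
            String[] projection = new String[] {
                ContactsContract.Data._ID,
                ContactsContract.Data.CONTACT_ID,
                ContactsContract.Data.DISPLAY_NAME,
                ContactsContract.Data.MIMETYPE,
                ContactsContract.Data.DATA1   // the number or the address, depending on mimetype
            };
            String selection = ContactsContract.Data.MIMETYPE + " IN (?, ?)";
            String[] args = new String[] {
                ContactsContract.CommonDataKinds.Phone.CONTENT_ITEM_TYPE,
                ContactsContract.CommonDataKinds.Email.CONTENT_ITEM_TYPE
            };
            Cursor c = a.managedQuery(ContactsContract.Data.CONTENT_URI,
                                      projection, selection, args,
                                      ContactsContract.Data.CONTACT_ID);
            // Rows sharing CONTACT_ID belong to one person: take the phone from the
            // Phone-mimetype row and the email from the Email-mimetype row, then back
            // a custom adapter with the merged pairs (SimpleCursorAdapter can't merge rows).
            return buildAdapterFromMergedCursor(a, c);   // hypothetical helper
        }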

    Read the article

  • How can I use edit_in_place in the show of a different model in Rails?

    - by Angela
    I have a model Campaign, and the campaign/show goes through a loop of the Emails. Campaign has_many Emails.

        <h2>Emails to Send Today</h2>
        <% for email in @campaign.emails %>
          <p><strong>Email: </strong><%= link_to email.title, email_path(email) %>
             sent after <%= distance_of_time_in_words(email.days.days) %></p>
        <% end %>

    I would like to be able to edit in place the subject and/or the email.days value from the Campaign/show page. How do I do that? (Added complexity: these are clickable links.)

    Read the article

  • How to return null value if the query has no corresponding value?

    - by Holicreature
    Hi, I have a query:

        select c.name as companyname, u.name, u.email, u.role, a.date
        from useraccount u, company c, audittrial a
        where u.status = 'active'
          and u.companyid = c.id
          and (u.companyid = a.companyID
               and a.activity like 'User activated%'
               and a.email = u.email)
        order by u.companyid desc
        limit 10

    So if the following part isn't satisfied,

        (u.companyid = a.companyID and a.activity like 'User activated%' and a.email = u.email)

    no rows will be returned. But I want to return the result of the following query:

        select c.name as companyname, u.name, u.email, u.role, a.date
        from useraccount u, company c, audittrial a
        where u.status = 'active' and u.companyid = c.id
        order by u.companyid desc
        limit 10

    and, in addition, I should return the date if it is available and a null value if the date is not available. How can I do this?
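    This is what an outer join is for. A sketch using the question's own tables and columns:

        SELECT c.name AS companyname, u.name, u.email, u.role, a.date
        FROM useraccount u
        JOIN company c
          ON u.companyid = c.id
        LEFT JOIN audittrial a
          ON u.companyid = a.companyID
         AND a.activity LIKE 'User activated%'
         AND a.email = u.email
        WHERE u.status = 'active'
        ORDER BY u.companyid DESC
        LIMIT 10

    The LEFT JOIN keeps every active user/company row and fills a.date with NULL when no matching audit row exists. The audit conditions must live in the ON clause: putting them in WHERE would filter the NULL rows back out.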

    Read the article

  • Error in header function

    - by user1178695
    I'm using the PHP header function for the redirection, but it is not working. I'm using the following code:

        $sql = mysql_query("select * from password where username='$email' and password1 = '$pwd'");
        //echo "selct * from password where username='$email' and password = '$pwd'";
        $row = mysql_fetch_row($sql);
        $fieldset = mysql_num_rows($sql);
        $host = $_SERVER['HTTP_HOST']."/beta/";
        if($fieldset > 0 && $conEmail != "")
        {
            $_SESSION['email'] = $email;
            $_SESSION['Email'] = $email;
            $_SESSION['memberID'] = $id;
            $_SESSION['status'] = 'Admin';
            header("location:http://".$host."member.php");
        }
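    A sketch of the usual fixes (assuming nothing earlier in the script echoes HTML or whitespace): header() must run before ANY output reaches the browser, the canonical header name is "Location: ", and the redirect should be followed by exit so the rest of the script stops running:

        <?php
        // 1. No echo/print/HTML before this point, or header() silently fails
        //    with "headers already sent".
        if ($fieldset > 0 && $conEmail != "") {
            $_SESSION['email']    = $email;
            $_SESSION['memberID'] = $id;
            $_SESSION['status']   = 'Admin';

            // 2. Canonical header name, with a space after the colon
            header("Location: http://" . $host . "member.php");
            exit;   // 3. stop execution so nothing after the redirect runs
        }
        ?>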

    Read the article

  • Accessing current class through $this-> from a function called statically. [PHP]

    - by MQA
    This feels a bit messy, but I'd like to be able to call a member function statically, yet have the rest of the class behave normally... Example:

        <?php
        class Email
        {
            private $username = 'user';
            private $password = 'password';
            private $from = '[email protected]';

            public $to;

            public function SendMsg($to, $body)
            {
                if (isset($this))
                    $email &= $this;
                else
                    $email = new Email();

                $email->to = $to;

                // Rest of function...
            }
        }

        Email::SendMsg('[email protected]');

    How best do I allow the static function call in this example? Thanks!
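    Two observations, offered as a sketch rather than a definitive answer: $email &= $this is a bitwise AND-assignment (a plain $email = $this was presumably intended), and detecting $this inside a statically-called instance method raises strict-standards warnings in PHP 5. A cleaner split is a real instance method plus a static convenience wrapper:

        <?php
        class Email
        {
            public $to;

            public function sendMsg($to, $body)
            {
                $this->to = $to;    // "=" here, not "&=" as in the original
                // Rest of function...
            }

            // Static entry point: builds an instance and delegates to it
            public static function send($to, $body)
            {
                $email = new Email();
                $email->sendMsg($to, $body);
            }
        }

        Email::send('[email protected]', 'Hello!');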

    Read the article

  • Use Advanced Font Ligatures in Office 2010

    - by Matthew Guay
    Fonts can help your documents stand out and be easier to read, and Office 2010 helps you take your fonts even further with support for OpenType ligatures, stylistic sets, and more.  Here's a quick look at these new font features in Office 2010.

    Introduction

    Starting with Windows 7, Microsoft has made an effort to support more advanced font features across their products.  Windows 7 includes support for advanced OpenType font features and laid the groundwork for advanced font support in programs with the new DirectWrite subsystem.  It also includes the new font Gabriola, which includes an incredible number of beautiful stylistic sets and ligatures.

    Now, with the upcoming release of Office 2010, Microsoft is bringing advanced typographical features to the Office programs we love.  This includes support for OpenType ligatures, stylistic sets, number forms, contextual alternative characters, and more.  These new features are available in Word, Outlook, and Publisher 2010, and work the same on Windows XP, Vista and Windows 7.

    Please note that Windows does include several OpenType fonts that include these advanced features.  Calibri, Cambria, Constantia, and Corbel all include multiple number forms, while Consolas, Palatino Linotype, and Gabriola (Windows 7 only) include all the OpenType features.  And, of course, these new features will work great with any other OpenType fonts you have that contain advanced ligatures, stylistic sets, and number forms.

    Using advanced typography in Word

    To use the new font features, open a new document, select an OpenType font, and enter some text.  Here we have Word 2010 in Windows 7 with some random text in the Gabriola font.  Click the arrow on the bottom of the Font section of the ribbon to open the font properties.  Alternately, select the text and click Font.  Now, click on the Advanced tab to see the OpenType features.  You can change the ligatures setting, choose Proportional or Tabular number spacing, and even select Lining or Old-style number forms.  Here's a comparison of Lining and Old-style number forms in Word 2010 with the Calibri font.

    Finally, you can choose various Stylistic sets for your font.  The dialog always shows 20 styles, whether or not your font includes that many.  Most include only 1 or 2; Gabriola includes 6.  Here's lorem ipsum text, using the Gabriola font with Stylistic set 6.  Impressive, huh?  The font ligatures change based on context, so they will automatically change as you are typing.  Watch the transition as we typed the word Microsoft in Word with Gabriola stylistic set 6.  Here's another example, showing the fi and tt ligatures in Calibri.  These effects work great in Word 2010 in XP, too.

    And, since Outlook uses Word as its editing engine, you can use the same options in Outlook 2010.  Note that these font effects may not show up the same if the recipient's email client doesn't support advanced OpenType typography.  It will, of course, display perfectly if the recipient is using Outlook 2010.

    Using advanced typography in Publisher 2010

    Publisher 2010 includes the same advanced font features.  This is especially nice for those using Publisher for professional layout and design.  Simply insert a text box, enter some text, select it, and click the arrow on the bottom of the font box as in Word to open the font properties.  This font options dialog is actually more advanced than Word's font options.  You can preview your font changes on sample text right in the properties box.
    You can also choose to add or remove a swash from your characters.

    Conclusion

    Advanced typographical effects are a welcome addition to Word and Publisher 2010, and they are very impressive when coupled with modern fonts such as Gabriola.  From designing elegant headers to using old-style numbers, these features are very useful and fun.  Do you have a favorite OpenType font that includes advanced typographical features?  Let us know in the comments!

    More Reading

    Advances in typography in Windows 7 – Engineering 7 Blog
    New features in Microsoft Word 2010

    Read the article

  • How do I send an email with embedded images AND regular attachments in JavaMail?

    - by Chris
    Hi, I'd like to know how to build an SMTP multipart message in the correct order so that it will render correctly on the iPhone mail client (it renders correctly in GMail). I'm using JavaMail to build up an email containing the following parts:

    1. A body part with content type "text/html; UTF-8"
    2. An embedded image attachment
    3. A file attachment

    I am sending the mail via GMail SMTP (over SSL) and the mail is sent and rendered correctly using a GMail account; however, the mail does not render correctly on the iPhone mail client. On the iPhone mail client, the image is rendered before the "Before Image" text when it should be rendered afterwards. After the "Before Image" text there is an icon with a question mark (I assume it means it couldn't find the referenced CID). I'm not sure if this is a limitation of the iPhone mail client or a bug in my mail sending code (I strongly assume the latter). I think that perhaps the headers on my parts might be incorrect, or perhaps I am providing the multiparts in the wrong order. I include the text of the received mail as output by GMail (which renders it correctly):

        Message-ID: <[email protected]>
        Subject: =?UTF-8?Q?Test_from_=E3=82=AF=E3=83=AA=E3=82=B9?=
        MIME-Version: 1.0
        Content-Type: multipart/mixed; boundary="----=_Part_0_20870565.1274154021755"

        ------=_Part_0_20870565.1274154021755
        Content-Type: application/octet-stream
        Content-Transfer-Encoding: base64
        Content-ID: <20100518124021763_368238_0>

        iVBORw0K ----- TRIMMED FOR CONCISENESS 6p1VVy4alAAAAABJRU5ErkJggg==

        ------=_Part_0_20870565.1274154021755
        Content-Type: text/html; charset=UTF-8
        Content-Transfer-Encoding: 7bit

        <html><head><title>Employees Favourite Foods</title>
        <style>
        body { font: normal 8pt arial; }
        th { font: bold 8pt arial; white-space: nowrap; }
        td { font: normal 8pt arial; white-space: nowrap; }
        </style></head><body>
        Before Image<br><img src="cid:20100518124021763_368238_0"> After Image<br>
        <table border="0">
          <tr><th colspan="4">Employees Favourite Foods</th></tr>
          <tr><th align="left">Name</th><th align="left">Age</th><th align="left">Tel.No</th><th align="left">Fav.Food</th></tr>
          <tr style="background-color:#e0e0e0"><td>Chris</td><td>34</td><td>555-123-4567</td><td>Pancakes</td></tr>
        </table></body></html>

        ------=_Part_0_20870565.1274154021755
        Content-Type: text/plain; charset=us-ascii; name=textfile.txt
        Content-Transfer-Encoding: 7bit
        Content-Disposition: attachment; filename=textfile.txt

        This is a textfile with numbers counting from one to ten beneath this line:
        one
        two
        three
        four
        five
        six
        seven
        eight
        nine
        ten(no trailing carriage return)

        ------=_Part_0_20870565.1274154021755--

    Even if you can't assist me with this, I would appreciate it if any members of the forum could forward me a (non-personal) mail that includes inline images (not external hyperlinked images, though). I just need to find a working sample and then I can move past this. Thanks, Chris.
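    The GMail output above shows the image as a top-level application/octet-stream part placed before the HTML; stricter clients such as the iPhone's generally want the HTML and its referenced image wrapped together in a multipart/related container inside the multipart/mixed. A JavaMail sketch of that structure (untested; the file names and the session are assumptions):

        // multipart/mixed
        //   |- multipart/related
        //   |    |- text/html body (references cid:logo)
        //   |    |- inline image with Content-ID <logo>
        //   |- file attachment
        import javax.mail.*;
        import javax.mail.internet.*;

        MimeMessage msg = new MimeMessage(session);   // session configured elsewhere

        MimeMultipart related = new MimeMultipart("related");

        MimeBodyPart html = new MimeBodyPart();
        html.setContent("<html><body>Before Image<br><img src=\"cid:logo\"> After Image</body></html>",
                        "text/html; charset=UTF-8");
        related.addBodyPart(html);

        MimeBodyPart image = new MimeBodyPart();
        image.attachFile("logo.png");      // hypothetical file; type derived from the extension
        image.setContentID("<logo>");      // must match the cid: reference in the HTML
        image.setDisposition(Part.INLINE);
        related.addBodyPart(image);

        MimeBodyPart relatedWrapper = new MimeBodyPart();
        relatedWrapper.setContent(related);

        MimeMultipart mixed = new MimeMultipart("mixed");
        mixed.addBodyPart(relatedWrapper);

        MimeBodyPart attachment = new MimeBodyPart();
        attachment.attachFile("textfile.txt");   // hypothetical file
        mixed.addBodyPart(attachment);

        msg.setContent(mixed);
        // set From/To/Subject as usual, then Transport.send(msg)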

    Read the article

  • Why does this C++ code result in a segmentation fault?

    - by user69514
    I keep getting a segmentation fault when the readAuthor() method is called. Does anybody know why this happens? I am supposed to use dynamic arrays; I know this would be so easy if I were using static arrays.

        #include <iostream>
        #include <string>
        #include <cstring>
        #include <cstdlib>
        using namespace std;

        /** declare arrays **/
        int* isbnArr = new int[25];
        char* authorArr = new char[25];
        char* publisherArr = new char[25];
        char* titleArr = new char[25];
        int* editionArr = new int[25];
        int* yearArr = new int[25];
        int* pagesArr = new int[25];
        float* retailPriceArr = new float[25];
        float* discountedPriceArr = new float[25];
        int* stockArr = new int[25];

        /** function prototypes **/
        int readIsbn();
        char* readAuthor();
        char* readPublisher();
        char* readTitle();
        int readEdition();
        int readYear();
        int readPages();
        float readMsrp();
        float readDiscountedPrice();
        int readStockAmount();
        void readonebook(int* isbn, char* author, char* title, char* publisher,
                         int* edition, int* year, int* pages, float* msrp,
                         float* discounted, int* inventory);

        int main()
        {
            bool stop = false; // flag when to stop loop
            int ind = 0;       // index for current book
            while (!stop) {
                cout << "Add book: press A: ";
                cout << "another thing here ";
                char choice;
                cin >> choice;
                if (choice == 'a' || choice == 'A') {
                    readonebook(&isbnArr[ind], &authorArr[ind], &titleArr[ind],
                                &publisherArr[ind], &editionArr[ind], &yearArr[ind],
                                &pagesArr[ind], &retailPriceArr[ind],
                                &discountedPriceArr[ind], &stockArr[ind]);
                    test(&authorArr[ind]);
                    ind++;
                }
            }
            return 0;
        }

        /** define functions **/
        int readIsbn() { int isbn; cout << "ISBN: "; cin >> isbn; return isbn; }

        char* readAuthor() {
            char* author;
            cout << "Author: ";
            cin >> author;
            return author;
        }

        char* readPublisher() {
            char* publisher = NULL;
            cout << "Publisher: ";
            cin >> publisher;
            return publisher;
        }

        char* readTitle() {
            char* title = NULL;
            cout << "Title: ";
            cin >> title;
            return title;
        }

        int readEdition() { int edition; cout << "Edition: "; cin >> edition; return edition; }
        int readYear() { int year; cout << "Year: "; cin >> year; return year; }
        int readPages() { int pages; cout << "Pages: "; cin >> pages; return pages; }
        float readMsrp() { float price; cout << "Retail Price: "; cin >> price; return price; }
        float readDiscountedPrice() { float price; cout << "Discounted Price: "; cin >> price; return price; }
        int readStockAmount() { int amount; cout << "Stock Amount: "; cin >> amount; return amount; }

        void readonebook(int* isbn, char* author, char* title, char* publisher,
                         int* edition, int* year, int* pages, float* msrp,
                         float* discounted, int* inventory)
        {
            *isbn = readIsbn();
            author = readAuthor();
            title = readTitle();
            publisher = readPublisher();
            *edition = readEdition();
            *year = readYear();
            *pages = readPages();
            *msrp = readMsrp();
            *discounted = readDiscountedPrice();
            *inventory = readStockAmount();
        }
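    The crash comes from reading into pointers that never point at any storage: in readAuthor, char* author; is uninitialized, so cin >> author writes through a garbage pointer (readPublisher and readTitle have the same bug with NULL). The arrays also conflate one char per book with one C string per book. A sketch of the smallest fix in the same style, using std::string, which the file already includes:

        #include <iostream>
        #include <string>
        using namespace std;

        string readAuthor() {
            string author;            // std::string owns its storage, unlike a bare char*
            cout << "Author: ";
            cin >> author;
            return author;
        }

        // store one string per book instead of one char per book
        string* authorArr = new string[25];
        // ...
        // authorArr[ind] = readAuthor();

    An array of a small Book struct holding all ten fields would tidy this up further, but the std::string swap alone removes the segfault.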

    Read the article

  • reCaptcha integration with php

    - by Neil Bradley
    Hi there, I'm building a contact-us page that also uses reCAPTCHA, but I'm having a few issues with it. I fill in all of the fields in the contact form and the correct reCAPTCHA words, but the form does not submit. I'm assuming this is something to do with the validation, but wondered if someone might be able to spot where I'm going wrong? The PHP code at the top of my page looks like this:

        <?php
        include('includes/session.php');
        $err = '';
        $success = '';
        if(isset($_POST["docontact"]) && $_POST["docontact"] == "yes")
        {
            //get form details
            $form = new stdClass();
            $form->name = sanitizeOne($_POST["name"], "str");
            $form->email = sanitizeOne($_POST["email"], "str");
            $form->phone = sanitizeOne($_POST["phone"], "str");
            $form->mysevenprog = sanitizeOne($_POST["mysevenprog"], "str");
            $form->enquiry = sanitizeOne($_POST["enquiry"], "str");
            $form->howfindsite = sanitizeOne($_POST["howfindsite"], "str");

            //Check for errors (required: name, email, enquiry)
            if($form->name == "") {
                $err .= '<p class="warning">Please enter your name!</p>';
            }
            if($form->email == "") {
                $err .= '<p class="warning">Please enter your email address!</p>';
            }
            if($form->enquiry == "") {
                $err .= '<p class="warning">Please supply an enquiry message!</p>';
            }

            //Send Email
            if($err == "") {
                $mailer = new BlueMailer();
                $mailer->AddAddress(Configuration::getVar("developer_email"),
                                    Configuration::getVar("admin_email_name"));
                include('templates/email/contact-us-admin.php');
                if(!$mailer->Send()) {
                    $err .= "<p>There was an error sending submitting your request!, Please try again later.";
                } else {
                    $success = 'thanks';
                }
            }
        }
        else
        {
            //Initialise empty variables
            $form = new stdClass();
            $form->name = "";
            $form->email = "";
            $form->phone = "";
            $form->mysevenprog = "";
            $form->enquiry = "";
            $form->howfindsite = "";
        }
        ?>

    And then in the body of my page I have the form as follows:

        <?php if($err != "") : ?>
        <div class="error">
            <?php echo $err; ?>
        </div>
        <?php endif; ?>

        <?php if($success == 'thanks') : ?>
        <h3>Thank you for your enquiry</h3>
        <p>Your enquiry has been successfully sent. Someone will contact you shortly.</p>
        <?php else: ?>
        <h3>If you are looking to advertise with us, have some feedback about some of our programming or want to say 'Hi' please use the fields below</h3>
        <form name="contactus" id="contactus" method="post" action="<?php echo $_SERVER['SCRIPT_NAME'] ?>">
            <ul>
                <li><label for="name">Your name: *</label>
                    <input name="name" id="name" class="textbox" style="width: 75%;" type="text" value="<?php echo $form->name ?>" /></li>
                <li><label for="email">Email address: *</label>
                    <input name="email" id="email" class="textbox" style="width: 75%;" type="text" value="<?php echo $form->email ?>" /></li>
                <li><label for="phone">Telephone:</label>
                    <input name="phone" id="phone" class="textbox" style="width: 75%;" type="text" value="<?php echo $form->phone ?>" /></li>
                <li><label for="mysevenprog">My Seven programme</label>
                    <input name="mysevenprog" class="textbox" style="width: 75%;" type="text" value="<?php echo $form->mysevenprog ?>" /></li>
                <li><label for="enquiry">Enquiry/Message: *</label>
                    <textarea name="enquiry" class="textarea" rows="5" cols="30" style="width: 75%;" id="enquiry"><?php echo $form->enquiry ?></textarea></li>
                <li><label for="howfindsite">How did you find out about our site?</label>
                    <input name="howfindsite" id="howfindsite" class="textbox" style="width: 75%;" type="text" value="<?php echo $form->howfindsite ?>" /></li>
                <li>
                    <?php
                    require_once('recaptchalib.php');

                    // Get a key from http://recaptcha.net/api/getkey
                    $publickey = "6LcbbQwAAAAAAPYy2EFx-8lFCws93Ip6Vi5itlpT";
                    $privatekey = "6LcbbQwAAAAAAPV_nOAEjwya5FP3wzL3oNfBi21C";

                    # the response from reCAPTCHA
                    $resp = null;
                    # the error code from reCAPTCHA, if any
                    $error = null;

                    # was there a reCAPTCHA response?
                    if ($_POST["recaptcha_response_field"]) {
                        $resp = recaptcha_check_answer($privatekey,
                                                       $_SERVER["REMOTE_ADDR"],
                                                       $_POST["recaptcha_challenge_field"],
                                                       $_POST["recaptcha_response_field"]);
                        if ($resp->is_valid) {
                            echo "You got it!";
                        } else {
                            # set the error code so that we can display it
                            $error = $resp->error;
                        }
                    }
                    echo recaptcha_get_html($publickey, $error);
                    ?>
                </li>
                <li><input type="submit" value="Submit Form" class="button" /></li>
            </ul>
            <input type="hidden" name="docontact" value="yes" />
        </form>
        <?php endif; ?>
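    A likely culprit (an assumption from reading the flow): the reCAPTCHA answer is only checked while the form HTML is being rendered, after the top-of-page handler has already validated and mailed the submission, and nothing feeds the result back into $err. A sketch of moving the check into the handler so a wrong answer blocks the send:

        <?php
        require_once('recaptchalib.php');
        $privatekey = "...";   // same private key as in the form section

        if (isset($_POST["docontact"]) && $_POST["docontact"] == "yes") {
            // ...sanitize fields as before...

            // verify the CAPTCHA before doing anything else with the submission
            $resp = recaptcha_check_answer($privatekey,
                                           $_SERVER["REMOTE_ADDR"],
                                           $_POST["recaptcha_challenge_field"],
                                           $_POST["recaptcha_response_field"]);
            if (!$resp->is_valid) {
                $err .= '<p class="warning">The reCAPTCHA words were not entered correctly!</p>';
            }

            // ...existing required-field checks...
            // the existing if($err == "") block then only sends when everything passed
        }
        ?>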

    Read the article

  • [MINI HOW-TO] How To Use Bcc (Blind Carbon Copy) in Outlook 2010

    - by Mysticgeek
    If you want to send an email to a contact or several contacts, you might want to keep some of the recipient email addresses private using the Bcc (Blind Carbon Copy) field. Here's how to do it in Outlook 2010.

    It's not enabled by default, but adding it as a field for all future emails is a simple process. Launch Outlook and under the Home tab click on the New E-mail button. When the new mail window opens, click on the Options tab and in the Show Fields column select Bcc. The Bcc field will appear, and you can then put the contacts in there who you want to receive the mail secretly or don't want to show a certain email address. Now anytime you compose a message, the Bcc field is included.

    For more on the Bcc field, check out the blog post from Mysticgeek – Keep Your Email Contacts Private.

    Read the article

  • Productivity Tips

    - by Brian T. Jackett
    A few months ago, during my first end-of-year review at Microsoft, I was doing an assessment of my year.  One of my personal goals to come out of this reflection was to improve my personal productivity.  While I hear many people say "I wish I had more hours in the day so that I could get more done," I feel like that is the wrong approach.  There is an inherent assumption that you are already being productive with the time you have, and thus more time would let you get more done.

    Instead of wishing I could add more hours to the day, I've begun adopting a number of processes or behavior changes in my personal life to make better use of my time, with the goal of improving productivity.  The areas of focus are as follows:

      • Focus
      • Processes
      • Tools
      • Personal health
      • Email

    Note: A number of these topics have spawned from reading Scott Hanselman's blog posts on productivity, reading David Allen's book Getting Things Done, and discussions with friends and coworkers who had great insights into this topic.

    Focus

    Pre-reading / viewing:

      • Overcome your work addiction
      • Millennials paralyzed by choice
      • Its Not What You Read Its What You Ignore (Scott Hanselman video)

    I highly recommend Scott Hanselman's video above and this post before continuing with this article.  It is well worth the 40+ minute price of admission for the video and a couple of minutes for the article.  One key takeaway for me was listing out my activities in an average week and realizing which ones held little or no value to me.  We all have a finite amount of time to work each day.  Do you know how much time and effort you spend on various aspects of your life (family, friends, religion, work, personal happiness, etc.)?  Do your actions and commitments reflect your priorities?

    The biggest time consumers with little value for me were time spent on social media services (Twitter and Facebook), playing an MMO video game, and watching TV.  I still check up on Facebook, Twitter, Microsoft internal chat forums, and other services to keep contact with others, but I've reduced that time significantly.  As for TV, I've cut the cord and no longer subscribe to cable TV.  Instead I use Netflix, RedBox, and over-the-air channels, but again with reduced time consumption.  With the time I've freed up I'm back to working out 2-3 times a week and reading 4 nights a week (both of which I had been neglecting previously).  I'll mention a few tools for helping measure your time in the Tools section.

    Processes

    Do not multi-task.  I'll say it again.  Do not multi-task.  There is no such thing as multi-tasking.  The human brain is optimized to work on one thing at a time.  When you are "multi-tasking" you are really doing 2 or more things at less than 100%, usually by a wide margin.  I take pride in my work, and when I'm doing something at less than 100% the results typically degrade rapidly.

    Now there are some ways of bending the rules of physics for this one.  There is the notion of getting a double amount of work done in the same timeframe.  Some examples would be listening to podcasts / watching a movie while working out, using a treadmill as your work desk, or reading while in the bathroom.

    Personally I've found good results in combining one task that does not require focus (making dinner, playing certain video games, working out) and one task that does (watching a movie, listening to podcasts).  I believe this is related to me being a visual and kinesthetic (using my hands or actually doing it) learner.  I'm terrible with auditory learning.  My fiance and I joke that sometimes we talk and talk to each other but never really hear each other.

    Goals / Tasks

    Goals can give us direction in life and a sense of accomplishment when we complete them.  Goals can also overwhelm us and give us a sense of failure when we don't complete them.  I propose that you shift your perspective and not dwell on all of the things that you haven't gotten done, but focus instead on regularly setting measurable goals that are within reason of accomplishing.

    At the end of each time frame, have a retrospective to review your progress.  Do not feel guilty about what you did not accomplish.  Feel proud of what you did accomplish, and readjust your goals for the next time frame to more attainable goals.  Here is a sample schedule I've seen proposed by some.  I have not consistently set goals for each timeframe, but I do typically set 3 small goals a day (this blog post is #2 for today).

      • Each day set 3 small goals
      • Each week set 3 medium goals
      • Each month set 1 large goal
      • Each year set 2 very large goals

    Tools

    Tools are an extension of our human body.  They help us extend beyond what we can physically and mentally do.  Below are some tools I use almost daily or have found useful as of late.  Disclaimer: I am not being paid to promote any of these products.  I just happen to like them and find them useful.

      • Instapaper – Save internet links for reading later.  There are many tools like this, but I've found this to be a great one.  There is even a "read it later" JavaScript button you can add to your browser, so when you navigate to a site it will add it to your list.
      • Stacks for Instapaper – A Windows Phone 7 app for reading my Instapaper articles on the go.  It does require a subscription to Instapaper (a nominal $3 every three months) but is easily worth the cost.  Alternatively you can set up your Kindle to sync with Instapaper easily, but I haven't done so.
      • SlapDash Podcast – Apps for Windows Phone and Windows 8 (possibly other platforms) to sync podcast viewing / listening across multiple devices.  Now that I have my Surface RT device (which I love) this is making my consumption easier to manage.
      • Feed Reader – Simple Windows 8 app for quickly catching up on my RSS feeds.  I used to have hundreds of unread items all the time.  Now I'm down to 20-50 regularly, and it is much easier and faster to consume on my Surface RT.  There is also a free version (which I use), and I can't see much difference between the free and paid versions currently.
      • Rescue Time – Have you ever wondered how much time you've spent on websites vs. email vs. "doing work"?  This service tracks your computer actions and then lets you report on them.  This can help you quantitatively identify areas where your actions are not in line with your priorities.
      • PowerShell – Windows automation tool.  It is now built into every client and server OS.  This tool has saved me days (and I mean the full 24-hours-each kind) of time and effort in the past year alone.  If you haven't started learning PowerShell and you administer any Windows OS or server product, you need to start today.
      • Various blogging tools – I wrote a post a couple years ago called How I Blog about my blogging process and tools used.  Almost all of it still applies today.

    Personal Health

    Some of these may be common sense or debatable, but I've found them to help prioritize my daily activities.

      • Get plenty of sleep on a regular basis.  Sacrificing sleep too many nights a week negatively impacts your cognition, attitude, and overall health.
      • Exercise at least three days a week.  Exercise could be lifting weights, taking the stairs up multiple flights, walking for 20 minutes, or a number of other "non-traditional" activities.  I find that regular exercise helps with sleep and improves my overall attitude.
      • Eat a well-balanced diet.  Too much sugar, caffeine, junk food, etc. is not good for your body.  This is not a matter of losing weight but of taking care of your body and helping you perform at your peak potential.

    Email

    Email can be one of the biggest time consumers (i.e. wasters) if you aren't careful.

      • Time box your email usage.  Set a meeting invite for yourself if necessary to limit how much time you spend checking email.
      • Use rules to prioritize your email.  Email from external customers, from my manager, or that includes me directly on the To line goes into my inbox.  Everything else goes a level down, and I have 30+ rules to further sort it, mostly distribution lists.
      • Use keyboard shortcuts (when available).  I use Outlook for my primary email and am constantly hitting Alt + S to send, Ctrl + 1 for my inbox, Ctrl + 2 for my calendar, Space / Tab / Shift + Tab to mark items as read, and a number of other useful commands.  Learn them and you'll see your speed at getting through emails increase.
      • Keep emails short.  Few people like reading through long emails.  The first line should state exactly why you are sending the email, followed by 3-4 lines to support it.  Anything longer might be better suited to a phone call or in-person discussion.

    Conclusion

    In this post I walked through various tips and tricks I've found for improving personal productivity.  It is a mix of re-focusing on the things that matter, using tools to assist in your efforts, and cutting out actions that are not aligned with your priorities.  I originally had a whole section on keyboard shortcuts, but with my recent purchase of the Surface RT I'm finding that touch gestures have replaced numerous keyboard commands that I used to need.  I see a big future in touch-enabled devices.  Hopefully some of these tips help you out.  If you have any tools, tips, or ideas you would like to share, feel free to add them in the comments section.

    -Frog Out

    Links

    Scott Hanselman Productivity posts
    http://www.hanselman.com/blog/CategoryView.aspx?category=Productivity

    Overcome your work addiction
    http://blogs.hbr.org/hbsfaculty/2012/05/overcome-your-work-addiction.html?awid=5512355740280659420-3271

    Millennials paralyzed by choice
    http://priyaparker.com/blog/millennials-paralyzed-by-choice

    Its Not What You Read Its What You Ignore (video)
    http://www.hanselman.com/blog/ItsNotWhatYouReadItsWhatYouIgnoreVideoOfScottHanselmansPersonalProductivityTips.aspx

    Cutting the cord – Jeff Blankenburg
    http://www.jeffblankenburg.com/2011/04/06/cutting-the-cord/

    Building a sitting standing desk – Eric Harlan
    http://www.ericharlan.com/Everything_Else/building-a-sitting-standing-desk-a229.html

    Instapaper
    http://www.instapaper.com/u

    Stacks for Instapaper
    http://www.stacksforinstapaper.com/

    Slapdash Podcast
    Windows Phone - http://www.windowsphone.com/en-us/store/app/slapdash-podcasts/90e8b121-080b-e011-9264-00237de2db9e
    Windows 8 - http://apps.microsoft.com/webpdp/en-us/app/slapdash-podcasts/0c62e66a-f2e4-4403-af88-3430a821741e/m/ROW

    Feed Reader
    http://apps.microsoft.com/webpdp/en-us/app/feed-reader/d03199c9-8e08-469a-bda1-7963099840cc/m/ROW

    Rescue Time
    http://www.rescuetime.com/

    PowerShell Script Center
    http://technet.microsoft.com/en-us/scriptcenter/bb410849.aspx

    Read the article

  • Azure – Part 6 – Blob Storage Service

    - by Shaun
    When you migrate your application onto Azure, one of the biggest concerns is external files. In the traditional model we knew which machine and folder our application (website or web service) was located in, so we could use MapPath or some other method to read and write external files, for example images, text files, or XML files. But things change when we deploy on Azure. Azure is not a server or a single machine; it's a set of virtual server machines running under the Azure OS. Even worse, your application might be moved between these machines, so it's impossible to read or write external files on Azure the old way. To resolve this issue, Windows Azure provides another storage service for us: Blob.

    Different from the table service, the blob service is used to store text and binary data rather than structured data. It provides two types of blobs: Block Blobs and Page Blobs. Block Blobs are optimized for streaming. They are comprised of blocks, each of which is identified by a block ID, and each block can be a maximum of 4 MB in size. Page Blobs are optimized for random read/write operations and provide the ability to write to a range of bytes in a blob. They are a collection of pages; the maximum size for a page blob is 1 TB.

    In the managed library, the Azure SDK allows us to communicate with blobs through these classes: CloudBlobClient, CloudBlobContainer, CloudBlockBlob, and CloudPageBlob. Similar to the table service managed library, CloudBlobClient allows us to reach the blob service by passing our storage account information, and it is also responsible for creating the blob container if it does not exist. Then from the CloudBlobContainer we can save or load block blobs and page blobs through the CloudBlockBlob and CloudPageBlob classes.

    Let's improve our example from the previous posts and add a service method that allows the user to upload a logo image. On the server side I created a method named UploadLogo with 2 parameters: email and image. Then I created the storage account from the config file. I also added validation to ensure that the email passed in is valid.

        var storageAccount = CloudStorageAccount.FromConfigurationSetting("DataConnectionString");
        var accountContext = new DynamicDataContext<Account>(storageAccount);

        // validation
        var accountNumber = accountContext.Load()
            .Where(a => a.Email == email)
            .ToList()
            .Count;
        if (accountNumber <= 0)
        {
            throw new ApplicationException(string.Format("Cannot find the account with the email {0}.", email));
        }

    Then there are three steps for saving the image into the blob service. First, as with the table service, I created the container with a unique name, creating it if it doesn't exist.

        // create the blob container for account logos if not exist
        CloudBlobClient blobStorage = storageAccount.CreateCloudBlobClient();
        CloudBlobContainer container = blobStorage.GetContainerReference("account-logo");
        container.CreateIfNotExist();

    Then, since in this example I will just send the blob access URL back to the client, I need to open read permission on that container.
        // configure blob container for public access
        BlobContainerPermissions permissions = container.GetPermissions();
        permissions.PublicAccess = BlobContainerPublicAccessType.Container;
        container.SetPermissions(permissions);

    And at the end I combine the blob resource name from the input file name and a Guid, then save it to the block blob by using the UploadByteArray method. Finally I return the URL of this blob back to the client side.

        // save the blob into the blob service
        string uniqueBlobName = string.Format("{0}_{1}.jpg", email, Guid.NewGuid().ToString());
        CloudBlockBlob blob = container.GetBlockBlobReference(uniqueBlobName);
        blob.UploadByteArray(image);

        return blob.Uri.ToString();

    Let's update the client-side application a bit and see the result. Here I just use my simple console application to let the user input the email and the file name of the image. If it's OK, it shows the URL of the blob on the server side so that we can see it through the web browser. Then we can see the logo I've just uploaded through the URL.

    You may notice that the blob URL is based on the container name and the blob's unique name. In the Azure SDK documentation there's a page on the rules for naming them, but I think the simple rule is: they must be valid as a URL address. So you cannot name the container with a dot or slash, as that will break the ADO.NET Data Services routing rule. For example, if you named the blob container Account.Logo, it would throw an exception that says 400 Bad Request.

    Summary

    In this short entry I covered the simple usage of the blob service to save images onto Azure. Since the Azure platform does not support the file system, we have to migrate our code for reading/writing files to the blob service before deploying to Azure. To reduce this effort Microsoft provides a new approach named Drive, which allows us to read and write NTFS files just like we did before. It's built on top of the blob service but is more appropriate for file access. I will discuss it more in the next post.

    Hope this helps,
    Shaun

    All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • Add Your Gmail Account to Outlook 2010 using POP

    - by Matthew Guay
    Are you excited about the latest version of Outlook and want to get it set up with your Gmail accounts?  Here's how you can easily add your Gmail account to Outlook 2010 using POP.

    Getting Started

    Log into your Gmail account and go to your settings page.  Under the Forwarding and POP/IMAP tab, make sure POP is enabled.  You can choose to enable POP access for all new mail that arrives from now on, or for all mail in your Gmail account.  On the second option, we suggest you choose "keep Gmail's copy in the Inbox" so you can still access your emails on the Gmail server.

    Add Your Account to Outlook 2010

    If you haven't run Outlook 2010 yet, click Next to start setup and add your email account.  Select Yes to add an email account to Outlook.  Now you're ready to start entering your settings to access your email.  Or, if you've already been using Outlook and want to add a new POP account, click File and then select Add Account under Account Information.

    Outlook 2010 can often automatically find and configure your account with just your email address and password, so enter these and click Next to let Outlook try to set it up automatically.  Outlook will now scan for the settings for your email account.  If Outlook was able to find settings and configure your account automatically, you'll see a success screen.  Depending on your setup, Gmail is often set up automatically, but sometimes Outlook fails to find the settings.  If this is the case, we'll go back and manually configure it.

    Manually Configure Outlook for Gmail

    Back at the account setup screen, select "Manually configure server settings or additional server types" and click Next.  Select Internet E-mail and then click Next.  Enter your username, email address, and login information.  Under Server Information enter the following:

      • Account Type: POP3
      • Incoming mail server: pop.gmail.com
      • Outgoing mail server: smtp.gmail.com

    Make sure to check Remember password so you don't have to enter it every time.  After that data is entered, click on the More Settings button.  Select the Outgoing Server tab and check "My outgoing server (SMTP) requires authentication."  Verify "Use same settings as my incoming mail server" is marked as well.  Next select the Advanced tab and enter the following information:

      • Incoming server (POP3): 995
      • Outgoing server (SMTP): 587
      • Check "This server requires an encrypted connection (SSL)"
      • Set "Use the following type of encrypted connection" to TLS

    You also might want to uncheck the box to "Remove messages from the server after a number of days."  This way your messages will still be accessible from Gmail online.  Click OK to close the window, and then click Next to finish setting up the account.  Outlook will test your account settings to make sure everything will work; click Close when this is finished.  Provided everything was entered correctly, you'll be greeted with a successful setup message; click Finish.

    Gmail will be all ready to sync with Outlook 2010.  Enjoy your Gmail account in Outlook, complete with fast indexed searching, conversation view, and more!

    Conclusion

    Adding Gmail to Outlook 2010 using POP is usually easy and only takes a few steps.  Even if you have to enter your settings manually, it is still a fairly simple process.  You can add multiple email accounts using POP3 if you wish, and if you'd like to sync IMAP accounts, check out our tutorial on setting up Gmail using IMAP in Outlook 2010.

    Read the article

  • Windows Azure Evolution – Caching (Preview)

    - by Shaun
    Caching is a popular topic when we build a high-performance, highly scalable system, on the cloud platform as well as on-premise. In March 2011 Windows Azure AppFabric Caching was launched to production, providing an in-memory, distributed caching service over the cloud. And now, in this June 2012 update, the cache team announces a brand new caching solution on Windows Azure called Windows Azure Caching (Preview), while the original Windows Azure AppFabric Caching has been renamed Windows Azure Shared Caching.

    What's Caching (Preview)

    If you have been using Shared Caching, you know that it is constructed from a bunch of cache servers. When you want to use it, you first create a cache account from the developer portal and specify the size you want, which means how much memory you can use to store the data you want cached. Then you can add, get, and remove items through your code via the cache URL. Shared Caching is a multi-tenancy system which hosts all cached items across all users, so you don't know which server your data is located on. This caching mode works well and covers most cases, but it has some problems.

    The first one is performance. Since Shared Caching is a multi-tenancy system, all cache operations go through the Shared Caching gateway and are then routed to the server that has the data you are looking for. Even though there are some caches in the Shared Caching system, it still takes time to get from your cloud services to the cache service.

    Second, the Shared Caching service works as a black box to the developer. The only thing we know is our cache endpoint, and that's all. Some people may be satisfied, since they don't want to care about anything underlying, but if you need to know more and want more control, that's impossible with Shared Caching.

    The last problem is price and cost-efficiency. You pay the bill based on how much cache you request per month. But when we host a web role or worker role, it seldom consumes all of the memory and CPU in the virtual machine (service instance). With Shared Caching we have to pay for the cache service while wasting some of our local memory and CPU.

    Given the issues above, Microsoft offers a new caching mode: Caching (Preview). Instead of being a separate cache service, Caching (Preview) leverages the memory and CPU in our own cloud services (web roles and worker roles) as the cache clusters. Hence Caching (Preview) runs on the virtual machines which host, or are near, our cloud applications. Without any gateway and routing, and located in the same data center and the same racks, it provides much higher performance than Shared Caching. Caching (Preview) works side by side with our application, initialized and run as a Windows Service on the virtual machines, invoked by startup tasks from our roles, so we get more information about it and more control over it. And since Caching (Preview) utilizes the memory and CPU of our existing cloud services, it's free: what we pay is the original computing price, and the resources on each machine are used more efficiently.

    Enable Caching (Preview)

    It's very simple to enable Caching (Preview) in a cloud service. Let's create a new Windows Azure cloud project in Visual Studio and add an ASP.NET Web Role. Then open the role settings and select the Caching page.
    This is where we enable and configure Caching (Preview) on a role. To enable it, just check the "Enable Caching (Preview Release)" check box. Then we need to specify which caching cluster mode we want to use. There are two modes: co-located and dedicated. Co-located mode means we use the memory of the instances that run our cloud service (web role or worker role); with this mode we must specify what percentage of the memory will be used as the cache. The default value is 30%, so make sure it will not affect the role's business execution. Dedicated mode uses all the memory in the virtual machine as the cache. (In fact it reserves some for the operating system, Azure hosting, etc., but it will try to use as much of the available memory as possible.)

    As you can see, Caching (Preview) is defined per role, which means all instances of the role apply the same setting and act as one cache pool, and you consume it by specifying the name of the role, which I will demonstrate later. In a Windows Azure project we can have more than one role with Caching (Preview) enabled, and then we will have more caches. For example, let's say I have a web role and a worker role: the web role specifies 30% co-located caching and the worker role specifies dedicated caching. If I have 3 instances of my web role and 2 instances of my worker role, then I will have two caches: cache 1 contributed by the three web role instances and cache 2 contributed by the two worker role instances. We can then add items into cache 1 and retrieve them from web role code and worker role code, but items stored in cache 1 cannot be retrieved from cache 2, since they are isolated.

    Back in Visual Studio we specify 30% co-located cache and use the local storage emulator to store the cache cluster runtime status. At the bottom we can specify the named caches; for now we just use the default one. Now we have enabled Caching (Preview) in our web role settings. Next, let's have a look at how to consume our cache.

    Consume Caching (Preview)

    Caching (Preview) can only be consumed by roles in the same cloud service. As I mentioned earlier, a cache contributed by a web role can be connected to from a worker role if they are in the same cloud service, but you cannot consume a Caching (Preview) cache from other cloud services. This is different from Shared Caching, which is open to all services that have the connection URL and authentication token.

    To consume Caching (Preview) we need to add some references to our project as well as some configuration in the Web.config. NuGet makes our life easy: right-click our web role project, select "Manage NuGet packages", and search for the package named "WindowsAzure.Caching". In the package list, install "Windows Azure Caching Preview". It will download all necessary references from the NuGet repository and update our Web.config as well.

    Open the Web.config of our web role and find the "dataCacheClients" node. Under this node we specify the cache clients we are going to use; each cache client uses a role name to identify and find its cache. Since this web role is the only one with Caching (Preview) enabled, I pasted the current role name into the configuration.
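    The node looks roughly like this (a sketch; "WebRole1" stands in for whatever the cache-enabled role is actually named):

        <dataCacheClients>
          <dataCacheClient name="default">
            <!-- identifier is the name of the role that has Caching (Preview) enabled -->
            <autoDiscover isEnabled="true" identifier="WebRole1" />
          </dataCacheClient>
        </dataCacheClients>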
In the default page I will put a textbox where the user can input his or her name, plus a button that generates an email address for that name. In the code-behind I check whether the name is already in the cache: if yes, I return the email immediately; otherwise I sleep the thread for 2 seconds to simulate latency, then add the email to the cache and return it to the page.

// Requires: using System.Diagnostics; using System.Threading;
// and using Microsoft.ApplicationServer.Caching; (added by the NuGet package)
protected void btnGenerate_Click(object sender, EventArgs e)
{
    // check that a name is specified
    var name = txtName.Text;
    if (string.IsNullOrWhiteSpace(name))
    {
        lblResult.Text = "Error. Please specify name.";
        return;
    }

    bool cached;
    var sw = new Stopwatch();
    sw.Start();

    // create the cache factory and get the default cache
    var factory = new DataCacheFactory();
    var cache = factory.GetDefaultCache();

    // check whether the specified name is already in the cache
    var email = cache.Get(name) as string;
    if (email != null)
    {
        cached = true;
    }
    else
    {
        cached = false;
        // simulate the latency of generating the email
        Thread.Sleep(2000);
        email = string.Format("{0}@igt.com", name);
        // add it to the cache for next time
        cache.Add(name, email);
    }

    sw.Stop();
    lblResult.Text = string.Format(
        "Cached = {0}. Duration: {1}s. {2} => {3}",
        cached, sw.Elapsed.TotalSeconds.ToString("0.00"), name, email);
}

Caching (Preview) works on the local emulator, so we can just press F5. The first time I enter my name it takes about 2 seconds to get the email back, since it is not yet in the cache; but if I enter the same name again, it comes back at once from the cache.

Since Caching (Preview) is distributed across all instances of the role, we can scale it out simply by scaling out our web role. Use 2 instances, tweak some code to show the current instance ID on the page, and try again: the cached item can be retrieved even though it was added by another instance.

Consume Caching (Preview) Across Roles

As mentioned, Caching (Preview) can be consumed by all other roles within the same cloud service. For example, let's add another web role to our cloud solution and put the same code in its default page. In its Web.config we point the cache client at the cache enabled in the first role by specifying that role's name. Then we start the solution locally and go to web role 1, specify a name and let it generate the email; since there is no cache entry for this name it takes about 2 seconds, but the email is saved into the cache. Then we go to web role 2 and specify the same name: it retrieves the email saved by web role 1 and returns it very quickly. Finally, we can upload our application to Windows Azure and test again — just make sure you have changed the cache cluster status storage account to a real Azure storage account.

More Awesome Features

As an in-memory distributed caching solution, Caching (Preview) has some fancy features I would like to highlight here. The first is high availability support; this is the first time I have heard of a distributed cache supporting high availability. In the distributed cache world, if a cache cluster fails, the data it stores is lost. This behavior was established by Memcached and is followed by almost all distributed cache products. Caching (Preview), however, lets you specify that a named cache should be backed up automatically: if so, the data belonging to that named cache is replicated onto another instance of the role.
Then, if one of the instances fails, the data can be retrieved from its backup instance. To enable backup, open the Caching page in Visual Studio and, for the named cache you want to back up, change the Backup Copies value from 0 to 1. The only valid values for Backup Copies are 0 and 1: "0" means no backup and no high availability, while "1" enables high availability by backing the data up onto another instance.

When using the high availability feature there are some things to keep in mind. First, high availability does NOT mean the cached data can never be lost, whatever the failure. For example, if we have a cache-enabled role with 10 instances and 9 of them fail, most of the cached data will be lost, since a primary and its backup instance may fail together. Normally this should not happen, as Microsoft guarantees that backup caches are placed on instances in a different fault domain. Second, enabling backup means storing two copies of your data: if you think 100 MB of memory is enough for your cache, you need at least 200 MB once backup is enabled.

Besides high availability, Caching (Preview) supports more of the features introduced in Windows Server AppFabric Caching than Windows Azure Shared Caching does. It supports local cache with notifications, and it supports both absolute and sliding-window expiration types. Caching (Preview) also supports the Memcached protocol, which means that if you have an application based on Memcached you can use Caching (Preview) without any code changes; all you need to change is the configuration of how you connect to the cache. And, similar to Windows Azure Shared Caching, Microsoft offers out-of-the-box ASP.NET session state and output cache providers on top of Caching (Preview).

Summary

Caching is a very important component when building a cloud-based application. In the June 2012 update, Microsoft provides a new cache solution named Caching (Preview). Unlike the existing Windows Azure Shared Caching, Caching (Preview) runs the cache cluster within the role instances we have deployed to the cloud, which gives more control, more performance and more cost-efficiency. So now we have two caching solutions in Windows Azure: Shared Caching and Caching (Preview). If you need a central cache service that can be used by many cloud services and web sites, you have to use Shared Caching; but if you only need a fast, nearby distributed cache, you are better off with Caching (Preview).

Hope this helps,
Shaun

All documents and related graphics and code are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • Don't forget to confirm your SQLBits registration

    - by simonsabin
    If you filled in the registration form for SQLBits VI and haven't received an email asking you to confirm your registration, then you are not confirmed and we will be assuming you aren't coming. You need to find that email and click on the confirmation link to confirm your registration. We currently have 30 people that have not confirmed, so if you haven't seen an email, please check your spam folders. If you still have no luck then please contact us (contactus(at)sqlbits.com). If you are not confirmed and turn up on the day, you will only get in if we have room, and it looks like we will be oversubscribed. You can check your registration status by logging in to the site and going to http://www.sqlbits.com/information/Registration.aspx; check that there is a confirmation date at the bottom.

    Read the article


  • Uploading and Importing CSV file to SQL Server in ASP.NET WebForms

    - by Vincent Maverick Durano
    A few weeks ago I was working on a small internal project that involved importing a CSV file into a SQL Server database, and I thought I'd share the simple implementation I used. In this post I will demonstrate how to upload a CSV file and import it into a SQL Server database.

As some may already know, importing a CSV file into SQL Server is easy and simple, but difficulties arise when the CSV file contains many columns with different data types. Basically, the provider cannot differentiate the data types between the columns or the rows; it blindly infers a data type from the first few rows and discards any data that does not match it. To overcome this problem, I used a schema.ini file to define the data types of the CSV file and let the provider read it and recognize the exact data type of each column.

Now what is schema.ini? Taken from the documentation: schema.ini is an information file used to define the data structure and the format of each column that contains data in the CSV file. If a schema.ini file exists in the directory, the Microsoft.Jet.OLEDB provider automatically reads it and recognizes the data type information of each column in the CSV file. Thus, the provider intelligently avoids misinterpreting data types before inserting the data into the database. For more information see: http://msdn.microsoft.com/en-us/library/ms709353%28VS.85%29.aspx

Points to remember before creating schema.ini:

  1. The schema information file must always be named 'schema.ini'.
  2. The schema.ini file must be kept in the same directory as the CSV file.
  3. The schema.ini file must be created before reading the CSV file.
  4. The first line of the schema.ini must be the name of the CSV file, followed by the properties of the CSV file and then the properties of each column in the CSV file.

Here's an example of what the schema looks like:

[Employee.csv]
ColNameHeader=False
Format=CSVDelimited
DateTimeFormat=dd-MMM-yyyy
Col1=EmployeeID Long
Col2=EmployeeFirstName Text Width 100
Col3=EmployeeLastName Text Width 50
Col4=EmployeeEmailAddress Text Width 50

To get started, let's go ahead and create a simple blank database; just for the purpose of this demo I created a database called TestDB. After creating the database, fire up Visual Studio and create a new WebApplication project. Under the application root, create a folder called UploadedCSVFiles and place the schema.ini in that folder; the uploaded CSV files will be stored in this folder after the user imports them. Now add a WebForm to the project, set up the HTML markup, and add one (1) FileUpload control, one (1) Button and three (3) Label controls (a sketch of the markup follows below). After that we can proceed with the code for uploading and importing the CSV file into the SQL Server database.
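As a minimal sketch (not from the original post), the WebForm markup might look like the following. The control IDs (FileUpload1, BTNImport, Label1, Label2, LBLError) match the ones referenced in the code-behind below; everything else, such as the button text, is an assumption:

<%-- minimal markup sketch; IDs match the code-behind below --%>
<asp:FileUpload ID="FileUpload1" runat="server" />
<asp:Button ID="BTNImport" runat="server" Text="Import" OnClick="BTNImport_Click" />
<br />
<asp:Label ID="Label1" runat="server" />
<asp:Label ID="Label2" runat="server" />
<asp:Label ID="LBLError" runat="server" />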
Here is the full code block:

using System;
using System.Data;
using System.Data.SqlClient;
using System.Data.OleDb;
using System.IO;
using System.Text;

namespace WebApplication1
{
    public partial class CSVToSQLImporting : System.Web.UI.Page
    {
        // Fetches the connection string configured in web.config
        private string GetConnectionString()
        {
            return System.Configuration.ConfigurationManager.ConnectionStrings["DBConnectionString"].ConnectionString;
        }

        // Creates a table in the database whose columns mirror the DataTable's schema
        private void CreateDatabaseTable(DataTable dt, string tableName)
        {
            string sqlQuery = string.Empty;
            string sqlDBType = string.Empty;
            string dataType = string.Empty;
            int maxLength = 0;
            StringBuilder sb = new StringBuilder();

            sb.AppendFormat(string.Format("CREATE TABLE {0} (", tableName));

            for (int i = 0; i < dt.Columns.Count; i++)
            {
                dataType = dt.Columns[i].DataType.ToString();
                if (dataType == "System.Int32")
                {
                    sqlDBType = "INT";
                }
                else if (dataType == "System.String")
                {
                    sqlDBType = "NVARCHAR";
                    maxLength = dt.Columns[i].MaxLength;
                }

                if (maxLength > 0)
                {
                    sb.AppendFormat(string.Format(" {0} {1} ({2}), ", dt.Columns[i].ColumnName, sqlDBType, maxLength));
                }
                else
                {
                    sb.AppendFormat(string.Format(" {0} {1}, ", dt.Columns[i].ColumnName, sqlDBType));
                }
            }

            sqlQuery = sb.ToString();
            sqlQuery = sqlQuery.Trim().TrimEnd(',');
            sqlQuery = sqlQuery + " )";

            using (SqlConnection sqlConn = new SqlConnection(GetConnectionString()))
            {
                sqlConn.Open();
                SqlCommand sqlCmd = new SqlCommand(sqlQuery, sqlConn);
                sqlCmd.ExecuteNonQuery();
                sqlConn.Close();
            }
        }

        // Bulk-inserts the uploaded file into the newly created table
        private void LoadDataToDatabase(string tableName, string fileFullPath, string delimiter)
        {
            string sqlQuery = string.Empty;
            StringBuilder sb = new StringBuilder();

            sb.AppendFormat(string.Format("BULK INSERT {0} ", tableName));
            sb.AppendFormat(string.Format(" FROM '{0}'", fileFullPath));
            // '\n' embeds a literal line feed in the SQL text, which BULK INSERT accepts;
            // "\\n" would pass the two-character sequence \n instead
            sb.AppendFormat(string.Format(" WITH ( FIELDTERMINATOR = '{0}' , ROWTERMINATOR = '\n' )", delimiter));

            sqlQuery = sb.ToString();

            using (SqlConnection sqlConn = new SqlConnection(GetConnectionString()))
            {
                sqlConn.Open();
                SqlCommand sqlCmd = new SqlCommand(sqlQuery, sqlConn);
                sqlCmd.ExecuteNonQuery();
                sqlConn.Close();
            }
        }

        protected void Page_Load(object sender, EventArgs e)
        {
        }

        protected void BTNImport_Click(object sender, EventArgs e)
        {
            if (FileUpload1.HasFile)
            {
                FileInfo fileInfo = new FileInfo(FileUpload1.PostedFile.FileName);
                // simple extension check to accept only CSV files
                if (fileInfo.Name.Contains(".csv"))
                {
                    string fileName = fileInfo.Name.Replace(".csv", "");
                    string csvFilePath = Server.MapPath("UploadedCSVFiles") + "\\" + fileInfo.Name;

                    // Save the CSV file on the server inside 'UploadedCSVFiles'
                    FileUpload1.SaveAs(csvFilePath);

                    // Fetch the location of the CSV file
                    string filePath = Server.MapPath("UploadedCSVFiles") + "\\";
                    string strSql = "SELECT * FROM [" + fileInfo.Name + "]";
                    string strCSVConnString = "Provider=Microsoft.Jet.OLEDB.4.0;Data Source=" + filePath + ";" + "Extended Properties='text;HDR=YES;'";

                    // Load the data from the CSV into a DataTable
                    OleDbDataAdapter adapter = new OleDbDataAdapter(strSql, strCSVConnString);
                    DataTable dtCSV = new DataTable();

                    adapter.FillSchema(dtCSV, SchemaType.Mapped);
                    adapter.Fill(dtCSV);

                    if (dtCSV.Rows.Count > 0)
                    {
                        CreateDatabaseTable(dtCSV, fileName);
                        Label2.Text = string.Format("The table ({0}) has been successfully created in the database.", fileName);

                        string fileFullPath = filePath + fileInfo.Name;
                        LoadDataToDatabase(fileName, fileFullPath, ",");

                        Label1.Text = string.Format("({0}) records have been loaded into the table {1}.", dtCSV.Rows.Count, fileName);
                    }
                    else
                    {
                        LBLError.Text = "File is empty.";
                    }
                }
                else
                {
                    LBLError.Text = "Unable to recognize file.";
                }
            }
        }
    }
}

The code above consists of three (3) private methods: GetConnectionString(), CreateDatabaseTable() and LoadDataToDatabase(). GetConnectionString() returns a string; it simply fetches the connection string configured in the web.config file. CreateDatabaseTable() accepts two (2) parameters, a DataTable and the file name; as the name suggests, it automatically creates a table in the database based on the source DataTable and the file name of the CSV file. LoadDataToDatabase() accepts three (3) parameters — the table name, the full file path and the delimiter value — and is where the actual saving or importing of data from the CSV into SQL Server happens. The code in the BTNImport_Click event handles uploading the CSV file to the specified location and is also where CreateDatabaseTable() and LoadDataToDatabase() are called. You will notice I also added some basic error trapping and validation within that event.

Now, to test the import utility, let's create some simple data in CSV format. For simplicity, create a CSV file named "Employee" and add some data to it. Here's an example:

1,VMS,Durano,[email protected]
2,Jennifer,Cortes,[email protected]
3,Xhaiden,Durano,[email protected]
4,Angel,Santos,[email protected]
5,Kier,Binks,[email protected]
6,Erika,Bird,[email protected]
7,Vianne,Durano,[email protected]
8,Lilibeth,Tree,[email protected]
9,Bon,Bolger,[email protected]
10,Brian,Jones,[email protected]

Now save the newly created CSV file somewhere on your hard drive, run the application and browse to the file we just created. [The original post includes screenshots of the page after browsing to the CSV file and after clicking the Import button.] If we then look at the database we created earlier, you'll notice that the Employee table has been created with the imported data in it. [The original post includes a screenshot of the resulting table.]

That's it! I hope someone finds this post useful!

Technorati Tags: ASP.NET,CSV,SQL,C#,ADO.NET

    Read the article

< Previous Page | 143 144 145 146 147 148 149 150 151 152 153 154  | Next Page >