Search Results

Search found 21454 results on 859 pages for 'via'.


  • How to serialize each item in IEnumerable for ajax post

    - by bflemi3
    I have a PartialView that displays IEnumerable<Movie>.

    _MoviePartial.cshtml:

        @foreach(var movie in Model) {
            <div class="content">
                <span class="image"><img src="@movie.Posters.Profile" alt="@movie.Title"/></span>
                <span class="title">@movie.Title</span>
                <div class="score">
                    <span class="critics-score">@movie.Ratings.CriticsScore</span>
                    <span class="audience-score">@movie.Ratings.AudienceScore</span>
                </div>
                @Html.ActionLink("Add Movie", "Add", "MyMovies")
            </div>
        }

    When the user clicks the "Add Movie" ActionLink I am going to do an ajax post to add the selected movie to the user's collection. My problem is that I would like to send the entire selected Movie class to the "Add" action, but I'm not sure how to serialize each movie, since the entire Movie class is not rendered in the PartialView, just a few properties. I know I can serialize something like this...

        <script type="text/javascript">
            var obj = @Html.Raw(Json.Encode(movie));
        </script>

    But I'm not sure how that would work inside a foreach loop that renders html, especially inside a PartialView. So, just to be clear, when a user clicks the "Add Movie" ActionLink I would like to send the serialized Movie class for that respective movie to my controller via ajax. My question is... Is there a better way to serialize each movie and append it to its respective anchor? I know there's the data- html5 attribute, but I thought they only allow string values, not json objects. I also know I could use jQuery's .data() function, but I'm struggling to think through how to get that to run from a PartialView, especially since the html rendered by _MoviePartial.cshtml may be returned from a controller via ajax.
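
    For illustration, a sketch of one possible approach (not code from the question; the /MyMovies/Add URL, the add-movie class and the use of AttributeEncode are assumptions): each anchor can carry its own JSON-encoded movie in an HTML5 data-* attribute, and jQuery's .data() parses a well-formed JSON string back into an object, so a delegated click handler keeps working even when the partial is returned later via ajax.

        // Razor side (inside the foreach), emitting the serialized movie on each link:
        //   <a href="#" class="add-movie" data-movie="@Html.AttributeEncode(Json.Encode(movie))">Add Movie</a>

        // Script side: one delegated handler (jQuery 1.7+ .on), so anchors that arrive later via ajax are covered too.
        $(document).on('click', 'a.add-movie', function (e) {
            e.preventDefault();
            var movie = $(this).data('movie');      // jQuery parses the JSON string back into an object
            $.post('/MyMovies/Add', movie, function () {
                alert(movie.Title + ' added');      // hypothetical success handling
            });
        });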

    Read the article

  • Is it okay to violate the principle that collection properties should be readonly for performance?

    - by uriDium
    I used FxCop to analyze some code I had written. I had exposed a collection via a setter. I understand why this is not good: changing the backing store when I don't expect it is a very bad idea. Here is my problem though. I retrieve a list of business objects from a Data Access Object. I then need to add that collection to another business class, and I was doing it with the setter method. The reason I did this was that it is going to be faster to make an assignment than to insert hundreds of thousands of objects one at a time into the collection again via another addElement method. Is it okay to have a setter for a collection in some scenarios? I thought of rather having a constructor which takes a collection? I thought maybe I could pass the object in to the Dao and let the Dao populate it directly? Are there any other better ideas?

    Read the article

  • HTTPS causes jQuery to ignore request

    - by Josh
    I have an odd bug: this jQuery code executes correctly when calling the page via HTTP, but once I connect to the page via HTTPS it doesn't execute. The code basically tracks when a link is clicked.

        <html>
        <head>
            <title>Test Page</title>
            <script type="text/javascript" src="/scripts/jquery-1.4.2.min.js"></script>
            <script type="text/javascript">
                $(document).ready( function() {
                    $('.fbspb').click(function() {
                        $.get("/services/lt.ashx?ac=fbspb");
                        return true;
                    });
                });
            </script>
        </head>
        <body>
            <a href="http://www.facebook.com" class="fbspb" target="_blank">Facebook</a>
        </body>
        </html>

    I've tried updating the URL in the get to use a full HTTPS path, with no success. No error is raised when I access the page over HTTPS.
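
    For what it's worth, a small debugging sketch (an assumption about the cause, not a confirmed fix): a page served over HTTPS will silently block any plain-http request as mixed content, and the navigation can also cancel the tracking call before it finishes, so building the URL from the page's own protocol rules the first problem out.

        $(document).ready(function () {
            $('.fbspb').click(function () {
                // Build the tracking URL against the page's own protocol so an HTTPS page
                // never fires a plain-http request (which the browser would block).
                var url = window.location.protocol + '//' + window.location.host +
                          '/services/lt.ashx?ac=fbspb';
                $.get(url);
                return true;   // the link still opens immediately, so the request may race the navigation
            });
        });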

    Read the article

  • Using the standard OBJECT tag, how can I display a java applet with automatic prompts to install Java and with fallback content?

    - by CB
    This is the code I'm currently using (note - %s is replaced on the server side):

        <!--[if !IE]>-->
        <object type="application/x-java-applet" width="300" height="300" >
        <!--<![endif]-->
        <!--[if IE]>
        <object classid="clsid:8AD9C840-044E-11D1-B3E9-00805F499D93"
                codebase="http://java.sun.com/update/1.6.0/jinstall-6u22-windows-i586.cab"
                type="application/x-java-applet" width="300" height="300" >
        <!--><!-- <![endif]-->
            <param name="codebase" value="/media/vnc/" >
            <param name="archive" value="TightVncViewer.jar" />
            <param name="code" value="com.tightvnc.vncviewer.VncViewer" />
            <param name="port" value="%s" />
            <param name="Open New Window" value="yes" />
        </object>

    When Java is installed, this works perfectly in both IE and Firefox. When Java is not installed, IE and Firefox both correctly prompt for an autodownload of Java 1.6 from the codebase line (IE via the activex url given, Firefox via the Plugin Finder Service). Now, suppose I want fallback content to be shown if the plugin isn't installed, say a simple message like "Get Java". From reading the specs, I'd assume this should not change the plugin finding prompt - that is, rendering the fallback should be seen as a failure to render the object tag. Thus, I should still get the plugin finder service prompting me to install Java. Instead, simply adding a single character to the innerHTML of the object element causes Firefox to no longer prompt. Test this by visiting data:text/html,<object type='application/x-java-applet'>Java failed to load</object>. How can I keep Firefox prompting to install Java while providing fallback content? URL to test Firefox's Java Plugin Finder Service: data:text/html,<object type='application/x-java-applet'/>
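
    One workaround sketch (an assumption, not a verified fix; the java-fallback element id is hypothetical): leave the <object> itself empty so the Plugin Finder Service still sees a failed plugin load, and show the "Get Java" message from script only when no Java plugin is registered.

        // Returns true when a Java applet plugin is registered with the browser.
        function hasJavaPlugin() {
            var mime = navigator.mimeTypes && navigator.mimeTypes['application/x-java-applet'];
            return !!(mime && mime.enabledPlugin);
        }

        // Reveal a "Get Java" note that sits next to (not inside) the empty <object>.
        window.onload = function () {
            if (!hasJavaPlugin()) {
                var note = document.getElementById('java-fallback');   // e.g. a hidden <p><a>Get Java</a></p>
                if (note) {
                    note.style.display = 'block';
                }
            }
        };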

    Read the article

  • fastest public web app framework for quick DB apps?

    - by Steve Eisner
    I'd like to pick up a new tech for my toolbox - something for rapid prototyping of web apps. Brief requirements:

    - public access (not hosted on my machine) - like Google's App Engine, etc.
    - no tricky configuration necessary to build a simple web app
    - hosted DB access (small storage provided) including some kind of SQLish query language
    - easy front end HTML templating
    - ability to access as a JSON service
    - C# or Java, PHP or Python - or a fun new language to learn is OK
    - free!

    An example app, very simple: render an AJAXy editable (add/delete/edit/drag) list of rich-data list items via some template language, so I can quickly mock up a UI for a client. i.e. I can do most of the work client-side, but need a convenient back end to handle the permanent storage. (In fact I suppose it doesn't even need HTML templating if I can directly access a DB via AJAX calls.) I realize this is a bit vague but am wondering if anyone has recommendations. A Rails host might be best for this (but probably not free) or maybe App Engine, or some other choice I'm not aware of? I've been doing everything with heavyweight servers (ASP.NET etc) for so long that I'm just not up on the latest... Thanks - I'll follow up on comments if this isn't clear enough :)

    Read the article

  • .htaccess redirect noticeably increasing load time

    - by GTCrais
    I set up a SEF link via an .htaccess RewriteRule to one of the articles on my website, just to see how that works. It does work, but it considerably increases the load time of that particular page. On average the articles (including the one I'm talking about, when not using the rewrite rule) load in about 1.3 seconds. With the rewrite rule, the load time is 3.3 seconds on average until the page displays, and the loader thingy in the firefox tab keeps spinning for another 2 seconds. I have WAMP setup on my computer, and the website is being accessed through no-ip.com. Here is the .htaccess config (very simple, as you can see):

        Options +FollowSymLinks
        RewriteEngine On
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^o-sw-liji /NewSWL/o-nama.php?body=o-sw-liji

    In httpd.conf I have this (somewhere I read this might affect the load time for some reason - searching for files through all the directories or something, I don't remember exactly what I read):

        <Directory />
            Options None
            AllowOverride None
            Order deny,allow
            Deny from all
        </Directory>

        DocumentRoot "Z:/Program Files (x86)/wamp/www/"
        <Directory "Z:/Program Files (x86)/wamp/www/">
            Options None
            AllowOverride All
            Order allow,deny
            Allow from all
        </Directory>

    Any ideas why the .htaccess redirect increases the load time by so much?

    UPDATE: so I put a session based counter in the "o-nama.php" script. Apparently when I access the page via the 'normal' link, i.e. 'o-nama.php?body=o-sw-liji', the counter increases by one, as it should - it's one page load. But when the page is accessed through the redirected link, i.e. 'o-nama/o-sharewood-liji', the counter increases by 6-8, which naturally makes the load time a lot longer, since it's loading the same page 6-8 times. I have no idea why this is happening. Any help is appreciated.

    Read the article

  • Symfony 1.4 Form Value change after getValues()

    - by bertzzie
    Hi, I've got some problem with a symfony form value (I guess it's the clean value, but that's not so clear yet). Here's the problem: I got a sfWidgetFormDateJQueryUI widget set up like this in my form:

        $this->setWidgets(array(
            'needDate' => new sfWidgetFormDateJQueryUI(),
        ));

        $this->setValidators(array(
            'needDate' => new sfValidatorDate(array(
                'required'    => true,
                'date_format' => '/^[0-9]{2}\/[0-9]{2}\/[0-9]{4}$/',
                'date_output' => 'd/m/Y'
            )),
        ));

    Then when I submit, say 26/06/2010, it turns out right in the HTTP header (viewed via Firebug) and in $request (I just print it). But after I get the value via $formVal = $form->getValues(); the date value in $formVal["needDate"] becomes today's date (03/06/2010). I really don't understand, and after checking in the API documentation it says that getValues will return the 'cleaned' value. Is that because of it? I don't understand what 'clean' means. Thanks before..

    Read the article

  • Need for J2me source code

    - by tikamchandrakar
    For J2ME it strikes me as odd that you need an extra "api key" and so on. But actually, what I really want is NOT to create an extra facebook application that needs to be registered on Facebook. I don't want to create any extra configuration efforts necessary for the user of my application to undergo. All my user should need is his well-known login data for facebook. Everything else should be completely transparent to him. So, I thought maybe you could do the login process by creating a request to the REST server via http. I know this would provide me with an XML. I hope that this API will somehow automatically transform that XML into an intuitive object model that represents the facebook user data of the respective user. So, I would expect something like userData = new FacebookData(new FacebookConnection("user_name", "password")). Done. If you get what I mean. No api key. No secret key. Just the well-known login data. Practically, the equivalent to thunderbird webmail, which allows you to access your MSN hotmail account via Thunderbird. Thunderbird webmail will automatically convert the html obtained from a hotmail browser login into the data structure usually passed on to a mail client. Hope you get what I mean. I was expecting the equivalent for your API.

    Read the article

  • How do you send email from IMAP account with PHP?

    - by arthurakay
    I'm having an issue sending email via PHP/IMAP - and I don't know if it's because I don't correctly understand IMAP, or there's an issue with my server. My application opens an IMAP connection to an email account to read messages in the inbox. It does this successfully. The problem I have is that I want to send messages from this account and have them display in the outbox/sent folder. As far as I can tell, the PHP imap_mail() function doesn't in any way hook into the IMAP stream I currently have open. My code executes without throwing an error. However, the email never arrives at the recipient and never displays in my sent folder.

        private function createHeaders() {
            return "MIME-Version: 1.0" . "\r\n" .
                   "Content-type: text/html; charset=iso-8859-1" . "\r\n" .
                   "From: " . $this->accountEmail . "\r\n";
        }

        private function notifyAdminForCompleteSet($urlToCompleteSet) {
            $message = "
                <p>
                    In order to process the latest records, you must visit
                    <a href='$urlToCompleteSet'>the website</a> and manually export the set.
                </p>
            ";

            try {
                imap_mail(
                    $this->adminEmail,
                    "Alert: Manual Export of Records Required",
                    wordwrap($message, 70),
                    $this->createHeaders()
                );
                echo(" ---> Admin notified via email!\n");
            } catch (Exception $e) {
                throw new Exception("Error in notifyAdminForCompleteSet()");
            }
        }

    I'm guessing I need to copy the message into the IMAP account manually... or is there a different solution to this problem? Also, does it matter if the domain in the "from" address is different than that of the server on which this script is running? I can't explain why the message is never sent.

    Read the article

  • Only first word of two strings gets added to db

    - by dkgeld
    When trying to add words to a database via php, only the first word of both strings gets added. I send the text via this code:

        public void sendTextToDB() {
            valcom = editText1.getText().toString();
            valnm = editText2.getText().toString();
            t = new Thread() {
                public void run() {
                    try {
                        url = new URL("http://10.0.2.2/HB/hikebuddy.php?function=setcomm&comment=" + valcom + "&name=" + valnm);
                        h = (HttpURLConnection) url.openConnection();
                        if (h.getResponseCode() == HttpURLConnection.HTTP_OK) {
                            is = h.getInputStream();
                        } else {
                            is = h.getErrorStream();
                        }
                        h.disconnect();
                    } catch (Exception e) {
                        // TODO Auto-generated catch block
                        e.printStackTrace();
                        Log.d("Test", "CONNECTION FAILED 1");
                    }
                }
            };
            t.start();
        }

    When tested with spaces and commas etc. in a browser, the php function adds all text. The strings also return the full value when inserted into a dialog. How do I fix this? Thank you.

    Read the article

  • AJAX Uploading - Not waiting for response before continuing

    - by waxical
    I'm using Blueimp's jQuery Uploader (very good it is too btw) and an S3 handler to upload files and then transfer them to S3 via the S3 API (from the PHP SDK). It works. The problem is, on large files (1GB) it can take anything up to a few minutes to transfer (via create-object) onto S3. The PHP file that does this is hung up until the process is complete. The problem is, the uploader (which utilises the jQuery Ajax method) seems to give up waiting and start again every time. I had thought this was related to the PHP INI 'max_input_time' or such, as it seemed to wait around 60 seconds, though this now appears to vary. I have upped the max_input_time in the PHP INI and others related - but no further. I've also considered (the more likely) that JS, either in the script or the jQuery method, has a timeout. The developer (blueimp) has said there's no such timeout in the front-end script, nor have I seen any, and though 'timeout' is referenced in the jQuery Ajax method options, it seems to affect the entire time it uploads rather than the wait for a response - so that's not much use. Any help or guidance gratefully received.
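
    A sketch for ruling out a client-side timeout (assumptions: the upload-handler.php URL and the upload-form id are made up, and it is only an assumption that the uploader passes standard $.ajax options through): jQuery's own timeout option covers the whole request, including the long server-side wait, and setting it explicitly to 0 disables it.

        // Plain $.ajax equivalent of the long post-to-S3 request, with the client timeout disabled.
        var formData = new FormData(document.getElementById('upload-form'));   // hypothetical form

        $.ajax({
            url: '/upload-handler.php',   // hypothetical handler that relays the file to S3
            type: 'POST',
            data: formData,
            processData: false,           // leave the FormData body alone
            contentType: false,           // let the browser set the multipart boundary
            timeout: 0,                   // 0 = never time out on the client side
            success: function (response) {
                console.log('S3 transfer finished', response);
            },
            error: function (xhr, status) {
                console.log('failed or aborted:', status);   // 'timeout' would appear here if jQuery gave up
            }
        });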

    Read the article

  • How to check if a thread is busy in C#?

    - by Sam
    I have a Windows Forms UI running on a thread, Thread1. I have another thread, Thread2, that gets tons of data via external events that needs to update the Windows UI. (It actually updates multiple UI threads.) I have a third thread, Thread3, that I use as a buffer thread between Thread1 and Thread2 so that Thread2 can continue to update other threads (via the same method). My buffer thread, Thread3, looks like this:

        public class ThreadBuffer {
            public ThreadBuffer(frmUI form, CustomArgs e) {
                form.Invoke((MethodInvoker)delegate { form.UpdateUI(e); });
            }
        }

    What I would like to do is for my ThreadBuffer to check whether my form is currently busy doing previous updates. If it is, I'd like for it to wait until it frees up and then invoke UpdateUI(e). I was thinking about either:

    a)

        // Pseudocode
        while(form == busy) {
            // Do nothing;
        }
        form.Invoke((MethodInvoker)delegate { form.UpdateUI(e); });

    How would I check the form == busy? Also, I am not sure that this is a good approach.

    b) Create an event in form1 that will notify the ThreadBuffer that it is ready to process.

        // Pseudocode
        List<CustomArgs> elist = new List<CustomArgs>();

        public ThreadBuffer(frmUI form, CustomArgs e) {
            form.OnFreedUp += form_OnFreedUp;
            elist.Add(e);
        }

        private void form_OnFreedUp() {
            if (elist.Count == 0) return;
            form.Invoke((MethodInvoker)delegate { form.UpdateUI(elist[0]); });
            elist.Remove(elist[0]);
        }

    In this case, how would I write an event that will notify that the form is free?

    and c) any other ideas?

    Read the article

  • How can I secure my $_GETs in PHP?

    - by ggfan
    My profile.php displays all the user's postings, comments and pictures. If the user wants to delete a posting, it sends the posting's id to remove.php, so it's like remove.php?action=removeposting&posting_id=2. If they want to remove a picture, it's remove.php?action=removepicture&picture_id=1. Using the get data, I do a query to the database to display the info they want to delete, and if they want to delete it, they click "yes". So the data is deleted via $_POST NOT $_GET to prevent cross-site request forgery. My question is how do I make sure the GETs are not some javascript code or sql injection that will mess me up. Here is my remove.php:

        //how do I make $action safe?
        //should I use mysqli_real_escape_string?
        //use strip_tags()?
        $action = trim($_GET['action']);

        if (($action != 'removeposting') && ($action != 'removefriend') && ($action != 'removecomment')) {
            echo "please don't change the action. go back and refresh";
            header("Location: index.php");
            exit();
        }

        if ($action == 'removeposting') {
            //get the info and display it in a form. if user clicks "yes", deletes
        }

        if ($action == 'removepicture') {
            //remove pic
        }

    I know I can't be 100% safe, but what are some common defenses I can use?

    EDIT: Do this to prevent xss:

        $action = trim($_GET['action']);
        htmlspecialchars(strip_tags($action));

    Then when I am 'recalling' the data back via POST, I would use:

        $posting_id = mysqli_real_escape_string($dbc, trim($_POST['posting_id']));

    Read the article

  • How to make html-files with content to be used in a simple ajax site to behave nicely in google?

    - by metatron
    I made some ajax sites in the past where I used ajax to get more of a desktop application feeling for my sites and also to keep the site maintainable. My strategy was making one index page and from there pulling in html content from some subpages. (So far I didn't use ajax to send data to the server.)

    The problem that I ran into is this: I want the subpages to be readable by google since they contain valuable content, but once they show up in google's results they lead to the naked html file (no css nor Javascript). I solved this by putting a javascript redirect (window.location = ...) on the subpages so they lead to the correct page.

    So as an example let's say I have a site at example.com with some javascript and css and a naked content page that should be loaded via ajax: example.com/content.html. Via ajax I pull in what I need from the content file, but since my index.html contains href's to the content.html file (I want the content of my ajax site to be readable without Javascript) it will be indexed by google and gets listed in the search results. But I don't want people to see the naked html file. Hence the redirect that goes to the index page and gets handled by some Javascript to show the content as I want it to be shown. I was wondering if there are nicer solutions to this problem or different approaches.
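
    A sketch of one common variation on that redirect idea (assumptions: the content.html and index.html names, the #main container and the 'home' default are all made up): each naked content file checks where it is actually being viewed and bounces to the index with a fragment naming itself, and the index reads that fragment on load, so search hits land on the styled page instead of the bare file.

        // At the top of a naked content file (content.html is a made-up name):
        // bounce to the index, carrying this page's name in the hash.
        if (window.location.pathname.indexOf('/content.html') !== -1) {
            window.location.replace('/index.html#content');
        }

        // In index.html: on load, read the hash and pull the matching fragment in via ajax.
        $(document).ready(function () {
            var page = window.location.hash.replace('#', '') || 'home';   // 'home' is an assumed default
            $('#main').load('/' + page + '.html');                        // assumes a <div id="main"> container
        });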

    Read the article

  • Expressjs route param as variable in main app

    - by MoDFoX
    For my app I have two routes set up:

        app.get('/', routes.index);
        app.get('/:name', routes.index);

    I would like it to be so that if I don't specify a param, say I just go to appurl.com (localhost:3000), it would load a default user, but if I do specify a param (localhost:3000/user), it would use that as the variable "username" in the following function (placed after my routes).

        (function getUser() {
            var body = '',
                username = 'WillsonSM',
                options = {
                    host: 'ws.audioscrobbler.com',
                    port: 80,
                    path: '/2.0/?method=user.gettopartists&user=' + username + '&format=json&limit=20&api_key=APIKEYGOESHERE'
                };

            require('http').request(options, function(res) {
                res.setEncoding('utf8');
                res.on('data', function(chunk) {
                    body += chunk;
                });
                res.on('end', function() {
                    body = JSON.parse(body);
                    artists = body.topartists.artist;
                });
            }).end();
        })();

    Along with this I have my route set up like so:

        exports.index = function(req, res) {
            res.render('index', { title: 'LasTube' });
            username = req.params.name;
            console.log(username);
        };

    Unfortunately, setting username there to req.params.name does not make it accessible from the main app function. My question is: how can I set expressjs/nodejs to use the parameter set via /name when available, and just use a default - in this example "WillsonSM" - if not available? I've tried taking "username" out of the main app and just leaving it in the function, but username becomes undefined, as it is inaccessible from the route, and the app will not run. I can spit out "username" via the route's console.log, so assigning it there is not an issue, but as I am new to expressjs, I am unaware of how I should go about doing this. I have tried everything I can think of and find from looking around the internet. Also, if there is a better way of doing this, or I am doing something wrong, please let me know. If I've left out any information, just throw in a comment and I'll try to address it.
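
    For illustration, a sketch of one way to restructure this (not the asker's final code; the getTopArtists helper is a made-up name): do the last.fm call inside the route handler and default the name when no /:name parameter is present, so nothing has to leak into module scope.

        var http = require('http');

        // Fetch the top artists for a last.fm user and hand them to a callback.
        function getTopArtists(username, callback) {
            var options = {
                host: 'ws.audioscrobbler.com',
                port: 80,
                path: '/2.0/?method=user.gettopartists&user=' + encodeURIComponent(username) +
                      '&format=json&limit=20&api_key=APIKEYGOESHERE'
            };
            http.request(options, function (res) {
                var body = '';
                res.setEncoding('utf8');
                res.on('data', function (chunk) { body += chunk; });
                res.on('end', function () {
                    try {
                        callback(null, JSON.parse(body).topartists.artist);
                    } catch (err) {
                        callback(err);
                    }
                });
            }).on('error', callback).end();
        }

        exports.index = function (req, res) {
            var username = req.params.name || 'WillsonSM';   // default user when no /:name is given
            getTopArtists(username, function (err, artists) {
                if (err) return res.send(500);
                res.render('index', { title: 'LasTube', artists: artists });
            });
        };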

    Read the article

  • Create Rails model with argument of associated model?

    - by Kyle Carlson
    I have two models, User and PushupReminder, and a method create_a_reminder in my PushupReminder controller (is that the best place to put it?) that I want to have create a new instance of a PushupReminder for a given user when I pass it a user ID. I have the association via the user_id column working correctly in my PushupReminder table and I've tested that I can both create reminders & send the reminder email correctly via the Rails console. Here is a snippet of the model code:

        class User < ActiveRecord::Base
          has_many :pushup_reminders
        end

        class PushupReminder < ActiveRecord::Base
          belongs_to :user
        end

    And the create_a_reminder method:

        def create_a_reminder(user)
          @user = User.find(user)
          @reminder = PushupReminder.create(:user_id => @user.id,
                                            :completed => false,
                                            :num_pushups => @user.pushups_per_reminder,
                                            :when_sent => Time.now)
          PushupReminderMailer.reminder_email(@user).deliver
        end

    I'm at a loss for how to run that create_a_reminder method in my code for a given user (eventually will be in a cron job for all my users). If someone could help me get my thinking on the right track, I'd really appreciate it. Thanks!

    Read the article

  • Can you write files in Chrome 8?

    - by greggory.hz
    I'm wondering if, with the new File API exposed in Chrome (I'm not concerned with cross-browser support at this time), it would be possible to write back to files opened via a file input. You can see an example of what I'm trying to accomplish here: http://www.grehz.com/ide. I know I can use server side scripts to dynamically create the files and allow the user to download them normally. I'm hoping that there's a way to accomplish this purely client side. I had read somewhere that you can write to files opened via a file input. I haven't been able to find any examples of this, though I have seen passing references to a FileWriter class. I would be completely not surprised if this wasn't possible though (it seems likely that there are security issues with this). Just looking for some guidance or resources.

    UPDATE: I was reading here: http://dev.w3.org/2009/dap/file-system/file-writer.html. As I was playing around in Chrome, it looks like FileSaver and FileWriter are not implemented, but BlobBuilder is. I can call getBlob() on the BB object; is there any way I can then save that without FileSaver or FileWriter?
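
    A small sketch of the save-without-FileWriter idea (hedged: the prefixed constructors varied across Chrome versions of that era, and makeDownloadLink is a made-up helper): build a Blob from the text, turn it into an object URL and point a link at it, so the user can save the result themselves even though nothing is written back to the original file.

        function makeDownloadLink(text, filename) {
            // BlobBuilder was prefixed in Chrome at the time; try both spellings.
            var Builder = window.BlobBuilder || window.WebKitBlobBuilder;
            var bb = new Builder();
            bb.append(text);
            var blob = bb.getBlob('text/plain');

            var urlApi = window.URL || window.webkitURL;
            var a = document.createElement('a');
            a.href = urlApi.createObjectURL(blob);
            a.textContent = 'Save ' + filename;
            // The user still has to "Save link as..." - there is no silent write to disk.
            document.body.appendChild(a);
        }

        makeDownloadLink('hello from the editor', 'example.txt');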

    Read the article

  • Image switch based on if a layer is visible

    - by Zuno
    I have a website that contains multiple pages as layers (not as separate HTML files). I have three images:

        <img src="image1.png" onclick="showlayer(1);return false;" /> <br />
        <img src="image2.png" onclick="showlayer(2);return false;" /> <br />
        <img src="image3.png" onclick="showlayer(3);return false;" />

    When an image is clicked, it shows the relevant layer and hides the others. I want it to also change the image to image1_active.png / image2_active.png / image3_active.png depending on which layer is visible (not via the onclick event handler). Why not via the onclick event handler? Layer 1 is set as visible by default in the CSS, so image1 needs to be image1_active.png by default too - since the user has not had to click on anything yet, this is why I need the image switch to detect the layer's visibility/display to change the image. The showlayer script is:

        function showlayer(n){
            for(i=1;i<=3;i++){
                document.getElementById("layer"+i).style.display="none";
                document.getElementById("layer"+n).style.display="block";
            }
        }

    Is it possible to adapt this script for this purpose? Thank you.
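
    One sketch of how the script could be adapted (assumptions: the imageN.png / imageN_active.png naming from the question, plus made-up image1..image3 ids on the img tags): swap the images inside the same function and call it once at load, so the default visible layer gets its active image without any click.

        function showlayer(n) {
            for (var i = 1; i <= 3; i++) {
                document.getElementById("layer" + i).style.display = (i === n) ? "block" : "none";
                // assumes each <img> carries id="image1".."image3"
                document.getElementById("image" + i).src = (i === n) ? "image" + i + "_active.png"
                                                                     : "image" + i + ".png";
            }
        }

        // Run once at load so layer 1 (visible by default in the CSS) starts with image1_active.png.
        window.onload = function () { showlayer(1); };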

    Read the article

  • IIS7 Failure after installing Advanced Logging

    - by Guy Harwood
    I came across a nasty issue when I installed the Advanced Logging feature for IIS7 via the Web Platform Installer on my Windows 2008 Server. Basically, after installation and reboot none of my sites were working and returned 503 – Internal Server Error. Snooping around in the Event Viewer I found the following error reported by the W3SVC:

        The Module DLL C:\Program Files\IIS\Advanced Logging\AdvancedLoggingModule.dll failed to load. The data is the error.

    Even though the DLLs are there, it is not picking them up. I managed to find a fix via google that involves editing the applicationHost.config file in the C:\Windows\System32\inetsrv\config\ directory.

    1. Copy AdvancedLoggingModule.dll and ClientLoggingHandler.dll to %windir%\system32 (C:\windows\system32 on a default setup).

    2. Locate the file C:\Windows\System32\inetsrv\config\applicationHost.config and make a backup, then open it in a text editor (I recommend Notepad++).

    3. Search for the following 2 lines (mine are located on line 570)...

        <add name="ClientLoggingHandler" image="%ProgramFiles%\IIS\Advanced Logging\ClientLoggingHandler.dll" />
        <add name="AdvancedLoggingModule" image="%ProgramFiles%\IIS\Advanced Logging\AdvancedLoggingModule.dll" />

    ...and alter them to:

        <add name="ClientLoggingHandler" image="%windir%\system32\ClientLoggingHandler.dll" />
        <add name="AdvancedLoggingModule" image="%windir%\system32\AdvancedLoggingModule.dll" />

    4. Open a command prompt and run iisReset.

    5. All sites should now be working.

    Read the article

  • Add Your Gmail Account to Outlook 2010 Using IMAP

    - by Mysticgeek
    If you're upgrading from Outlook 2003 to 2010, you might want to use IMAP with your Gmail account to synchronize mail across multiple machines. Using our guide, you will be able to start using it in no time.

    Enable IMAP in Gmail

    First log into your Gmail account and open the Settings panel. Click on the Forwarding and POP/IMAP tab, verify IMAP is enabled, and save changes. Next open Outlook 2010, click on the File tab to access the Backstage view. Click on Account Settings and Add and remove accounts or change existing connection settings. In the Account Settings window click on the New button. Enter in your name, email address, and password twice then click Next. Outlook will configure the email server settings; the amount of time it takes will vary. Provided everything goes correctly, the configuration will be successful and you can begin using your account.

    Manually Configure IMAP Settings

    If the above instructions don't work, then we'll need to manually configure the settings. Again, go into Auto Account Setup and select Manually configure server settings or additional server types and click Next. Select Internet E-mail – Connect to POP or IMAP server to send and receive e-mail messages. Now we need to manually enter in our settings similar to the following. Under the Server Information section verify the following:

    - Account Type: IMAP
    - Incoming mail server: imap.gmail.com
    - Outgoing mail server (SMTP): smtp.gmail.com

    Note: If you have a Google Apps account make sure to put the full email address ([email protected]) in the Your Name and User Name fields. Note: If you live outside of the US you might need to use imap.googlemail.com and smtp.googlemail.com.

    Next, we need to click on the More Settings button. In the Internet E-mail Settings screen that pops up, click on the Outgoing Server tab, and check the box next to My outgoing server (SMTP) requires authentication. Also select the radio button next to Use same settings as my incoming mail server. In the same window click on the Advanced tab and verify the following:

    - Incoming server: 993
    - Incoming server encrypted connection: SSL
    - Outgoing server encrypted connection: TLS
    - Outgoing server: 587

    Note: You will need to change the Outgoing server encrypted connection first, otherwise it will default back to port 25. Also, if TLS doesn't work, we were able to successfully use Auto. Click OK when finished. Now we want to test the settings before continuing on; it's just easier that way in case something was entered incorrectly. To make sure the settings are tested, check the box Test Account Settings by clicking the Next button. If you've entered everything in correctly, both tasks will be completed successfully and you can close out of the window. You'll get a final congratulations message you can close out of, and you can begin using your account via Outlook 2010.

    Conclusion

    Using IMAP allows you to synchronize email across multiple machines and devices. The IMAP feature in Gmail is free to use, and this should get you started using it with Outlook 2010. If you're still using 2007 or just upgraded to it, check out our guide on how to use Gmail IMAP in Outlook 2007.

    Read the article

  • CRM 2011 - Workflows Vs JavaScripts

    - by Kanini
    In the Contact entity, I have the following attributes:

    - Preferred email - A read only field of type Email
    - Personal email 1 - An email field
    - Personal email 2 - An email field
    - Work email 1 - An email field
    - Work email 2 - An email field
    - School email - An email field
    - Other email - An email field
    - Preferred email option - An option set with the following values: {Personal email 1, Personal email 2, Work email 1, Work email 2, School email and Other email}

    None of the above mentioned fields are required.

    Requirement: When the user picks a value from Preferred email option, we copy the email address available in that field and apply the same in the Preferred email field.

    Implementation: The Solution Architect suggested that we implement the above requirement as a Workflow. The reason he provided was that most of the time these values are to be populated by an external website and the data is then fed into the CRM 2011 system. So, when they update Preferred email option via a Web Service call to CRM, the WF will run and update the Preferred email field.

    My argument / solution:

    - What will happen if I do not pick a value from the Preferred email Option Set? Do I set it to any of the email addresses that has a value in it? If so, what if more than one of the email address fields is populated, i.e., what if Personal email 1 and Work email 1 are populated but no value is picked in the Option Set?
    - What if a value existed in the Preferred email Option Set and I then change it to NULL?
    - Should the field Preferred email (where the text value of the email address is stored) be set to Read Only? If not, what if I have picked Personal email 1 in the Option Set and then edit the Preferred email address text field with a completely new email address? If yes, then we are enforcing that the preferred email should be one among Personal email 1, Personal email 2, Work email 1, Work email 2, School email or Other email [My preference would be this].
    - What if I had a value of [email protected] in the Personal email 1 field, Personal email 2 is empty, and I choose Personal email 1 in the drop down for Preferred email (this will set the Preferred email field to [email protected]), and later I change the value to Personal email 2 in the Preferred email? It overwrites a valid email address with nothing. I agree that it would be highly unlikely that a user will pick Preferred email as Personal email 2 and not have a value in it, but nevertheless it is a possible scenario, isn't it?
    - What if a user typed a value in Personal email 1 but by mistake picked Personal email 2 in the option set, and the Personal email 2 field had no value in it?

    Solution:

    1. The field Preferred email option should be a required field.
    2. A JS function should run whenever Preferred email option is changed. That JS function should set the relevant email field as required (based on the option chosen) and another JS function should be called (see step 3).
    3. A JS function should update the value of Preferred email with the value in the email field picked in the option set. The JS function should also be run every time someone updates the actual email field which is chosen in the option set.
    4. The guys who are managing the external website should update the Preferred email field - surely, if they can update Preferred email option via a Web Service call, it is easy enough to update the Preferred email, right?

    Question: Which is a better method? Should it be written as JS or a Workflow?
    Also, whose responsibility is it to update the Preferred email field when the data flows from an external website? I am new to CRM 2011 but have around 6 years of experience as a CRM consultant (with other products). I do not come from a development background as I started off as an Application Support Engineer, but I have picked up development in the last couple of years.
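
    For illustration, a minimal sketch of the onChange side of the proposed JavaScript, using the CRM 2011 Xrm.Page client API (all schema names and option values here are invented; the real ones would come from the Contact customizations):

        // Maps each option-set value to the email attribute it points at (values are assumptions).
        var PREFERRED_EMAIL_MAP = {
            100000000: "new_personalemail1",
            100000001: "new_personalemail2",
            100000002: "new_workemail1",
            100000003: "new_workemail2",
            100000004: "new_schoolemail",
            100000005: "new_otheremail"
        };

        // Wire this to the onChange event of the Preferred email option set.
        function preferredEmailOptionOnChange() {
            var option = Xrm.Page.getAttribute("new_preferredemailoption").getValue();
            var sourceField = PREFERRED_EMAIL_MAP[option];
            if (sourceField) {
                var email = Xrm.Page.getAttribute(sourceField).getValue();
                Xrm.Page.getAttribute("new_preferredemail").setValue(email);
                Xrm.Page.getAttribute(sourceField).setRequiredLevel("required");
            }
        }

    The same handler could also be registered on the onChange of each individual email field, so edits to the source address keep the Preferred email value in sync.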

    Read the article

  • Surface Review from Canadian Guy Who Didn't Go To Build

    - by D'Arcy Lussier
    I didn't go to Build last week, opted to stay home and go trick-or-treating with my daughters instead. I had many friends that did go however, and I was able to catch up with James Chambers last night to hear about the conference and play with his Surface RT and Nokia 920 WP8 devices. I've been using Windows 8 for a while now, so I'm not going to comment on OS features – lots of posts out there on that already. Let me instead comment on the hardware itself.

    Size and Weight

    The size of the tablet was awesome. The Windows 8 tablet I'm using to reference this against is the one from Build 2011 (Samsung model) we received, as well as my iPad. The Surface RT was taller and slightly heavier than the iPad, but smaller and lighter than the Samsung Win 8 tablet. I still don't prefer the default wide-screen format, but the Surface RT is much more usable even when holding it by the long edge than the Samsung.

    Build Quality

    No issues with the build quality, it seemed very solid. But…y'know, people have been going on about how the Surface RT materials are so much better than the plastic feeling models Samsung and others put out. I didn't really notice *that* much difference in that regard with the Surface RT. Interesting feature I didn't expect – the Windows button on the device is touch-sensitive, not a mechanical one. I didn't try video or anything, so I can't comment on the media experience. The kickstand is a great feature, and the way the Surface RT connects to the combo case/keyboard touchcover is very slick while being incredibly simple.

    What About That Touch Cover Keyboard?

    So first, kudos to Microsoft on the touch cover! This thing was insanely responsive (including the trackpad) and really delivered on the thinness I was expecting. With that said, and remember this is with very limited use, I would probably go with the Type Cover instead of the Touch Cover. The difference is buttons. The Touch Cover doesn't actually have "buttons" on the keyboard – hence why it's a "touch" cover. You tap on a key to type it. James tells me after a while you get used to it and you can type very fast. For me, I just prefer the tactile feeling of a button being pressed/depressed. But still – typing on the touch case worked very well.

    Would I Buy One?

    So after playing with it, did I cry out in envy and rage that I wasn't able to get one of these machines? Did I curse my decision to collect Halloween candy with my kids instead of being at Build getting hardware? Well – no. Even with the keyboard, the Surface RT is not a business laptop replacement device. While Office does come included, you can't install any other applications outside of Windows Store Apps. This might be limiting depending on what other applications you need to have available on your computer. Surface RT is a great personal computing device, as long as you're not already invested in a competing ecosystem. I've heard people make statements that they're going to replace all the iPads in their homes with Surface tablets. In my home, that's not feasible – my wife and daughters have amassed quite a collection of games via iTunes. We also buy all our music via iTunes, so even with the XBox streaming music service now available we're still tied quite tightly to iTunes. So who is the Surface RT for? In my mind, if you're looking for a solid, compact device that provides basic business functionality (read: email), or if you have someone that needs a very simple to use computer for email, web browsing, etc., then Surface RT is a great option.
For me, I’m waiting on the Samsung Ativ Smart PC Pro and am curious to see what changes the Surface Pro will come with.

    Read the article

  • Logging connection strings

    If you use some of the dynamic features of SSIS, such as package configurations or property expressions, then sometimes trying to work out where your connections are pointing can be a bit confusing. You will work it out in the end, but it can be useful to explicitly log this information so that when things go wrong you can just review the logs. You may wish to develop this idea further and encapsulate such logging into a custom task, but for now let's keep it simple and use the Script Task. The Script Task code below will raise an Information event showing the name and connection string for a connection.

        Imports System
        Imports Microsoft.SqlServer.Dts.Runtime

        Public Class ScriptMain

            Public Sub Main()
                Dim fireAgain As Boolean

                ' Get the connection string, we need to know the name of the connection
                Dim connectionName As String = "My OLE-DB Connection"
                Dim connectionString As String = Dts.Connections(connectionName).ConnectionString

                ' Format the message and log it via an information event
                Dim message As String = String.Format("Connection ""{0}"" has a connection string of ""{1}"".", _
                    connectionName, connectionString)
                Dts.Events.FireInformation(0, "Information", message, Nothing, 0, fireAgain)

                Dts.TaskResult = Dts.Results.Success
            End Sub

        End Class

    Building on that example, it is probably more flexible to log all connections in a package, as shown in the next example.

        Imports System
        Imports Microsoft.SqlServer.Dts.Runtime

        Public Class ScriptMain

            Public Sub Main()
                Dim fireAgain As Boolean

                ' Loop through all connections in the package
                For Each connection As ConnectionManager In Dts.Connections
                    ' Get the connection string and log it via an information event
                    Dim message As String = String.Format("Connection ""{0}"" has a connection string of ""{1}"".", _
                        connection.Name, connection.ConnectionString)
                    Dts.Events.FireInformation(0, "Information", message, Nothing, 0, fireAgain)
                Next

                Dts.TaskResult = Dts.Results.Success
            End Sub

        End Class

    Using the Information event makes the output readily available in the designer, for example the Visual Studio Output window (Ctrl+Alt+O) or the package designer Execution Results tab, and also allows you to readily control the logging by choosing which events to log in the normal way. Now before somebody starts commenting that this is a security risk, I would like to highlight good practice for building connection managers. Firstly, the Password property, or any other similar sensitive property, is always defined as write-only, and secondly, the connection string property only uses the public properties to assemble the connection string value when requested. In other words the connection string will never contain the password. I have seen a couple of cases where this is not true, but that was just bad development by third-parties; you won't find anything like that in the box from Microsoft.

    Whilst writing this code it made me wish that there was a custom log entry that you could just turn on that did this for you, but alas connection managers do not even seem to support custom events. It did however remind me of a very useful event that is often overlooked and fits rather well alongside connection string logging: the Execute SQL Task's custom ExecuteSQLExecutingQuery event. To quote the help reference:

        Custom Messages for Logging - Provides information about the execution phases of the SQL statement. Log entries are written when the task acquires connection to the database, when the task starts to prepare the SQL statement, and after the execution of the SQL statement is completed.
    The log entry for the prepare phase includes the SQL statement that the task uses. It is the last part that is so useful: how often have you used an expression to derive a SQL statement and you want to log that to make sure the correct SQL is being returned? You need to turn it on; by default no custom log events are captured, but I'll refer you to a walkthrough on setting up the logging for ExecuteSQLExecutingQuery by Jamie.

    Read the article

  • How To Clear An Alert - Part 2

    - by werner.de.gruyter
    There were some interesting comments and remarks on the original posting, so I decided to do a follow-up and address some of the issues that got raised...

    Handling Metric Errors

    First of all, there is a significant difference between an 'error' and an 'alert'. An 'alert' is the violation of a condition (a threshold) specified for a given metric. That means that the Agent is collecting and gathering the data for the metric, but there is a situation that requires the attention of an administrator. An 'error', on the other hand, is a failure to collect metric data: the Agent is throwing the error because it cannot determine the value for the metric. Whereas the 'alert' guarantees continuity of the metric data, an 'error' signals a big unknown. And the unknown aspect of all this is what makes an error a lot more serious than a regular alert: if you don't know what the current state of affairs is, there could be some serious issues brewing that nobody is aware of...

    The life-cycle of a Metric Error

    Clearing a metric error is pretty much the same workflow as a metric 'alert':

    - The Agent signals the error after it failed to execute the metric
    - The error is uploaded to the OMS/repository, where it becomes visible in the Console
    - The error will remain active until the Agent is able to execute the metric successfully. Even though the metric is still getting scheduled and executed on a regular basis, the error will remain outstanding as long as the Agent is not capable of executing the metric correctly

    Knowing this, the way to fix the metric error should be obvious: take the 'problem' away, and as soon as the metric is executed again (based on the frequency of the metric), the error will go away. The same tricks used to clear alerts can be used here too:

    - Wait for the next scheduled execution. For those metrics that are executed regularly (like every 15 minutes or so), it's just a matter of waiting those minutes to see the updates.
    - The 'Reevaluate Alert' button can be used to force a re-execution of the metric. In case a metric is executed once a day, this will be a better way to make sure that the underlying problem has been solved. And if it has been, the metric error will be removed, and the regular data points will be uploaded to the repository.
    - And just in case you have to 'force' the issue a little: if you disable and re-enable a metric, it will get re-scheduled. And that means a new metric execution, and an update of the (hopefully) fixed problem.

    Database server-generated alerts and problem checkers

    There are various ways the Agent can collect metric data: via a script or a SQL statement, reading a log file, getting a value from an SNMP OID, listening for SNMP traps, or via the DBMS_SERVER_ALERTS mechanism of an Oracle database. For those alerts which are generated by the database (like tablespace metrics for 10g and above databases), the Agent just 'waits' for the database to report any new findings. If the Agent has lost the current state of the server-side metrics (due to an incomplete recovery after a disaster, or after an improper use of the 'emctl clearstate' command), the Agent might still be aware of an alert that the database no longer has (or vice versa). The same goes for 'problem checker' alerts: those metrics that only report data if there is a problem (like the 'invalid objects' metric) will also have a problem if the Agent state has been tampered with (again, incomplete recovery and improper use of 'emctl clearstate' are the two main causes for this).
    The best way to deal with these kinds of mismatches is to simply disable and re-enable the metric again: the disabling will clear the state of the metric, and the re-enabling will force a re-execution of the metric, so the new and updated results can get uploaded to the repository. Starting with 10gR5, the Agent performs additional checks and verifications after each restart of the Agent and/or each state change of the database (shutdown/startup or failover in case of DataGuard) to catch these kinds of mismatches.

    Read the article

  • Java Resources for Windows Azure

    - by BuckWoody
    Windows Azure is a Platform as a Service – a PaaS – that runs code you write. That code doesn't just mean the languages on the .NET platform – you can run code from multiple languages, including Java. In fact, you can develop for Windows and SQL Azure using not only Visual Studio but the Eclipse Integrated Development Environment (IDE) as well. Although not an exhaustive list, here are several links that deal with Java and Windows Azure:

    - Windows Azure Java Development Center: http://www.windowsazure.com/en-us/develop/java/
    - Java Development Guidance: http://msdn.microsoft.com/en-us/library/hh690943(VS.103).aspx
    - Running a Java Environment on Windows Azure: http://blogs.technet.com/b/port25/archive/2010/10/28/running-a-java-environment-on-windows-azure.aspx
    - Run Java with Jetty in Windows Azure: http://blogs.msdn.com/b/dachou/archive/2010/03/21/run-java-with-jetty-in-windows-azure.aspx
    - Using the plugin for Eclipse: http://blogs.msdn.com/b/craig/archive/2011/03/22/new-plugin-for-eclipse-to-get-java-developers-off-the-ground-with-windows-azure.aspx
    - Run Java with GlassFish in Windows Azure: http://blogs.msdn.com/b/dachou/archive/2011/01/17/run-java-with-glassfish-in-windows-azure.aspx
    - Improving experience for Java developers with Windows Azure: http://blogs.msdn.com/b/interoperability/archive/2011/02/23/improving-experience-for-java-developers-with-windows-azure.aspx
    - Java Access to SQL Azure via the JDBC Driver for SQL Server: http://blogs.msdn.com/b/brian_swan/archive/2011/03/29/java-access-to-sql-azure-via-the-jdbc-driver-for-sql-server.aspx
    - How to Get Started with Java, Tomcat on Windows Azure: http://blogs.msdn.com/b/usisvde/archive/2011/03/04/how-to-get-started-with-java-tomcat-on-windows-azure.aspx
    - Deploying Java Applications in Azure: http://blogs.msdn.com/b/mariok/archive/2011/01/05/deploying-java-applications-in-azure.aspx
    - Using the Windows Azure Storage Explorer in Eclipse: http://blogs.msdn.com/b/brian_swan/archive/2011/01/11/using-the-windows-azure-storage-explorer-in-eclipse.aspx
    - Windows Azure Tomcat Solution Accelerator: http://archive.msdn.microsoft.com/winazuretomcat
    - Deploying a Java application to Windows Azure with Command-line Ant: http://java.interoperabilitybridges.com/articles/deploying-a-java-application-to-windows-azure-with-command-line-ant
    - Video: Open in the Cloud: Windows Azure and Java: http://channel9.msdn.com/Events/PDC/PDC10/CS10
    - AzureRunMe: http://azurerunme.codeplex.com/
    - Windows Azure SDK for Java: http://www.interoperabilitybridges.com/projects/windows-azure-sdk-for-java
    - AppFabric SDK for Java: http://www.interoperabilitybridges.com/projects/azure-java-sdk-for-net-services
    - Information Cards for Java: http://www.interoperabilitybridges.com/projects/information-card-for-java
    - Apache Stonehenge: http://www.interoperabilitybridges.com/projects/apache-stonehenge
    - Channel 9 Case Study on Java and Windows Azure: http://www.microsoft.com/casestudies/Windows-Azure/Gigaspaces/Solution-Provider-Streamlines-Java-Application-Deployment-in-the-Cloud/400000000081

    Read the article
