Search Results

Search found 31372 results on 1255 pages for 'google charts api'.

Page 222/1255 | < Previous Page | 218 219 220 221 222 223 224 225 226 227 228 229  | Next Page >

  • How can I create an SPF record on my 1and1.com hosted domain?

    - by tnorthcutt
    Emails from my domain (hosted at 1and1, and using Google Apps Premier edition) have sporadically been going to recipients' spam folders lately. I did some research, tested, and found out that I do not have an SPF record for my domain. According to this Google Support page, I need to create one. Following the steps on that page is easy until I get to step 3: "Create a TXT record containing this text: v=spf1 include:_spf.google.com ~all". I see no way to create a TXT record. Here is a screenshot of the admin panel:
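    For reference, a minimal sketch of what the finished record looks like in a generic BIND-style zone file (example.com stands in for the real domain; 1and1's DNS panel may label the fields differently, typically as host/name, record type, and value):

        example.com.   3600   IN   TXT   "v=spf1 include:_spf.google.com ~all"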

    Read the article

  • Buying a custom domain for blogger

    - by John Demetriou
    I am about to move my Blogger site to a custom domain. I follow all the steps as described, but whenever I find the perfect custom domain (one that is free) I get redirected to Google Apps for Business... Is it necessary to get Google Apps for Business before buying a custom domain? If I only start a free trial of Google Apps for Business, will my custom domain still be valid when the trial period expires?

    Read the article

  • Methodology for understanding jQuery plugins & APIs developed by third parties

    - by Taoist
    I have a question about third-party jQuery plugins and APIs, and the methodology for understanding them. Recently I downloaded the jQuery Masonry/Infinite Scroll plugin and I couldn't figure out how to configure it based on the instructions. So I downloaded a fully developed demo, then manually deleted everything that wouldn't break the functionality. The code that was left allowed me to understand the plugin in much greater detail than the documentation. I'm now having a similar issue with a plugin called jQuery Knob: http://anthonyterrien.com/knob/ The jQuery Knob readme file says this is working code:

        $(function() {
            $('.dial').trigger( 'configure', {
                "min": 10,
                "max": 40,
                "fgColor": "#FF0000",
                "skin": "tron",
                "cursor": true
            });
        });

    But as far as I can tell it isn't at all. The readme also says the plugin uses Canvas. I am wondering if I am supposed to wrap this code in a canvas context or if this functionality is already part of the plugin. I know this kind of "question" might not fit in here, but I'm a bit confused about the assumptions behind reading this kind of documentation and thought I would post the query regardless. Curious to see if this is due to my "newbie" programming experience or if this is something seasoned coders also fight with. Thank you.

    Edit: In response to Tyanna's reply, I modified the code and it still doesn't work. I posted it below. I made sure to check the Google console to ensure the basics were taken care of, such as not getting a read error on the library.

        <!DOCTYPE html>
        <meta charset="UTF-8">
        <title>knob</title>
        <link rel="stylesheet" href="http://ajax.googleapis.com/ajax/libs/jqueryui/1.7.2/themes/hot-sneaks/jquery-ui.css" type="text/css" />
        <script type="text/javascript" src="https://ajax.googleapis.com/ajax/libs/jquery/1.7.2/jquery.js" charset="utf-8"></script>
        <script src="https://ajax.googleapis.com/ajax/libs/jqueryui/1.8.21/jquery-ui.min.js"></script>
        <script src="js/jquery.knob.js"></script>
        <div id="button1">test</div>
        <script>
        $(function() {
            $("#button1").click(function () {
                $('.dial').trigger( 'configure', {
                    "min": 10,
                    "max": 40,
                    "fgColor": "#FF0000",
                    "skin": "tron",
                    "cursor": true
                });
            });
        });
        </script>
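    One thing worth checking, offered as a hedged sketch rather than a confirmed fix (it assumes jquery.knob.js really does load from js/ as in the snippet above; the input element and the values used here are only illustrative): jQuery Knob builds its canvas when .knob() is first called on an input with the target class, and the 'configure' trigger is handled by a dial that has already been initialized, so an initial .knob() call is needed before 'configure' can have any effect:

        <input type="text" class="dial" value="25">
        <script>
        $(function () {
            // initialize the plugin first; this call replaces the input with a canvas-based dial
            $('.dial').knob({ "min": 0, "max": 100 });

            // later, reconfigure the already-initialized dial
            $('#button1').click(function () {
                $('.dial').trigger('configure', { "min": 10, "max": 40, "fgColor": "#FF0000" });
            });
        });
        </script>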

    Read the article

  • How to find first cached date and time of a website?

    - by John Sanjay
    I found that some of my webpage content has been copied to another website. I need to check the cache date and time of both pages so that I can compare them and work out which page is the original and which one is the duplicate according to Google. I know that if Google finds two webpages with the same content, it considers the page first cached by Googlebot to be the unique one. Is there any online tool for checking that, or any other way?
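    One simple check, assuming Google's cache view still shows its "This is Google's cache of ... as it appeared on <date>" banner (the URL below uses a placeholder page): request the cached copy of each page and compare the snapshot timestamps in the banners:

        http://webcache.googleusercontent.com/search?q=cache:example.com/your-page.html

    The same cached copy can also be reached by searching Google for cache:example.com/your-page.html.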

    Read the article

  • Poor backlink profile - search rankings not updated for 2+ months

    - by fistameeny
    I am carrying out some work on a website that is a PR2 with a few good-quality, relevant backlinks (PR4-6). It has a presence on Twitter that is updated regularly, a Google Places listing, and listings on some decent directories (Qype etc.). The site was rebuilt in Drupal 7 two months ago, with all the basics done - URL rewriting, an XML sitemap submitted to Google, and most importantly, good-quality, structured content. I've noticed that Google is still showing "old" URLs from the previous version of the site that was ditched 8 weeks ago. I think the site may be penalised under the Penguin update, as a previous SEO company created many low-quality links from link farms/directories. My question is: what is the correct way to deal with this? Bing Webmaster Tools can "disavow" links, and I guess I can attempt to contact the link farms to have them removed. I've already submitted a request to Google asking that the penalty be removed, as we're trying to tidy up a bad history. We submit updated sitemaps to Google and Bing daily, and have built some further decent-quality, relevant links. Is there anything further I can do?

    Read the article

  • Reassessment: What's a good analytics package to use for tracking user behavior in a native iOS app?

    - by BeachRunnerJoe
    Hello. I've been poking around Google and SO for answers on this, but it doesn't seem to be very well discussed, so I thought I'd revisit the question. Is anyone using any analytics packages (like Google Analytics or Mixpanel) to track user behavior in their native iOS apps? The three I've come across are Flurry, Mixpanel, and Google Analytics. It sounds like Apple is still peeved at Flurry, so I don't want to mess with that. Mixpanel looks simple and easy to use, but I'd first like to hear from someone who has used it. The same goes for Google Analytics for the iPhone. I've just finished building an iPhone game and I'd like to begin tweaking it based on how users are playing it. Does anyone have any recommendations or experience with any of these analytics packages? Thanks so much!

    Read the article

  • The Power of Specialization – google ads for SOA & BPM Specialized Partners

    - by JuergenKress
    For SOA & BPM specialized partners we offer free Google advertising to promote your Oracle SOA & BPM service offerings on your website or at your SOA & BPM events. We will host the complete campaign management. To create your Google campaign, please send us the following:

    Your campaign text: 3 lines of text, 35 characters each (NOT more!)
    Your campaign link: a direct link to the website you want to promote

    Ideas for the website we will promote with the Google ads: your Oracle SOA service offerings with a concrete offering (e.g. SOA Discovery Workshop), your Oracle SOA Specialized logo, your Oracle SOA references, your SOA implementation consultants with pictures, your SOA sales contact persons.

    Example of an SOA Specialization text ad:

        Oracle SOA Specialized
        plan to become more agile?
        eProseed the Oracle SOA Experts

    An interview with Griffiths Waite's Business Development Director highlighting the benefits of the Oracle Specialization Programme.

    SOA & BPM Partner Community: for regular information on Oracle SOA Suite, become a member of the SOA & BPM Partner Community. For registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Mix Forum

    Technorati Tags: Specialization, Benefits Specialization, marketing, SOA Community, Oracle SOA, Oracle BPM, Community, OPN, Jürgen Kress

    Read the article

  • Google Apps Marketplace: the most downloaded applications are increasingly complex: project management, CRM, ERP, finance

    Google Apps Marketplace: the most downloaded applications are increasingly complex: project management, CRM, ERP, finance. Google's latest Marketplace, the gallery of professional applications that complement Google Apps (Google Docs, Google Sites, Blogger, Gmail, Calendar, etc.), is a success. That, at least, is the view of Google, which today announced the first significant results since the official launch of the gallery. ...

    Read the article

  • Website with over 1 million posts with not much textual content

    - by Far Se
    I've made a website which crawls files from all over the Internet, and I feel like Google will ban me if I send it sitemaps containing all of these pages (1m+), because the pages contain only the file name, size, number of downloads and the download link(s). I'm worried because I've made another website like this in the past and Google banned me after one week with the reason "spam", even though it was not (maybe somebody falsely reported me?!). Does someone have an idea of how to keep Google from banning my website? I've seen several other sites like mine and they don't get banned or... anything. Also, should I send the sitemap, or wait until Google indexes every page as it finds it? Thanks in advance :)
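    For scale reference, a sketch with placeholder URLs rather than anything taken from the site in question: the sitemaps.org protocol caps each sitemap file at 50,000 URLs, so a site with 1m+ pages would submit a sitemap index pointing at multiple smaller files, whichever way the indexing question is decided:

        <?xml version="1.0" encoding="UTF-8"?>
        <sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
          <sitemap><loc>http://example.com/sitemap-files-001.xml</loc></sitemap>
          <sitemap><loc>http://example.com/sitemap-files-002.xml</loc></sitemap>
          <!-- ...one entry per 50,000-URL chunk... -->
        </sitemapindex>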

    Read the article

  • How to search the web for programming related solutions?

    - by Bob
    I have the impression that Google has become unusable when searching for programming-related questions. Example: I'm googling for "XML-RPC Redstone Cookie" and expecting results that contain all three terms; I don't care for results where one term is missing. Until a few months ago Google just worked this way, i.e. all terms were included. Somehow this behaviour is gone now (Google apparently thinks it is more intelligent than the user and knows what the user is searching for). So I helped myself by putting a + in front of every word. This is, however, a bit cumbersome. And for the last few weeks it doesn't even work in all cases anymore; Google ignores the +. So how do you search for programming-related problems? Do you still use Google? If yes, which techniques do you use to get the right results? Or do you use another search engine? Which one?

    Read the article

  • Web Speech API reaches a new milestone: the JavaScript specification will allow speech recognition to be integrated into web pages

    The Web Speech API specification reaches a new milestone: the JavaScript API makes it possible to integrate speech recognition into web pages. The Web Speech API specification has just passed an important stage in its standardisation. The W3C's Web Speech API working group recently published the future standard, with a call to members for agreement on the final specification. This specification describes a JavaScript API that will allow developers to integrate speech recognition into web pages. With this API, developers will be able to use scripts to generate text from speech, use speech recognition as input for...
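    For a feel of the API's shape, a minimal sketch based on the draft specification (browser support and the webkit prefix are assumptions; at the time of writing only prefixed implementations exist):

        <script>
        // use the prefixed constructor where the unprefixed one is not available
        var SpeechRecognition = window.SpeechRecognition || window.webkitSpeechRecognition;
        var recognition = new SpeechRecognition();
        recognition.lang = 'en-US';
        recognition.onresult = function (event) {
            // log the transcript of the first recognized result
            console.log(event.results[0][0].transcript);
        };
        recognition.start();
        </script>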

    Read the article

  • Changing hosts but keeping old emails [closed]

    - by LDaniel
    Possible Duplicate: Change host / keep emails

    OK, here's the situation... I'm trying to transfer my domain and email address to a new hosting service, and I would like to start using Google's domain apps for my email, etc. My email address is currently on a webmail-type platform, and when I move my domain and start using Google domain apps I would also like to keep the old emails and have them imported to the email address on Google. Note: the email address will be the same on both hosts, for example [email protected]. So it's keeping the same address between two different systems. I've been googling how to do this for a while, and all the migration options that come up don't appear in the Google inbox settings or domain apps config settings. Any help is greatly appreciated.

    Read the article

  • Googling query from anywhere

    - by Shagun
    Sometimes when I am trying out stuff in my terminal, I hit some blockade or error, and to resolve it I have to copy the error or the last message, then open my browser and Google the query. Is it possible for me to select the message with the mouse and, when I right-click, get a "Google it" option, as I do when I select text in a browser? PS: I am not asking how to do a Google search via the terminal or how to browse the web via the terminal. Those were the results I got when I googled my question. What I want is something more general: whenever I select a piece of text I should get a "Google me" option on the right click of my mouse (or maybe on pressing a key or something). I would prefer that the query takes place in a browser and not in a terminal.

    Read the article

  • Using oauth2_access_token to get connections in linkedIn

    - by Pedro
    I'm trying to get connections from LinkedIn using their API, but when I try to retrieve them I get a 401 Unauthorized error. The official documentation says: "You must use an access token to make an authenticated call on behalf of a user... You can now use this access_token to make API calls on behalf of this user by appending 'oauth2_access_token=access_token' at the end of the API call that you wish to make." The API call that I'm trying to make is the following:

        Error -- http://api.linkedin.com/v1/people/~/connections:(id,headline,first-name,last-name)?format=json&oauth2_access_token=access_token

    I have tried the following endpoint without any problems:

        OK -- https://api.linkedin.com/v1/people/~:(id,first-name,last-name,formatted-name,date-of-birth,industry,email-address,location,headline,picture-urls::(original))?format=json&oauth2_access_token=access_token

    The list of endpoints for the Connections API is described here: http://developer.linkedin.com/documents/connections-api I just copied and pasted one endpoint from there, so the question is: what's the problem with the endpoint for getting the connections? What am I missing?

    EDIT: for the preAuth URL I'm using https://www.linkedin.com/uas/oauth2/authorization?response_type=code&client_id=ConsumerKey&scope=r_fullprofile%20r_emailaddress%20r_network&state&state=NewGuid&redirect_uri=Encoded_Url and https://www.linkedin.com/uas/oauth2/accessToken?grant_type=authorization_code&code=QueryString_Code&redirect_uri=EncodedCallback&client_id=ConsummerKey&client_secret=ConsumerSecret. Please find attached the login screen requesting the permissions.

    EDIT2: Switched to https and it worked like a charm!
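    Putting the two edits together, the working request appears to be the same Connections call made over https rather than http (access_token remains a placeholder for the real token):

        https://api.linkedin.com/v1/people/~/connections:(id,headline,first-name,last-name)?format=json&oauth2_access_token=access_token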

    Read the article

  • How do I check whether the user has a Google account?

    - by skrat
    Is there any safe way to detect, on a web page, client-side (JS), whether the user has a Google/Yahoo/Live/? account? I know about some suspicious ways to do this by styling visited links and then sniffing the computed style attribute, but it's more of a hack; Mozilla and maybe others are planning to crack down on this, as it might be abused. But I need this to allow users more integration with their identity providers, like: have a Google account? ~ load contacts for sharing from the Google Contacts API; have a Yahoo account? ~ load contacts for sharing from the Yahoo Contacts API; none of the above? ~ show no link. I don't want to provide all these options to all visitors; it would be nice if I could detect the account and provide integration only in that case.

    Read the article

  • How do I get a delete trigger working using fluent API in CTP5?

    - by user668472
    I am having trouble getting referential integrity dialled down enough to allow my delete trigger to fire. I have a dependent entity with three FKs. I want it to be deleted when any of the principal entities is deleted. For the principal entities Role and OrgUnit (see below) I can rely on conventions to create the required one-many relationship, and cascade delete does what I want, i.e. the Association is removed when either principal is deleted. For Member, however, I have multiple cascade delete paths (not shown here) which SQL Server doesn't like, so I need to use the fluent API to disable cascade deletes. Here is my (simplified) model:

        public class Association
        {
            public int id { get; set; }
            public int roleid { get; set; }
            public virtual Role role { get; set; }
            public int? memberid { get; set; }
            public virtual Member member { get; set; }
            public int orgunitid { get; set; }
            public virtual OrgUnit orgunit { get; set; }
        }

        public class Role
        {
            public int id { get; set; }
            public virtual ICollection<Association> associations { get; set; }
        }

        public class Member
        {
            public int id { get; set; }
            public virtual ICollection<Association> associations { get; set; }
        }

        public class Organization
        {
            public int id { get; set; }
            public virtual ICollection<Association> associations { get; set; }
        }

    My first run at the fluent API code looks like this:

        protected override void OnModelCreating(System.Data.Entity.ModelConfiguration.ModelBuilder modelBuilder)
        {
            DbDatabase.SetInitializer<ConfDB_Model>(new ConfDBInitializer());
            modelBuilder.Entity<Member>()
                .HasMany(m => m.associations)
                .WithOptional(a => a.member)
                .HasForeignKey(a => a.memberid)
                .WillCascadeOnDelete(false);
        }

    My Seed function creates the delete trigger:

        protected override void Seed(ConfDB_Model context)
        {
            context.Database.SqlCommand("CREATE TRIGGER MemberAssocTrigger ON dbo.Members FOR DELETE AS DELETE Associations FROM Associations, deleted WHERE Associations.memberId = deleted.id");
        }

    PROBLEM: When I run this and create a Role, a Member, an OrgUnit, and an Association tying the three together, all is fine. When I delete the Role, the Association gets cascade-deleted as I expect. HOWEVER, when I delete the Member I get an exception with a referential integrity error. I have tried setting ON CASCADE SET NULL because my memberid FK is nullable, but SQL complains again about multiple cascade paths, so apparently I can cascade nothing in the Member-Association relationship. To get this to work I must add the following code to Seed():

        context.Database.SqlCommand("ALTER TABLE dbo.ACLEntries DROP CONSTRAINT member_aclentries");

    As you can see, this drops the constraint created by the model builder. QUESTION: this feels like a complete hack. Is there a way, using the fluent API, for me to say that referential integrity should NOT be checked, or otherwise to get it to relax enough for the Member delete to work and allow the trigger to fire? Thanks in advance for any help you can offer. Although fluent APIs may be "fluent", I find them far from intuitive.

    Read the article

  • Javascript and Twitter API rate limitation? (Changing variable values in a loop)

    - by Pablo
    Hello, I have adapted a script from an example at http://github.com/remy/twitterlib. It's a script that queries my Twitter timeline every 10 seconds, to get only the messages that begin with a musical notation. It's already working, but I don't know if it is the best way to do this... The Twitter API has a rate limit of 150 calls per hour per IP (queries from the same user). At the moment my Twitter API access gets blocked after 25 minutes because of the 10-second frequency between queries. If I set a frequency of 25 seconds between queries, I stay below the hourly rate limit, but the first 10 posts are shown very slowly. I think this way I can guarantee staying below the Twitter API rate limit while showing the first 10 posts at normal speed: for the first 10 posts I would like a frequency of 5 seconds between queries, and for the rest of the posts a frequency of 25 seconds between queries. I think if I add somewhere in the code a loop with the previous conditions, setting the "frecuency" value from 5000 to 25000 after the 10th query (or after 50 seconds, which is the same), that's it... Can you help me modify the code below to make it work? Thank you in advance.

        var Queue = function (delay, callback) {
            var q = [],
                timer = null,
                processed = {},
                empty = null,
                ignoreRT = twitterlib.filter.format('-"RT @"');

            function process() {
                var item = null;
                if (q.length) {
                    callback(q.shift());
                } else {
                    this.stop();
                    setTimeout(empty, 5000);
                }
                return this;
            }

            return {
                push: function (item) {
                    var green = [], i;
                    if (!(item instanceof Array)) {
                        item = [item];
                    }
                    if (timer == null && q.length == 0) {
                        this.start();
                    }
                    for (i = 0; i < item.length; i++) {
                        if (!processed[item[i].id] && twitterlib.filter.match(item[i], ignoreRT)) {
                            processed[item[i].id] = true;
                            q.push(item[i]);
                        }
                    }
                    q = q.sort(function (a, b) { return a.id > b.id; });
                    return this;
                },
                start: function () {
                    if (timer == null) {
                        timer = setInterval(process, delay);
                    }
                    return this;
                },
                stop: function () {
                    clearInterval(timer);
                    timer = null;
                    return this;
                },
                empty: function (fn) {
                    empty = fn;
                    return this;
                },
                q: q,
                next: process
            };
        };

        $.extend($.expr[':'], {
            below: function (a, i, m) {
                var y = m[3];
                return $(a).offset().top > y;
            }
        });

        function renderTweet(data) {
            var html = '';
            html += '';
            html += twitterlib.ify.clean(data.text);
            html += '';
            since_id = data.id;
            return html;
        }

        function passToQueue(data) {
            if (data.length) {
                twitterQueue.push(data.reverse());
            }
        }

        var frecuency = 10000; // The lapse between each new Queue
        var since_id = 1;

        var run = function () {
            twitterlib.timeline('twitteruser', { filter: "'?'", limit: 10 }, passToQueue);
        };

        var twitterQueue = new Queue(frecuency, function (item) {
            var tweet = $(renderTweet(item));
            var tweetClone = tweet.clone().hide().css({ visibility: 'hidden' }).prependTo('#tweets').slideDown(1000);
            tweet.css({ top: -200, position: 'absolute' }).prependTo('#tweets').animate({ top: 0 }, 1000, function () {
                tweetClone.css({ visibility: 'visible' });
                $(this).remove();
            });
            $('#tweets p:below(' + window.innerHeight + ')').remove();
        }).empty(run);

        run();
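    One way to approach the "5 seconds for the first 10 queries, then 25 seconds" requirement, as a sketch only (queryCount and scheduleNextRun are hypothetical names, and run() refers to the existing function above that calls twitterlib.timeline): drive the polling with a self-rescheduling setTimeout instead of a fixed setInterval, so each tick can pick whichever delay currently applies:

        // sketch: poll fast for the first 10 queries, then slow down to stay under the rate limit
        var queryCount = 0;

        function scheduleNextRun() {
            var delay = (queryCount < 10) ? 5000 : 25000;  // 5s for the first 10 queries, 25s afterwards
            setTimeout(function () {
                queryCount++;
                run();              // existing function that queries the timeline
                scheduleNextRun();  // reschedule with whichever delay now applies
            }, delay);
        }

        scheduleNextRun();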

    Read the article

  • Creating a Build Definition using the TFS 2010 API

    - by Jakob Ehn
    In this post I will show how to create a new build definition in TFS 2010 using the TFS API. When creating a build definition manually, using Team Explorer, the necessary steps are laid out in the New Build Definition Wizard. So, let's see what the code looks like, in the same order. To start off, we need to connect to TFS and get a reference to the IBuildServer object:

        TfsTeamProjectCollection server = new TfsTeamProjectCollection(new Uri("http://<tfs>:<port>/tfs"));
        server.EnsureAuthenticated();
        IBuildServer buildServer = (IBuildServer) server.GetService(typeof (IBuildServer));

    General: First we create an IBuildDefinition object for the team project and set a name and description for it:

        var buildDefinition = buildServer.CreateBuildDefinition(teamProject);
        buildDefinition.Name = "TestBuild";
        buildDefinition.Description = "description here...";

    Trigger: Next up, we set the trigger type. For this one, we set it to Individual, which corresponds to the "Continuous Integration - Build each check-in" trigger option:

        buildDefinition.ContinuousIntegrationType = ContinuousIntegrationType.Individual;

    Workspace: For the workspace mappings, we create two mappings here, where one is a cloak. Note the use of the $(SourceDir) variable, which is expanded by Team Build into the sources directory when running the build:

        buildDefinition.Workspace.AddMapping("$/Path/project.sln", "$(SourceDir)", WorkspaceMappingType.Map);
        buildDefinition.Workspace.AddMapping("$/OtherPath/", "", WorkspaceMappingType.Cloak);

    Build Defaults: In the build defaults, we set the build controller and the drop location. To get a build controller, we can (for example) use the GetBuildController method to get an existing build controller by name:

        buildDefinition.BuildController = buildServer.GetBuildController(buildController);
        buildDefinition.DefaultDropLocation = @"\\SERVER\Drop\TestBuild";

    Process: So far, this was easy. Now we get to the tricky part. TFS 2010 Build is based on Windows Workflow 4.0. The build process is defined in a separate .XAML file called a build process template. By default, every new team project contains two build process templates, called DefaultTemplate and UpgradeTemplate. In this sample, we want to create a build definition using the default template. We use the QueryProcessTemplates method to get a reference to the default template for the current team project:

        //Get default template
        var defaultTemplate = buildServer.QueryProcessTemplates(teamProject).Where(p => p.TemplateType == ProcessTemplateType.Default).First();
        buildDefinition.Process = defaultTemplate;

    There are several build process parameters that can be set for the default build process template. Only one of these is required, the ProjectsToBuild parameter, which contains the solution(s) and configuration(s) that should be built. To set this info, we use the ProcessParameters property of the IBuildDefinition interface. The format of this property is actually just a serialized dictionary (IDictionary<string, object>) that maps a key (parameter name) to a value, which can be any kind of object. This is rather messy, but fortunately there is a helper class called WorkflowHelpers in the Microsoft.TeamFoundation.Build.Workflow namespace that simplifies working with this persistence format a bit.
    The following code shows how to set the BuildSettings information for a build definition:

        //Set process parameters
        var process = WorkflowHelpers.DeserializeProcessParameters(buildDefinition.ProcessParameters);

        //Set BuildSettings properties
        BuildSettings settings = new BuildSettings();
        settings.ProjectsToBuild = new StringList("$/pathToProject/project.sln");
        settings.PlatformConfigurations = new PlatformConfigurationList();
        settings.PlatformConfigurations.Add(new PlatformConfiguration("Any CPU", "Debug"));
        process.Add("BuildSettings", settings);
        buildDefinition.ProcessParameters = WorkflowHelpers.SerializeProcessParameters(process);

    The other build process parameters of a build definition can be set using the same approach.

    Retention Policy: This one is easy, we just clear the default settings and set our own:

        buildDefinition.RetentionPolicyList.Clear();
        buildDefinition.AddRetentionPolicy(BuildReason.Triggered, BuildStatus.Succeeded, 10, DeleteOptions.All);
        buildDefinition.AddRetentionPolicy(BuildReason.Triggered, BuildStatus.Failed, 10, DeleteOptions.All);
        buildDefinition.AddRetentionPolicy(BuildReason.Triggered, BuildStatus.Stopped, 1, DeleteOptions.All);
        buildDefinition.AddRetentionPolicy(BuildReason.Triggered, BuildStatus.PartiallySucceeded, 10, DeleteOptions.All);

    Save It! And we're done, let's save the build definition:

        buildDefinition.Save();

    That's it!

    Read the article

  • Oracle HRMS API – Create or Update Employee Phone

    - by PRajkumar
    API -- hr_phone_api.create_or_update_phone

    Example --

        DECLARE
            ln_phone_id                 PER_PHONES.PHONE_ID%TYPE;
            ln_object_version_number    PER_PHONES.OBJECT_VERSION_NUMBER%TYPE;
        BEGIN
            -- Create or Update Employee Phone Detail
            -- -----------------------------------------------------------
            hr_phone_api.create_or_update_phone
            (   -- Input data elements
                -- -----------------------------
                p_date_from                 => TO_DATE('13-JUN-2011'),
                p_phone_type                => 'W1',
                p_phone_number              => '9999999',
                p_parent_id                 => 32979,
                p_parent_table              => 'PER_ALL_PEOPLE_F',
                p_effective_date            => TO_DATE('13-JUN-2011'),
                -- Output data elements
                -- --------------------------------
                p_phone_id                  => ln_phone_id,
                p_object_version_number     => ln_object_version_number
            );
            COMMIT;
        EXCEPTION
            WHEN OTHERS THEN
                ROLLBACK;
                dbms_output.put_line(SQLERRM);
        END;
        /
        SHOW ERR;

    Read the article

  • TFS 2012 API Create Alert Subscriptions

    - by Bob Hardister
    Originally posted on: http://geekswithblogs.net/BobHardister/archive/2013/07/24/tfs-2012-api-create-alert-subscriptions.aspx

    There were only a few posts on this, and I felt like really important information was left out: what the defaults are, and how to create the filter string. Here's the code to create the subscription.

    Get the Collection:

        public TfsTeamProjectCollection GetCollection(string collectionUrl)
        {
            try
            {
                //connect to the TFS collection using the active user
                TfsTeamProjectCollection tpc = new TfsTeamProjectCollection(new Uri(collectionUrl));
                tpc.EnsureAuthenticated();
                return tpc;
            }
            catch (Exception)
            {
                return null;
            }
        }

    Use Impersonation: Because my app is used to create "support tickets" as stories in TFS, I use impersonation so the subscription is set up for the "requester." That way I can take all the defaults for the subscription delivery preferences.

        public TfsTeamProjectCollection GetCollectionImpersonation(string collectionUrl, string impersonatingUserAccount)
        {
            // see: http://blogs.msdn.com/b/taylaf/archive/2009/12/04/introducing-tfs-impersonation.aspx
            try
            {
                TfsTeamProjectCollection tpc = GetCollection(collectionUrl);
                if (!(tpc == null))
                {
                    //get the TFS identity management service (v2 is 2012 only)
                    IIdentityManagementService2 ims = tpc.GetService<IIdentityManagementService2>();

                    //look up the user we want to impersonate
                    TeamFoundationIdentity identity = ims.ReadIdentity(IdentitySearchFactor.AccountName, impersonatingUserAccount, MembershipQuery.None, ReadIdentityOptions.None);

                    //create a new connection using the impersonated user account
                    //note: do not ensure authentication because the impersonated user may not have
                    //windows authentication at execution
                    if (!(identity == null))
                    {
                        TfsTeamProjectCollection itpc = new TfsTeamProjectCollection(tpc.Uri, identity.Descriptor);
                        return itpc;
                    }
                    else
                    {
                        //the user account is not found
                        return null;
                    }
                }
                else
                {
                    return null;
                }
            }
            catch (Exception)
            {
                return null;
            }
        }

    Create the Alert Subscription:

        public bool SetWiAlert(string collectionUrl, string projectName, int wiId, string emailAddress, string userAccount)
        {
            bool setSuccessful = false;
            try
            {
                //use impersonation so the event service creating the subscription will default to
                //the correct account: otherwise domain ambiguity could be a problem
                TfsTeamProjectCollection itpc = GetCollectionImpersonation(collectionUrl, userAccount);
                if (!(itpc == null))
                {
                    IEventService es = itpc.GetService(typeof(IEventService)) as IEventService;

                    DeliveryPreference deliveryPreference = new DeliveryPreference();
                    //deliveryPreference.Address = emailAddress;
                    deliveryPreference.Schedule = DeliverySchedule.Immediate;
                    deliveryPreference.Type = DeliveryType.EmailHtml;

                    //the following line does not work for two reasons:
                    //string filter = string.Format("\"ID\" = '{0}' AND \"Authorized As\" <> '[Me]'", wiId);
                    //1. the create fails because there is a space between Authorized As
                    //2. the explicit query criteria are all incorrect anyway
                    //   see the uncommented line for what does work: you have to create the subscription manually
                    //   and then get it to view what the filter string needs to be (see following commented code)

                    //this works
                    string filter = string.Format("\"CoreFields/IntegerFields/Field[Name='ID']/NewValue\" = '12175'" +
                        " AND \"CoreFields/StringFields/Field[Name='Authorized As']/NewValue\"" +
                        " <> '@@MyDisplayName@@'", projectName, wiId);

                    string eventName = string.Format("<PT N=\"ALM Ticket for Work Item {0}\"/>", wiId);
                    es.SubscribeEvent("WorkItemChangedEvent", filter, deliveryPreference, eventName);

                    ////use this code to get existing subscriptions: you can look at manually created
                    ////subscriptions to see what the filter string needs to be
                    //IIdentityManagementService2 ims = itpc.GetService<IIdentityManagementService2>();
                    //TeamFoundationIdentity identity = ims.ReadIdentity(IdentitySearchFactor.AccountName,
                    //    userAccount,
                    //    MembershipQuery.None,
                    //    ReadIdentityOptions.None);
                    //var existingsubscriptions = es.GetEventSubscriptions(identity.Descriptor);

                    setSuccessful = true;
                    return setSuccessful;
                }
                else
                {
                    return setSuccessful;
                }
            }
            catch (Exception)
            {
                return setSuccessful;
            }
        }

    Read the article

  • Problem using Flot charts on a jQtouch web site

    - by hairbymaurice
    Hiiii. I have a jQTouch site in dev and I would like to use a chart on it; to me, Flot looks like the best way to do this (prettiest!). However, if I implement Flot on the site I get the following error: "Invalid dimensions for plot, width = 0, height = 0". If I comment out the style sheet ../jqtouch/jqtouch.min.css, the Flot chart works just fine. This, I think, has something to do with the fact that you cannot use Flot inside a div that has display:none. From the Flot readme: "Make sure that the placeholder isn't within something with a display:none CSS property - in that case, Flot has trouble measuring label dimensions which results in garbled looks and might have trouble measuring the placeholder dimensions which is fatal (it'll throw an exception)." Does anyone know if I can work around this/fix this so Flot and jQTouch work together? Thanks, Hairby.
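    One common workaround, as a sketch only (it assumes the jQTouch build in use fires its pageAnimationEnd event on the page div, and #chartPage / #chart and the data points are placeholders): defer the $.plot call until the page has been animated in, so the placeholder has real dimensions when Flot measures it:

        $('#chartPage').bind('pageAnimationEnd', function (e, info) {
            if (info.direction === 'in') {
                // the page is now visible, so the placeholder has a measurable width/height
                $.plot($('#chart'), [[[0, 3], [1, 8], [2, 5]]]);
            }
        });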

    Read the article

  • Mike Cohn-style burndown charts in JIRA

    - by Fuzzy Purple Monkey
    We use Jira/GreenHopper for our project planning/tracking. The built-in graphs are great, but when a new issue/story is added during a project, the whole burn-down graph moves up rather than increasing the graph from the point at which the issue was added. Ideally I'd like to generate a Mike Cohn-style burndown graph, which shows a hump when new issues are added. Does anyone know of any plugins that support this, or has anyone been able to extract this data directly from the database?

    Read the article

  • Replacing GTileLayer in Google Maps v3, with ImageMapType, Tile bounding box?

    - by justdev
    I need to update this code:

        radar_layer.getTileUrl = function (tile, zoom) {
            var llp = new GPoint(tile.x * 256, (tile.y + 1) * 256);
            var urp = new GPoint((tile.x + 1) * 256, tile.y * 256);
            var ll = G_NORMAL_MAP.getProjection().fromPixelToLatLng(llp, zoom);
            var ur = G_NORMAL_MAP.getProjection().fromPixelToLatLng(urp, zoom);
            var dt = new Date();
            var nowtime = dt.getTime();
            var tileurl = "http://demo.remoteservice.com/cgi-bin/serve.cgi?";
            tileurl += "bbox=" + ll.lng() + "," + ll.lat() + "," + ur.lng() + "," + ur.lat();
            tileurl += "&width=256&height=256&reaspect=false&cachetime=" + nowtime;
            return tileurl;
        };

    I got as far as:

        var DemoLayer = new google.maps.ImageMapType({
            getTileUrl: function (coord, zoom) {
                var llp = new google.maps.Point(coord.x * 256, (coord.y + 1) * 256);
                var urp = new google.maps.Point((coord.x + 1) * 256, coord.y * 256);
                var ll = googleMap.getProjection().fromPointToLatLng(llp);
                var ur = googleMap.getProjection().fromPointToLatLng(urp);
                var dt = new Date();
                var nowtime = dt.getTime();
                var tileurl = "http://demo.remoteservice.com/cgi-bin/serve.cgi?";
                tileurl += "bbox=" + ll.lng() + "," + ll.lat() + "," + ur.lng() + "," + ur.lat();
                tileurl += "&width=256&height=256&reaspect=false&cachetime=" + nowtime;
                return tileurl;
            },
            tileSize: new google.maps.Size(256, 256),
            opacity: 1.0,
            isPng: true
        });

    Specifically, I need help with this section:

        var llp = new google.maps.Point(coord.x * 256, (coord.y + 1) * 256);
        var urp = new google.maps.Point((coord.x + 1) * 256, coord.y * 256);
        var ll = googleMap.getProjection().fromPointToLatLng(llp);
        var ur = googleMap.getProjection().fromPointToLatLng(urp);

    The service wants the tile bounding box, from what I understand. However, ll and ur do not seem to be correct at all. I had it working and displaying the entire map bounding box in each tile, but of course that's not what I need. Any insight here would be greatly appreciated; not having the GTileLayers in v3 is fine if I can work around it, but until then I'm frustrated.
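    A likely fix, offered as a hedged sketch (googleMap stands in for the map instance as above, and the service URL is unchanged from the question): in Maps API v3, map.getProjection().fromPointToLatLng expects world coordinates (the 256x256 zoom-0 space), not pixel coordinates at the current zoom, so the pixel values should be divided by 2^zoom before projecting:

        getTileUrl: function (coord, zoom) {
            var proj = googleMap.getProjection();
            var scale = Math.pow(2, zoom);
            // lower-left and upper-right tile corners converted from pixel to world coordinates
            var llp = new google.maps.Point(coord.x * 256 / scale, (coord.y + 1) * 256 / scale);
            var urp = new google.maps.Point((coord.x + 1) * 256 / scale, coord.y * 256 / scale);
            var ll = proj.fromPointToLatLng(llp);
            var ur = proj.fromPointToLatLng(urp);
            return "http://demo.remoteservice.com/cgi-bin/serve.cgi?" +
                   "bbox=" + ll.lng() + "," + ll.lat() + "," + ur.lng() + "," + ur.lat() +
                   "&width=256&height=256&reaspect=false&cachetime=" + new Date().getTime();
        }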

    Read the article

  • Date/Time formatting in .NET (Devexpress Gantt charts to show time rather than date)

    - by calico-cat
    I have some data about a day's events that I'm trying to visualise as a Gantt chart using DevExpress XtraCharts. DevExpress's example here shows the chart being populated by date. However, I'd like it to be populated by time, to compare the events throughout one day. My X-axis is displaying correctly - done like so: ganttDiagram.AxisY.DateTimeMeasureUnit = DateTimeMeasurementUnit.Minute. I have data with the correct time; however, the label on each series is showing the date (which is the same for all of them, because it's all the same day!). Thus, instead of being a bar, each one is just a single point, with the label showing 31/03/2010 - 31/03/2010. Each series is created with the code below: s.Points.Add(New SeriesPoint("Machine", New DateTime() {ev.StartTime, ev.EndTime}))

    Read the article
