Search Results

Search found 27152 results on 1087 pages for 'google cache'.


  • Unable to access certain websites

    - by Ravindra Jadeja
    I am unable to access certain websites from my PC, viz. google.com, gmail.com, stackoverflow.com, etc. However, I am able to access facebook.com, twitter.com, infoq.com, etc. Currently I am accessing Google via a proxy server. I suspect that the problem might lie with websites that use ASP for scripting. Please suggest a solution to the problem I am facing.

    Read the article

  • Couldn't attach to Firefox 3.x browser by using the Browser.AttachTo<FireFox> method in WatiN 2.0 RC1

    - by Shu Yang
    I am using the HttpWatch automation API to launch a new Firefox instance like this: HttpWatch.Controller ct = new HttpWatch.Controller(); HttpWatch.Plugin plugin = ct.FireFox.New(""); plugin.GotoURL("http://www.google.com"); This code starts a Firefox browser successfully. Then I want to control the browser in WatiN 2.0: FireFox ff = Browser.AttachTo<FireFox>(Find.ByTitle("Google")); WatiN could not find the Firefox window (the JSSH plugin has been added to Firefox). But the same test on IE 7 is OK. I even tried to open a Firefox window manually and visit the google.com page. WatiN in IE 7 could attach to the browser, but Firefox failed. Is there anything wrong with my code? Or any other advice? Thanks in advance! Here is the config for my environment: OS: Windows XP Pro SP2, WatiN: 2.0 RC1, Browser: IE 7, Firefox 3.0/3.5/3.6 with JSSH plugin

    Read the article

  • fullCalendar className to multiple eventSources

    - by Justin
    I am trying to set up my fullCalendar event sources. Instead of pulling all of my events through one source, I would like to use multiple sources (i.e. Google and local JSON). Here is what I have so far (in short): eventSources: [ //CA HOLIDAYS $.fullCalendar.gcalFeed('http://www.google.com/calendar/feeds/en.canadian%23holiday%40group.v.calendar.google.com/public/basic', { className: 'holiday' }), //General events 'events.php?a=getAllCalendarEvents' ], The problem I am having is that I can get the gcalFeed to have a className, but I'm not exactly sure how to get my other source to have a className... Any ideas would be greatly appreciated.
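    A possible direction (a sketch only, not tested against this setup): fullCalendar also accepts an event source in object form, so the local JSON feed can carry its own className the same way the gcalFeed source does. The class name below is made up, and the feed URL placeholder stands for the Google Calendar URL shown above; if the fullCalendar version in use predates object-form event sources, an upgrade may be needed.

        eventSources: [
            // CA holidays from Google Calendar, styled with the 'holiday' class
            $.fullCalendar.gcalFeed(caHolidayFeedUrl, { className: 'holiday' }),
            // general events from the local JSON feed, written as an object so it can take a className
            {
                url: 'events.php?a=getAllCalendarEvents',
                className: 'general-event'   // hypothetical class name
            }
        ],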

    Read the article

  • Django templates crashes with no sense

    - by user233323
    Hello, I'm trying to use the Google Visualization API along with the Django template system. I got an error that I don't know how to fix. The error is the following: invalid_block_tag raise self.error(token, "Invalid block tag: '%s'" % command) django.template.TemplateSyntaxError: Invalid block tag: 'endfor' The code is: function drawChart() { var data = new google.visualization.DataTable(); data.addColumn('date', 'time'); data.addColumn('number', 'x'); data.addColumn('number', 'y'); data.addColumn('number', 'z'); data.addRows([ {% for d in datos &} [new Date({{d.instante|date:"Y, m, d, H, i, s"}}), {{d.x}}, {{d.y}}, {{d.z}}] {% if not forloop.last %},{% endif %} ]); {% endfor %} var chart = new google.visualization.AnnotatedTimeLine(document.getElementById('chart_div')); chart.draw(data, {displayAnnotations: true}); } Thank you all!
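    A likely cause, reading only the template above (not verified against the actual view): the stray & in {% for d in datos &} stops Django from parsing the for tag, so the later {% endfor %} is reported as an invalid block tag; the ]); also sits before {% endfor %}, which would close the addRows call inside the loop. A corrected sketch, assuming datos is the list passed in the template context:

        data.addRows([
        {% for d in datos %}
            [new Date({{ d.instante|date:"Y, m, d, H, i, s" }}), {{ d.x }}, {{ d.y }}, {{ d.z }}]{% if not forloop.last %},{% endif %}
        {% endfor %}
        ]);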

    Read the article

  • How do the performance characteristics of jQuery selectors differ from those of CSS selectors?

    - by Moss
    I came across Google's Page Speed add-on for Firebug yesterday. The page about using efficient CSS selectors said not to use overqualified selectors, i.e. use #foo instead of div#foo. I thought the latter would be faster, but Google's saying otherwise, and who am I to go against that? So that got me wondering whether the same applies to jQuery selectors. A page I found linked on SO says I should use $("div#foo"), which is what I was doing all along, since I thought that things would speed up by limiting the selector to match div elements only. But is it really better than writing $("#foo"), as Google says for CSS selectors, or do CSS and jQuery selector matching work in different ways, meaning I should stick with $("div#foo")?

    Read the article

  • Getting started with zxing on android

    - by amitlicht
    Hi, I'm trying to add zxing to my project (add a button which calls the scanner when pressed). I found this: http://groups.google.com/group/android-developers/browse_thread/thread/788eb52a765c28b5 and of course the zxing home site: http://code.google.com/p/zxing/, but still couldn't figure out what to include in the project classpath to make it all work! For now, I copied the classes from the first link into my project (with some package name changes), and it runs but crashes after pressing the button and trying to install the barcode scanner. Some code: private void setScanButton(){ Button scan = (Button) findViewById(R.id.MainPageScanButton); scan.setOnClickListener(new OnClickListener() { @Override public void onClick(View v) { IntentIntegrator.initiateScan(MyActivity.this); } }); } Resulting error (from logcat): 06-13 15:26:01.540: ERROR/AndroidRuntime(1423): Uncaught handler: thread main exiting due to uncaught exception 06-13 15:26:01.560: ERROR/AndroidRuntime(1423): android.content.ActivityNotFoundException: No Activity found to handle Intent { act=android.intent.action.VIEW dat=market://search?q=pname:com.google.zxing.client.android } Ideas?
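    A guess based on the logcat output alone (not a confirmed fix): the ActivityNotFoundException is thrown when IntentIntegrator falls back to opening the Android Market to install the Barcode Scanner app, and the device or emulator has no Market application. A rough sketch that checks for the scanner before starting a scan; the SCAN action string comes from the zxing project, the rest is illustrative:

        // Inside the Activity; assumes android.content.Intent, android.content.pm.*,
        // android.widget.Toast and java.util.List are imported
        private boolean isScannerInstalled() {
            Intent scanIntent = new Intent("com.google.zxing.client.android.SCAN");
            List<ResolveInfo> handlers = getPackageManager()
                    .queryIntentActivities(scanIntent, PackageManager.MATCH_DEFAULT_ONLY);
            return !handlers.isEmpty();
        }

        // In the button's onClick:
        if (isScannerInstalled()) {
            IntentIntegrator.initiateScan(MyActivity.this);
        } else {
            Toast.makeText(MyActivity.this, "Barcode Scanner is not installed", Toast.LENGTH_LONG).show();
        }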

    Read the article

  • Target iframe Raphael Js

    - by user299343
    I've been trying to target an iframe using Raphael JS. Here's some sample code: var c = paper.circle(10, 10, 10); c.attr({href: "http://google.com/", target: "top"}); var t = paper.text(250, 50, "Raphaël\nkicks\nbutt!"); t.attr({href: "http://google.com/", target: "_blank"}); Also... can't get href to work with text: var t = paper.text(50, 50, "Raphaël\nkicks\nbutt!"); t.attr({href: "http://google.com/", target: "_blank"});

    Read the article

  • ajax call through cas

    - by manu1001
    I need to write a Google gadget that reads feeds from Google Groups. Trouble is, I'm making an ajax call to retrieve the feeds and our Google Apps domain is protected by CAS (Central Authentication Service). So, I'm getting a 400 Bad Request on making the call. I suspect that the browser is not sending the cookie when making the ajax call. How do I ensure that the cookie is also sent with the ajax call? Or, if that's not actually the problem, what do I need to do?

    Read the article

  • SQL SERVER – Solution – Puzzle – Statistics are not Updated but are Created Once

    - by pinaldave
    Earlier I asked puzzle why statistics are not updated. Read the complete details over here: Statistics are not Updated but are Created Once In the question I have demonstrated even though statistics should have been updated after lots of insert in the table are not updated.(Read the details SQL SERVER – When are Statistics Updated – What triggers Statistics to Update) In this example I have created following situation: Create Table Insert 1000 Records Check the Statistics Now insert 10 times more 10,000 indexes Check the Statistics – it will be NOT updated Auto Update Statistics and Auto Create Statistics for database is TRUE Now I have requested two things in the example 1) Why this is happening? 2) How to fix this issue? I have many answers – here is the how I fixed it which has resolved the issue for me. NOTE: There are multiple answers to this problem and I will do my best to list all. Solution: Create nonclustered Index on column City Here is the working example for the same. Let us understand this script and there is added explanation at the end. -- Execution Plans Difference -- Estimated Execution Plan Vs Actual Execution Plan -- Create Sample Database CREATE DATABASE SampleDB GO USE SampleDB GO -- Create Table CREATE TABLE ExecTable (ID INT, FirstName VARCHAR(100), LastName VARCHAR(100), City VARCHAR(100)) GO CREATE NONCLUSTERED INDEX IX_ExecTable1 ON ExecTable (City); GO -- Insert One Thousand Records -- INSERT 1 INSERT INTO ExecTable (ID,FirstName,LastName,City) SELECT TOP 1000 ROW_NUMBER() OVER (ORDER BY a.name) RowID, 'Bob', CASE WHEN  ROW_NUMBER() OVER (ORDER BY a.name)%2 = 1 THEN 'Smith' ELSE 'Brown' END, CASE WHEN ROW_NUMBER() OVER (ORDER BY a.name)%20 = 1 THEN 'New York' WHEN  ROW_NUMBER() OVER (ORDER BY a.name)%20 = 5 THEN 'San Marino' WHEN  ROW_NUMBER() OVER (ORDER BY a.name)%20 = 3 THEN 'Los Angeles' WHEN  ROW_NUMBER() OVER (ORDER BY a.name)%20 = 7 THEN 'La Cinega' WHEN  ROW_NUMBER() OVER (ORDER BY a.name)%20 = 13 THEN 'San Diego' WHEN  ROW_NUMBER() OVER (ORDER BY a.name)%20 = 17 THEN 'Las Vegas' ELSE 'Houston' END FROM sys.all_objects a CROSS JOIN sys.all_objects b GO -- Display statistics of the table sp_helpstats N'ExecTable', 'ALL' GO -- Select Statement SELECT FirstName, LastName, City FROM ExecTable WHERE City  = 'New York' GO -- Display statistics of the table sp_helpstats N'ExecTable', 'ALL' GO -- Replace your Statistics over here DBCC SHOW_STATISTICS('ExecTable', IX_ExecTable1); GO -------------------------------------------------------------- -- Round 2 -- Insert One Thousand Records -- INSERT 2 INSERT INTO ExecTable (ID,FirstName,LastName,City) SELECT TOP 1000 ROW_NUMBER() OVER (ORDER BY a.name) RowID, 'Bob', CASE WHEN  ROW_NUMBER() OVER (ORDER BY a.name)%2 = 1 THEN 'Smith' ELSE 'Brown' END, CASE WHEN ROW_NUMBER() OVER (ORDER BY a.name)%20 = 1 THEN 'New York' WHEN  ROW_NUMBER() OVER (ORDER BY a.name)%20 = 5 THEN 'San Marino' WHEN  ROW_NUMBER() OVER (ORDER BY a.name)%20 = 3 THEN 'Los Angeles' WHEN  ROW_NUMBER() OVER (ORDER BY a.name)%20 = 7 THEN 'La Cinega' WHEN  ROW_NUMBER() OVER (ORDER BY a.name)%20 = 13 THEN 'San Diego' WHEN  ROW_NUMBER() OVER (ORDER BY a.name)%20 = 17 THEN 'Las Vegas' ELSE 'Houston' END FROM sys.all_objects a CROSS JOIN sys.all_objects b GO -- Select Statement SELECT FirstName, LastName, City FROM ExecTable WHERE City  = 'New York' GO -- Display statistics of the table sp_helpstats N'ExecTable', 'ALL' GO -- Replace your Statistics over here DBCC SHOW_STATISTICS('ExecTable', IX_ExecTable1); GO -- Clean up Database DROP TABLE ExecTable GO 
When I created a nonclustered index on the column City, it also created statistics on the same column with the same name as the index. When we populate the data in the column, the index is updated – causing the execution plan to be invalidated – and this leads to the statistics being updated on the next execution of the SELECT. This behavior does not happen on a heap, or on a column where the index is auto created. If you explicitly update the index, often you can see the statistics are updated as well. You can see this is for sure happening if you follow the explanation from John Sansom. John Sansom‘s suggestion: That was fun! Although the column statistics are invalidated by the time the second select statement is executed, the query is not compiled/recompiled but instead the existing query plan is reused. It is the “next” compiled query against the column statistics that will see that they are out of date and will then in turn instantiate the action of updating statistics. You can see this in action by forcing the second statement to recompile. SELECT FirstName, LastName, City FROM ExecTable WHERE City = 'New York' OPTION(RECOMPILE) GO Kevin Cross also has another suggestion: I agree with John. It is reusing the Execution Plan. Aside from OPTION(RECOMPILE), clearing the Execution Plan Cache before the subsequent tests will also work. i.e., run this before round 2: -- Clear execution plan cache before next test DBCC FREEPROCCACHE WITH NO_INFOMSGS; Nice puzzle! Kevin As this was a puzzle, John and Kevin both got the correct answer; there was no condition that the answer be part of best practices. I know John and he is one of the finest DBAs around – his tremendous knowledge has always impressed me. John and Kevin will both agree that clearing the cache using DBCC FREEPROCCACHE, or recompiling each query every time, is for sure not good advice on a production server. It is a correct answer but not a best practice. By the way, if you have a better solution or a better suggestion, please advise. I am open to changing my answer and publishing further improvements to this solution. On a very separate note, I like to have a clustered index on my Primary Key, which I have not mentioned here as it is out of the scope of this puzzle. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, Readers Contribution, Readers Question, SQL, SQL Authority, SQL Index, SQL Puzzle, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: Statistics

    Read the article

  • Is gchart safe to use?

    - by Paul Tomblin
    The home page for gchart, a client-side charting add-in for Google Web Toolkit (GWT), has a long screed about how the project's only maintainer thinks his Google account has been hacked and because of that he will be "disavowing/abandoning my own project and Google account". Does that mean the project is an orphan? Is somebody taking it over? There is always a risk in basing your project on somebody else's code, because they may stop supporting it or abandon it during your project's lifetime, but it seems to me that with the fast evolution of Java and GWT, using gchart in a new project may be a big mistake. Am I right?

    Read the article

  • Reverse geocode without using MKReverseGeocoder

    - by SpH1nX
    Hi guys, I'm trying to detect the current user's address using MKReverseGeocoder, passing coordinates obtained via the CLLocation class. Reading the MKReverseGeocoder Class Reference I noticed that "The Google terms of service require that the reverse geocoding service be used in conjunction with a Google map; take this into account when designing your application's user interface." So I'm wondering whether (and if so, how) I can reverse geocode the user's current location on iPhone OS SDK 3.1.3. I thought of using the Google Maps API but its EULA has the same obligation. The Yahoo Maps API is even worse and Microsoft's isn't free.

    Read the article

  • SQL SERVER – SSMS: Top Object and Batch Execution Statistics Reports

    - by Pinal Dave
    The month of June till mid of July has been the fever of sports. First, it was Wimbledon Tennis and then the Soccer fever was all over. There is a huge number of fan followers and it is great to see the level at which people sometimes worship these sports. Being an Indian, I cannot forget to mention the India tour of England later part of July. Following these sports and as the events unfold to the finals, there are a number of ways the statisticians can slice and dice the numbers. Cue from soccer I can surely say there is a team performance against another team and then there is individual member fairs against a particular opponent. Such statistics give us a fair idea to how a team in the past or in the recent past has fared against each other, head-to-head stats during World cup and during other neutral venue games. All these statistics are just pointers. In reality, they don’t reflect the calibre of the current team because the individuals who performed in each of these games are totally different (Typical example being the Brazil Vs Germany semi-final match in FIFA 2014). So at times these numbers are misleading. It is worth investigating and get the next level information. Similar to these statistics, SQL Server Management studio is also equipped with a number of reports like a) Object Execution Statistics report and b) Batch Execution Statistics reports. As discussed in the example, the team scorecard is like the Batch Execution statistics and individual stats is like Object Level statistics. The analogy can be taken only this far, trust me there is no correlation between SQL Server functioning and playing sports – It is like I think about diet all the time except while I am eating. Performance – Batch Execution Statistics Let us view the first report which can be invoked from Server Node -> Reports -> Standard Reports -> Performance – Batch Execution Statistics. Most of the values that are displayed in this report come from the DMVs sys.dm_exec_query_stats and sys.dm_exec_sql_text(sql_handle). This report contains 3 distinctive sections as outline below.   Section 1: This is a graphical bar graph representation of Average CPU Time, Average Logical reads and Average Logical Writes for individual batches. The Batch numbers are indicative and the details of individual batch is available in section 3 (detailed below). Section 2: This represents a Pie chart of all the batches by Total CPU Time (%) and Total Logical IO (%) by batches. This graphical representation tells us which batch consumed the highest CPU and IO since the server started, provided plan is available in the cache. Section 3: This is the section where we can find the SQL statements associated with each of the batch Numbers. This also gives us the details of Average CPU / Average Logical Reads and Average Logical Writes in the system for the given batch with object details. Expanding the rows, I will also get the # Executions and # Plans Generated for each of the queries. Performance – Object Execution Statistics The second report worth a look is Object Execution statistics. This is a similar report as the previous but turned on its head by SQL Server Objects. The report has 3 areas to look as above. Section 1 gives the Average CPU, Average IO bar charts for specific objects. The section 2 is a graphical representation of Total CPU by objects and Total Logical IO by objects. The final section details the various objects in detail with the Avg. CPU, IO and other details which are self-explanatory. 
At a high level both the reports are based on queries over two DMVs (sys.dm_exec_query_stats and sys.dm_exec_sql_text) and build their values from calculations using columns in them: SELECT * FROM sys.dm_exec_query_stats s1 CROSS APPLY sys.dm_exec_sql_text(sql_handle) AS s2 WHERE s2.objectid IS NOT NULL AND DB_NAME(s2.dbid) IS NOT NULL ORDER BY s1.sql_handle; These are some of the simplest reports, and in future blogs we will look at more complex ones. I truly hope that these reports can give DBAs and developers a hint about possible performance tuning areas. As a closing point I must emphasize that all the above reports pick up data from the plan cache. If a particular query consumed a lot of resources earlier but its plan is not available in the cache, none of the above reports will show that bad query. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: SQL, SQL Authority, SQL Query, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, T SQL Tagged: SQL Reports
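    For reference, a rough approximation of where the per-batch averages in the report come from (this is not the report's exact query, just a sketch over the same DMVs):

        SELECT TOP 10
            s2.[text] AS batch_text,
            s1.execution_count,
            s1.total_worker_time / s1.execution_count AS avg_cpu_time_microsec,
            s1.total_logical_reads / s1.execution_count AS avg_logical_reads,
            s1.total_logical_writes / s1.execution_count AS avg_logical_writes
        FROM sys.dm_exec_query_stats s1
        CROSS APPLY sys.dm_exec_sql_text(s1.sql_handle) AS s2
        ORDER BY s1.total_worker_time DESC;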

    Read the article

  • Web Site Performance and Assembly Versioning – Part 2 Versioning Combined Files Using Subversion

    - by capgpilk
    Ok so it took a while to post this second part. Many apologies, we had a big roll out of a new platform at work and many things had to get sidelined. So this is the second part in a short series of website performance and using versioning to help improve it. Minification and Concatination of JavaScript and CSS Files Versioning Combined Files Using Subversion – this post Versioning Combined Files Using Mercurial – published shortly In the previous post we used AjaxMin to shrink js and css files then concatenated them into one file each which had the file name of site-script.combined.min.js and site-style.combined.min.css. These file names are fine, but you can configure IIS 7 to cache these static files and so lower the amount of data transferred between server and client. This is done by editing the response headers in IIS. 1. In IIS7 Manager, choose the directory where these files are located and select HTTP Response Headers. 2. Check the Expire Web Content and set a time period well into the future. 3. When refreshing the web page, the server will respond with HTTP 304 forcing the browser to retrieve the file from its cache. 4. As can be seen in FireBug, the Cache-Control header has a max age of 31536000 seconds which equates to 365 days.   The server will always send this HTTP 304 message unless the file changes forcing it to send new content. To help force this we can change the file name based on the latest build using the SVN revision number in the filename. So we have lowered data transfer on content that hasn’t changed, but forced it to be sent when you have made a change to the css or js files. Now to get the SVN revision number in to the file name. 1. Import the MSBuildCommunityTasks targets which can be dowloaded from here. 1: <Import Project="$(MSBuildExtensionsPath) 2: \MSBuildCommunityTasks 3: \MSBuild.Community.Tasks.Targets" /> 2. Edit the BeforeBuild target to call out to svn and get the latest revision 1: <SvnVersion LocalPath="$(MSBuildProjectDirectory)" 2: ToolPath="$(ProgramFiles)\VisualSVN Server\bin"> 3: <Output TaskParameter="Revision" PropertyName="Revision" /> 4: </SvnVersion> 3. Set it to update the project AssemblyInfo.cs file for the svn revision. 1: <FileUpdate Files="Properties\AssemblyInfo.cs" 2: Regex="(\d+)\.(\d+)\.(\d+)\.(\d+)" 3: ReplacementText="$1.$2.$3.$(Revision)" /> 4. Now edit the AfterBuild target to get the full dll version. You could combine these two steps and just get the version from svn, I am working on one project that updates the AssemblyInfo file and another project that allows manual editing of the file, but needs that version within the file name; so I just combined the two for this post. 1: <MSBuild.ExtensionPack.Framework.Assembly 2: TaskAction="GetInfo" 3: NetAssembly="$(OutputPath)\mydll.dll"> 4: <Output TaskParameter="OutputItems" ItemName="Info" /> 5: </MSBuild.ExtensionPack.Framework.Assembly> 6: <Message Text="Version: %(Info.AssemblyVersion)" 7: Importance="High" /> 5. Use this Info.AssemblyVersion to write out the combined css and js files as described in the last post. 1: <WriteLinestoFile File="Scripts\site-%(Info.AssemblyVersion).combined.min.js" 2: Lines="@(JSLinesSite)" Overwrite="true" />   In the next post I will cover doing the same, but for a Mercurial repository.
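    As an aside, the "Expire Web Content" setting configured through the IIS UI above can also live in the folder's web.config; a minimal sketch (one-year max-age, matching the 31536000-second Cache-Control header described earlier):

        <configuration>
          <system.webServer>
            <staticContent>
              <!-- Sends Cache-Control: max-age=31536000 for static files in this folder -->
              <clientCache cacheControlMode="UseMaxAge" cacheControlMaxAge="365.00:00:00" />
            </staticContent>
          </system.webServer>
        </configuration>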

    Read the article

  • TFS 2010 SDK: Integrating Twitter with TFS Programmatically

    - by Tarun Arora
    Technorati Tags: Team Foundation Server 2010,TFS API,Integrate Twitter TFS,TFS Programming,ALM,TwitterSharp   Friends at ‘Twitter Sharp’ have created a wonderful .net API for twitter. With this blog post i will try to show you a basic TFS – Twitter integration scenario where i will retrieve the Team Project details programmatically and then publish these details on my twitter page. In future blogs i will be demonstrating how to create a windows service to capture the events raised by TFS and then publishing them in your social eco-system. Download Working Demo: Integrate Twitter - Tfs Programmatically   1. Setting up Twitter API Download Tweet Sharp from => https://github.com/danielcrenna/tweetsharp  Before you can start playing around with this, you will need to register an application on twitter. This is because Twitter uses the OAuth authentication protocol and will not issue an Access token unless your application is registered with them. Go to https://dev.twitter.com/ and register your application   Once you have registered your application, you will need ‘Customer Key’, ‘Customer Secret’, ‘Access Token’, ‘Access Token Secret’ 2. Connecting to Twitter using the Tweet Sharp API Create a new C# windows forms project and add reference to ‘Hammock.ClientProfile’, ‘Newtonsoft.Json’, ‘TweetSharp’ Add the following keys to the App.config (Note – The values for the keys below are in correct and if you try and connect using them then you will get an authorization failure error). Add a new class ‘TwitterProxy’ and use the following code to connect to the TwitterService (Read more about OAuthentication - http://dev.twitter.com/pages/auth) using System;using System.Collections.Generic;using System.Linq;using System.Text;using System.Configuration;using TweetSharp; namespace WindowsFormsApplication2{ public class TwitterProxy { private static string _hero; private static string _consumerKey; private static string _consumerSecret; private static string _accessToken; private static string _accessTokenSecret;  public static TwitterService ConnectToTwitter() { _consumerKey = ConfigurationManager.AppSettings["ConsumerKey"]; _consumerSecret = ConfigurationManager.AppSettings["ConsumerSecret"]; _accessToken = ConfigurationManager.AppSettings["AccessToken"]; _accessTokenSecret = ConfigurationManager.AppSettings["AccessTokenSecret"];  return new TwitterService(_consumerKey, _consumerSecret, _accessToken, _accessTokenSecret); } }} Time to Tweet! _twitterService = Proxy.TwitterProxy.ConnectToTwitter(); _twitterService.SendTweet("Hello World"); SendTweet will return the TweetStatus, If you do not get a 200 OK status that means you have failed authentication, please revisit the Access tokens. --RESPONSE: https://api.twitter.com/1/statuses/update.json HTTP/1.1 200 OK X-Transaction: 1308476106-69292-41752 X-Frame-Options: SAMEORIGIN X-Runtime: 0.03040 X-Transaction-Mask: a6183ffa5f44ef11425211f25 Pragma: no-cache X-Access-Level: read-write X-Revision: DEV X-MID: bd8aa0abeccb6efba38bc0a391a73fab98e983ea Cache-Control: no-cache, no-store, must-revalidate, pre-check=0, post-check=0 Content-Type: application/json; charset=utf-8 Date: Sun, 19 Jun 2011 09:35:06 GMT Expires: Tue, 31 Mar 1981 05:00:00 GMT Last-Modified: Sun, 19 Jun 2011 09:35:06 GMT Server: hi Vary: Accept-Encoding Content-Encoding: Keep-Alive: timeout=15, max=100 Connection: Keep-Alive Transfer-Encoding: chunked   3. 
Integrate with TFS In my blog post Connect to TFS Programmatically i have in depth demonstrated how to connect to TFS using the TFS API. 1: // Update the AppConfig with the URI of the Team Foundation Server you want to connect to, Make sure you have View Team Project Collection Details permissions on the server 2: private static string _myUri = ConfigurationManager.AppSettings["TfsUri"]; 3: private static TwitterService _twitterService = null; 4:   5: private void button1_Click(object sender, EventArgs e) 6: { 7: lblNotes.Text = string.Empty; 8:   9: try 10: { 11: StringBuilder notes = new StringBuilder(); 12:   13: _twitterService = Proxy.TwitterProxy.ConnectToTwitter(); 14:   15: _twitterService.SendTweet("Hello World"); 16:   17: TfsConfigurationServer configurationServer = 18: TfsConfigurationServerFactory.GetConfigurationServer(new Uri(_myUri)); 19:   20: CatalogNode catalogNode = configurationServer.CatalogNode; 21:   22: ReadOnlyCollection<CatalogNode> tpcNodes = catalogNode.QueryChildren( 23: new Guid[] { CatalogResourceTypes.ProjectCollection }, 24: false, CatalogQueryOptions.None); 25:   26: // tpc = Team Project Collection 27: foreach (CatalogNode tpcNode in tpcNodes) 28: { 29: Guid tpcId = new Guid(tpcNode.Resource.Properties["InstanceId"]); 30: TfsTeamProjectCollection tpc = configurationServer.GetTeamProjectCollection(tpcId); 31:   32: notes.AppendFormat("{0} Team Project Collection : {1}{0}", Environment.NewLine, tpc.Name); 33: _twitterService.SendTweet(String.Format("http://Lunartech.codeplex.com - Connecting to Team Project Collection : {0} ", tpc.Name)); 34:   35: // Get catalog of tp = 'Team Projects' for the tpc = 'Team Project Collection' 36: var tpNodes = tpcNode.QueryChildren( 37: new Guid[] { CatalogResourceTypes.TeamProject }, 38: false, CatalogQueryOptions.None); 39:   40: foreach (var p in tpNodes) 41: { 42: notes.AppendFormat("{0} Team Project : {1} - {2}{0}", Environment.NewLine, p.Resource.DisplayName,  "This is an open source project hosted on codeplex"); 43: _twitterService.SendTweet(String.Format(" Connected to Team Project: '{0}' – '{1}' ", p.Resource.DisplayName, "This is an open source project hosted on codeplex")); 44: } 45: } 46: notes.AppendFormat("{0} Updates posted on Twitter : {1} {0}", Environment.NewLine, @"http://twitter.com/lunartech1"); 47: lblNotes.Text = notes.ToString(); 48: } 49: catch (Exception ex) 50: { 51: lblError.Text = " Message : " + ex.Message + (ex.InnerException != null ? " Inner Exception : " + ex.InnerException : string.Empty); 52: } 53: }   The extensions you can build integrating TFS and Twitter are incredible!   Share this post :

    Read the article

  • Displaying a pdf file located on a http server from mobile phone

    - by JCasso
    I have some pdf files located on an HTTP server, like: http://domain.com/files/file1.pdf http://domain.com/files/file1.pdf http://domain.com/files/file1.pdf I need to display these files in a mobile application using Java ME. I tried to display them by opening Google Docs Viewer with platformRequest. However, it seems Google Docs Viewer uses ajax and many mobile browsers do not support it. Is there an alternative to "Google Docs Viewer" for mobile devices? Or is there a better solution for this problem?

    Read the article

  • How can I verify that javascript and images are being cached?

    - by BestPractices
    I want to verify that the images, css, and javascript files that are part of my page are being cached by my browser. I've used Fiddler and Google Page Speed and it's unclear whether either is giving me the information I need. Fiddler shows the HTTP 304 response for images, css, and javascript, which should tell the browser to use the cached copy. Google Page Speed shows the 304 response but doesn't show a Transfer Size of zero; instead it shows the full file size of the resource. Note also, I have seen Google Page Speed report a 200 response but then put the word (cache) next to the 200 (so the Status is 200 (cache)), which doesn't make a lot of sense. Any other suggestions as to how I can verify whether the server is sending back images, css, and javascript after they've been retrieved and cached by a previous page hit?

    Read the article

  • wget return downloaded filename

    - by Matthew
    I'm using wget in a php script and need to get the name of the file downloaded. For example, if I try <?php system('/usr/bin/wget -q --directory-prefix="./downloads/" http://www.google.com/'); ?> I will get a file called index.html in the downloads directory. The page will not always be google though, so I need to find out the name of the file that was downloaded. I'd like to have something like this: <?php //Does not work: $filename = system('/usr/bin/wget -q --directory-prefix="./downloads/" http://www.google.com/'); //$filename should contain "index.html" ?>
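    One workaround, sketched rather than tested: system() only returns the last line of output and -q suppresses everything, so switch to exec(), swap -q for -nv, and parse the destination file name out of wget's log (the exact log format varies between wget versions, so treat the regex as an approximation). Forcing a known name with --output-document is another way around the problem entirely.

        <?php
        // capture wget's non-verbose log (it goes to stderr, hence 2>&1)
        $output = array();
        exec('/usr/bin/wget -nv --directory-prefix="./downloads/" http://www.google.com/ 2>&1', $output, $status);

        // -nv logs something like: ... URL:http://www.google.com/ [9379] -> "./downloads/index.html" [1]
        $filename = null;
        if ($status === 0 && preg_match('/-> "(.+?)"/', implode("\n", $output), $matches)) {
            $filename = $matches[1]; // e.g. ./downloads/index.html
        }
        ?>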

    Read the article

  • Sql simple query

    - by Josemalive
    Hello, I have the following table Persons_Companies that shows a relation between persons and the companies known by these persons: PersonID | CompanyID 1 1 2 1 2 2 3 2 4 2 Imagining that company 1 = "Google" and company 2 = "Microsoft", I would like to know the query to get the following result: PersonID | Microsoft | Google 1 0 1 2 1 1 3 1 0 4 1 0 Up to this moment I have something similar to this: select PersonID, case when CompanyID=1 then 1 else 0 end as Google, case when EmpresaID=2 then 1 else 0 end as Microsoft from Persons_Companies My problem is with the persons that know both companies; I can't imagine how this query should look. Could you give me a hand? Thanks in advance. Best Regards. Josema.
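    One common way to collapse the rows is conditional aggregation, grouping by person so that someone who knows both companies ends up on a single row. A sketch (it assumes the column is consistently named CompanyID, whereas the query above mixes in EmpresaID):

        SELECT PersonID,
               MAX(CASE WHEN CompanyID = 2 THEN 1 ELSE 0 END) AS Microsoft,
               MAX(CASE WHEN CompanyID = 1 THEN 1 ELSE 0 END) AS Google
        FROM Persons_Companies
        GROUP BY PersonID
        ORDER BY PersonID;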

    Read the article

  • PASS Summit 2011 &ndash; Part III

    - by Tara Kizer
    Well we’re about a month past PASS Summit 2011, and yet I haven’t finished blogging my notes! Between work and home life, I haven’t been able to come up for air in a bit. Now on to my notes… On Thursday of the PASS Summit 2011, I attended Klaus Aschenbrenner’s (blog|twitter) “Advanced SQL Server 2008 Troubleshooting”, Joe Webb’s (blog|twitter) “SQL Server Locking & Blocking Made Simple”, Kalen Delaney’s (blog|twitter) “What Happened? Exploring the Plan Cache”, and Paul Randal’s (blog|twitter) “More DBA Mythbusters”. I think my head grew two times in size from the Thursday sessions. Just WOW! I took a ton of notes in Klaus’ session. He took a deep dive into how to troubleshoot performance problems. Here is how he goes about solving a performance problem: start by checking the wait stats DMV, then system health, then memory issues, then I/O issues. I normally start with blocking and then hit the wait stats. Here’s the wait stat query (Paul Randal’s) that I use when working on a performance problem. He highlighted a few waits to be aware of, such as WRITELOG (indicates an IO subsystem problem), SOS_SCHEDULER_YIELD (indicates a CPU problem), and PAGEIOLATCH_XX (indicates an IO subsystem problem or a buffer pool problem). Regarding memory issues, Klaus recommended that as a bare minimum, one should set “max server memory (MB)” in sp_configure to leave 2GB or 10% reserved for the OS (whichever comes first). This is just a starting point though! Regarding I/O issues, Klaus talked about disk partition alignment, which can improve SQL I/O performance by up to 100%. You should use a 64KB NTFS cluster size, and it’s automatic in Windows 2008 R2. Joe’s locking and blocking presentation was a good session to really clear up the fog in my mind about locking. One takeaway that I had no idea could be done was that you can set a timeout in T-SQL code via LOCK_TIMEOUT. If you do this via the application, you should trap error 1222. Kalen’s session went into execution plans. The minimum size of a plan is 24k. This adds up fast, especially if you have a lot of plans that don’t get reused much. You can use sys.dm_exec_cached_plans to check how often a plan is being reused by checking the usecounts column. She said that we can use DBCC FLUSHPROCINDB to clear out the stored procedure cache for a specific database. I didn’t know we had this available, so this was great to hear. This will be less intrusive when an emergency comes up where I would otherwise have needed to run DBCC FREEPROCCACHE. Kalen said one should enable “optimize for ad hoc workloads” if you have an ad hoc workload. This stores only a 300-byte stub of the first plan, and if it gets run again, it’ll store the whole thing. This helps with plan cache bloat. I have a lot of systems that use prepared statements, and Kalen says we can simulate those calls by using sp_executesql. Cool! Paul did a series of posts last year to debunk various myths and misconceptions around SQL Server. He continues to debunk things via “DBA Mythbusters”. You can get a PDF of a bunch of these here. One of the myths he went over is the number of tempdb data files that you should have. Back in 2000, the recommendation was to have as many tempdb data files as there are CPU cores on your server. This no longer holds true due to the numerous cores we have on our servers. Paul says you should start out with 1/4 to 1/2 the number of cores and work your way up from there. BUT!  
Paul likes what Bob Ward (twitter) says on this topic: 8 or less cores –> set number of files equal to the number of cores Greater than 8 cores –> start with 8 files and increase in blocks of 4 One common myth out there is to set your MAXDOP to 1 for an OLTP workload with high CXPACKET waits.  Instead of that, dig deeper first.  Look for missing indexes, out-of-date statistics, increase the “cost threshold for parallelism” setting, and perhaps set MAXDOP at the query level.  Paul stressed that you should not plan a backup strategy but instead plan a restore strategy.  What are your recoverability requirements?  Once you know that, now plan out your backups. As Paul always does, he talked about DBCC CHECKDB.  He said how fabulous it is.  I didn’t want to interrupt the presentation, so after his session had ended, I asked Paul about the need to run DBCC CHECKDB on your mirror systems.  You could have data corruption occur at the mirror and not at the principal server.  If you aren’t checking for data corruption on your mirror systems, you could be failing over to a corrupt database in the case of a disaster or even a planned failover.  You can’t run DBCC CHECKDB against the mirrored database, but you can run it against a snapshot off the mirrored database.
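    A couple of small sketches of items mentioned in these notes, written from the notes themselves rather than taken from the sessions:

        -- How often cached plans are being reused (the usecounts column Kalen mentioned)
        SELECT TOP 20 cp.usecounts, cp.cacheobjtype, cp.objtype, st.[text]
        FROM sys.dm_exec_cached_plans cp
        CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) st
        ORDER BY cp.usecounts DESC;

        -- Statement-level lock timeout in milliseconds (trap error 1222 in the application)
        SET LOCK_TIMEOUT 5000;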

    Read the article

  • using Awk inside Perl script

    - by papoyan
    My first question in stackoverflow! I'm having trouble using the following code inside my perl script, any advise is really appreciated, how to correct the syntax? # If I execute in bash, it's working just fine bash$ whois google.com | egrep "\w+([._-]\w)*@\w+([._-]\w)*\.\w{2,4}" |awk ' {for (i=1;i<=NF;i++) {if ( $i ~ /[[:alpha:]]@[[:alpha:]]/ ) { print $i}}}'|head -n1 [email protected] #----------------------------------- #but this doesn't work bash$ ./email.pl google.com awk: {for (i=1;i<=NF;i++) {if ( ~ /[[:alpha:]]@[[:alpha:]]/ ) { print }}} awk: ^ syntax error # Here is my script bash$ cat email.pl ####\#!/usr/bin/perl $input = lc shift @ARGV; $host = $input; my $email = `whois $host | egrep "\w+([._-]\w)*@\w+([._-]\w)*\.\w{2,4}" |awk ' {for (i=1;i<=NF;i++) {if ( $i ~ /[[:alpha:]]@[[:alpha:]]/ ) { print $i}}}'|head -1`; print my $email; bash$ Thank you in advance !
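    The likely culprit, judging from the error output alone: backticks interpolate like double-quoted strings, so every $i in the awk program is expanded to an (empty) Perl variable before the shell ever runs it — which is exactly why awk complains about if ( ~ ... ). Escaping the dollar signs (and, to be safe, the @ and backslashes) keeps the pipeline intact. A sketch, with the stray my also removed from the final print:

        #!/usr/bin/perl
        use strict;
        use warnings;

        my $host = lc shift @ARGV;

        # backslash $ and @ so Perl hands them to egrep/awk untouched
        my $email = `whois $host | egrep "\\w+([._-]\\w)*\@\\w+([._-]\\w)*\\.\\w{2,4}" | awk '{for (i=1;i<=NF;i++) {if (\$i ~ /[[:alpha:]]\@[[:alpha:]]/) {print \$i}}}' | head -n1`;

        print $email;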

    Read the article

  • Best option for using the GData APIs on Android?

    - by nyenyec
    What's the least painful and most size-efficient way to use the Google Data APIs in an Android application? After a few quick searches it seems that there is an android-gdata project on Google Code that appears to be the work of a single author. I didn't find any documentation for it and don't even know if it's production ready yet. An older option, the com.google.wireless.gdata package, seems to have been removed from the SDK. It's still available in the GIT repository. Before I invest too much time with either approach I'd like to know which is the best supported and least painful.

    Read the article

  • Why isn't my bundle getting passed?

    - by NickTFried
    I'm trying to pass a bundle of two values from a started class to my landnav app, but according to the debug nothing is getting passed, does anyone have any ideas why? package edu.elon.cs.mobile; import android.app.Activity; import android.content.Intent; import android.os.Bundle; import android.view.View; import android.view.View.OnClickListener; import android.widget.Button; import android.widget.EditText; public class PointEntry extends Activity{ private Button calc; private EditText longi; private EditText lati; private double longid; private double latd; public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.pointentry); calc = (Button) findViewById(R.id.coorCalcButton); calc.setOnClickListener(landNavButtonListener); longi = (EditText) findViewById(R.id.longitudeedit); lati = (EditText) findViewById(R.id.latitudeedit); } private void startLandNav() { Intent intent = new Intent(this, LandNav.class); startActivityForResult(intent, 0); } private OnClickListener landNavButtonListener = new OnClickListener() { @Override public void onClick(View arg0) { Bundle bundle = new Bundle(); bundle.putDouble("longKey", longid); bundle.putDouble("latKey", latd); longid = Double.parseDouble(longi.getText().toString()); latd = Double.parseDouble(lati.getText().toString()); startLandNav(); } }; } This is the class that is suppose to take the second point package edu.elon.cs.mobile; import com.google.android.maps.GeoPoint; import com.google.android.maps.MapActivity; import com.google.android.maps.MapController; import com.google.android.maps.MapView; import com.google.android.maps.MyLocationOverlay; import com.google.android.maps.Overlay; import android.content.Context; import android.graphics.drawable.Drawable; import android.hardware.Sensor; import android.hardware.SensorEvent; import android.hardware.SensorEventListener; import android.hardware.SensorManager; import android.location.Location; import android.location.LocationManager; import android.os.Bundle; import android.util.Log; import android.widget.EditText; import android.widget.TextView; public class LandNav extends MapActivity{ private MapView map; private MapController mc; private GeoPoint myPos; private SensorManager sensorMgr; private TextView azimuthView; private double longitudeFinal; private double latitudeFinal; double startTime; double newTime; double elapseTime; private MyLocationOverlay me; private Drawable marker; private GeoPoint finalPos; private SitesOverlay myOverlays; public LandNav(){ startTime = System.currentTimeMillis(); } public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.landnav); Bundle bundle = this.getIntent().getExtras(); if(bundle != null){ longitudeFinal = bundle.getDouble("longKey"); latitudeFinal = bundle.getDouble("latKey"); } azimuthView = (TextView) findViewById(R.id.azimuthView); map = (MapView) findViewById(R.id.map); mc = map.getController(); sensorMgr = (SensorManager) getSystemService(Context.SENSOR_SERVICE); LocationManager lm = (LocationManager)getSystemService(Context.LOCATION_SERVICE); Location location = lm.getLastKnownLocation(LocationManager.GPS_PROVIDER); int longitude = (int)(location.getLongitude() * 1E6); int latitude = (int)(location.getLatitude() * 1E6); finalPos = new GeoPoint((int)(latitudeFinal*1E6), (int)(longitudeFinal*1E6)); myPos = new GeoPoint(latitude, longitude); map.setSatellite(true); map.setBuiltInZoomControls(true); mc.setZoom(16); mc.setCenter(myPos); marker = 
getResources().getDrawable(R.drawable.greenmarker); marker.setBounds(0,0, marker.getIntrinsicWidth(), marker.getIntrinsicHeight()); me = new MyLocationOverlay(this, map); myOverlays = new SitesOverlay(marker, myPos, finalPos); map.getOverlays().add(myOverlays); } @Override protected boolean isRouteDisplayed() { return false; } @Override protected void onResume() { super.onResume(); sensorMgr.registerListener(sensorListener, sensorMgr.getDefaultSensor(Sensor.TYPE_ORIENTATION), SensorManager.SENSOR_DELAY_UI); me.enableCompass(); me.enableMyLocation(); //me.onLocationChanged(location) } protected void onPause(){ super.onPause(); me.disableCompass(); me.disableMyLocation(); } @Override protected void onStop() { super.onStop(); sensorMgr.unregisterListener(sensorListener); } private SensorEventListener sensorListener = new SensorEventListener() { @Override public void onAccuracyChanged(Sensor arg0, int arg1) { // TODO Auto-generated method stub } private boolean reset = true; @Override public void onSensorChanged(SensorEvent event) { newTime = System.currentTimeMillis(); elapseTime = newTime - startTime; if (event.sensor.getType() == Sensor.TYPE_ORIENTATION && elapseTime > 400) { azimuthView.setText(Integer.toString((int) event.values[0])); startTime = newTime; } } }; }
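    Two things stand out in PointEntry (an untested reading of the code above): the EditText values are parsed after they have already been put into the bundle, and the bundle is never attached to the intent, so getExtras() in LandNav has nothing to return. A sketch of the click listener and startLandNav reworked, keeping the names from the question:

        private OnClickListener landNavButtonListener = new OnClickListener() {
            @Override
            public void onClick(View v) {
                // parse the fields first...
                longid = Double.parseDouble(longi.getText().toString());
                latd = Double.parseDouble(lati.getText().toString());

                // ...then build the bundle and hand it to the intent
                Bundle bundle = new Bundle();
                bundle.putDouble("longKey", longid);
                bundle.putDouble("latKey", latd);
                startLandNav(bundle);
            }
        };

        private void startLandNav(Bundle bundle) {
            Intent intent = new Intent(this, LandNav.class);
            intent.putExtras(bundle); // without this, getIntent().getExtras() in LandNav is null
            startActivityForResult(intent, 0);
        }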

    Read the article

  • gdata youtube api 302 'The document has moved'

    - by zalew
    I'm trying to get YouTube feeds with the python gdata library. Authentication features work ok, yt_service.ProgrammaticLogin() works, generating subauth token works, etc., but when I try to get some feeds (GetMostRecentVideoFeed, GetYouTubeVideoEntry, even GetFeed, and any other) I get: RequestError: {'status': 302, 'body': '<HTML><HEAD><meta http-equiv="content-type" content="text/html;charset=utf-8">\n<TITLE>302 Moved</TITLE></HEAD><BODY>\n<H1>302 Moved</H1>\nThe document has moved\n<A HREF="http://www.google.com">here</A>.\r\n</BODY></HTML>\r\n', 'reason': 'Redirect received, but redirects_remaining <= 0'} 302 to 'google.com'??? I've even tried to do something from the google online tutorials and I get the same error. What's going on?

    Read the article

  • Making GWT application crawlable by a search engine.

    - by Philippe Beaudoin
    I want to use the #! token to make my GWT application crawlable, as described here: http://code.google.com/web/ajaxcrawling/ There is a GWT sample app available online that uses this, for example: http://gwt.google.com/samples/Showcase/Showcase.html#!CwRadioButton Will serve the following static webpage to the googlebot: http://gwt.google.com/samples/Showcase/Showcase.html?_escaped_fragment_=CwRadioButton I want my GWT app to do something similar. In short, I'd like to serve a different flavor of the page whenever the _escaped_fragment_ parameter is found in the URL. What should I modify in order for the server to serve something else (a static page, or a page dynamically generated through a headless browser like HTML Unit)? I'm guessing it could be the web.xml file, but I'm not sure. (Note: I thought of checking the Showcase app provided with the GWT SDK, but unfortunately it doesn't seem to support serving static files on _escaped_fragment_ and it doesn't use the #! token..)
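    One approach that matches the web.xml hunch (a sketch only; the filter name, snapshot path, and rendering strategy are all placeholders): register a servlet filter on the host page that looks for the _escaped_fragment_ parameter and serves a pre-rendered snapshot — generated statically or via a headless browser such as HtmlUnit — while letting ordinary requests fall through to the GWT page.

        import java.io.IOException;
        import javax.servlet.*;
        import javax.servlet.http.HttpServletRequest;

        public class CrawlerFilter implements Filter {
            public void doFilter(ServletRequest req, ServletResponse resp, FilterChain chain)
                    throws IOException, ServletException {
                String fragment = ((HttpServletRequest) req).getParameter("_escaped_fragment_");
                if (fragment == null) {
                    chain.doFilter(req, resp); // ordinary users get the GWT host page
                } else {
                    // crawler: forward to a pre-rendered snapshot for this fragment
                    req.getRequestDispatcher("/snapshots/" + fragment + ".html").forward(req, resp);
                }
            }
            public void init(FilterConfig config) {}
            public void destroy() {}
        }

    The filter is then mapped to the host page URL in web.xml with an ordinary <filter>/<filter-mapping> pair.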

    Read the article

  • Writing an ASP.Net Web based TFS Client

    - by Glav
    So one of the things I needed to do was write an ASP.Net MVC based application for our senior execs to manage a set of arbitrary attributes against stories, bugs etc to be able to attribute whether the item was related to Research and Development, and if so, what kind. We are using TFS Azure and don’t have the option of custom templates. I have decided on using a string based field within the template that is not very visible and which we don’t use to write a small set of custom which will determine the research and development association. However, this string munging on the field is not very user friendly so we need a simple tool that can display attributes against items in a simple dropdown list or something similar. Enter a custom web app that accesses our TFS items in Azure (Note: We are also using Visual Studio 2012) Now TFS Azure uses your Live ID and it is not really possible to easily do this in a server based app where no interaction is available. Even if you capture the Live ID credentials yourself and try to submit them to TFS Azure, it wont work. Bottom line is that it is not straightforward nor obvious what you have to do. In fact, it is a real pain to find and there are some answers out there which don’t appear to be answers at all given they didn’t work in my scenario. So for anyone else who wants to do this, here is a simple breakdown on what you have to do: Go here and get the “TFS Service Credential Viewer”. Install it, run it and connect to your TFS instance in azure and create a service account. Note the username and password exactly as it presents it to you. This is the magic identity that will allow unattended, programmatic access. Without this step, don’t bother trying to do anything else. In your MVC app, reference the following assemblies from “C:\Program Files (x86)\Microsoft Visual Studio 11.0\Common7\IDE\ReferenceAssemblies\v2.0”: Microsoft.TeamFoundation.Client.dll Microsoft.TeamFoundation.Common.dll Microsoft.TeamFoundation.VersionControl.Client.dll Microsoft.TeamFoundation.VersionControl.Common.dll Microsoft.TeamFoundation.WorkItemTracking.Client.DataStoreLoader.dll Microsoft.TeamFoundation.WorkItemTracking.Client.dll Microsoft.TeamFoundation.WorkItemTracking.Common.dll If hosting this in Internet Information Server, for the application pool this app runs under, you will need to enable 32 Bit support. You also have to allow the TFS client assemblies to store a cache of files on your system. If you don’t do this, you will authenticate fine, but then get an exception saying that it is unable to access the cache at some directory path when you query work items. You can set this up by adding the following to your web.config, in the <appSettings> element as shown below: <appSettings> <!-- Add reference to TFS Client Cache --> <add key="WorkItemTrackingCacheRoot" value="C:\windows\temp" /> </appSettings> With all that in place, you can write the following code: var token = new Microsoft.TeamFoundation.Client.SimpleWebTokenCredential("{you-service-account-name", "{your-service-acct-password}"); var clientCreds = new Microsoft.TeamFoundation.Client.TfsClientCredentials(token); var currentCollection = new TfsTeamProjectCollection(new Uri(“https://{yourdomain}.visualstudio.com/defaultcollection”), clientCreds); TfsConfigurationServercurrentCollection.EnsureAuthenticated(); In the above code, not the URL contains the “defaultcollection” at the end of the URL. Obviously replace {yourdomain} with whatever is defined for your TFS in Azure instance. 
In addition, make sure the service user account and password that was generated in the first step is substituted in here. Note: If something is not right, the “EnsureAuthenticated()” call will throw an exception with the message being you are not authorised. If you forget the “defaultcollection” on the URL, it will still fail but with a message saying you are not authorised. That is, a similar but different exception message. And that is it. You can then query the collection using something like: var service = currentCollection.GetService<WorkItemStore>(); var proj = service.Projects[0]; var allQueries = proj.StoredQueries; for (int qcnt = 0; qcnt < allQueries.Count; qcnt++) {     var query = allQueries[qcnt];     var queryDesc = string.format(“Query found named: {0}”,query.Name); } You get the idea. If you search around, you will find references to the ServiceIdentityCredentialProvider which is referenced in this article. I had no luck with this method and it all looked too hard since it required an extra KB article and other magic sauce. So I hope that helps. This article certainly would have helped me save a boat load of time and frustration.

    Read the article
