Search Results

Search found 18035 results on 722 pages for 'location bar'.


  • Introducing the Earthquake Locator – A Bing Maps Silverlight Application, part 1

    - by Bobby Diaz
    Update: Live demo and source code now available!  The recent wave of earthquakes (no pun intended) being reported in the news got me wondering about the frequency and severity of earthquakes around the world. Since I’ve been doing a lot of Silverlight development lately, I decided to scratch my curiosity with a nice little Bing Maps application that will show the location and relative strength of recent seismic activity. Here is a list of technologies this application will utilize, so be sure to have everything downloaded and installed if you plan on following along. Silverlight 3 WCF RIA Services Bing Maps Silverlight Control * Managed Extensibility Framework (optional) MVVM Light Toolkit (optional) log4net (optional) * If you are new to Bing Maps or have not signed up for a Developer Account, you will need to visit www.bingmapsportal.com to request a Bing Maps key for your application. Getting Started We start out by creating a new Silverlight Application called EarthquakeLocator and specify that we want to automatically create the Web Application Project with RIA Services enabled. I cleaned up the web app by removing the Default.aspx and EarthquakeLocatorTestPage.html. Then I renamed the EarthquakeLocatorTestPage.aspx to Default.aspx and set it as my start page. I also set the development server to use a specific port, as shown below. RIA Services Next, I created a Services folder in the EarthquakeLocator.Web project and added a new Domain Service Class called EarthquakeService.cs. This is the RIA Services Domain Service that will provide earthquake data for our client application. I am not using LINQ to SQL or Entity Framework, so I will use the <empty domain service class> option. We will be pulling data from an external Atom feed, but this example could just as easily pull data from a database or another web service. This is an important distinction to point out because each scenario I just mentioned could potentially use a different Domain Service base class (i.e. LinqToSqlDomainService<TDataContext>). Now we can start adding Query methods to our EarthquakeService that pull data from the USGS web site. Here is the complete code for our service class: using System; using System.Collections.Generic; using System.IO; using System.Linq; using System.ServiceModel.Syndication; using System.Web.DomainServices; using System.Web.Ria; using System.Xml; using log4net; using EarthquakeLocator.Web.Model;   namespace EarthquakeLocator.Web.Services {     /// <summary>     /// Provides earthquake data to client applications.     /// </summary>     [EnableClientAccess()]     public class EarthquakeService : DomainService     {         private static readonly ILog log = LogManager.GetLogger(typeof(EarthquakeService));           // USGS Data Feeds: http://earthquake.usgs.gov/earthquakes/catalogs/         private const string FeedForPreviousDay =             "http://earthquake.usgs.gov/earthquakes/catalogs/1day-M2.5.xml";         private const string FeedForPreviousWeek =             "http://earthquake.usgs.gov/earthquakes/catalogs/7day-M2.5.xml";           /// <summary>         /// Gets the earthquake data for the previous week.         
/// </summary>         /// <returns>A queryable collection of <see cref="Earthquake"/> objects.</returns>         public IQueryable<Earthquake> GetEarthquakes()         {             var feed = GetFeed(FeedForPreviousWeek);             var list = new List<Earthquake>();               if ( feed != null )             {                 foreach ( var entry in feed.Items )                 {                     var quake = CreateEarthquake(entry);                     if ( quake != null )                     {                         list.Add(quake);                     }                 }             }               return list.AsQueryable();         }           /// <summary>         /// Creates an <see cref="Earthquake"/> object for each entry in the Atom feed.         /// </summary>         /// <param name="entry">The Atom entry.</param>         /// <returns></returns>         private Earthquake CreateEarthquake(SyndicationItem entry)         {             Earthquake quake = null;             string title = entry.Title.Text;             string summary = entry.Summary.Text;             string point = GetElementValue<String>(entry, "point");             string depth = GetElementValue<String>(entry, "elev");             string utcTime = null;             string localTime = null;             string depthDesc = null;             double? magnitude = null;             double? latitude = null;             double? longitude = null;             double? depthKm = null;               if ( !String.IsNullOrEmpty(title) && title.StartsWith("M") )             {                 title = title.Substring(2, title.IndexOf(',')-3).Trim();                 magnitude = TryParse(title);             }             if ( !String.IsNullOrEmpty(point) )             {                 var values = point.Split(' ');                 if ( values.Length == 2 )                 {                     latitude = TryParse(values[0]);                     longitude = TryParse(values[1]);                 }             }             if ( !String.IsNullOrEmpty(depth) )             {                 depthKm = TryParse(depth);                 if ( depthKm != null )                 {                     depthKm = Math.Round((-1 * depthKm.Value) / 100, 2);                 }             }             if ( !String.IsNullOrEmpty(summary) )             {                 summary = summary.Replace("</p>", "");                 var values = summary.Split(                     new string[] { "<p>" },                     StringSplitOptions.RemoveEmptyEntries);                   if ( values.Length == 3 )                 {                     var times = values[1].Split(                         new string[] { "<br>" },                         StringSplitOptions.RemoveEmptyEntries);                       if ( times.Length > 0 )                     {                         utcTime = times[0];                     }                     if ( times.Length > 1 )                     {                         localTime = times[1];                     }                       depthDesc = values[2];                     depthDesc = "Depth: " + depthDesc.Substring(depthDesc.IndexOf(":") + 2);                 }             }               if ( latitude != null && longitude != null )             {                 quake = new Earthquake()                 {                     Id = entry.Id,                     Title = entry.Title.Text,                     Summary = entry.Summary.Text,                     Date = entry.LastUpdatedTime.DateTime,                     Url = 
entry.Links.Select(l => Path.Combine(l.BaseUri.OriginalString,                         l.Uri.OriginalString)).FirstOrDefault(),                     Age = entry.Categories.Where(c => c.Label == "Age")                         .Select(c => c.Name).FirstOrDefault(),                     Magnitude = magnitude.GetValueOrDefault(),                     Latitude = latitude.GetValueOrDefault(),                     Longitude = longitude.GetValueOrDefault(),                     DepthInKm = depthKm.GetValueOrDefault(),                     DepthDesc = depthDesc,                     UtcTime = utcTime,                     LocalTime = localTime                 };             }               return quake;         }           private T GetElementValue<T>(SyndicationItem entry, String name)         {             var el = entry.ElementExtensions.Where(e => e.OuterName == name).FirstOrDefault();             T value = default(T);               if ( el != null )             {                 value = el.GetObject<T>();             }               return value;         }           private double? TryParse(String value)         {             double d;             if ( Double.TryParse(value, out d) )             {                 return d;             }             return null;         }           /// <summary>         /// Gets the feed at the specified URL.         /// </summary>         /// <param name="url">The URL.</param>         /// <returns>A <see cref="SyndicationFeed"/> object.</returns>         public static SyndicationFeed GetFeed(String url)         {             SyndicationFeed feed = null;               try             {                 log.Debug("Loading RSS feed: " + url);                   using ( var reader = XmlReader.Create(url) )                 {                     feed = SyndicationFeed.Load(reader);                 }             }             catch ( Exception ex )             {                 log.Error("Error occurred while loading RSS feed: " + url, ex);             }               return feed;         }     } }   The only method that will be generated in the client side proxy class, EarthquakeContext, will be the GetEarthquakes() method. The reason being that it is the only public instance method and it returns an IQueryable<Earthquake> collection that can be consumed by the client application. GetEarthquakes() calls the static GetFeed(String) method, which utilizes the built in SyndicationFeed API to load the external data feed. You will need to add a reference to the System.ServiceModel.Web library in order to take advantage of the RSS/Atom reader. The API will also allow you to create your own feeds to serve up in your applications. Model I have also created a Model folder and added a new class, Earthquake.cs. The Earthquake object will hold the various properties returned from the Atom feed. Here is a sample of the code for that class. Notice the [Key] attribute on the Id property, which is required by RIA Services to uniquely identify the entity. using System; using System.Collections.Generic; using System.Linq; using System.Runtime.Serialization; using System.ComponentModel.DataAnnotations;   namespace EarthquakeLocator.Web.Model {     /// <summary>     /// Represents an earthquake occurrence and related information.     /// </summary>     [DataContract]     public class Earthquake     {         /// <summary>         /// Gets or sets the id.         
/// </summary>         /// <value>The id.</value>         [Key]         [DataMember]         public string Id { get; set; }           /// <summary>         /// Gets or sets the title.         /// </summary>         /// <value>The title.</value>         [DataMember]         public string Title { get; set; }           /// <summary>         /// Gets or sets the summary.         /// </summary>         /// <value>The summary.</value>         [DataMember]         public string Summary { get; set; }           // additional properties omitted     } }   View Model The recent trend to use the MVVM pattern for WPF and Silverlight provides a great way to separate the data and behavior logic out of the user interface layer of your client applications. I have chosen to use the MVVM Light Toolkit for the Earthquake Locator, but there are other options out there if you prefer another library. That said, I went ahead and created a ViewModel folder in the Silverlight project and added a EarthquakeViewModel class that derives from ViewModelBase. Here is the code: using System; using System.Collections.ObjectModel; using System.ComponentModel.Composition; using System.ComponentModel.Composition.Hosting; using Microsoft.Maps.MapControl; using GalaSoft.MvvmLight; using EarthquakeLocator.Web.Model; using EarthquakeLocator.Web.Services;   namespace EarthquakeLocator.ViewModel {     /// <summary>     /// Provides data for views displaying earthquake information.     /// </summary>     public class EarthquakeViewModel : ViewModelBase     {         [Import]         public EarthquakeContext Context;           /// <summary>         /// Initializes a new instance of the <see cref="EarthquakeViewModel"/> class.         /// </summary>         public EarthquakeViewModel()         {             var catalog = new AssemblyCatalog(GetType().Assembly);             var container = new CompositionContainer(catalog);             container.ComposeParts(this);             Initialize();         }           /// <summary>         /// Initializes a new instance of the <see cref="EarthquakeViewModel"/> class.         /// </summary>         /// <param name="context">The context.</param>         public EarthquakeViewModel(EarthquakeContext context)         {             Context = context;             Initialize();         }           private void Initialize()         {             MapCenter = new Location(20, -170);             ZoomLevel = 2;         }           #region Private Methods           private void OnAutoLoadDataChanged()         {             LoadEarthquakes();         }           private void LoadEarthquakes()         {             var query = Context.GetEarthquakesQuery();             Context.Earthquakes.Clear();               Context.Load(query, (op) =>             {                 if ( !op.HasError )                 {                     foreach ( var item in op.Entities )                     {                         Earthquakes.Add(item);                     }                 }             }, null);         }           #endregion Private Methods           #region Properties           private bool autoLoadData;         /// <summary>         /// Gets or sets a value indicating whether to auto load data.         
/// </summary>         /// <value><c>true</c> if auto loading data; otherwise, <c>false</c>.</value>         public bool AutoLoadData         {             get { return autoLoadData; }             set             {                 if ( autoLoadData != value )                 {                     autoLoadData = value;                     RaisePropertyChanged("AutoLoadData");                     OnAutoLoadDataChanged();                 }             }         }           private ObservableCollection<Earthquake> earthquakes;         /// <summary>         /// Gets the collection of earthquakes to display.         /// </summary>         /// <value>The collection of earthquakes.</value>         public ObservableCollection<Earthquake> Earthquakes         {             get             {                 if ( earthquakes == null )                 {                     earthquakes = new ObservableCollection<Earthquake>();                 }                   return earthquakes;             }         }           private Location mapCenter;         /// <summary>         /// Gets or sets the map center.         /// </summary>         /// <value>The map center.</value>         public Location MapCenter         {             get { return mapCenter; }             set             {                 if ( mapCenter != value )                 {                     mapCenter = value;                     RaisePropertyChanged("MapCenter");                 }             }         }           private double zoomLevel;         /// <summary>         /// Gets or sets the zoom level.         /// </summary>         /// <value>The zoom level.</value>         public double ZoomLevel         {             get { return zoomLevel; }             set             {                 if ( zoomLevel != value )                 {                     zoomLevel = value;                     RaisePropertyChanged("ZoomLevel");                 }             }         }           #endregion Properties     } }   The EarthquakeViewModel class contains all of the properties that will be bound to by the various controls in our views. Be sure to read through the LoadEarthquakes() method, which handles calling the GetEarthquakes() method in our EarthquakeService via the EarthquakeContext proxy, and also transfers the loaded entities into the view model’s Earthquakes collection. Another thing to notice is what’s going on in the default constructor. I chose to use the Managed Extensibility Framework (MEF) for my composition needs, but you can use any dependency injection library or none at all. To allow the EarthquakeContext class to be discoverable by MEF, I added the following partial class so that I could supply the appropriate [Export] attribute: using System; using System.ComponentModel.Composition;   namespace EarthquakeLocator.Web.Services {     /// <summary>     /// The client side proxy for the EarthquakeService class.     /// </summary>     [Export]     public partial class EarthquakeContext     {     } }   One last piece I wanted to point out before moving on to the user interface, I added a client side partial class for the Earthquake entity that contains helper properties that we will bind to later: using System;   namespace EarthquakeLocator.Web.Model {     /// <summary>     /// Represents an earthquake occurrence and related information.     /// </summary>     public partial class Earthquake     {         /// <summary>         /// Gets the location based on the current Latitude/Longitude.         
/// </summary>         /// <value>The location.</value>         public string Location         {             get { return String.Format("{0},{1}", Latitude, Longitude); }         }           /// <summary>         /// Gets the size based on the Magnitude.         /// </summary>         /// <value>The size.</value>         public double Size         {             get { return (Magnitude * 3); }         }     } }   View Now the fun part! Usually, I would create a Views folder to place all of my View controls in, but I took the easy way out and added the following XAML code to the default MainPage.xaml file. Be sure to add the bing prefix associating the Microsoft.Maps.MapControl namespace after adding the assembly reference to your project. The MVVM Light Toolkit project templates come with a ViewModelLocator class that you can use via a static resource, but I am instantiating the EarthquakeViewModel directly in my user control. I am setting the AutoLoadData property to true as a way to trigger the LoadEarthquakes() method call. The MapItemsControl found within the <bing:Map> control binds its ItemsSource property to the Earthquakes collection of the view model, and since it is an ObservableCollection<T>, we get the automatic two way data binding via the INotifyCollectionChanged interface. <UserControl x:Class="EarthquakeLocator.MainPage"     xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"     xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"     xmlns:d="http://schemas.microsoft.com/expression/blend/2008"     xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"     xmlns:bing="clr-namespace:Microsoft.Maps.MapControl;assembly=Microsoft.Maps.MapControl"     xmlns:vm="clr-namespace:EarthquakeLocator.ViewModel"     mc:Ignorable="d" d:DesignWidth="640" d:DesignHeight="480" >     <UserControl.Resources>         <DataTemplate x:Key="EarthquakeTemplate">             <Ellipse Fill="Red" Stroke="Black" StrokeThickness="1"                      Width="{Binding Size}" Height="{Binding Size}"                      bing:MapLayer.Position="{Binding Location}"                      bing:MapLayer.PositionOrigin="Center">                 <ToolTipService.ToolTip>                     <StackPanel>                         <TextBlock Text="{Binding Title}" FontSize="14" FontWeight="Bold" />                         <TextBlock Text="{Binding UtcTime}" />                         <TextBlock Text="{Binding LocalTime}" />                         <TextBlock Text="{Binding DepthDesc}" />                     </StackPanel>                 </ToolTipService.ToolTip>             </Ellipse>         </DataTemplate>     </UserControl.Resources>       <UserControl.DataContext>         <vm:EarthquakeViewModel AutoLoadData="True" />     </UserControl.DataContext>       <Grid x:Name="LayoutRoot">           <bing:Map x:Name="map" CredentialsProvider="--Your-Bing-Maps-Key--"                   Center="{Binding MapCenter, Mode=TwoWay}"                   ZoomLevel="{Binding ZoomLevel, Mode=TwoWay}">             <bing:MapItemsControl ItemsSource="{Binding Earthquakes}"                                   ItemTemplate="{StaticResource EarthquakeTemplate}" />         </bing:Map>       </Grid> </UserControl>   The EarthquakeTemplate defines the Ellipse that will represent each earthquake, the Width and Height that are determined by the Magnitude, the Position on the map, and also the tooltip that will appear when we mouse over each data point. 
    Running the application will give us the following result (shown with a tooltip example). That concludes this portion of our show, but I plan on implementing additional functionality in later blog posts. Be sure to come back soon to see the next installments in this series. Enjoy!
    Additional Resources:
    USGS Earthquake Data Feeds
    Brad Abrams shows how RIA Services and MVVM can work together
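    A side note for anyone following along: before wiring the feed into the Domain Service, it can help to confirm that the USGS Atom feed parses the way you expect. The console sketch below is not part of the article's project; it simply loads the same 7-day feed with SyndicationFeed and prints the title and point element of the first few entries. The georss namespace URI passed to ReadElementExtensions is an assumption about the feed's markup, so adjust it if the feed declares something different.

        // Minimal smoke test for the USGS feed, separate from the Silverlight solution.
        using System;
        using System.Linq;
        using System.ServiceModel.Syndication;
        using System.Xml;

        class FeedSmokeTest
        {
            static void Main()
            {
                const string url = "http://earthquake.usgs.gov/earthquakes/catalogs/7day-M2.5.xml";

                using (var reader = XmlReader.Create(url))
                {
                    SyndicationFeed feed = SyndicationFeed.Load(reader);

                    foreach (SyndicationItem item in feed.Items.Take(5))
                    {
                        // Assumed namespace for the georss <point> "lat lon" element.
                        string point = item.ElementExtensions
                            .ReadElementExtensions<string>("point", "http://www.georss.org/georss")
                            .FirstOrDefault();

                        Console.WriteLine("{0} -> {1}", item.Title.Text, point ?? "(no point)");
                    }
                }
            }
        }

    If the points print as "latitude longitude" pairs, the parsing in CreateEarthquake (splitting on the space and calling TryParse) should behave as described above.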

    Read the article

  • BI Publisher at Collaborate 2010

    - by mike.donohue
    Noelle and I are heading to Collaborate 2010 next week. There are over two dozen sessions on BI Publisher, including a Hands On Lab (see below). Very excited to see what our customers and partners will be presenting and how they are using BI Publisher to get better reports and reduce costs. My only regret is that many sessions are scheduled at the same time, so I won't get to see all of them.
    Noelle and I will be presenting the following:
    Monday, April 19
    2:30 pm - 3:30 pm Introduction to Oracle Business Intelligence Publisher | Session: 227 | Location: Reef F | By: Mike
    2:30 pm - 3:30 pm The Reporting Platform for Applications: Oracle Business Intelligence Publisher | Session: 73170 | Location: South Seas Ballroom J | By: Noelle
    3:45 pm - 4:45 pm Oracle Business Intelligence Publisher Hands On Lab (1) | Session: 217 | Location: Palm D | By: Noelle and Mike
    Tuesday, April 20
    8:00 am - 9:00 am Oracle Business Intelligence Publisher Best Practices | Session: 218 | Location: Palm D | By: Noelle and Mike
    We will also be at the BI Technology demo pod in the exhibit hall, so please stop by and say hello.
    All BI Publisher related sessions:
    Sunday, April 18
    2:00 pm - 2:50 pm Customizing your Invoices in a Flash!
    3:00 pm - 3:50 pm BI Publisher SIG Meeting - Part 1
    4:00 pm - 4:50 pm BI Publisher SIG Meeting - Part 2
    Monday, April 19
    8:00 am - 9:00 am XML Publisher and FSG for Beginners
    2:30 pm - 3:30 pm Introduction to Oracle Business Intelligence Publisher
    2:30 pm - 3:30 pm The Reporting Platform for Applications: Oracle Business Intelligence Publisher
    2:30 pm - 3:30 pm What it Takes to Make Your Business Intelligence Implementation a Success (Bay Ballroom A)
    2:30 pm - 3:30 pm XML Publisher - More Than Just Form Letters
    3:45 pm - 4:45 pm JD Edwards EnterpriseOne Reporting and Batch Discussions presented by Technology SIG
    3:45 pm - 4:45 pm Hands On Lab: Oracle Business Intelligence Publisher (1)
    Tuesday, April 20
    8:00 am - 9:00 am Oracle Business Intelligence Publisher Best Practices
    8:00 am - 9:00 am Creating XML Publisher Documents with PeopleCode
    10:30 am - 11:30 am Moving to BI Publisher, Now What? Automated Fax and Email from Oracle EBS
    2:00 pm - 3:00 pm Smart Reporting in Oracle Financials Release 12.1
    2:00 pm - 3:00 pm Custom Check Printing Framework using XML Publisher
    2:00 pm - 3:00 pm BI Publisher and Oracle BI for JD Edwards
    Wednesday, April 21
    8:00 am - 9:00 am XML Publisher Tips for PeopleTools
    10:30 am - 11:30 am JD Edwards World - Technical Upgrade Considerations
    10:30 am - 11:30 am Data Visualization Best Practices: Know how to design and improve your BI & EPM reports, dashboards, and queries
    10:30 am - 11:30 am Oracle BIEE End-to-End
    1:00 pm - 2:00 pm Empower JD Edwards Users with Oracle BI Publisher for Ad Hoc Reporting
    1:00 pm - 2:00 pm BIP and JD Edwards World - Good Stuff!
    2:15 pm - 3:15 pm Proven Strategies for Increasing ROI with PeopleSoft HCM
    4:00 pm - 5:00 pm Using Oracle BI Delivers to Send Reports to JD Edwards Users
    Thursday, April 22
    9:45 am - 10:45 am PeopleSoft Recruiting Enhancements You Can Use
    9:45 am - 10:45 am Reducing Cost with Oracle's BI Publisher
    Note (1): the Hands On Lab was not showing in the joint scheduler as of this posting, but it is definitely ON.

    Read the article

  • Automapper: Handling NULL members

    - by PSteele
    A question about null members came up on the Automapper mailing list.  While the problem wasn’t with Automapper, investigating the issue led to an interesting feature in Automapper. Normally, Automapper ignores null members.  After all, what is there really to do?  Imagine these source classes: public class Source { public int Data { get; set; } public Address Address { get; set; } }   public class Destination { public string Data { get; set; } public Address Address { get; set; } }   public class Address { public string AddressType { get; set; } public string Location { get; set; } } And imagine a simple mapping example with these classes: Mapper.CreateMap<Source, Destination>();   var source = new Source { Data = 22, Address = new Address { AddressType = "Home", Location = "Michigan", }, };   var dest = Mapper.Map<Source, Destination>(source); The variable ‘dest’ would have a complete mapping of the Data member and the Address member. But what if the source had no address? Mapper.CreateMap<Source, Destination>();   var source = new Source { Data = 22, };   var dest = Mapper.Map<Source, Destination>(source); In that case, Automapper would just leave the Destination.Address member null as well.  But what if we always wanted an Address defined – even if it’s just got some default data?  Use the “NullSubstitute” option: Mapper.CreateMap<Source, Destination>() .ForMember(d => d.Address, o => o.NullSubstitute(new Address { AddressType = "Unknown", Location = "Unknown", }));   var source = new Source { Data = 22, };   var dest = Mapper.Map<Source, Destination>(source); Now, the ‘dest’ variable will have an Address defined with a type and location of “Unknown”.  Very handy! Technorati Tags: .NET,Automapper,NULL
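    As a footnote to the post above: a roughly equivalent mapping can be written without NullSubstitute by supplying the fallback inside a MapFrom expression. This is only a sketch, assuming the same classic static Mapper API used in the post; NullSubstitute remains the more direct option since it only kicks in when the source member is actually null.

        // Hedged alternative sketch: null-coalescing fallback instead of NullSubstitute.
        using AutoMapper;

        public static class AddressMappingConfig
        {
            public static void Configure()
            {
                Mapper.CreateMap<Source, Destination>()
                    .ForMember(d => d.Address, o => o.MapFrom(s =>
                        s.Address ?? new Address
                        {
                            AddressType = "Unknown",
                            Location = "Unknown",
                        }));
            }
        }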

    Read the article

  • EXALYTICS - Unable to run Summary Advisor when BI Admin Client Tool is installed separately

    - by Ahmed Awan
    Unable to launch Summary Advisor when BI Admin Developer Client tool (version 11.1.1.6.0) is separately installed. In Windows Event application log, the error is pointing to missing AggrAdvisor.xml file. The file AggrAdvisor.xml is missing in BI client install location. Workaround: Download file AggrAdvisor.xml and copy to following location will resolve this issue: <your drive>:\Program Files\Oracle Business Intelligence Enterprise Edition Plus Client\oraclebi\orahome\bifoundation\server\locale\l_en\

    Read the article

  • Multi-Role Domain Controllers for Small Offices (< 50 clients)

    - by kce
    Warning: I'm a Linux/*NIX admin so this is all new to me. I understand that it's not considered a good idea to have only a single domain controller, and that it is also probably a good idea for a domain controller to only do AD/DHCP/DNS (Here).
    We have two offices, location A with 30 users and location B with 10 users. Our two offices are separated by a WAN that is not particularly robust, so I have been instructed that we need to have standalone services in each office. This means that according to "best practices" we will need to build a domain controller and a separate file server in each office. Again, I am not knowledgeable in the ways of Windows, but this seems a little unnecessary for an organization of 40 users. People have commented that I could "get away with" running file services on the domain controller as long as the "load is light". That just seems to generate more questions than it answers. What constitutes a light load? What are the potential consequences of mixing these roles?
    Ideally I would prefer to only have one physical machine at each location. The one in location A (the location with IT staff) can act as the primary domain controller and the one in the smaller office can act as the backup domain controller. If either domain controller fails we can still use the other one for authentication (albeit with some latency), and if the WAN connection fails each office still has access to its respective "local" domain controller. If the file services are ALSO run on each server (and synchronized with something like DFS), a similar arrangement in terms of redundancy can be had without having to purchase, build and install two additional separate servers.
    It's not that I'm averse to that (well, any more averse than I am to the whole thing to begin with), but to my simple mind it just seems, well, a bit overkill. I can definitely see the benefits of functional separation when we're talking about larger organizations, but I need to consider the additional overhead too. None of this excludes having a DRP setup for the domain controller(s). I assume you can lose two domain controllers just as easily as one.

    Read the article

  • Enable Thumbnail Previews for Firefox in Windows 7 Taskbar

    - by Asian Angel
    Are you tired of waiting for the official activation of Taskbar Thumbnail Previews in Firefox? See how easy it is to enable them now with a simple about:config hack. Note: We have briefly covered this before but present it here in a more detailed format. Before For our example we opened all of the websites in the HTG Network in tabs… When hovering over the Firefox Icon in the Taskbar, you only see the one thumbnail. There are two things in particular to notice here: 1.) The Tab Bar for Firefox is displayed with all four tabs visible in the Thumbnail Preview 2.) The “Taskbar Icon” itself is displaying as singular with no “fanned edge” on the right side. Hack the About:Config Settings To get the Thumbnail Previews working you will need to make a modification in the about:config settings. Type about:config in the Address Bar and press Enter. Unless you have previously disabled the warning you will see this message after pressing Enter. Click on the I promise! Button to finish entering the settings. In the Filter Address Bar either type or copy and paste the following about:config entry: browser.taskbar.previews.enable After you enter that in, you should see the entry listing as shown here. At this point there are two methods that you can choose to alter the entry. The first method is to right click on the entry and select Toggle and the second method is to double click on the entry. Both work equally well…choose the method that you like best. Once the about:config entry has been changed, you will need to restart Firefox for it to take effect. After restarting Firefox on our system the Thumbnail Previews were definitely looking very nice. Notice that the Tab Bar is no longer displayed in the Thumbnail Previews. The Taskbar Icon also had a “fanned edge” indicating that multiple tabs were open. Conclusion If you are tired of waiting for Mozilla to officially activate Taskbar Thumbnail Previews in Firefox, then you can go ahead and start enjoying them now. For more great Firefox 3.6.x about:config hacks read our article here.

    Read the article

  • Using commands with ApplicationBarMenuItem and ApplicationBarButton in Windows Phone 7

    - by Laurent Bugnion
    Unfortunately, in the current version of the Windows Phone 7 Silverlight framework, it is not possible to attach any command on the ApplicationBarMenuItem and ApplicationBarButton controls. These two controls appear in the Application Bar, for example with the following markup: <phoneNavigation:PhoneApplicationPage.ApplicationBar> <shell:ApplicationBar x:Name="MainPageApplicationBar"> <shell:ApplicationBar.MenuItems> <shell:ApplicationBarMenuItem Text="Add City" /> <shell:ApplicationBarMenuItem Text="Add Country" /> </shell:ApplicationBar.MenuItems> <shell:ApplicationBar.Buttons> <shell:ApplicationBarIconButton IconUri="/Resources/appbar.feature.video.rest.png" /> <shell:ApplicationBarIconButton IconUri="/Resources/appbar.feature.settings.rest.png" /> <shell:ApplicationBarIconButton IconUri="/Resources/appbar.refresh.rest.png" /> </shell:ApplicationBar.Buttons> </shell:ApplicationBar> </phoneNavigation:PhoneApplicationPage.ApplicationBar> This code will create the following UI: Application bar, collapsed Application bar, expanded ApplicationBarItems are not, however, controls. A quick look in MSDN shows the following hierarchy for ApplicationBarMenuItem, for example: Unfortunately, this prevents all the mechanisms that are normally used to attach a Command (for example a RelayCommand) to a control. For example, the attached behavior present in the class ButtonBaseExtension (from the Silverlight 3 version of the MVVM Light toolkit) can only be attached to a DependencyObject. Similarly, Blend behaviors (such as EventToCommand from the toolkit’s Extras library) needs a FrameworkElement to work. Using code behind The alternative is to use code behind. As I said in my MIX10 talk, the MVVM police will not take your family away if you use code behind (this quote was actually suggested to me by Glenn Block); the code behind is there for a reason. In our case, invoking a command in the ViewModel requires the following code: In MainPage.xaml: <shell:ApplicationBarMenuItem Text="My Menu 1" Click="ApplicationBarMenuItemClick"/> In MainPage.xaml.cs private void ApplicationBarMenuItemClick( object sender, System.EventArgs e) { var vm = DataContext as MainViewModel; if (vm != null) { vm.MyCommand.Execute(null); } } Conclusion Resorting to code behind to bridge the gap between the View and the ViewModel is less elegant than using attached behaviors, either through an attached property or through a Blend behavior. It does, however, work fine. I don’t have any information if future changes in the Windows Phone 7 Application Bar API will make this easier. In the mean time, I would recommend using code behind instead.   Laurent Bugnion (GalaSoft) Subscribe | Twitter | Facebook | Flickr | LinkedIn
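    As an aside to the code-behind approach above: if several Application Bar items all need to forward to commands, the per-item handlers can stay one-liners by funnelling them through a small helper. This is only a sketch; MainViewModel, AddCityCommand and AddCountryCommand are hypothetical names standing in for whatever your ViewModel exposes, and each handler still has to be referenced from a Click attribute in XAML as shown above.

        // Hedged sketch: code-behind glue that forwards ApplicationBar clicks to ICommands.
        using System;
        using System.Windows.Input;

        public partial class MainPage
        {
            // MainViewModel and the command names below are hypothetical.
            private void ExecuteOnViewModel(Func<MainViewModel, ICommand> selectCommand)
            {
                var vm = DataContext as MainViewModel;
                if (vm == null)
                {
                    return;
                }

                ICommand command = selectCommand(vm);
                if (command != null && command.CanExecute(null))
                {
                    command.Execute(null);
                }
            }

            private void AddCityMenuItemClick(object sender, EventArgs e)
            {
                ExecuteOnViewModel(vm => vm.AddCityCommand);
            }

            private void AddCountryMenuItemClick(object sender, EventArgs e)
            {
                ExecuteOnViewModel(vm => vm.AddCountryCommand);
            }
        }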

    Read the article

  • Blank Page: wordpress on nginx+php-fpm

    - by troutwine
    Good day. While this post discusses a similar setup to mine serving blank pages occasionally after having made a successful installation, I am unable to serve anything but blank pages. My setup: Wordpress 3.0.4 nginx 0.8.54 php-fpm 5.3.5 (fpm-fcgi) Arch Linux Configuration Files php-fpm.conf: [global] pid = run/php-fpm/php-fpm.pid error_log = log/php-fpm.log log_level = notice [www] listen = 127.0.0.1:9000 listen.owner = www listen.group = www listen.mode = 0660 user = www group = www pm = dynamic pm.max_children = 50 pm.start_servers = 20 pm.min_spare_servers = 5 pm.max_spare_servers = 35 pm.max_requests = 500 nginx.conf: user www; worker_processes 1; error_log /var/log/nginx/error.log notice; pid /var/run/nginx.pid; events { worker_connections 1024; } http { include mime.types; default_type application/octet-stream; sendfile on; keepalive_timeout 65; gzip on; include /etc/nginx/sites-enabled/*.conf; } /etc/nginx/sites-enabled/blog_sharonrhodes_us.conf: upstream php { server 127.0.0.1:9000; } server { error_log /var/log/nginx/us/sharonrhodes/blog/error.log notice; access_log /var/log/nginx/us/sharonrhodes/blog/access.log; server_name blog.sharonrhodes.us; root /srv/apps/us/sharonrhodes/blog; index index.php; location = /favicon.ico { log_not_found off; access_log off; } location = /robots.txt { allow all; log_not_found off; access_log off; } location / { # This is cool because no php is touched for static content try_files $uri $uri/ /index.php?q=$uri&$args; } location ~ \.php$ { fastcgi_split_path_info ^(.+\.php)(/.+)$; #NOTE: You should have "cgi.fix_pathinfo = 0;" in php.ini include fastcgi_params; fastcgi_intercept_errors on; fastcgi_pass php; } location ~* \.(js|css|png|jpg|jpeg|gif|ico)$ { expires max; log_not_found off; } }

    Read the article

  • SSIS Reporting Pack update

    - by jamiet
    It's been a while since I last posted anything in regard to SSIS Reporting Pack, the most recent release being on 27th May 2012, so here is a short update.
    There is still lots of work to do on SSIS Reporting Pack; lots more features to add, lots of performance work to be done, and a few bug fixes too. I have also been (fairly) hard at work on a framework to be used in conjunction with SSIS 2012 that I refer to as the Restart Framework (currently residing at http://ssisrestartframework.codeplex.com/). There is still much work to be done on the Restart Framework (not least some useful documentation on how to use it), which is why I haven’t mentioned it publicly before now, although I am actively checking in changes. One thing I am considering is amalgamating the two projects into one; this would mean I could build a suite of reports that work both against the SSIS Catalog (what you currently know as “SSIS Reporting Pack”) and also against this Restart Framework thing. No decision has been made as yet though.
    There have been a number of bug reports and feature suggestions for SSIS Reporting Pack added to the Issue Tracker. Thank you to everyone that has submitted something; rest assured I am not going to ignore them forever. My time is at a premium right now unfortunately due to … well … life… so working on these items isn’t near the top of my priority list.
    Lastly, I am actively using SSIS Reporting Pack in a production environment right now and I’m happy to report that it is proving to be very useful. One of the reports that I have put a lot of time into is execution executable duration.rdl and it’s proving very adept at easily identifying bottlenecks in our SSIS 2012 executions:
    The report allows you to browse through the hierarchy of executables in each execution and each bar represents the duration of each executable in relation to all the other executables; longer bars being a good indication of where problems might lie. The colour of the bar indicates whether it was successful or not (green=success). Hovering over a bar brings up a tooltip showing more information about that executable. Clicking on a bar allows you to compare this particular instance of the executable against other executions.
    Please do let me know if you are using SSIS Reporting Pack. I would like to hear any anecdotes you might have, good or bad.
    @Jamiet

    Read the article

  • weather-indicator: networking error: HTTP Error 403: Forbidden?

    - by quanta
    Here's the ~/.cache/indicator-weather.log:
    [Fetcher] 2012-11-24 11:45:52,619 - DEBUG - Indicator: getWeather for location 'Hanoi, Ha N?i, Vietnam'
    [Fetcher] 2012-11-24 11:45:52,620 - DEBUG - Indicator: getWeather: updating weather report
    [Fetcher] 2012-11-24 11:45:52,620 - DEBUG - Location: default weather source 'Google' chosen for 'Hanoi'
    [Fetcher] 2012-11-24 11:45:53,019 - ERROR - Indicator: networking error: HTTP Error 403: Forbidden
    [Fetcher] 2012-11-24 11:45:53,020 - DEBUG - Indicator: updateWeather: waiting for 'Cacher' thread to terminate
    [Fetcher] 2012-11-24 11:45:53,020 - ERROR - Indicator: updateWeather: could not get weather, leaving cached data

    Read the article

  • How can I place a ProgressBar in Android using Cocos 2d?

    - by Laxmipriya
    I want to place a horizontal progress bar in my Android application and I want to change its progress color. I used the following code, but the progress bar is not being displayed.
    CCProgressTimer progressBar = CCProgressTimer.progress("progressbar.png");
    progressBar.setType(kCCProgressTimerTypeHorizontalBarLR);
    progressBar.setScale(5);
    progressBar.setAnchorPoint(CGPoint.ccp(0, 0));
    progressBar.setPosition(CGPoint.ccp(0,0));
    addChild(progressBar);

    Read the article

  • How to Create a Task From an Email Message in Outlook 2013

    - by Lori Kaufman
    If you need to do something related to an email message you received, you can easily create a task from the message in Outlook. A task can be created that contains all the content of the message without requiring you to re-enter the information. Creating a task in Outlook from an email message is different from flagging the message. As it says on Microsoft’s site: “When you flag an email message, the message appears in the To-Do List in Tasks and on the Tasks peek. However, if you delete the message, it also disappears from the To-Do List in Tasks and on the Tasks peek. Flagging a message doesn’t create a separate task.” Using the method described below to create a task from an email message, the task is separate from the message. The original message can be deleted or changed and the related task will not be affected. In Outlook, make sure the Mail section is active. If not, click Mail on the Navigation Bar at the bottom of the Outlook window. Then, click on the message you want to add to a task and drag it to Tasks on the Navigation Bar. A new Task window displays containing the email message and allowing you to enter the subject of the task, the Start and Due dates, Status, Priority, among other settings. When you have specified the settings for the task, click Save & Close in the Actions section of the Task tab. When the Task window closes, the Mail section is still active. If you move your mouse over Tasks on the Navigation Bar, a snippet from the new task displays in a popup window (the Task peek). Click Tasks to go to the Tasks section of Outlook. The To-Do List displays with your newly-added task listed in the middle pane. The right pane displays the details of the task and the contents of the message included in the task (as pictured at the beginning of this article). Click on Tasks to see a complete listing of all your tasks, including the one you just added from your email message. Note that attachments in an email message added to a new task are not copied to the task. You can also create new tasks by dragging contacts, calendar items, and notes to Tasks on the Navigation Bar.     
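    The steps above are all drag-and-drop; for anyone who wants to automate the same idea, here is a hedged sketch using the Outlook COM interop API (a reference to Microsoft.Office.Interop.Outlook is assumed). It copies the subject and body of the currently selected message into a new task; as with the manual method, attachments are not carried over.

        // Hedged sketch: create a task from the selected message via Outlook interop.
        using System;
        using Outlook = Microsoft.Office.Interop.Outlook;

        static class TaskFromMail
        {
            static void Main()
            {
                var outlook = new Outlook.Application();
                var selection = outlook.ActiveExplorer().Selection;

                if (selection.Count == 0)
                {
                    Console.WriteLine("Select an email message first.");
                    return;
                }

                // Outlook collections are 1-based.
                var mail = selection[1] as Outlook.MailItem;
                if (mail == null)
                {
                    Console.WriteLine("The selected item is not an email message.");
                    return;
                }

                var task = (Outlook.TaskItem)outlook.CreateItem(Outlook.OlItemType.olTaskItem);
                task.Subject = mail.Subject;
                task.Body = mail.Body;           // message text only; attachments are not copied
                task.StartDate = DateTime.Today;
                task.Save();
            }
        }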

    Read the article

  • Izenda Reports 6.3 Top 10 Features

    - by gt0084e1
    Izenda 6.3 Top 10 New Features and Capabilities
    1. Izenda Maps Add-On. The Izenda Maps add-on allows rapid visualization of geographic or geo-spatial data. It is fully integrated with the rest of the Izenda report package and adds a Maps tab which allows users to add interactive maps to their reports. Contact your representative or [email protected] for limited time discounts. Izenda Maps even has rich drill-down capabilities that allow you to dive deeper with a simple hover (also requires dashboards).
    2. Streamlined Pie Charts with "Other" Slices. The advanced properties of the Pie Chart now allow you to combine the smaller slices into a single "Other" slice. This reduces the visual complexity without throwing off the scale of the chart. Compare the difference below.
    3. Combined Bar + Line Charts. The Bar chart now allows dual visualization of multiple metrics simultaneously by adding a line for secondary data. Enabled via AdHocSettings.AllowLineOnBar = true;
    4. Stacked Bar Charts. The stacked bar chart lets you see a breakdown of a measure based on categorical data. It is enabled with the following code: AdHocSettings.AllowStackedBarChart = true;
    5. Self-Joining Data Sources. The self-join feature allows parent-child relationships to be accessed from the Data Sources tab. The same table can be used as a secondary child table within the Report Designer.
    6. Report Design From Dashboard View. Dashboards now sport both view and design icons to allow quick access to both.
    7. Field Arithmetic on Dates. Differences between dates can now be used as measures with the arithmetic feature.
    8. Simplified Multi-Tenancy. Integrating with multi-tenant systems is now easier than ever. The following APIs have been added to facilitate common scenarios: AdHocSettings.CurrentUserTenantId = value; AdHocSettings.SchedulerTenantID = value; AdHocSettings.SchedulerTenantField = "AccountID";
    9. Support For SQL 2008 R2 and SQL Azure. Izenda now supports the latest version of Microsoft's database as well as the SQL Azure service.
    10. Enhanced Performance and Compatibility for Stored Procedures. Izenda now supports more stored procedures than ever and runs them faster too.
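    For reference, the two chart switches from items 3 and 4 can live together in whatever initialization hook your application already uses for Izenda. The class and method names below are hypothetical and the Izenda.AdHoc namespace is an assumption; only the two AdHocSettings assignments come from the release notes above.

        // Sketch only: group the new chart flags in one initialization helper.
        using Izenda.AdHoc;   // namespace assumed

        public static class ReportingConfig
        {
            // Hypothetical hook; call it wherever Izenda is initialized (e.g. from Global.asax).
            public static void EnableNewChartOptions()
            {
                AdHocSettings.AllowLineOnBar = true;        // combined bar + line charts (item 3)
                AdHocSettings.AllowStackedBarChart = true;  // stacked bar charts (item 4)
            }
        }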

    Read the article

  • Pet Peeves with the Windows Phone 7 Marketplace

    - by Bil Simser
    Have you ever noticed how something things just gnaw at your very being. This is the case with the WP7 marketplace, the Zune software, and the things that drive me batshit crazy with a side of fries. To go. I wanted to share. XBox Live is Not the Centre of the Universe Okay, it’s fine that the Zune software has an XBox live tag for games so can see them clearly but do we really need to have it shoved down our throats. On every click? Click on Games in the marketplace: The first thing that it defaults to on the filters on the right is XBox Live: Okay. Fine. However if you change it (say to Paid) then click onto a title when you come back from that title is the filter still set to Paid? No. It’s back to XBox Live again. Really? Give us a break. If you change to any filter on any other genre then click on the selected title, it doesn’t revert back to anything. It stays on the selection you picked. Let’s be fair here. The Games genre should behave just like every other one. If I pick Paid then when I come back to the list please remember that. Double Dipping On the subject of XBox Live titles, Microsoft (and developers who have an agreement with Microsoft to produce Live titles, which generally rules out indie game developers) is double dipping with regards to exposure of their titles. Here’s the Puzzle and Trivia Game section on the Marketplace for XBox Live titles: And here’s the same category filtered on Paid titles: See the problem? Two indie titles while the rest are XBox Live ones. So while XBL has it’s filter, they also get to showcase their wares in the Paid and Free filters as well. If you’re going to have an XBox Live filter then use it and stop pushing down indie titles until they’re off the screen (on some genres this is already the case). Free and Paid titles should be just that and not include XBox Live ones. If you’re really stoked that people can’t find the Free XBox Live titles vs. the paid ones, then create a Free XBox Live filter and a Paid XBox Live filter. I don’t think we would mind much. Whose Trial is it Anyways? You might notice apps in the marketplace with titles like “My Fart App Professional Lite” or “Silicon Lamb Spleen Builder Free”. When you submit and app to the marketplace it can either be free or paid. If it’s a paid app you also have the option to submit it with Trial capabilities. It’s up to you to decide what you offer in the trial version but trial versions can be purchased from within the app so after someone trys out your app (for free) and wants to unlock the Super Secret Obama Spy Ring Level, they can just go to the marketplace from your app (if you built that functionality in) and upgrade to the paid version. However it creates a rift of sorts when it comes to visibility. Some developers go the route of the paid app with a trial version, others decide to submit *two* apps instead of one. One app is the “Free” or “Lite” verions and the other is the paid version. Why go to the hassle of submitting two apps when you can just create a trial version in the same app? Again, visibility. There’s no way to tell Paid apps with Trial versions and ones without (it’s an option, you don’t have to provide trial versions, although I think it’s a good idea). However there is a way to see the Free apps from the Paid ones so some submit the two apps and have the Free version have links to buy the paid one (again through the Marketplace tasks in the API). What we as developers need for visibility is a new filter. Trial. That’s it. 
It would simply filter on Paid apps that have trial capabilities and surface up those apps just like the free ones. If Microsoft added this filter to the marketplace, it would eliminate the need for people to submit their “Free” and “Lite” versions and make it easier for the developer not to have to maintain two systems. I mean, is it really that hard? Can’t be any more difficult than the XBox Live Filter that’s already there. Location is Everything The last thing on my bucket list is about location. When I launch Zune I’m running in my native location setting, Canada. What’s great is that I navigate to the Travel Tools section where I have one of my apps and behold the splendour that I see: There are my apps in the number 1 and number 4 slot for top selling in that category. I show it to my wife to make up for the sleepless nights writing this stuff and we dance around and celebrate. Then I change my location on my operation system to United States and re-launch Zune. WTF? My flight app has slipped to the 10th spot (I’m only showing 4 across here out of the 7 in Zune) and my border check app that was #1 is now in the 32nd spot! End of celebration. Not only is relevance being looked at here, I value the comments people make on may apps as do most developers. I want to respond to them and show them that I’m listening. The next version of my border app will provide multiple camera angles. However when I’m running in my native Canada location, I only see two reviews. Changing over to United States I see fourteen! While there are tools out there to provide with you a unified view, I shouldn’t have to rely on them. My own Zune desktop software should allow me to see everything. I realize that some developers will submit an app and only target it for some locations and that’s their choice. However I shouldn’t have to jump through hoops to see what apps are ahead of mine, or see people comments and ratings. Another proposal. Either unify the marketplace (i.e. when I’m looking at it show me everything combined) or let me choose a filter. I think the first option might be difficult as you’re trying to average out top selling apps across all markets and have to deal with some apps that have been omitted from some markets. Although I think you could come up with a set of use cases that would handle that, maybe that’s too much work. At the very least, let us developers view the markets in a drop down or something from within the Zune desktop. Having to shut down Zune, change our location, and re-launch Zune to see other perspectives is just too onerous. A Call to Action These are just one mans opinion. Do you agree? Disagree? Feel hungry for a bacon sandwich? Let everyone know via the comments below. Perhaps someone from Microsoft will be reading and take some of these ideas under advisement. Maybe not, but at least let’s get the word out that we really want to see some change. Egypt can do it, why not WP7 developers!
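    Relating to the trial discussion above: the "buy it from within the app" flow that paid-with-trial apps rely on is only a few lines. This is a hedged sketch against the WP7 SDK types I believe are involved (Microsoft.Phone.Marketplace.LicenseInformation and Microsoft.Phone.Tasks.MarketplaceDetailTask); it detects trial mode and sends the user to the current app's Marketplace listing.

        // Hedged sketch: detect trial mode and open the app's own Marketplace page.
        using Microsoft.Phone.Marketplace;
        using Microsoft.Phone.Tasks;

        public static class TrialUpgrade
        {
            private static readonly LicenseInformation License = new LicenseInformation();

            public static bool IsTrial
            {
                get { return License.IsTrial(); }
            }

            public static void ShowMarketplacePage()
            {
                // With no ContentIdentifier set, the task opens the current app's listing.
                new MarketplaceDetailTask().Show();
            }
        }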

    Read the article

  • How do I publicize a cool bookmarklet?

    - by Malvolio
    I wrote a really cool bookmarklet and now I want to tell everyone. I have absolutely no idea where to go, is there some sort of exchange for these things? In case anyone is curious: I was tired of having to retype URLs from my desktop browser on to my phone browser, with its itsy-bitsy keyboard, so I wrote a bookmarklet that converts the current URL to a QR code, which I can scan in a few seconds. javascript:window.location="http://chart.apis.google.com/chart?chs=250x250&cht=qr&chl="+escape(window.location)

    Read the article

  • Unity desktop became transparent after installing VBox Guest Additions [closed]

    - by Stoune
    I have installed Ubuntu Desktop x64 12.04.1 as a guest in VirtualBox. When I installed the VirtualBox Guest Additions, the Unity desktop (the top bar and the left launcher bar) became invisible. The problem appeared in VirtualBox 4.1.20 and still exists in 4.1.22. Reinstalling doesn't help. Does anyone know how to overcome this issue?
    Host OS: Windows 7 x64
    Host video driver: Intel Video Driver 8.15.10.24.13 (latest)
    Video: embedded Intel Q45

    Read the article

  • Collision detection when pathfinding with pathnodes, UDK

    - by Dave Voyles
    I'm trying to create a class that allows my AIController to path find using pathnodes (NOT NavMeshes). It's doing a swell job of going from point to point in a set order (although I would like for it to be a random patrol at some point), but it gets caught up on collision from time to time. I.E. He'll walk the same set path, and when he runs into the blocks in the middle of the map he continues to rub against them until they finish, and continues on his merry way to the next path node. How can I prevent this from happening, or at least have him move from the wall if he does a trace and detects that it is there? It looks like I need to use MoveToward() instead of MoveTo(), as MoveToward allows the pawn to adjust its course during movement. I'm just not sure of how to use those paramters. Mougli has a decent tutorial on it[/URL], but I can't seem to get it to work correctly with my pathnode array. class PathfindingAIController extends UDKBot; var array Waypoints; var int _PathNode; //declare it at the start so you can use it throughout the script var int CloseEnough; simulated function PostBeginPlay() { local PathNode Current; super.PostBeginPlay(); //add the pathnodes to the array foreach WorldInfo.AllActors(class'Pathnode',Current) { Waypoints.AddItem( Current ); } } simulated function Tick(float DeltaTime) { local int Distance; local Rotator DesiredRotation; super.Tick(DeltaTime); if (Pawn != None) { // Smoothly rotate the pawn towards the focal point DesiredRotation = Rotator(GetFocalPoint() - Pawn.Location); Pawn.FaceRotation(RLerp(Pawn.Rotation, DesiredRotation, 3.125f * DeltaTime, true), DeltaTime); } Distance = VSize2D(Pawn.Location - Waypoints[_PathNode].Location); if (Distance <= CloseEnough) { _PathNode++; } if (_PathNode >= Waypoints.Length) { _PathNode = 0; } GoToState('Pathfinding'); } auto state Pathfinding { Begin: if (Waypoints[_PathNode] != None) // make sure there is a pathnode to move to { MoveTo(Waypoints[_PathNode].Location); //move to it `log("STATE: Pathfinding"); } } DefaultProperties { CloseEnough=400 bIsplayer = True }

    Read the article

  • Volume range is narrow in 12.04

    - by Alejandro
    I have upgraded to Ubuntu 12.04 and I have a problem with the volume. The output is silent whenever the volume bar is anywhere between 0% and roughly 66%, so the usable volume range is only between about 66% and 100%. If I open the sound configuration menu, the point on the volume bar where sound actually starts (around 66%) lines up with the position labelled "no amplification". Can anyone help me? Thank you.

    Read the article

  • Total Solar Eclipse 13/November/2012

    - by TATWORTH
    On the 13/November/2012 there will be a total Solar Eclipse. The only land from which this will be visible is the northern part of Australia.
    Details of the Eclipse: http://eclipse.gsfc.nasa.gov/SEmono/TSE2012/TSE2012.html
    Cairns webcam: http://www.eclipsecairns.com/
    Wikipedia entry: http://en.wikipedia.org/wiki/Solar_eclipse_of_November_13,_2012
    Panasonic: https://www.facebook.com/PanasonicEclipseLive?ref=stream
    Main location channel from Sheraton Mirage Port Douglas Resort: http://www.ustream.tv/channel/panasonic-eclipse-live-by-solar-power-1
    Second location channel from Fitzroy Island: http://www.ustream.tv/channel/panasonic-eclipse-live-by-solar-power-2

    Read the article

  • Split Internet Explorer into Dual-Panes

    - by Asian Angel
    If you have a wide screen monitor then you may want to make better use of Internet Explorer’s browser window area. Now you can split the browser window into dual-panes as needed with the IE Split browser plugin. Note: Requires .NET Framework 2.0 or higher (link provided below). IE Split in Action If you are using an older version of this software here is something to keep in mind before upgrading to the 2.0 release. Once you have installed IE Split you will notice a new toolbar added to your browser. As seen here, you can condense it down tightly and access it using the drop-down bar. A closer look at the drop-down bar. Notice the address bar…this will be for the left pane when you split the browser window. Here is our browser split into dual-panes. There are two address bars and two tab/title bars each corresponding to their appropriate pane. It may look slightly backwards at first but is not hard to get used to. A better view of the left pane with the IE Split navigation & title bars showing. Note: The title bar can be hidden if desired. And the right pane. You can also have multiple “split” tabs open if needed. There is nothing quite like getting double the value for the same amount of space. When you no longer need dual-panes open just click on the “x” to close IE Split down. All back to normal again. Conclusion While it might not be for everyone, this can still be useful for those who need side-by-side access to websites without using multiple separate windows. Links Download IE-Split Download the Microsoft .NET Framework 4 (Standalone Installer)

    Read the article

  • BizTalk 2009 - SQL Server Job Configuration

    - by StuartBrierley
    Following the installation of BizTalk Server 2009 on my development laptop I used the BizTalk Server Best Practices Analyser, which highlighted the fact that two of the SQL Server Agent jobs that BizTalk relies on were not running successfully. Upon investigation it turned out that these jobs needed to be configured before they would run successfully. To configure these jobs open SQL Server Management Studio, expand SQL Server Agent > Jobs and double click on the appropriate job. Select Steps and then edit the appropriate entries. Backup BizTalk Server (BizTalkMgmtDb): this job is comprised of three steps: BackupFull, MarkAndBackupLog and ClearBackupHistory. BackupFull: exec [dbo].[sp_BackupAllFull_Schedule] 'd' /* Frequency */, 'BTS' /* Name */, '<destination path>' /* location of backup files */ The frequency here is left as daily ('d') and the name is left as BTS. You must provide a full destination path for the backup files to be stored. There are also two optional parameters: a flag that controls whether the job forces a full backup if a partial backup fails, and a parameter that controls the time of day to run the full backup (the default is midnight UTC). For example: exec [dbo].[sp_BackupAllFull_Schedule] 'd' /* Frequency */, 'BTS' /* Name */, '<destination path>' /* location of backup files */, 0, 22 MarkAndBackupLog: exec [dbo].[sp_MarkAll] 'BTS' /* Log mark name */, '<destination path>' /* location of backup files */ You must provide a destination path for the log backups. Optionally you can also add an extra parameter that tells the procedure to use local time: exec [dbo].[sp_MarkAll] 'BTS' /* Log mark name */, '<destination path>' /* location of backup files */, 1 ClearBackupHistory: exec [dbo].[sp_DeleteBackupHistory] @DaysToKeep=7 This will clear out the entries in the MarkLog table older than 7 days. DTA Purge and Archive (BizTalkDTADb): this job is comprised of a single step, Archive and Purge: exec dtasp_BackupAndPurgeTrackingDatabase 0 /* @nLiveHours tinyint */, 1 /* @nLiveDays tinyint = 0 */, 30 /* @nHardDeleteDays tinyint = 0 */, null /* @nvcFolder nvarchar(1024) = null */, null /* @nvcValidatingServer sysname = null */, 0 /* @fForceBackup int = 0 */ Any completed instance that is older than the live days plus live hours will be deleted, as will any associated data. Any data older than HardDeleteDays will also be deleted - this means that those long-running orchestration instances that would otherwise never be purged will at some point have their data cleared down while the instance itself is allowed to continue, preventing the DTA database from growing indefinitely. HardDeleteDays should always be greater than the soft purge window. The @nvcFolder parameter is the path for the backup files; if it is null the Archive and Purge step will not run, and the job fails with the error "The @nvcFolder parameter cannot be null" (visible in SQL Server Management Studio under Job Activity Monitor, View History). How long you choose to keep instances in the Tracking Database is really up to you. For development I have set this up as: exec dtasp_BackupAndPurgeTrackingDatabase 0, 1, 30, '<destination path>', null, 0 On a live server you may want to adjust these figures: exec dtasp_BackupAndPurgeTrackingDatabase 0, 15, 20, '<destination path>', null, 0

    Read the article

  • XNA - Moving Background Calculations

    - by Jesse Emond
    Hi, my question is relatively hard to explain (for me, at least), so I'll go one step at a time; just tell me in the comments if it's not clear enough. I'm making a "Defend Your Castle" type 2D game, where two players own a castle and create units that move horizontally to try to destroy the opponent's base. Here's a screenshot of the game: The distance between both castles is much bigger in a real game though, bigger than the screen's width actually. Because the distance is bigger than the screen's width, I had to implement a simple 2D camera: Camera2D, which only holds a Location Vector2 (and I always make sure this camera stays within the field area). Then I just move all the game elements (castles, units, health bars) by that location, so that if a unit is at (5, 0) and the camera's location is (5, 0), then the unit's position is moved 5 units to the left, making it (0, 0) on the screen. At first I simply used a static background with mountains and clouds (yeah, those are supposed to be mountains and clouds). Obviously this looked awful: when you moved the camera, the background would stay immobile. Instead, I'd like to make a moving, "scrolling" background. But rather than making a background with the same width as the distance between the castles, I'd like to make one that is a little bit smaller (but still bigger than the screen's width). I thought this would create an effect of "distance" with the background (but it might just look awful, too). Here's the background I'm testing with: I tried different ways, but none of them seems to work. I tried this: float backgroundFieldRatio = BackgroundTexture.Width / fieldWidth; // find the ratio between the background and the field float backgroundPositionX = -cam.Location.X * backgroundFieldRatio; // move the background to the left When I run this with fieldWidth = 1600 and BackgroundTexture.Width = 1500, while looking at the rightmost area the background is offset to the left by too large an amount, and we can see the black clear color behind it, as you can see here: I hope I explained properly what I'm trying to achieve. Thank you for your time. Note: I didn't know what to look for on Google, so I thought I'd ask here.
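
    One way to avoid the gap on the right is to base the ratio on the scrollable ranges rather than the raw widths, so that when the camera is fully scrolled the background's right edge lands exactly on the screen's right edge. Below is a minimal sketch; the names (ParallaxBackground, GetBackgroundX, viewportWidth) are illustrative rather than taken from the question, and the camera's X is assumed to be clamped to [0, fieldWidth - viewportWidth].

    // Minimal sketch, not the poster's code: maps the camera X to a background X so that a
    // background narrower than the field still exactly covers the screen over the whole scroll range.
    public static class ParallaxBackground
    {
        public static float GetBackgroundX(float cameraX, float fieldWidth,
                                           float backgroundWidth, float viewportWidth)
        {
            // How far the camera can travel, and how far the background is allowed to travel.
            float cameraRange = fieldWidth - viewportWidth;
            float backgroundRange = backgroundWidth - viewportWidth;

            if (cameraRange <= 0f)
            {
                return 0f; // The field fits on screen; nothing to scroll.
            }

            // Ratio of background travel to camera travel; a value below 1 gives a parallax effect.
            float ratio = backgroundRange / cameraRange;

            // Negative offset: as the camera moves right the background slides left, reaching
            // -(backgroundWidth - viewportWidth) when the camera is at the far right of the field.
            return -cameraX * ratio;
        }
    }

    Because the ratio is below 1 whenever the background is narrower than the field, the background still scrolls more slowly than the foreground, which keeps the intended sense of distance.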

    Read the article

  • SEO best practices for a web feature that uses geolocation by IP Address

    - by Nick
    I'm working on a feature that tailors content using a geolocation lookup by IP address, in order to provide information relevant to the general area the visitor is from. I'm concerned that the content will be interpreted as focused solely on the search engine spider's geographic origin when it is indexed. Are there SEO best practices for geolocation-by-IP-address features? I appreciate any specific tips or words of wisdom.
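
    As a purely illustrative sketch (the IGeoLookup and RegionSelector types below are hypothetical, and this is not an authoritative SEO recommendation): one common shape for such a feature is to prefer an explicit, crawlable region parameter and treat the IP-based lookup as a fallback, so that every regional variant also exists at a stable URL a spider can reach and unresolvable IPs get region-neutral default content.

    // Hypothetical sketch: explicit region parameter first, IP-based lookup only as a fallback.
    public interface IGeoLookup
    {
        // Returns a region code such as "US-FL", or null when the IP cannot be resolved.
        string ResolveRegion(string ipAddress);
    }

    public class RegionSelector
    {
        public const string DefaultRegion = "default";

        private readonly IGeoLookup _geoLookup;

        public RegionSelector(IGeoLookup geoLookup)
        {
            _geoLookup = geoLookup;
        }

        // explicitRegion would come from a query string or route value (e.g. /news?region=us-fl),
        // giving each regional variant its own crawlable URL.
        public string SelectRegion(string ipAddress, string explicitRegion)
        {
            if (!string.IsNullOrEmpty(explicitRegion))
            {
                return explicitRegion;
            }

            // Fall back to the IP-based guess; unresolvable IPs get region-neutral content.
            return _geoLookup.ResolveRegion(ipAddress) ?? DefaultRegion;
        }
    }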

    Read the article

  • how do I get dual monitors to work properly in Ubuntu 11.10 on a Dell Latitude D630?

    - by wes cook
    I have spent a lot of time trying to get dual monitors to work on Ubuntu 11.10 on my Dell Latitude D630 (nVidia NVS 135m video card). - For starters, the System Displays settings app only ever showed one unknown monitor, even though I had the external Acer monitor connected. - So I downloaded and installed the nVidia drivers. According to what I read I would need to use only the nVidia driver app (nVidia X Server Settings), so that's what I've done. (System Displays settings continued to show only a single monitor anyway.) - The nVidia settings app only showed one monitor until I changed the BIOS setting to use the onboard video for the external monitor (not the dock video, which it was set to, even though I don't have a docking station). - The nVidia settings app now recognized both monitors. So I set up the X Server display config as Separate X screen for both monitors. My laptop screen shows up as AUO 1440x900 and my external monitor as Acer E211H 1920x1080. - Everything seemed like it would work, but the external monitor was just a complete white screen. The external monitor was non-functional; even though it would sometimes show the background image, still nothing would show up over there. - So, I checked the Enable Xinerama box. - Now, after logging out and back in, the wallpaper extends to both screens but I get no taskbar at the bottom or top, no system menus, and I have to press the power button to restart or log off. - After experimenting with all the shells, the only one that shows the menus and taskbars when I log in is Gnome Classic. - These are pretty much the same symptoms as found here: How do I fix 11.10 GUI?. - So, I resign myself to the older shell. - Everything works fine until ... I unplug the external monitor ... this is a laptop after all. - Anyway, after doing some work on the road, I plug back in and I still see both screens and it's functional except, ... - Now, the laptop screen (with the taskbar and menu bar) has 4 black bars at the top that windows cannot cover. The top bar is the menu bar (with Applications, Places, the date and time and the system menu on the right). But the next 3 bars (the same height as the top menu bar) are empty and just reduce the maximum size of windows on that screen. - See screenshot here: http://i39.tinypic.com/35d2kh1.png - So ... 1. How do I get rid of those extra 3 black bars? They're taking up valuable screen space. 2. (less critical) How do I successfully use both screens in the Ubuntu or Ubuntu 2D shell?

    Read the article

  • Apache proxy pass in nginx

    - by summerbulb
    I have the following configuration in Apache: RewriteEngine On #APP ProxyPass /abc/ http://remote.com/abc/ ProxyPassReverse /abc/ http://remote.com/abc/ #APP2 ProxyPass /efg/ http://remote.com/efg/ ProxyPassReverse /efg/ http://remote.com/efg/ I am trying to have the same configuration in nginx. After reading some links, this is what I have: server { listen 8081; server_name localhost; proxy_redirect http://localhost:8081/ http://remote.com/; location ^~/abc/ { proxy_set_header X-Forwarded-Host $host; proxy_set_header X-Forwarded-Server $host; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_pass http://remote.com/abc/; } location ^~/efg/ { proxy_set_header X-Forwarded-Host $host; proxy_set_header X-Forwarded-Server $host; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_pass http://remote.com/efg/; } } I already have the following configuration: server { listen 8080; server_name localhost; location / { root html; index index.html index.htm; } location ^~/myAPP { alias path/to/app; index main.html; } location ^~/myAPP/images { alias another/path/to/images autoindex on; } } The idea here is to overcome a same-origin-policy problem. The main pages are on localhost:8080 but we need ajax calls to http://remote.com/abc. Both domains are under my control. Using the above configuration, the ajax calls either don't reach the remote server or get cut off because of the cross-origin restriction. The above setup worked in Apache but isn't working in nginx, so I am assuming it's a configuration problem. I think there is an implicit question here: should I have two server declarations or should I somehow merge them into one? EDIT: Added some more information. EDIT2: I've moved all the proxy_pass configuration into the main server declaration and changed all the ajax calls to go through port 8080. I am now getting a new error: 502 Connection reset by peer. Wireshark shows packets going out to http://remote.com with a bad IP header checksum.
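
    On the implicit question: one possible layout is a single server block on port 8080 that serves the static app and also proxies /abc/ and /efg/, so every browser request shares one origin. The sketch below only illustrates that merged layout, not a verified fix for the 502 in EDIT2; the filesystem paths are placeholders, the Host header line is an assumption about what remote.com expects, and note that the original /myAPP/images block appears to be missing a semicolon after the alias path.

    # Minimal sketch (assumptions as noted above): one origin for static content and proxied calls.
    server {
        listen 8080;
        server_name localhost;

        location / {
            root  html;
            index index.html index.htm;
        }

        location ^~ /myAPP {
            alias /path/to/app;              # placeholder path
            index main.html;
        }

        location ^~ /myAPP/images {
            alias /another/path/to/images;   # terminating semicolon before the next directive
            autoindex on;
        }

        location ^~ /abc/ {
            proxy_set_header Host remote.com;                              # assumption: upstream vhost name
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_pass http://remote.com/abc/;
        }

        location ^~ /efg/ {
            proxy_set_header Host remote.com;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_pass http://remote.com/efg/;
        }
    }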

    Read the article

< Previous Page | 131 132 133 134 135 136 137 138 139 140 141 142  | Next Page >