Search Results

Search found 59230 results on 2370 pages for 'character set'.


  • How to Highlight a Row in Excel Using Conditional Formatting

    - by Erez Zukerman
    Conditional formatting is an Excel feature you can use when you want to format cells based on their content. For example, you can have a cell turn red when it contains a number lower than 100. But how do you highlight an entire row? If you’ve never used Conditional Formatting before, you might want to look at Using Conditional Cell Formatting in Excel 2007. It’s one version back, but the interface really hasn’t changed much. But what if you wanted to highlight other cells based on a cell’s value? The screenshot above shows some codenames used for Ubuntu distributions. One of these is made up; when I entered “No” in the “Really” column, the entire row got different background and font colors. To see how this was done, read on.
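    (As a sketch of how this is typically done, since the excerpt stops at "read on": row highlighting like this normally uses a formula-based conditional formatting rule applied to the whole data range. The column letter below is an assumption about where the "Really" column sits, not something stated in the excerpt.)

        =$E2="No"

    Locking the column with $ while leaving the row number relative makes every cell in a row evaluate that row's "Really" cell, so the whole row gets formatted together.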

    Read the article

  • Meet the New Windows Azure

    - by ScottGu
    Today we are releasing a major set of improvements to Windows Azure. Below is a short summary of just a few of them:

    New Admin Portal and Command Line Tools

    Today’s release comes with a new Windows Azure portal that will enable you to manage all features and services offered on Windows Azure in a seamless, integrated way. It is very fast and fluid, supports filtering and sorting (making it much easier to use for large deployments), works on all browsers, and offers a lot of great new features – including built-in VM, Web site, Storage, and Cloud Service monitoring support. The new portal is built on top of a REST-based management API within Windows Azure – and everything you can do through the portal can also be programmed directly against this Web API.

    We are also today releasing command-line tools (which, like the portal, call the REST management APIs) to make it even easier to script and automate your administration tasks. We are offering both a PowerShell (for Windows) and Bash (for Mac and Linux) set of tools to download. Like our SDKs, the code for these tools is hosted on GitHub under an Apache 2 license.

    Virtual Machines

    Windows Azure now supports the ability to deploy and run durable VMs in the cloud. You can easily create these VMs using a new Image Gallery built into the new Windows Azure portal, or alternatively upload and run your own custom-built VHD images.

    Virtual Machines are durable (meaning anything you install within them persists across reboots) and you can use any OS with them. Our built-in image gallery includes both Windows Server images (including the new Windows Server 2012 RC) as well as Linux images (including Ubuntu, CentOS, and SUSE distributions). Once you create a VM instance you can easily Terminal Server or SSH into it in order to configure and customize the VM however you want (and optionally capture your own image snapshot of it to use when creating new VM instances). This provides you with the flexibility to run pretty much any workload within Windows Azure.

    The new Windows Azure portal provides a rich set of management features for Virtual Machines – including the ability to monitor and track resource utilization within them. Our new Virtual Machine support also enables the ability to easily attach multiple data-disks to VMs (which you can then mount and format as drives). You can optionally enable geo-replication support on these – which will cause Windows Azure to continuously replicate your storage to a secondary data-center at least 400 miles away from your primary data-center as a backup.

    We use the same VHD format that is supported with Windows virtualization today (and which we’ve released as an open spec), which enables you to easily migrate existing workloads you might already have virtualized into Windows Azure. We also make it easy to download VHDs from Windows Azure, which provides the flexibility to easily migrate cloud-based VM workloads to an on-premise environment. All you need to do is download the VHD file and boot it up locally – no import/export steps required.

    Web Sites

    Windows Azure now supports the ability to quickly and easily deploy ASP.NET, Node.js and PHP web sites to a highly scalable cloud environment that allows you to start small (and for free) and then scale up as your traffic grows.
    You can create a new web site in Azure and have it ready to deploy to in under 10 seconds. The new Windows Azure portal provides built-in administration support for Web sites – including the ability to monitor and track resource utilization in real-time.

    You can deploy to web sites in seconds using FTP, Git, TFS and Web Deploy. We are also releasing tooling updates today for both Visual Studio and WebMatrix that enable developers to seamlessly deploy ASP.NET applications to this new offering. The VS and WebMatrix publishing support includes the ability to deploy SQL databases as part of web site deployment – as well as the ability to incrementally update database schema with a later deployment.

    You can integrate web application publishing with source control by selecting the “Set up TFS publishing” or “Set up Git publishing” links on a web site’s dashboard. Doing so will enable integration with our new TFS online service (which enables a full TFS workflow – including elastic build and testing support), or create a Git repository that you can reference as a remote and push deployments to. Once you push a deployment using TFS or Git, the deployments tab will keep track of the deployments you make, and enable you to select an older (or newer) deployment and quickly redeploy your site to that snapshot of the code. This provides a very powerful DevOps workflow experience.

    Windows Azure now allows you to deploy up to 10 web sites into a free, shared/multi-tenant hosting environment (where a site you deploy will be one of multiple sites running on a shared set of server resources). This provides an easy way to get started on projects at no cost. You can then optionally upgrade your sites to run in a “reserved mode” that isolates them so that you are the only customer within a virtual machine. And you can elastically scale the amount of resources your sites use – allowing you to increase your reserved instance capacity as your traffic scales.

    Windows Azure automatically handles load balancing traffic across VM instances, and you get the same, super fast deployment options (FTP, Git, TFS and Web Deploy) regardless of how many reserved instances you use. With Windows Azure you pay for compute capacity on a per-hour basis – which allows you to scale your resources up and down to match only what you need.

    Cloud Services and Distributed Caching

    Windows Azure also supports the ability to build cloud services that support rich multi-tier architectures, automated application management, and scaling to extremely large deployments. Previously we referred to this capability as “hosted services” – with this week’s release we are now referring to this capability as “cloud services”. We are also enabling a bunch of new features with them.

    Distributed Cache

    One of the really cool new features being enabled with cloud services is a new distributed cache capability that enables you to set up and use a low-latency, in-memory distributed cache within your applications. This cache is isolated for use just by your applications, and does not have any throttling limits. This cache can dynamically grow and shrink elastically (without you having to redeploy your app or make code changes), and supports the full richness of the AppFabric Cache Server API (including regions, high availability, notifications, local cache and more).
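    (As a rough illustration of that AppFabric-style cache API, a sketch only: the cache keys and values below are invented and not part of this announcement. Client code against the distributed cache generally looks like this:)

        // Sketch of AppFabric-style cache client usage; keys/values are invented.
        using Microsoft.ApplicationServer.Caching;

        DataCacheFactory factory = new DataCacheFactory();   // reads the cache client config
        DataCache cache = factory.GetDefaultCache();

        cache.Put("session:42", "some cached value");        // add or overwrite an entry
        string value = (string)cache.Get("session:42");      // returns null on a miss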
    In addition to supporting the AppFabric Cache Server API, it also now supports the Memcached protocol – allowing you to point code written against Memcached at it (no code changes required).

    The new distributed cache can be set up to run in one of two ways:

    1) Using a co-located approach. In this option you allocate a percentage of memory in your existing web and worker roles to be used by the cache, and the cache joins that memory into one large distributed cache. Any data put into the cache by one role instance can be accessed by other role instances in your application – regardless of whether the cached data is stored on it or another role. The big benefit of the “co-located” option is that it is free (you don’t have to pay anything to enable it) and it allows you to use what might otherwise have been unused memory within your application VMs.

    2) Alternatively, you can add “cache worker roles” to your cloud service that are used solely for caching. These will also be joined into one large distributed cache ring that other roles within your application can access. You can use these roles to cache 10s or 100s of GBs of data in-memory very effectively – and the cache can be elastically increased or decreased at runtime within your application.

    New SDKs and Tooling Support

    We have updated all of the Windows Azure SDKs with today’s release to include new features and capabilities. Our SDKs are now available for multiple languages, and all of the source in them is published under an Apache 2 license and maintained in GitHub repositories. The .NET SDK for Azure has in particular seen a bunch of great improvements with today’s release, and now includes tooling support for both VS 2010 and the VS 2012 RC. We are also now shipping Windows, Mac and Linux SDK downloads for languages that are offered on all of these systems – allowing developers to develop Windows Azure applications using any development operating system.

    Much, Much More

    The above is just a short list of some of the improvements shipping in either preview or final form today – there is a LOT more in today’s release. These include new Virtual Private Networking capabilities, new Service Bus runtime and tooling support, the public preview of the new Azure Media Services, new Data Centers, significantly upgraded network and storage hardware, SQL Reporting Services, new Identity features, support within 40+ new countries and territories, and much, much more.

    You can learn more about Windows Azure and sign up to try it for free at http://windowsazure.com. You can also watch a live keynote I’m giving at 1pm June 7th (later today) where I’ll walk through all of the new features. We will be opening up the new features I discussed above for public usage a few hours after the keynote concludes. We are really excited to see the great applications you build with them.

    Hope this helps,

    Scott

    Read the article

  • Should we denormalize database to improve performance?

    - by Groo
    We have a requirement to store 500 measurements per second, coming from several devices. Each measurement consists of a timestamp, a quantity type, and several vector values. Right now there are 8 vector values per measurement, and we may consider this number to be constant for the needs of our prototype project. We are using NHibernate. Tests are done in SQLite (disk file db, not in-memory), but production will probably be MS SQL.

    Our Measurement entity class is the one that holds a single measurement, and looks like this:

        public class Measurement
        {
            public virtual Guid Id { get; private set; }
            public virtual Device Device { get; private set; }
            public virtual Timestamp Timestamp { get; private set; }
            public virtual IList<VectorValue> Vectors { get; private set; }
        }

    Vector values are stored in a separate table, so that each of them references its parent measurement through a foreign key.

    We have done a couple of things to ensure that the generated SQL is (reasonably) efficient: we are using Guid.Comb for generating IDs, we are flushing around 500 items in a single transaction, and the ADO.NET batch size is set to 100 (I think SQLite does not support batch updates? But it might be useful later).

    The problem

    Right now we can insert 150-200 measurements per second (which is not fast enough, although this is SQLite we are talking about). Looking at the generated SQL, we can see that in a single transaction we insert (as expected):

    1 timestamp
    1 measurement
    8 vector values

    which means that we are actually doing 10x more single-table inserts: 1500-2000 per second.

    If we placed everything (all 8 vector values and the timestamp) into the measurement table (adding 9 dedicated columns), it seems that we could increase our insert speed up to 10 times. Switching to SQL Server will improve performance, but we would like to know if there might be a way to avoid the unnecessary performance costs related to the way the database is organized right now.

    [Edit] With in-memory SQLite I get around 350 items/sec (3500 single-table inserts), which I believe is about as good as it gets with NHibernate (taking this post for reference: http://ayende.com/Blog/archive/2009/08/22/nhibernate-perf-tricks.aspx). But I might as well switch to SQL Server and stop assuming things, right? I will update my post as soon as I test it.
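    (For illustration only, not from the original question: the denormalized shape the poster describes, with the timestamp and all 8 vector values inlined as 9 dedicated columns, might look roughly like this. Property names and types are assumptions.)

        // Hypothetical denormalized entity: one row, one insert per measurement.
        public class FlatMeasurement
        {
            public virtual Guid Id { get; private set; }
            public virtual Device Device { get; private set; }
            public virtual DateTime Timestamp { get; set; } // was a separate table row
            public virtual double V1 { get; set; }
            public virtual double V2 { get; set; }
            public virtual double V3 { get; set; }
            public virtual double V4 { get; set; }
            public virtual double V5 { get; set; }
            public virtual double V6 { get; set; }
            public virtual double V7 { get; set; }
            public virtual double V8 { get; set; }
        }

    Mapped this way, each measurement costs one insert instead of ten, at the price of fixing the vector count in the schema.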

    Read the article

  • Gap in parallaxing background loop

    - by CinetiK
    The bug here is that my background kind of offsets itself a bit from where it should draw, and so I get this line. I have some trouble understanding why I get this bug when I set a speed that is different than 1, 2, 4, 8, 16, ...

    In the main class I set the speed depending on the player speed:

        bgSpeed = -(int)playerMoveSpeed.X / 10;

    and here's my background class:

        class ParallaxingBackground
        {
            Texture2D texture;
            Vector2[] positions;

            public int Speed { get; set; }

            public void Initialize(ContentManager content, String texturePath, int screenWidth, int speed)
            {
                texture = content.Load<Texture2D>(texturePath);
                this.Speed = speed;
                positions = new Vector2[screenWidth / texture.Width + 2];
                for (int i = 0; i < positions.Length; i++)
                {
                    positions[i] = new Vector2(i * texture.Width, 0);
                }
            }

            public void Update()
            {
                for (int i = 0; i < positions.Length; i++)
                {
                    positions[i].X += Speed;
                    if (Speed <= 0)
                    {
                        if (positions[i].X <= -texture.Width)
                        {
                            positions[i].X = texture.Width * (positions.Length - 1);
                        }
                    }
                    else
                    {
                        if (positions[i].X >= texture.Width * (positions.Length - 1))
                        {
                            positions[i].X = -texture.Width;
                        }
                    }
                }
            }

            public void Draw(SpriteBatch spriteBatch)
            {
                for (int i = 0; i < positions.Length; i++)
                {
                    spriteBatch.Draw(texture, positions[i], Color.White);
                }
            }
        }
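    (An observation, not from the original post: when Speed does not evenly divide texture.Width, a tile can overshoot the reset threshold by a few pixels, and snapping it to a fixed coordinate throws that overshoot away, which shows up as a gap. A hedged sketch of a fix, written as a drop-in replacement for the Update method of the class above, wraps each tile by a full span instead of snapping:)

        // Fix sketch: shift wrapped tiles by the full span (tile count * width)
        // so the overshoot is preserved and tiles stay exactly one width apart.
        public void Update()
        {
            int span = texture.Width * positions.Length;
            for (int i = 0; i < positions.Length; i++)
            {
                positions[i].X += Speed;
                if (Speed <= 0 && positions[i].X <= -texture.Width)
                {
                    positions[i].X += span;   // carry the extra pixels over
                }
                else if (Speed > 0 && positions[i].X >= texture.Width * (positions.Length - 1))
                {
                    positions[i].X -= span;
                }
            }
        }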

    Read the article

  • Grouping a comma separated value on common data [closed]

    - by Ankit
    I have a table with col1 as an int id, col2 as varchar (comma-separated values), and a third column for assigning a group to each row. The table looks like:

        col1   col2    group
        ....................
        1      2,3,4
        2      5,6
        3      1,2,5
        4      7,8
        5      11,3
        6      22,8

    This is only a sample of the real data. Now I have to assign a group number to each row in such a way that the output looks like:

        col1   col2    group
        ....................
        1      2,3,4   1
        2      5,6     1
        3      1,2,5   1
        4      7,8     2
        5      11,3    1
        6      22,8    2

    The logic for assigning group numbers is that rows whose comma-separated strings in col2 share a value must get the same group number: everywhere a '2' appears in col2, the row has to get the same group number. The complication is that 2,3,4 appear together, so all three of those int values, wherever they are found in col2, must be assigned the same group. The major part: 2,3,4 and 1,2,5 in col2 both contain 2, so all of the ints 1,2,3,4,5 have to be assigned the same group number.

    I tried a stored procedure with MATCH ... AGAINST on col2 but am not getting the desired result. Most importantly, I can't use normalization, because I can't afford to build a new table from my original table, which has millions of records; normalization isn't even helpful in my context. This question is also on Stack Overflow with a bounty, on this link.

    Achieved so far: I have set the group column to auto increment and then wrote this procedure:

        BEGIN
            declare col1_new, group_new int;
            declare col2_new varchar(100);
            declare done tinyint default 0;
            declare cur1 cursor for select col1, col2, `group` from company;
            DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = 1;
            open cur1;
            REPEAT
                fetch cur1 into col1_new, col2_new, group_new;
                update company set `group` = group_new
                    where match(col2) against(concat("'", col2_new, "'"));
            until done end repeat;
            close cur1;
            select * from company;
        END

    This procedure runs with no syntax errors, but the problem is that I am not achieving exactly the desired result.
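    (For what it's worth, the underlying problem is connected components: treat every int in col2 as a node, connect values that appear in the same row, and give each component one group number. Purely as an illustrative sketch, in C# rather than the poster's MySQL, a union-find pass over the sample rows looks like this:)

        using System;
        using System.Collections.Generic;
        using System.Linq;

        class GroupDemo
        {
            static Dictionary<int, int> parent = new Dictionary<int, int>();

            static int Find(int x)
            {
                if (!parent.ContainsKey(x)) parent[x] = x;
                if (parent[x] != x) parent[x] = Find(parent[x]); // path compression
                return parent[x];
            }

            static void Union(int a, int b) => parent[Find(a)] = Find(b);

            static void Main()
            {
                // Sample col2 values from the question.
                var rows = new[] { "2,3,4", "5,6", "1,2,5", "7,8", "11,3", "22,8" };
                foreach (var row in rows)
                {
                    var vals = row.Split(',').Select(int.Parse).ToArray();
                    for (int i = 1; i < vals.Length; i++) Union(vals[0], vals[i]);
                }

                // 2,3,4 / 5,6 / 1,2,5 / 11,3 merge transitively into one component;
                // 7,8 / 22,8 share 8 and form a second one.
                foreach (var row in rows)
                    Console.WriteLine($"{row} -> root {Find(int.Parse(row.Split(',')[0]))}");
            }
        }

    Renumbering the roots 1, 2, ... in order of first appearance gives exactly the group column shown above; the same two-pass idea can be ported into a stored procedure or a one-off batch job.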

    Read the article

  • SEO domain name advice

    - by Dominykas Mostauskis
    I'm starting a website that is meant for a non-English region, using an alphabet that is a bit different from that of English. The current plan is as follows: the website name and the domain name will be in the local language (not English); however, the domain name will be spelled in the English alphabet, while the website's title will be the same word(s) spelled properly with accents. E.g.: 'www.litterat.fr' and 'Littérat'. Does the difference in character use between the domain name and the website name influence the site's SEO? Is it better, SEO-wise, to choose a name that can be spelled the same way in the English alphabet? From my experience, when searching online, the English alphabet is invariably used, no matter the language, so people will still be searching for 'litterat' (without accents and such).

    Read the article

  • Super Secret Door Top Stash Hides Your Flash Drive and Cash [DIY]

    - by Jason Fitzpatrick
    Everyone needs a bit of spy-guy fun in their lives (or at least a way to hide your Sailor Moon photo collection from everyone). This clever and extremely well hidden DIY stash puts your contraband inside a door. At Make Projects, the user-contributed project blog at Make magazine, Sean Michael Ragan shares a really stealthy way to hide stuff–stashing it inside the top of the door stop. You’ll need some power tools like a drill, files, and a countersink, as well as a cigar tube for the body of your hidden drop. When you’re done you’ll have an extremely well hidden stash in a place that next to nobody would think to look–inside the top of a door. Hit up the link for a picture-filled step-by-step guide to building your own stash. Door Top Stash [Make Projects]

    Read the article

  • windows phone deserialization json

    - by user2042227
    I have a weird issue. I am making a few calls in my app to a web service, which replies with data. I am using a token-based login system, so the first time the user enters the app I get a token from the web service to log in that specific user, and that token returns only that user's details.

    The problem I am having is that when the user changes, I need to make the calls again to get the new user's details. Using Visual Studio's breakpoint debugging, it shows the new user's token making the call; however, when the JSON is getting deserialized, it is as if it still reads the old data and deserializes that. When I exit my app and come back in with the new user it works fine, so it is as if it is reading cached values, but I have no idea how to clear them. I am sure the new calls are being made and the problem lies with the deserializing; I have tried clearing the values before deserializing them again, but nothing works. Am I missing something with the JSON deserializer? How can I clear its cached values?

    Here I make the call and set it not to cache, so it makes a new call every time:

        client.Headers[HttpRequestHeader.CacheControl] = "no-cache";
        var token_details = await client.DownloadStringTaskAsync(uri);

    and here I deserialize the result. It is at this section the old data gets shown, so the raw JSON shown inside "token_details" is correct; only once I deserialize the token_details does it show the wrong data.

        deserialized = JsonConvert.DeserializeObject(token_details);

    The class I am deserializing into is a simple class, nothing special happening here. I have even tried writing the constructor so that it clears the values each time it gets called.

        public class test
        {
            public string status { get; set; }
            public string name { get; set; }
            public string birthday { get; set; }
            public string errorDes { get; set; }

            public test()
            {
                status = "";
                name = "";
                birthday = "";
                errorDes = "";
            }
        }

    URIs before making the calls:

        {https://whatever.co.za/token/?code=BEBCg==&id=WP7&junk=121edcd5-ad4d-4185-bef0-22a4d27f2d0c} - old call
        "UBCg==" - old reply
        {https://whatever.co.za/token/?code=ABCg==&id=WP7&junk=56cc2285-a5b8-401e-be21-fec8259de6dd} - new call
        "UBCg==" - new response, which is the same response as the old call

    As you can see, I did attach a new GUID every time I make the call, and the new URI is read before the DownloadStringTaskAsync method call is made, but it still returns the old data.
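    (One thing worth checking, offered as a suggestion rather than something confirmed by the post: Json.NET caches type contracts but not data, so it cannot return a previous call's payload by itself. Deserializing straight into the typed class at least rules the deserializer out. A minimal sketch, assuming the test class above matches the JSON shape:)

        // Deserialize directly into the typed class; each call produces a
        // fresh object graph, so stale values here would point at the transport.
        test deserialized = JsonConvert.DeserializeObject<test>(token_details);
        Console.WriteLine(deserialized.name); // should reflect the new user's data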

    Read the article

  • Collision detection, stop gravity

    - by Scott Beeson
    I just started using GameMaker: Studio and so far it seems fairly intuitive. However, I set a room to "Room is Physics World" and set gravity to 10. I then enabled physics on my player object and created a block object to match a platform on my background sprite. I set up a collision event between the player and the block objects that sets the gravity to 0 (and even sets the vspeed to 0). I also put a notification in the collision event, and I don't get that either. I have my key-down and key-up events working well, moving the player left and right and changing the sprites appropriately, so I think I understand the event system. I must just be missing something simple with the physics. I've tried making both and neither of the objects "solid". It's pretty frustrating since it looks so easy. The player's starting point is directly above the block object in the grid, and the player does fall through the block. I even made the block sprite solid red so I could see it (initially it was invisible, obviously).

    Read the article

  • Adventures in Windows 8: Placing items in a GridView with a ColumnSpan or RowSpan

    - by Laurent Bugnion
    Currently working on a Windows 8 app for an important client, I will be writing about small issues, tips and tricks, ideas and whatever occurs to me during the development and the integration of this app.

    When working with a GridView, it is quite common to use a VariableSizedWrapGrid as the ItemsPanel. This creates a nice flowing layout which will auto-adapt for various resolutions. This is ideal when you want to build views like the Windows 8 start menu. However, we immediately notice that the Start menu allows placing items on one column (Smaller) or two columns (Larger). This switch happens through the AppBar. So how do we implement that in our app?

    Using ColumnSpan and RowSpan

    When you use a VariableSizedWrapGrid directly in your XAML, you can attach the VariableSizedWrapGrid.ColumnSpan and VariableSizedWrapGrid.RowSpan attached properties directly to an item to create the desired effect. For instance this code creates this output (shown in Blend but it runs just the same):

        <VariableSizedWrapGrid ItemHeight="100" ItemWidth="100" Width="200" Orientation="Horizontal">
            <Rectangle Fill="Purple" />
            <Rectangle Fill="Orange" />
            <Rectangle Fill="Yellow" VariableSizedWrapGrid.ColumnSpan="2" />
            <Rectangle Fill="Red" VariableSizedWrapGrid.ColumnSpan="2" VariableSizedWrapGrid.RowSpan="2" />
            <Rectangle Fill="Green" VariableSizedWrapGrid.RowSpan="2" />
            <Rectangle Fill="Blue" />
            <Rectangle Fill="LightGray" />
        </VariableSizedWrapGrid>

    Using the VariableSizedWrapGrid as ItemsPanel

    When you use a GridView however, you typically bind the ItemsSource property to a collection, for example in a viewmodel. In that case, you want to be able to switch the ColumnSpan and RowSpan depending on properties on the item. I tried to find a way to bind the VariableSizedWrapGrid.ColumnSpan attached property on the GridView’s ItemContainerStyle template to an observable property on the item, but it didn’t work. Instead, I decided to use a StyleSelector to switch the GridViewItem’s style. Here’s how:

    First I added my two GridViews to my XAML as follows:

        <Page.Resources>
            <local:MainViewModel x:Key="Main" />
            <DataTemplate x:Key="DataTemplate1">
                <Grid Background="{Binding Brush}">
                    <TextBlock Text="{Binding BrushCode}" />
                </Grid>
            </DataTemplate>
        </Page.Resources>

        <Page.DataContext>
            <Binding Source="{StaticResource Main}" />
        </Page.DataContext>

        <Grid Background="{StaticResource ApplicationPageBackgroundThemeBrush}" Margin="20">
            <Grid.ColumnDefinitions>
                <ColumnDefinition Width="Auto" />
                <ColumnDefinition Width="*" />
            </Grid.ColumnDefinitions>

            <GridView ItemsSource="{Binding Items}"
                      ItemTemplate="{StaticResource DataTemplate1}"
                      VerticalAlignment="Top">
                <GridView.ItemsPanel>
                    <ItemsPanelTemplate>
                        <VariableSizedWrapGrid ItemHeight="150" ItemWidth="150" />
                    </ItemsPanelTemplate>
                </GridView.ItemsPanel>
            </GridView>

            <GridView Grid.Column="1"
                      ItemsSource="{Binding Items}"
                      ItemTemplate="{StaticResource DataTemplate1}"
                      VerticalAlignment="Top">
                <GridView.ItemsPanel>
                    <ItemsPanelTemplate>
                        <VariableSizedWrapGrid ItemHeight="100" ItemWidth="100" />
                    </ItemsPanelTemplate>
                </GridView.ItemsPanel>
            </GridView>
        </Grid>

    The MainViewModel looks like this:

        public class MainViewModel
        {
            public IList<Item> Items { get; private set; }

            public MainViewModel()
            {
                Items = new List<Item>
                {
                    new Item { Brush = new SolidColorBrush(Colors.Red) },
                    new Item { Brush = new SolidColorBrush(Colors.Blue) },
                    new Item { Brush = new SolidColorBrush(Colors.Green) },
                    // And more...
                };
            }
        }

    As for the Item class, I am using an MVVM Light ObservableObject, but you can use your own simple implementation of INotifyPropertyChanged of course:

        public class Item : ObservableObject
        {
            public const string ColSpanPropertyName = "ColSpan";
            private int _colSpan = 1;

            public int ColSpan
            {
                get { return _colSpan; }
                set { Set(ColSpanPropertyName, ref _colSpan, value); }
            }

            public SolidColorBrush Brush { get; set; }

            public string BrushCode
            {
                get { return Brush.Color.ToString(); }
            }
        }

    Then I copied the GridViewItem’s style locally. To do this, I use Expression Blend’s functionality. It has the disadvantage of copying a large portion of XAML to your application, but the HUGE advantage of allowing you to change the look and feel of your GridViewItem everywhere in the application. For example, you can change the selection chrome, the item’s alignments and many other properties. Actually, every time I use a ListBox, ListView or any other data control, I typically copy the item style to a resource dictionary in my application and I tweak it. Note that Blend for Windows 8 apps is automatically installed with every edition of Visual Studio 2012 (including Express), so you have no excuses anymore not to use Blend :)

    Open MainPage.xaml in Expression Blend by right clicking on the MainPage.xaml file in the Solution Explorer and selecting Open in Blend from the context menu. Note that the items do not look very nice! The reason is that the default ItemContainerStyle sets the content’s alignment to “Center”, which I never quite understood. Seems to me that you would rather want the content to be stretched, but anyway it is easy to change.

    Right click on the GridView on the left and select Edit Additional Templates, Edit Generated Item Container (ItemContainerStyle), Edit a Copy. In the Create Style Resource dialog, enter the name “DefaultGridViewItemStyle”, select “Application” and press OK.

    Side note 1: You need to save in a global resource dictionary because later we will need to retrieve that Style from a global location.

    Side note 2: I would rather copy the style to an external resource dictionary that I link into the App.xaml file, but I want to keep things simple here.

    Blend switches into Template edit mode. The template you are editing now is inside the ItemContainerStyle and will govern the appearance of your items. This is where, for instance, the “checked” chrome is defined, and where you can alter it if you need to. Note that you can reuse this style for all your GridViews even if you use a different DataTemplate for your items. Makes sense? I probably need to think about writing another blog post dedicated to the ItemContainerStyle :)

    In the breadcrumb bar on top of the page, click on the style icon. The property we want to change now can be changed in the Style instead of the Template, which is a better idea. Blend is now in Style edit mode, as you can see in the Objects and Timeline pane.

    In the Properties pane, in the Search box, enter the word “content”. This will filter all the properties containing that partial string, including the two we are interested in: HorizontalContentAlignment and VerticalContentAlignment. Set these two values to “Stretch” instead of the default “Center”.

    Using the breadcrumb bar again, set the scope back to the Page (by clicking on the first crumb on the left). Notice how the items are now showing as squares in the first GridView. We will now use the same ItemContainerStyle for the second GridView. To do this, right click on the second GridView and select Edit Additional Templates, Edit Generated Item Container, Apply Resource, DefaultGridViewItemStyle. The page now looks nicer.

    And now for the ColumnSpan!

    So now, let’s change the ColumnSpan property. First, let’s define a new Style that inherits from the ItemContainerStyle we created before. Make sure that you save everything in Blend by pressing Ctrl-Shift-S. Open App.xaml in Visual Studio. Below the newly created DefaultGridViewItemStyle resource, add the following style:

        <Style x:Key="WideGridViewItemStyle"
               TargetType="GridViewItem"
               BasedOn="{StaticResource DefaultGridViewItemStyle}">
            <Setter Property="VariableSizedWrapGrid.ColumnSpan" Value="2" />
        </Style>

    Add a new class to the project, and name it MainItemStyleSelector. Implement the class as follows:

        public class MainItemStyleSelector : StyleSelector
        {
            protected override Style SelectStyleCore(object item, DependencyObject container)
            {
                var i = (Item)item;
                if (i.ColSpan == 2)
                {
                    return Application.Current.Resources["WideGridViewItemStyle"] as Style;
                }
                return Application.Current.Resources["DefaultGridViewItemStyle"] as Style;
            }
        }

    In MainPage.xaml, add a resource to the Page.Resources section:

        <local:MainItemStyleSelector x:Key="MainItemStyleSelector" />

    In MainPage.xaml, replace the ItemContainerStyle property on the first GridView with the ItemContainerStyleSelector property, pointing to the StaticResource we just defined:

        <GridView ItemsSource="{Binding Items}"
                  ItemTemplate="{StaticResource DataTemplate1}"
                  VerticalAlignment="Top"
                  ItemContainerStyleSelector="{StaticResource MainItemStyleSelector}">
            <GridView.ItemsPanel>
                <ItemsPanelTemplate>
                    <VariableSizedWrapGrid ItemHeight="150" ItemWidth="150" />
                </ItemsPanelTemplate>
            </GridView.ItemsPanel>
        </GridView>

    Do the same for the second GridView as well. Finally, in the MainViewModel, change the ColumnSpan property on the 3rd Item to 2:

        new Item { Brush = new SolidColorBrush(Colors.Green), ColSpan = 2 },

    Running the application now creates the following image, which is what we wanted. Notice how the green item is now a “wide tile”. You can also experiment by creating different Styles, all inheriting from DefaultGridViewItemStyle and using different values of RowSpan, for instance. This will allow you to create any layout you want, while leaving the heavy lifting of “flowing the layout” to the GridView control.

    What about changing these values dynamically?

    Of course, as we can see in the Start menu, it would be nice to be able to change the ColumnSpan and maybe even the RowSpan values at runtime. Unfortunately at this time I have not found a good way to do that. I am investigating however and will make sure to post a follow up when I find what I am looking for!

    Laurent Bugnion (GalaSoft)
    Subscribe | Twitter | Facebook | Flickr | LinkedIn

    Read the article

  • 2D Platform Game Jumping

    - by Bradley Kreuger
    I'm currently writing a game in XNA for fun, which uses C#. I have my sprites loaded, and when the character moves right he looks like he is running right, and when he moves left he looks like he is running left. I've been looking everywhere for a good coding example of how to create a jumping ability. I have read all the physics material I can stand, and it doesn't help when I can't figure out how to use, say, the space bar to jump, and how to keep players from pressing space to jump again in mid-air before they land.
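    (For what it's worth, a minimal sketch of the usual approach, my own illustration rather than anything from the question: keep a vertical velocity and a grounded flag inside the player's update, only allow the jump impulse while grounded, and clear the flag until the character lands again. Field names and constants below are assumptions.)

        // Minimal XNA-style jump sketch for a player class; constants are made up.
        Vector2 position;
        float verticalVelocity = 0f;
        bool isOnGround = true;

        const float Gravity = 1500f;      // pixels per second^2
        const float JumpImpulse = -600f;  // negative = up in screen coordinates
        const float GroundY = 400f;       // assumed floor height

        public void Update(GameTime gameTime)
        {
            float dt = (float)gameTime.ElapsedGameTime.TotalSeconds;
            KeyboardState keyboard = Keyboard.GetState();

            // Only start a jump while standing on the ground;
            // this is what stops mid-air re-jumping.
            if (keyboard.IsKeyDown(Keys.Space) && isOnGround)
            {
                verticalVelocity = JumpImpulse;
                isOnGround = false;
            }

            // Integrate gravity while airborne.
            verticalVelocity += Gravity * dt;
            position.Y += verticalVelocity * dt;

            // Land when we reach the floor again.
            if (position.Y >= GroundY)
            {
                position.Y = GroundY;
                verticalVelocity = 0f;
                isOnGround = true;
            }
        }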

    Read the article

  • Can't complete dropbox installation from behind proxy in Ubuntu 11.10

    - by Mark Jones
    Problem: My PC on campus sits behind a proxy (requiring authentication) and I can't set up Dropbox. I am convinced that this is a proxy issue, as I can't set up Ubuntu One either (but I don't use Ubuntu One, so that is not a problem). I have looked at the Ubuntu One fix, but it seems to modify settings explicitly related to Ubuntu One.

    I can install the nautilus-dropbox package (compiled from source, from the .deb package from the website, and from the Software Centre), but once I click OK in the "Dropbox Installation" dialog box (prompting me to download the proprietary daemon) the installation just freezes with the OK button pressed. When I look at its process in System Monitor, its waiting channel is inet_wait_for_connect.

    I have set the following proxy directives so far (where ** is my password):

    - Added mj22:**@proxy.waikato.ac.nz:80 information to network proxy settings under Network in Settings.
    - Added http_host and http_port variables under gconf-editor > system > proxy.
    - Added 'host', 'authentication_password' and 'authentication_user', and ticked 'user authentication' and 'use_http_proxy', under gconf-editor > system > http_proxy.
    - Added export http_proxy="http://mj22:**@proxy.waikato.ac.nz:80/" to /etc/bash.bashrc.
    - Added Acquire::http::proxy "http://mj22:**@proxy.waikato.ac.nz:80/"; to /etc/apt/apt.conf (which is what I imagine is letting Software Center retrieve packages).

    I have also added the equivalent ftp and https lines for the above entries. I get the internet fine and the Software Centre can download packages, but that's it.

    Related issues: The Software Centre can't fetch reviews (but can download packages). When trying to add an online account in GNOME 3, a dialog pops up with "Error getting a Request Token: Cannot connect to proxy (proxy.waikato.ac.nz)".

    Updates: After some time (10 minutes-ish) Dropbox shows an error dialog box that reads: "Trouble connecting to Dropbox servers. Maybe your internet connection is down, or you need to set your http_proxy environment variable." Is there a way I can see what environment variables are currently set?

    Read the article

  • OUM is Flexible and Scalable

    - by user535886
    Flexible and Scalable

    Traditionally, projects have been focused on satisfying the contents of a requirements document or rigorously conforming to an existing set of work products. Often, especially where iterative and incremental techniques have not been employed, these requirements may be inaccurate, the previous deliverables may be flawed, or the business needs may have changed since the start of the project. Fitness for business purpose, derived from the Dynamic Systems Development Method (DSDM) framework, refers to the focus of delivering necessary functionality within a required timebox. The solution can be more rigorously engineered later, if such an approach is acceptable. Our collective experience shows that applying fit-for-purpose criteria, rather than tight adherence to requirements specifications, results in an information system that more closely meets the needs of the business.

    In OUM, this principle is extended to refer to the execution of the method processes themselves. Project managers and practitioners are encouraged to scale OUM to be fit-for-purpose for a given situation. It is rarely appropriate to execute every activity within OUM. OUM provides guidance for determining the core set of activities to be executed, the level of detail targeted in those activities and their associated tasks, and the frequency and type of end user deliverables. The project workplan should be developed from this core. The plan should then be scaled up, rather than tailored down, to the level of discipline appropriate to the identified risks and requirements.

    Even at the task level, models and work products should be completed only to the level of detail required for them to be fit-for-purpose within the current iteration or, at the project level, to suit the business needs of the enterprise and to meet the contractual obligations that govern the project. OUM provides well defined templates for many of its tasks. Use of these templates is optional as determined by the context of the project. Work products can easily be a model in a repository, a prototype, a checklist, a set of application code, or, in situations where a high degree of agility is warranted, simply the tacit knowledge contained in the brain of an analyst or practitioner. For further reading on agility, see Balancing Agility and Discipline: A Guide for the Perplexed.

    Read the article

  • Contract Work - Lessons Learned

    - by samerpaul
    I thought I would write a post of a different nature today, but still relevant to the tech world. I do a lot of contract jobs myself and really enjoy it. It's nice to keep jumping from project to project, and not having to go to an office or keep regular hours, etc. I really enjoy it. I have learned a lot in the past few years of doing it (both from experience and from help given to me by others, and the internet), so I thought I'd share some of that knowledge/experience today. So here are my own personal "lessons learned" that hopefully will help you if you find yourself doing contract work.

    Should I take the job?

    Ok, so this is the first step. Assuming you were given sufficient information about what they want, you should really think about what you're capable of doing and whether or not you should take this job. Personally, my rule is: if I know it's possible, I'll say yes, even if I don't yet know how to do it. That's because the internet is such a great help, it would be rare to run into an issue that you can't figure out with some help. So if your clients are asking for something that you don't yet know how to program, but you know you can do it on the platform, then go for it. How else are you going to learn? Use this rule with some limitation, however. If you're really lacking the expertise or foundation in something, then unless you have tons of time to complete the project, I wouldn't say yes. For example, I haven't personally done any 3D/OpenGL programming yet, so I wouldn't say yes to a project that extensively uses it.

    OK, so I want the job, but how much do I charge?

    This part can be tricky. There is no set formula really, but I have some tips for pricing that will hopefully give you a better idea of how to confidently ask your price and have them accept. Here are some personal guidelines.

    How much time do you have to complete the project? If it's shorter than average, then charge more. You can even make a subtle note about this (or not so subtle if they still don't get it). If it seems too short a time (i.e. near impossible to complete), be sure to say that. It looks bad to promise a time that you can't keep, and it makes it less likely for them to return to you for work.

    Your hourly rate: How long have you been working in that language? Do you have existing projects to back you up? Or previous contacts who can vouch for your work? Are there very few people with your particular skill set? All of these things will lend themselves to setting an hourly rate. I'd also try a quick Google search for your line of work, to see what the industry standard is at that point in time. I wouldn't price too low, because you want to make your time worth it. You also want them to feel like they're paying for quality work (assuming you can deliver it :) ). Finally, think about your client. If it's a small business, then don't price it too high if you want the job. If it's an enterprise (like a Fortune company), then don't be afraid to price higher. They have the budget for it.

    Fixed price: If they want a fixed-price project, then you need to think about how many hours it will take you to complete it and multiply that by the hourly rate you set for yourself. Then, honestly, I would add 10-20% on top of that. Why? Because nothing ever works exactly how you want it to. There are lots of times that something "trivial" is way harder than it should be, or something that "should work" doesn't for hours, and it eats away at your hourly rate. I can't count the number of times I encountered a logical bug that took away an entire day's work, because debuggers don't help in those cases. By adding that padding in, it's still OK to have those days where you don't get as much done as you want. And another useful tip: depending on your client and the scope, you most likely want to agree that you both sign off on a specification sheet before doing any work, and that any changes will result in a re-evaluation of the price. This is to help protect you from being handed a huge new addition to the project half-way in, without any extra payment.

    Scope of project: Finally, is it a huge project? Is it really small/fast? This affects how much your client will be willing to pay. If it sounds big, they will be willing to pay more for it. If it seems really small, then you won't be able to get away with a large asking price (as easily).

    Ok, I priced it, now what?

    So now that you have the price, you want to make sure it feels justified to your client. I never set a price before I can really think about everything. For example, if you're still in your introduction phase and they want a price, don't give one! Just comment that you will send them a proposal sheet with all the features outlined, and a price for everything. You don't want to shout out a low number and then deliver something that is way higher. You also don't want to shock them with a big number before they feel like they are getting a great product. Make up a proposal document in a word editor. Personally, I leave the price till the very end. Why? Because by the time they reach the end, you've already discussed all the great features you plan to implement, and how it's the best product they'll ever use, etc., so your price comes off as a steal! If you hit them up front with a price, they will read through the document with a negative bias. Think about those commercials on TV. They always go on about their product, and then at the end ask "What would you pay for something like this? $100? $50? How about $20!!". This is not by accident.

    Scenario: I finished the job way earlier than expected

    You have two options then. You can either polish the hell out of the application, and even throw in a few bonus features (assuming they are in line with the customer's needs), or you can sit and wait on it until you near your deadline. Why don't you want to turn it in too early? Because you should treat that extra time as a surplus. If you said it was going to take you 3 weeks, and it took you only 1, you have a surplus of 2 weeks. I personally don't want to let them know that I can do a 3-week project in 1 week. Why not? Because that may not always be the case! I may later have a 3-week project that takes all 3 weeks, but if I set a precedent of delivering super early, then the pressure is on for that longer project. It also makes it harder to quote longer times if you keep delivering too early. Feel free to deliver early, but again, don't do it too early. They may also wonder why they paid you for 3 weeks of work if you're done in 1. They may further wonder if the product sucks, or what is wrong with it, if it's done so early, etc. I would just polish the application. Everyone loves polish in their applications. The smallest details are what make an application go from "functional" to "fantastic". And since you are still delivering on time, they are still going to be very happy with you.

    Scenario: It's taking way too long to finish this, and the deadline is nearing/here!

    So this is not a fun scenario to be in, but it'll happen. Sometimes the scope of the project gets out of hand. The best policy here is openness/honesty. Tell them that the project is taking longer than expected, and give a reasonable time for when you think you'll have it done. I typically explain it in a way that makes it sound like it isn't something that I did wrong, but just something about the nature of the project. This really goes for any scenario, to be honest. Just continue to stay open and communicative about your progress. This doesn't mean that you should email them every five minutes (unless they want you to), but it does mean that maybe every few days or once a week you give them an update on where you're at, and what's next. They'll be happy to know they are paying for progress, and it'll make it easier to ask for an extension when something goes wrong, because they know that you've been working on it all along.

    Final tips and thoughts:

    In general, contract work is really fun and rewarding. It's nice to learn new things all the time, as mandated by the project, and to challenge yourself to do things you may not have done before. The key is to build a great relationship with your clients for future work, and for recommendations. I am always very honest with them and I never promise something I can't deliver. Again, under promise, over deliver!

    I hope this has proved helpful!

    Cheers,
    samerpaul

    Read the article

  • Usage of repository between EF model and code consumer

    - by jim
    I have binary data in my database that I'll have to convert to a bitmap at some point. I was thinking about whether or not it's appropriate to use a repository and do the conversion there. My consumer, which is a presentation layer, will use this repository. For example:

        // This is a class I created for modeling the item as is.
        public class RealItem
        {
            public string Name { get; set; }
            public Bitmap Image { get; set; }
        }

        public abstract class BaseRepository
        {
            // Using Unity (http://unity.codeplex.com) to inject the dependency of the entity context.
            [Dependency]
            public Context Context { get; set; }
        }

        public class ItemRepository : BaseRepository
        {
            public List<RealItem> Select()
            {
                IEnumerable<Items> items = from item in Context.Items
                                           select item;

                List<RealItem> lst = new List<RealItem>();
                foreach (var itm in items)
                {
                    MemoryStream stream = new MemoryStream(itm.Image);
                    Bitmap image = (Bitmap)Image.FromStream(stream);
                    RealItem ritem = new RealItem { Name = itm.Name, Image = image };
                    lst.Add(ritem);
                }
                return lst;
            }
        }

    Is this a correct way to use the repository pattern? I'm learning this pattern and I've seen a lot of examples online that are using a repository, but when I looked at their source code... for example:

        public IQueryable<object> Select()
        {
            return from q in base.Context
                   select q;
        }

    As you can see, no behavior is added to the system by their approach, so I was confused: maybe a repository is something else and I got it all wrong? In the end there should be extra benefits to using them, right?

    Read the article

  • A SelfHosted WCF Service over Basic HTTP Binding doesn't support more than 1000 concurrent requests

    - by Krishnan
    I have a self-hosted WCF service over BasicHttpBinding, consumed by an ASMX client. I'm simulating a concurrent user load of 1200 users. The service method takes a string parameter and returns a string. The data exchanged is less than 10KB. The processing time for a request is fixed at 2 seconds by having a Thread.Sleep(2000) statement. Nothing additional: I have removed all the DB hits / business logic.

    The same piece of code runs fine for 1000 concurrent users. I get the following error when I bump the number up to 1200 users:

        System.Net.WebException: The underlying connection was closed: An unexpected error occurred on a receive. ---> System.IO.IOException: Unable to read data from the transport connection: An existing connection was forcibly closed by the remote host. ---> System.Net.Sockets.SocketException: An existing connection was forcibly closed by the remote host
           at System.Net.Sockets.Socket.Receive(Byte[] buffer, Int32 offset, Int32 size, SocketFlags socketFlags)
           at System.Net.Sockets.NetworkStream.Read(Byte[] buffer, Int32 offset, Int32 size)
           --- End of inner exception stack trace ---
           at System.Net.Sockets.NetworkStream.Read(Byte[] buffer, Int32 offset, Int32 size)
           at System.Net.PooledStream.Read(Byte[] buffer, Int32 offset, Int32 size)
           at System.Net.Connection.SyncRead(HttpWebRequest request, Boolean userRetrievedStream, Boolean probeRead)
           --- End of inner exception stack trace ---
           at System.Web.Services.Protocols.WebClientProtocol.GetWebResponse(WebRequest request)
           at System.Web.Services.Protocols.HttpWebClientProtocol.GetWebResponse(WebRequest request)
           at System.Web.Services.Protocols.SoapHttpClientProtocol.Invoke(String methodName, Object[] parameters)
           at WCF.Throttling.Client.Service.Function2(String param)

    This exception is often reported on DataContract mismatch and large data exchange, but never when doing a load test. I have browsed enough and have tried most of the options, which include:

    - Enabled trace & message logging on the server side, but no errors were logged.
    - To overcome port exhaustion, MaxUserPort is set to 65535, and TcpTimedWaitDelay to 30 secs.
    - MaxConcurrentCalls is set to 600, and MaxConcurrentInstances is set to 1200.
    - The Open, Close, Send and Receive timeouts are set to 10 minutes.
    - HttpWebRequest KeepAlive is set to false.

    I have not been able to nail down the issue for the past two days. Any help would be appreciated. Thank you.
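    (For reference, a sketch of how the throttling values mentioned above are typically applied to a self-hosted service in code. The numbers are the ones from the question; the service type and address are placeholders, not taken from the post.)

        // Sketch: applying WCF service throttling to a self-hosted service.
        using System;
        using System.ServiceModel;
        using System.ServiceModel.Description;

        var host = new ServiceHost(typeof(MyService), new Uri("http://localhost:8000/svc"));

        var throttle = host.Description.Behaviors.Find<ServiceThrottlingBehavior>();
        if (throttle == null)
        {
            throttle = new ServiceThrottlingBehavior();
            host.Description.Behaviors.Add(throttle);
        }
        throttle.MaxConcurrentCalls = 600;
        throttle.MaxConcurrentInstances = 1200;
        throttle.MaxConcurrentSessions = 1200; // the often-forgotten third knob

        host.Open();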

    Read the article

  • C# Simple Twitter Update

    - by mroberts
    For what it's worth, a simple Twitter update:

        using System;
        using System.IO;
        using System.Net;
        using System.Text;

        namespace Server.Actions
        {
            public class TwitterUpdate
            {
                public string Body { get; set; }
                public string Login { get; set; }
                public string Password { get; set; }

                public void Execute()
                {
                    try
                    {
                        // encode user name and password
                        string creds = Convert.ToBase64String(
                            Encoding.ASCII.GetBytes(string.Format("{0}:{1}", this.Login, this.Password)));

                        // encode tweet
                        byte[] tweet = Encoding.ASCII.GetBytes("status=" + this.Body);

                        // set up request
                        HttpWebRequest request = (HttpWebRequest)WebRequest.Create("http://twitter.com/statuses/update.xml");
                        request.Method = "POST";
                        request.ServicePoint.Expect100Continue = false;
                        request.Headers.Add("Authorization", string.Format("Basic {0}", creds));
                        request.ContentType = "application/x-www-form-urlencoded";
                        request.ContentLength = tweet.Length;

                        // write to stream
                        Stream reqStream = request.GetRequestStream();
                        reqStream.Write(tweet, 0, tweet.Length);
                        reqStream.Close();

                        // check response
                        HttpWebResponse response = (HttpWebResponse)request.GetResponse();
                        //...
                    }
                    catch (Exception e)
                    {
                        //...
                    }
                }
            }
        }

    Read the article

  • How to change keyboard layout?

    - by swedishhh
    I'm on Ubuntu 12.04. Recently I bought a cheap Apple-style Bluetooth keyboard. It pairs OK; I paired it with the current 102-key keyboard still attached. Anyway, I noticed that the character mapping is incorrect. Most keys do not type anything; some keys on the right (k, l, ;, ') give numbers, but that's about it. So I rebooted with the 102-key keyboard unattached and the Bluetooth keyboard on, ready to connect. After boot, at the login screen, the Bluetooth keyboard had paired. I typed my password, and it logged in fine!! However, after the user login was complete it reverted to the broken behaviour. A glance at the layout chart shows Ubuntu thinks I still have the 102 layout, even though it remained disconnected. Any ideas? Thanks, Dave

    Read the article

  • Working with Reporting Services Filters–Part 5: OR Logic

    - by smisner
    When you combine multiple filters, Reporting Services uses AND logic. Once upon a time, there was actually a drop-down list for selecting AND or OR between filters, which was very confusing to people because often it was grayed out. Now that selection is gone, but no matter. It wouldn’t help us solve the problem that I want to describe today.

    As with many problems, Reporting Services gives us more than one way to apply OR logic in a filter. If I want a filter to include this value OR that value for the same field, one approach is to set up the filter using the IN operator, as I explained in Part 1 of this series. But what if I want to base the filter on two different fields? I need a different solution.

    Using the AdventureWorksDW2008R2 database, I have a report that lists product sales. Let’s say that I want to filter this report to show only products that are Bikes (a category) OR products for which sales were greater than $1,000 in a year. If I set up the filter like this:

        Expression      Data Type   Operator   Value
        [Category]      Text        =          Bikes
        [SalesAmount]               >          1000

    then AND logic is used, which means that both conditions must be true. That’s not the result I want. Instead, I need to set up the filter like this:

        Expression:  =Fields!EnglishProductCategoryName.Value = "Bikes" OR Fields!SalesAmount.Value > 1000
        Data Type:   Boolean
        Operator:    =
        Value:       =True

    The OR logic needs to be part of the expression so that it can return a Boolean value that we test against the Value. Notice that I have used =True rather than True for the value. The filtered report appears below. Any non-bike product appears only if its total sales exceed $1,000, whereas Bikes appear regardless of sales. (You can’t see it in this screenshot, but Mountain-400-W Silver, 38 has sales of $923 in 2007 but gets included because it is in the Bikes category.)

    Read the article

  • Constrained/penalized distance function

    - by sigma.z.1980
    Assume a character is located on an n-by-n grid and has to reach a certain entry on that grid. Its current position is (x1, y1). Also on the same grid is an enemy with coordinates (x2, y2). Each step, the algorithm randomly generates new candidate locations for the hero (if there are k candidates, then there is a k x 2 matrix of new potential locations). What I need is some distance objective function to compare the candidates. I'm currently using d1 - c * d2, where d1 is the distance to the objective (measured in terms of number of pixels along each axis), d2 is the distance to the enemy, and c is some coefficient (this is very much like a set-up for a Lagrangian). It's not working very well, though. I'd be quite keen to learn what constrained distance functions are used for similar cases. Any suggestions are very much appreciated.
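    (To make the set-up concrete, here is a small illustrative sketch of scoring k candidates with the d1 - c * d2 objective described above. It is my own illustration: the coordinates are made up, and it uses Euclidean distance where the poster's per-axis pixel measure might differ.)

        // Score each candidate by distance-to-goal minus c * distance-to-enemy;
        // the candidate with the lowest score wins. All coordinates are made up.
        using System;

        class CandidateScoring
        {
            static double Dist(double ax, double ay, double bx, double by)
                => Math.Sqrt((ax - bx) * (ax - bx) + (ay - by) * (ay - by));

            static void Main()
            {
                var goal = (x: 90.0, y: 90.0);
                var enemy = (x: 50.0, y: 50.0);
                double c = 0.5; // penalty weight on enemy proximity

                var candidates = new[] { (x: 60.0, y: 55.0), (x: 70.0, y: 80.0), (x: 40.0, y: 90.0) };

                foreach (var p in candidates)
                {
                    double d1 = Dist(p.x, p.y, goal.x, goal.y);   // distance to objective
                    double d2 = Dist(p.x, p.y, enemy.x, enemy.y); // distance to enemy
                    double score = d1 - c * d2;                   // lower is better
                    Console.WriteLine($"({p.x},{p.y}) -> {score:F2}");
                }
            }
        }

    One practical wrinkle with the linear form d1 - c * d2 is that a large c rewards running away from the enemy even when that moves the hero away from the goal; a capped or inverse penalty such as d1 + c / (d2 + 1) is a common alternative to experiment with.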

    Read the article

  • Relative encapsulation design

    - by taher1992
    Let's say I am doing a 2D application with the following design: there is a Level object that manages the world, and there are world objects which are entities inside the Level object. A world object has a location and velocity, as well as a size and a texture. However, a world object only exposes get properties. The set properties are private (or protected) and are only available to inherited classes.

    But of course, Level is responsible for these world objects, and must somehow be able to manipulate at least some of their private setters. As of now, Level has no access, meaning world objects would have to change their private setters to public (violating encapsulation). How do I tackle this problem? Should I just make everything public?

    Currently what I'm doing is having an inner class inside the game object that does the set work. So when Level needs to update an object's location it goes something like this:

        void ChangeObject(GameObject targetObject, int newX, int newY)
        {
            // targetObject.SetX and targetObject.SetY cannot be set directly
            var setter = new GameObject.Setter(targetObject);
            setter.SetX(newX);
            setter.SetY(newY);
        }

    This code feels like overkill, but it doesn't feel right to have everything public, so that anything can change an object's location, for example.
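    (For illustration, my own sketch rather than code from the question: the nested-setter idea works because in C# a nested class can reach the private members of its containing type, so a GameObject.Setter matching the usage above could be declared like this:)

        // Hypothetical sketch of the nested-setter pattern described above.
        public class GameObject
        {
            public int X { get; private set; }
            public int Y { get; private set; }

            // A nested class has access to the outer class's private setters.
            public class Setter
            {
                private readonly GameObject _target;

                public Setter(GameObject target) { _target = target; }

                public void SetX(int x) { _target.X = x; }
                public void SetY(int y) { _target.Y = y; }
            }
        }

    Mutation then requires deliberately constructing a Setter, which keeps casual callers away from the state without making the properties public.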

    Read the article

  • Windows Phone 7 Design Template

    Expression Blend is a wonderful design environment for WP7 (Windows Phone 7) but for quickly visualizing a concept nothing beats Illustrator! I am excited about WP7 and decided that having a solid .ai template would prove invaluable. Some of the details of the WP7 UI Design and Interaction Guide are a bit fuzzy (literally) but I was able to generate some useful layout guides, character styles, and symbols. While the template does not cover every aspect of the guide I think it is a good launching point; if you find it useful and extend it please share your updates (I created the template in CS4, if you have problems in earlier versions let me know).

    Read the article

  • XSF-FO intellisense and national languages with Apache FOP

    - by Lukasz Kurylo
    Some time ago I showed how to get intellisense and how to configure FO.NET to handle national characters inside the generated pdf files. Due to the limitations that I mentioned in my previous post, I started playing with Apache FOP. In this post I want to show how to achieve the same results as in the two posts related to FO.NET.

    Intellisense

    To get intellisense in the XSL-FO templates, set the xsi:schemaLocation the same way I showed in this post. The only difference from FO.NET is that, when generating the document, the code I showed last time will throw an exception:

        org.apache.fop.fo.ValidationException: Invalid property encountered on "fo:root": xsi:schemaLocation (See position 6:11)

    Fortunately there is a very easy way to resolve this without removing the entire attribute along with the intellisense. Add the ignored namespace to the FopFactory:

        FopFactory fopFactory = FopFactory.newInstance();
        fopFactory.ignoreNamespace("http://www.w3.org/2001/XMLSchema-instance");

    Notice that the URL specified in this method is the namespace for the xmlns:xsi namespace, not the xsi:schemaLocation.

    Fonts / national characters

    This point is a little different to achieve, but no more complicated than it was with FO.NET. To set the fonts in Apache FOP 1.0, we need a configuration file. A sample one can be found in the directory where we unpacked the FOP binaries, in the conf subdirectory, in a file called fop.xconf. We must copy this file to our solution. In the simplest case, in the <fonts> tag we can add <auto-detect/>. Thanks to this, FOP will index all fonts available on the installed operating system. There should be no problem if we have an http handler or a WCF service on the server that serves the generated pdf documents; in this situation we can use all the fonts available on this server.

    To use this config file, we must set a path to it:

        FopFactory fopFactory = FopFactory.newInstance();
        fopFactory.setUserConfig(new File("fop.xconf"));

    Read the article

  • Why does 301 redirect work for http but not for https?

    - by Tom G
    Through my domain registrar I have set up a domain, essayme.co.uk, to automatically forward to https://google.com. If I go to http://essayme.co.uk it works as expected and redirects me to https://google.com:

        $ curl -i http://essayme.co.uk
        HTTP/1.1 301 Moved Permanently
        Cache-Control: max-age=900
        Content-Type: text/html
        Location: https://google.com
        Server: Microsoft-IIS/7.5
        X-AspNet-Version: 4.0.30319
        X-Powered-By: ASP.NET
        Date: Sat, 07 Jun 2014 11:14:16 GMT
        Content-Length: 0
        Age: 0
        Connection: keep-alive

    However, if I go to https://essayme.co.uk it just freezes and times out:

        $ curl -i https://essayme.co.uk
        curl: (7) Failed connect to essayme.co.uk:443; Operation timed out

    What is happening in the second case? (And, if possible, how can I get the redirect to work for https?)

    Problem background/clarification: I don't have an SSL certificate for the essayme.co.uk domain above, but I do for my live domain (let's call it mywebsite.com), and I was seeing the exact same problem on this domain (hence why I'm trying to debug the problem). Unfortunately I can't experiment with the live domain (as it's live) and I would like to avoid having to buy a second certificate for essayme.co.uk just for debugging (unless absolutely necessary).

    The problem I was seeing: my live domain, mywebsite.com (not its real name), has a valid SSL certificate. Visiting https://www.mywebsite.com displayed the webpage as expected. I had set up forwarding (like in the question above) from the naked domain (mywebsite.com) to https://www.mywebsite.com. Visiting http://mywebsite.com redirected to https://www.mywebsite.com as expected. However, visiting https://mywebsite.com would freeze and time out (as in the question above).

    I also tried forwarding it to http://www.otherwebsite.com as an experiment (i.e. forwarding to another site that does not use SSL), but the result was the same: visiting http://mywebsite.com redirected to http://www.otherwebsite.com as expected; visiting https://mywebsite.com would freeze and time out again. So I set up essayme.co.uk as an experiment to try and understand why it doesn't work.

    Read the article

  • Setting XSL-FO XML Schema in Visual Studio

    - by Lukasz Kurylo
    I'm playing lately with XSL-FO for generating pdf documents. XSL-FO has a long list of available tags and attributes, and for somebody new who wants to create a simple document, finding the proper one is a nightmare. Fortunately we can set a schema for XSL-FO, which gives us full intellisense in VS. For a simple *.fo file, we can set the path to the schema directly in the file:

        <?xml version="1.0" encoding="utf-8"?>
        <fo:root
              xmlns:fo="http://www.w3.org/1999/XSL/Format"
              xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
              xsi:schemaLocation="http://www.w3.org/1999/XSL/Format http://www.xmlblueprint.com/documents/fop.xsd">
        ...

    We can of course use the built-in VS XML Schemas selector instead. To use it, we must copy the schema file to the Schemas catalog (the default path for VS2012 is C:\Program Files (x86)\Microsoft Visual Studio 11.0\Xml\Schemas). Then we can go to Properties of the opened xml/xslt file and set the newly added schema for the file. From now on, we should have intellisense enabled, as shown below.

    Read the article
