Search Results

Search found 3192 results on 128 pages for 'implicit conversion'.

Page 62/128 | < Previous Page | 58 59 60 61 62 63 64 65 66 67 68 69  | Next Page >

  • Generating Debt Leads by Using Mobile Marketing

    If you are a debt leads generation company and you're using these methods of generating leads, you should know that the fewer fields the consumer has to fill out, the higher the lead conversion. Every extra field reduces the chance of the user clicking the submit button.

    Read the article

  • Building DVD discs with Bombono

    LWN.net: "DVD authoring can be a deceptively tricky business. Though it can take some time to convert source video to the proper MPEG format, the technical requirements for the various regions are standardized, so it is at least possible to configure the conversion tools and do it right, once and for all."

    Read the article

  • Localising Your SEO - How to Do This Successfully

    Getting to the top of the search engines can be easier for some businesses and niches than for others, but sometimes it is better to concentrate your Search Engine Optimisation efforts on local searches, especially if your products are more suited to a local customer base. Not only is this easier than competing for national searches, but your conversion rate will usually be higher, because local people generally prefer to purchase from or use the services of local businesses.

    Read the article

  • SEO Content to Deliver Value to Your Customers

    When most people talk about SEO content, they mean content written merely to please the search engines; when readers go through it, they find nothing but a whole lot of crap. Such content might get visitors to your site, but it will be very difficult to keep them there for long, because once they read the badly organized content they'll leave. That means your conversion rate will be pretty low, which ultimately means you are not making enough sales.

    Read the article

  • Problem with XLSX file on Office 2003 with Compatibility Pack

    - by MadBoy
    I have a machine with Office 2003 and the Compatibility Pack installed. A user received a file she has to work with in XLSX format (the file has to stay in that format, so saving it as XLS is not an option). When she opens the file from the Desktop or any other location, it gives an error like "Cannot find the file in following location" and in the background starts converting the XLSX file to XLS. In the end Excel opens with some random file name such as X000008.xls (Read Only), which is just a 1:1 conversion of the XLSX document. However, if I go directly to Excel and use File / Open to open the XLSX file, there is no error and no conversion is done. The file is writable and can be saved in XLSX format. The file name is simple, like This_file something.XLSX; I've even tried removing all the spaces, with no better results. Anyone have any recommendations? So far I have: 1. Uninstalled the Compatibility Pack and installed it again; tried opening the file without any other updates and the error still pops up. In progress: I am now running all possible Office updates (about 5 of them), which I suppose won't fix anything (as they were already applied when I uninstalled the Compatibility Pack). Next up (not yet done): installing Windows XP SP3. What else can I do?

    Read the article

  • Problem booting virtual machine after converting VMDK to VHD

    - by vg1890
    I used the VMware vCenter Converter Standalone Client to convert a physical drive on my old PC to a virtual drive. The conversion worked fine and I ended up with a valid VMDK file. Next, I wanted to convert the VMDK to a VHD for use with Microsoft Virtual PC, since that's what I use on my new box. I used WinImage for the conversion and that worked fine, too. I can access the files from the virtual drive through WinImage. However, when I create a new virtual machine using Virtual PC and add the existing VHD file, the machine doesn't boot. The initial boot screen flashes with the amount of RAM and then the screen goes black. If I turn off the VM and reboot in safe mode, I can see the drivers being loaded until eventually it gets to crcdisk.sys and hangs indefinitely. Any ideas how to fix this? I'm not opposed to starting over from scratch if there's another method to turn my physical machine into a Virtual PC VM. Thanks!
    EDIT - I should add that the virtual drive is a system boot drive and not a secondary drive.
    EDIT - I tried booting from the install CD and doing a repair. The result was that the system could not be repaired due to a "driver error."

    Read the article

  • Simple P2V help from Linux to Windows

    - by Ke.
    I have two OSes installed on different drives in my PC: one Linux (CentOS 5.4) and one Windows 7. It's getting tiresome to constantly have to stop and restart the PC when I want to use the other OS. I would very much like to use Windows 7 as my host OS and access my Linux OS from within Windows. However, I'm having trouble deciphering exactly how to do this (many of the articles seem confusing and a bit overkill). From what I have seen, it's possible to use VMware Converter to convert the physical Linux image to a virtual image so that I can use it in Windows. As I'm having problems understanding how this is done, I would really appreciate a step-by-step guide (for a newbie), or any simple tutorials you can point me at. Some questions beforehand:
    1) My Linux image is around 80 GB; do I need to take this into consideration? The Linux drive is around 180 GB in total. All my other drives are NTFS and not writable from Linux (as I use them in Windows and NTFS support is dodgy in Linux), so it's probably not possible to move the image over to my NTFS drives.
    2) Can I just zip the Linux files up somehow and transfer them to Windows to create the P2V image?
    3) Is it possible to do the P2V conversion while I am logged into Windows? I can see the actual Linux drive loaded in Disk Management, but Windows doesn't read Linux file systems, so I'm confused as to how to access the Linux drive, if this is possible.
    4) Or will I need to do the whole P2V conversion inside Linux?
    Cheers, any help is much appreciated. Ke (a confused P2V newbie)

    Read the article

  • Is ffmpeg incorrectly interpreting .aif files?

    - by marue
    On an Ubuntu 10.04 server, I installed the ffmpeg packages with apt. ffmpeg works afterwards and does what it should. Almost. For testing purposes I uploaded a few audio files. One of them, an .aif file, is not being interpreted correctly. While on my workhorse (Mac Snow Leopard) ffmpeg reports the format as

        Stream #0.0: Audio: pcm_s24be, 44100 Hz, 2 channels, s32, 2116 kb/s

    my Ubuntu server says it is

        Stream #0.0: Audio: pcm_s24be, 44100 Hz, stereo, s16, 2116 kb/s

    which is the wrong bit depth. Ubuntu then fails to convert the file with the error message

        [pcm_s24be @ 0xcd4b580]invalid PCM packet
        Error while decoding stream #0.0

    which is certainly not true; the file is perfectly valid. Are there any known issues with ffmpeg interpreting the AIFF format? How can I find out which version of the AIFF codec ffmpeg is using? Any ideas how to approach this issue? ffprobe output:

        FFprobe version SVN-r20090707, Copyright (c) 2007-2009 Stefano Sabatini
          libavutil   49.15. 0 / 49.15. 0
          libavcodec  52.20. 0 / 52.20. 1
          libavformat 52.31. 0 / 52.31. 0
          built on Jan 20 2010 00:13:01, gcc: 4.4.3 20100116 (prerelease)
        Input #0, aiff, from 'testfile.aif':
          Duration: 00:00:04.00, start: 0.000000, bitrate: 2117 kb/s
            Stream #0.0: Audio: pcm_s24be, 44100 Hz, stereo, s16, 2116 kb/s

    Update 2: Forcing the conversion with -sample_fmt s32 doesn't change anything. Strange thing is: even without -sample_fmt s32, I just realized that the conversion actually works and creates valid audio files. There is just the error message from above.

    Read the article

  • (Win7) Gets stuck with ~1% CPU. Especially with multithreading

    - by meow
    Windows 7 32-bit, up to date, Intel i7 860. (For some reason the company runs 32-bit Windows everywhere.) I have updated all motherboard drivers etc. as far as possible. I have a performance issue with this machine which appears in connection with multithreading (or so I think). As an example (and where I most often see it, but it appears with other programs as well): ProteoWizard is a file conversion tool for mass spectrometry files. I can add a list of files and it will attempt to process up to 8 files in parallel (quad core x 2 threads/core). If I choose 1 to 6 files and start the process, it goes straight through. If I have 7 or more files in the queue, conversion goes to ~20%, then gets stuck for 15 seconds, then continues again, always in "chunks" of a few percent before getting stuck again. During the time the process is stuck, CPU is at 1%. RAM is not the limit; it sits at maybe 70% or so and does not grow. I don't get the same problem on other, even slower machines. The computer also gets stuck at 1% CPU doing nothing on other occasions, but it is most frequent with multithreading. Where should I look for the problem?

    Read the article

  • How to change password on RAR archive w/o modifying arch. files attributes (modified/created)?

    - by Larry78
    How do I change the password of a .RAR archive without changing the date/time attributes of the files in the archive? Unfortunately you can't directly change the password of the archive with WinRAR; you have to extract the files and then make a new archive with the new password, so the created/modified attributes of the files in the archive get changed. I know you can manually change the attributes of a file with available utilities, but there are hundreds of files in the archive, each with unique attributes, so it would take a very long time to "fix" each file before re-archiving it. I'm using WinRAR 3.51, the last free version, on Windows XP Pro SP3.
    Update: I don't care if the output is a .RAR file or a ZIP file. IZArc 4.1 will convert the RAR to a ZIP, and it keeps the dates. The problem is that it compresses the files; there isn't a "store" option, and setting the default to store in the main configuration doesn't affect conversions. The RAR contains uncompressed files. None of the other archiving programs will even do a conversion. A couple claim to, or try to, but the errors returned indicate a very lousy application. So far I've tried PeaZip, 7-Zip, FilZip, TugZip, SimplyZipSE, QuickZip, and WinShrink (from downloads.cnet.com). WinRAR gives the error "skipping encrypted archive" when I try the conversion. It asks for the password first, and I know it's right, as I opened the archive and can read/view all the files in it. It works on non-encrypted files.

    Read the article

  • Getting time in Ubuntu

    - by user2578666
    #include <time.h>
    #include <stdio.h>

    int GetTime() {
        struct timespec tsp;
        clock_gettime(CLOCK_REALTIME, &tsp); // call clock_gettime to fill tsp
        fprintf(stdout, "time=%d.%d\n", tsp.tv_sec, tsp.tv_nsec);
        fflush(stdout);
    }

    I am trying to compile the above code but it keeps throwing these errors:

        time.c: In function ‘GetTime’:
        time.c:12:4: warning: implicit declaration of function ‘clock_gettime’ [-Wimplicit-function-declaration]
        time.c:12:18: error: ‘CLOCK_REALTIME’ undeclared (first use in this function)
        time.c:12:18: note: each undeclared identifier is reported only once for each function it appears in
        time.c:14:4: warning: format ‘%d’ expects argument of type ‘int’, but argument 3 has type ‘__time_t’ [-Wformat]
        time.c:14:4: warning: format ‘%d’ expects argument of type ‘int’, but argument 4 has type ‘long int’ [-Wformat]

    I have tried compiling with the -lrt flag and with -std=gnu99. Nothing works.

    Read the article

  • Connection Pooling is Busted

    - by MightyZot
    A few weeks ago we started getting complaints about performance in an application that has performed very well for many years.  The application is an n-tier application that uses ADODB with the SQLOLEDB provider to talk to a SQL Server database.  Our object model is written in such a way that each public method validates security before performing requested actions, so there is a significant number of queries executed to get information about file cabinets, retrieve images, create workflows, etc.  (PaperWise is a document management and workflow system.)  A common factor for these customers is that they have remote offices connected via MPLS networks.

    Naturally, the first thing we looked at was the query performance in SQL Profiler.  All of the queries were executing within expected timeframes, most of them so fast that the duration in SQL Profiler was zero.  After getting nowhere with SQL Profiler, the situation was escalated to me.  I decided to take a peek with Process Monitor.  Procmon revealed some "gaps" in the TCP/IP traffic.  There were notable delays between send and receive pairs.  The send and receive pairs themselves were quite snappy, but quite often there was a notable delay between a receive and the next send.  You might expect some delay because, presumably, the application is doing some thinking in between the pairs.  But comparing the procmon data at the remote locations with the procmon data for workstations on the local network showed that the remote workstations were significantly delayed.  Procmon also showed a high number of disconnects.

    Wireshark traces showed that connections to the database were taking between 75ms and 150ms.  Not only that, but connections to a file share containing images were taking 2 seconds!  So, I asked about a trust.  Sure enough there was a trust between two domains and the file share was on the second domain.  Joining a remote workstation to the domain hosting the share containing images alleviated the time delay in accessing the file share.  Removing the trust had no effect on the connections to the database.

    Microsoft Network Monitor includes filters that parse TDS packets.  TDS is the protocol that SQL Server uses to communicate.  There is a certificate exchange and some SSL that occurs during authentication.  All of this was evident in the network traffic.  After staring at the network traffic for a while, and examining packets, I decided to call it a night.  On the way home that night, something about the traffic kept nagging at me.  Then it dawned on me…at the beginning of the dance of packets between the client and the server, all was well.  Connection pooling was working and I could see multiple queries getting executed on the same connection and ephemeral port.  After a particular query, connecting to two different servers, I noticed that ADODB and SQLOLEDB started making repeated connections to the database on different ephemeral ports.  SQL Server would execute a single query and respond on a port, then open a new port and execute the next query.  Connection pooling appeared to be broken.

    The next morning I wrote a test to confirm my hypothesis.  It turns out that the sequence causing the connection nastiness goes something like this:

    1. Make a connection to the database.
    2. Open a result set that returns enough records to require multiple roundtrips to the server.
    3. For each result, query for some other data in the database (this will open a new implicit connection).
    4. Close the inner result set and repeat for every item in the original result set.
    5. Close the original connection.

    Provided that the first result set returns enough data to require multiple roundtrips to the server, ADODB and SQLOLEDB will start making new connections to the database for each query executed in the loop.  Originally, I thought this might be due to Microsoft's denial-of-service (DoS) attack protection.  After turning those features off to no avail, I eventually thought to switch my queries to client-side cursors instead of server-side cursors.  Server-side cursors are the default, by the way.  Voila!  After switching to client-side cursors, the disconnects were gone and the above sequence yielded two connections as expected, as in the sketch below.

    While the real problem is the amount of time it takes to make connections over these MPLS networks (100ms on average), switching to client-side cursors made the problem go away.  Believe it or not, this is actually documented by Microsoft, and rather difficult to find.  (At least it was while we were trying to troubleshoot the problem!)  So, if you're noticing performance issues on slower networks, or networks with slower switching, take a look at the traffic in a tool like Microsoft Network Monitor.  If you notice a high number of disconnects, and you're using fire-hose or server-side cursors, then try switching to client-side cursors and you may see the problem go away.

    Most likely, Microsoft believes this to be appropriate behavior, because ADODB can't guarantee that all of the data has been retrieved when you execute the inner queries.  I'm not convinced, though, because the problem remains even after replacing all of the implicit connections with explicit connections and closing those connections in between each of the inner queries.  In that case, there doesn't seem to be a reason why ADODB can't use a single connection from the connection pool to make the additional queries, bringing the total number of connections to two.  Instead ADO appears to make an assumption about the state of the connection.

    I've reported the behavior to Microsoft and am waiting to hear from the appropriate team, so that I can demonstrate the problem.  Maybe they can explain to us why this is appropriate behavior.  :)
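
    The switch itself is a one-line change on the recordset. Here is a minimal sketch using the ADODB COM interop types from C#; the connection string, query, and loop body are placeholder assumptions, not the application's actual code:

        // Hypothetical sketch: forcing a client-side cursor with ADODB via COM interop.
        // Requires a reference to the ADODB primary interop assembly.
        using ADODB;

        class CursorDemo
        {
            static void Main()
            {
                var conn = new Connection();
                conn.Open("Provider=SQLOLEDB;Data Source=.;Integrated Security=SSPI;", "", "", -1);

                var rs = new Recordset();
                // adUseServer is the default; adUseClient pulls the whole result set
                // to the client, so the inner queries below no longer force ADODB
                // to open fresh connections mid-stream.
                rs.CursorLocation = CursorLocationEnum.adUseClient;
                rs.Open("SELECT Id FROM FileCabinets", conn,
                        CursorTypeEnum.adOpenStatic, LockTypeEnum.adLockReadOnly, -1);

                while (!rs.EOF)
                {
                    // ... execute the inner per-record queries here ...
                    rs.MoveNext();
                }

                rs.Close();
                conn.Close();
            }
        }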

    Read the article

  • Do Not Optimize Without Measuring

    - by Alois Kraus
    Recently I had to do some performance work which included reading a lot of code. It is fascinating what ideas people come up with to solve a problem. Especially when there is no problem. When you look at other people's code, you will not be able to tell whether it performs well just by reading it. You need to execute it with some sort of tracing, or even better, under a profiler. The first rule of the performance club is not to think and then optimize, but to measure, think and then optimize. The second rule is to do this in a loop, to prevent bad things from slipping into your code base for too long.

    If you skip the measure step for some reason and optimize directly, it is like changing the wave function in quantum mechanics. This has no observable effect in our world, since it represents only a probability distribution of all possible values. In quantum mechanics you need to let the wave function collapse to a single value. A collapsed wave function therefore has not many values but one distinct value. This is what we physicists call a measurement. If you optimize your application without measuring it, you are just changing the probability distribution of your potential performance values. Which performance your application actually has is still unknown. You only know that it will be within a specific range with a certain probability. As usual there are unlikely values within your distribution, like a startup time of 20 minutes which should happen only once in 100 000 years. 100 000 years are a very short time when the first customer tries your heavily distributed networking application over a slow WIFI network…

    What is the point of this? Every programmer/architect has a mental performance model in his head. A model always has a set of explicit preconditions and a lot more implicit assumptions baked into it. When the model is good it will help you to think of good designs, but it can also be the source of problems. In real-world systems, not all assumptions of your performance model (implicit or explicit) hold true any longer. The only way to connect your performance model to the real world is to measure. In the WIFI example the model assumed a low-latency, high-bandwidth LAN connection. When this assumption became wrong, the system had a drastic change in startup time.

    Let's look at an example. Let's assume we want to cache some expensive UI resource like font objects. For this undertaking we create a Cache class with the UI themes we want to support. Since fonts are expensive objects, we create them on demand the first time the theme is requested. A simple example of a theme cache might look like this:

        using System;
        using System.Collections.Generic;
        using System.Drawing;

        struct Theme
        {
            public Color Color;
            public Font Font;
        }

        static class ThemeCache
        {
            static Dictionary<string, Theme> _Cache = new Dictionary<string, Theme>
            {
                { "Default", new Theme { Color = Color.AliceBlue } },
                { "Theme12", new Theme { Color = Color.Aqua } },
            };

            public static Theme Get(string theme)
            {
                Theme cached = _Cache[theme];
                if (cached.Font == null)
                {
                    Console.WriteLine("Creating new font");
                    cached.Font = new Font("Arial", 8);
                }
                return cached;
            }
        }

        class Program
        {
            static void Main(string[] args)
            {
                Theme item = ThemeCache.Get("Theme12");
                item = ThemeCache.Get("Theme12");
            }
        }

    This cache should create font objects only once, since on the first retrieval of a Theme object the font is added to it. When we run the application it should print "Creating new font" only once. Right? Wrong!

    The vigilant readers have spotted the issue already. The creator of this cache class wanted maximum performance, so he decided that the Theme object should be a value type (struct) to not put too much pressure on the garbage collector. The code

        Theme cached = _Cache[theme];
        if (cached.Font == null)
        {
            Console.WriteLine("Creating new font");
            cached.Font = new Font("Arial", 8);
        }

    works with a copy of the value stored in the dictionary. This means we mutate a copy of the Theme object and return it to our caller, but the original Theme object in the dictionary will always have null for the Font field! The solution is to change the declaration of struct Theme to class Theme, or to update the theme object in the dictionary (see the sketch below). Our cache as it currently stands is actually a non-caching cache.

    The funny thing was that I found this out with a profiler, by looking at which objects were finalized. I found way too many font objects being finalized. After a bit of debugging I found that the allocation source for the Font objects was this cache. Since this cache had been there for years, it means that:

    1. the cache was never needed, since I found no perf issue due to the creation of font objects.
    2. the cache was never profiled to see if it brought any performance gain.
    3. to make the cache beneficial, it needs to be accessed much more often.

    That was the story of the non-caching cache. Next time I will write something about measuring.
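
    To make the article's first fix concrete, here is the corrected cache with Theme declared as a class, so Get mutates the very instance the dictionary holds (the alternative fix, writing the updated struct back into the dictionary, works as well):

        using System;
        using System.Collections.Generic;
        using System.Drawing;

        class Theme   // class instead of struct: the dictionary now stores a reference
        {
            public Color Color;
            public Font Font;
        }

        static class ThemeCache
        {
            static Dictionary<string, Theme> _Cache = new Dictionary<string, Theme>
            {
                { "Default", new Theme { Color = Color.AliceBlue } },
                { "Theme12", new Theme { Color = Color.Aqua } },
            };

            public static Theme Get(string theme)
            {
                Theme cached = _Cache[theme];
                if (cached.Font == null)
                {
                    Console.WriteLine("Creating new font");
                    // Mutates the instance held by the dictionary, so each
                    // theme's font is created exactly once.
                    cached.Font = new Font("Arial", 8);
                }
                return cached;
            }
        }

        class Program
        {
            static void Main(string[] args)
            {
                Theme item = ThemeCache.Get("Theme12");
                item = ThemeCache.Get("Theme12");   // prints "Creating new font" once
            }
        }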

    Read the article

  • Thread-safety in Cocos2d-iPhone?

    - by Malax
    After tinkering a bit with cocos2d, I discovered that there is no classic game loop and everything is more or less event-driven. I guess I can wrap my head around that, no problem. But I cannot find anything about thread safety. Say I schedule something to occur every two seconds: which thread will run the code? Given that I cannot find anything about it, I guess there is just one cocos2d thread and everything will be fine. Nevertheless, this implicit assumption does not give me a good feeling. Knowing is better than guessing. ;-) Can anyone shed some light on this topic?

    Read the article

  • Dynamically apply and change Theme with the Silverlight Toolkit

    This tutorial shows you how to theme your Silverlight app and allow the user to switch themes dynamically. Introduction: It's been a while since the Silverlight Toolkit gained theming support, but with the April 2010 version and the new features of Silverlight 4 it is now easier than ever to apply those themes. What you need to know: ImplicitStyleManager was removed from the Toolkit because implicit styles are now supported natively in Silverlight 4, and the Toolkit now has a Theme control...

    Read the article

  • C# Adds Optional and Named Arguments

    Earlier this month Microsoft released Visual Studio 2010, the .NET Framework 4.0 (which includes ASP.NET 4.0), and new versions of their core programming languages: C# 4.0 and Visual Basic 10. In designing the latest versions of C# and VB, Microsoft has worked to bring the two languages into closer parity. Certain features available in C# were missing in VB, and vice versa. Last week I wrote about Visual Basic 2010's language enhancements (http://www.4guysfromrolla.com/articles/042110-1.aspx), which include implicit line continuation, auto-implemented properties, and collection initializers - three useful features that were available in previous versions of C#. Similarly, C# 4.0 introduces new features to the C# programming language that were...
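
    As a quick illustration of the two features named in the title (a minimal sketch, not taken from the article itself), optional parameters let callers omit arguments that have defaults, and named arguments let them pass the rest in any order:

        using System;

        class Demo
        {
            // cc and isHtml are optional parameters with default values.
            static void Send(string to, string cc = null, bool isHtml = false)
            {
                Console.WriteLine("to={0} cc={1} html={2}", to, cc ?? "(none)", isHtml);
            }

            static void Main()
            {
                Send("a@example.com");                // both optional arguments omitted
                Send("a@example.com", isHtml: true);  // named argument skips over cc
            }
        }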

    Read the article

  • What liability concerns do advertising vendors raise, and how can I address them?

    - by Beofett
    One of the websites I administer wants to provide free advertising in the form of direct links to vendors at an event they are running. Up until now, there has been no advertising whatsoever on the site (or any of our other sites). The site is for a for-profit business. The idea of implicit endorsement of any vendors we advertise has been raised, which brought up the question of what we need to do, if anything, to protect ourselves from any potential problems such endorsement might create. I know that many sites have clauses in their Terms of Service that state that (in a nutshell) they are not responsible for any problems or grievances between the visitors to the site and any vendor advertised or linked. Are there other steps that a website typically takes when considering advertising, such as getting the advertiser to provide some sort of certification that their ad will not violate any trademarks or copyrighted material?

    Read the article

  • Large sparse (stiff) ODE system needed for testing

    - by macydanim
    I hope this is the right place for this question. I have been working on a sparse stiff implicit ODE solver and have finished the code so far. I have tested the solver with the Van der Pol equation and another stiff problem, which is of dimension 4. But to perform better tests I am searching for a bigger system. I'm thinking of the order N = 100...1000, if possible stiff and sparse. Does anybody have an example I could use? I really don't know where to search.

    Read the article

  • Silverlight 4 Training Course

    - by guybarrette
    A Silverlight 4 training course is now available on Channel 9. Here's the course description: The Silverlight 4 Training Course includes a whitepaper explaining all of the new Silverlight 4 RC features, several hands-on labs that explain the features, and an 8-unit course on building business applications with Silverlight 4. The business applications course includes 8 modules with extensive hands-on labs as well as 25 accompanying videos that walk you through key aspects of building a business application with Silverlight. Key aspects in this course are working with numerous sandboxed and elevated out-of-browser features, the new RichTextBox control, implicit styling, webcam, drag and drop, multi-touch, validation, authentication, MEF, WCF RIA Services, right mouse click, and much more! You can download it here.

    Read the article

  • Make the JavaScript Test Pass

    Add code on the commented line:

        var f = function () {
            var value = // ???
            return f.sum = (f.sum || 0) + value;
        };

    ... to make the following QUnit test pass:

        test("Running sum", function () {
            equals(f(3), 3);
            equals(f(3), 6);
            equals(f(4), 10);
            jQuery([1, 2, 3]).each(f);
            equals(f(0), 16);
        });

    Possible Answer: It's a goofy scenario, but one possible solution uses a technique you'll see frequently inside today's JavaScript libraries. First, we'll need to use the implicit arguments parameter inside...

    Read the article

  • How do you handle increasingly long compile times when working with templates?

    - by Ghita
    I use Visual Studio 2012, and we have cases where we added template parameters to a class "just" to introduce a "seam point" so that in unit tests we can replace those parts with mock objects. How do you usually introduce seam points in C++: using interfaces, and/or mixing in implicit interfaces via template parameters based on some criteria? One reason to ask is that compiling a single C++ file (one that includes template files, which may in turn include other templates) can take on the order of 5-10 seconds on a developer machine. The VS compiler is also not particularly fast at compiling templates as far as I understand, and because of the template inclusion model (you practically include the definition of the template in every file that uses it indirectly, and possibly re-instantiate that template every time you modify something that has nothing to do with it) you can have problems with compile times when doing incremental builds. What are your ways of handling incremental (and not only incremental) compile times when working with templates (besides a better/faster compiler :-))?

    Read the article

  • Editing XML Literals Embedded Expressions in Visual Basic 2010 (Avner Aharoni)

    The implicit line continuation feature in Visual Basic 2010 provided an opportunity to improve the code editing experience in XML literals embedded expressions. In Visual Studio 2008, pressing Enter inside an embedded expression would position the cursor to the left of the closing embedded-expression tag. In Visual Studio 2010, pressing Enter inserts a newline for the cursor, and the closing embedded-expression tag moves to the line below. This minimizes the number of keystrokes needed...

    Read the article

< Previous Page | 58 59 60 61 62 63 64 65 66 67 68 69  | Next Page >