Search Results

Search found 40999 results on 1640 pages for 'duplicate files'.


  • Duplicate Name Exists error from virtual PC with NAT network configuration

    - by Phred
    Whenever I change the network configuration to NAT for a virtual machine running under Virtual PC 2007, I get the "A duplicate name exists on the network" error. There is no other machine on my network with this name, and running the VM in any other network configuration doesn't cause the error. Based on a Google search, this seems to be a common problem with Virtual PC 2007, but no one seems to have a solution. So far, I've discovered that turning off NetBIOS over TCP/IP makes the problem go away, but I need to join this VM to a domain, and you can't do that with NetBIOS over TCP/IP turned off.

    Read the article

  • Can I set Windows default second-monitor behaviour to "Extend these displays"?

    - by MT_Head
    I travel to multiple offices (and multiple desks in those offices), and whenever possible I plug an external monitor into my laptop. Whenever I plug in a monitor I haven't used before, Windows defaults to "Duplicate these displays" - which messes up the arrangement of icons on my desktop if the external monitor is a different shape from my laptop's monitor. I then select "Extend these displays", and my laptop screen returns to its original shape - but my icons don't go back to their original arrangement. Grrrrr. Fast-forward a few days or weeks; I've got my icons arranged so I can find stuff again - then I go to a new office and it starts all over again. I'm tired of this. Is it possible to make "Extend these displays" the default behavior? I'm using Windows 8 x64 Home Premium, but I had the same complaint under Windows 7 x64 Ultimate. (Prior to that, I hadn't discovered the joy of dual displays. Ah, the time I wasted...)

    Read the article

  • De-duplicate Firefox bookmarks

    - by Zoredache
    What methods exist to de-duplicate Firefox bookmarks? Searching Google, I find that there previously was a plugin called CheckPlaces, but it no longer seems to exist. Another popular suggestion seems to be AM-DeadLink, which I tried, but it completely trashed my bookmarks. (Fortunately I had a backup first, and yes, I had closed Firefox first as instructed.) I was trying to move all my youtube.com bookmarks into a folder. I tried doing a search and then dragging the bookmarks into the folder. Apparently this creates a copy instead of moving them as I expected, so now I have three of everything, since I tried a couple of times.

    Read the article

  • A Duplicate name exists on the network

    - by Adam
    Recently we changed our office IT structure from having a dedicated server as the DC, a dedicated server for Exchange, etc. (each running Windows Server 2003 R2) to a single server running Windows SBS 2008, and we created a new domain (with a different domain name). We then joined every PC to the new domain and renamed every PC with a new naming structure. After I had done this, several PCs would get the following message just before the login screen (the Ctrl+Alt+Del screen): "A duplicate name exists on the network." I have checked ADUC and removed the trouble PCs from the list, renamed each PC, and changed the SID before connecting it back onto the domain, but I am still getting this message. I have tried everything that I can think of but am still getting the problem. Any help would be greatly appreciated.
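
    One general diagnostic worth running on an affected PC (not specific to this setup, just a way to see which name is actually in conflict) is to dump the local NetBIOS name table and then query any suspect machine for the names it has registered:

        rem Run on an affected workstation (Windows command prompt).
        rem Conflicting names in the local NetBIOS table are flagged "Conflict":
        nbtstat -n

        rem Query a suspect IP address (hypothetical address shown) to see which
        rem names that host has registered on the network:
        nbtstat -A 192.168.1.50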

    Read the article

  • Run Sinatra via a Rake Task to Generate Static Files?

    - by viatropos
    I'm not sure I can put this correctly but I'll give it a shot. I want to use Sinatra to generate static html files once I am ready to deploy an application, so the resulting final website would be pure static HTML. During development, however, I want everything to be dynamic so I can use Haml and straight Ruby code to make things fast/dry/clear. I don't want to use Jekyll or some of the other static site generators out there because they don't have as much power as Sinatra. So all I basically need to be able to do is run a rake task such as: rake sinatra:generate_static_files. That would run the render commands for Haml and everything else, and the result would be written to files. My question is, how do I do that with Sinatra in a Rake task? Can I do it in a Rake task? The problem is, I don't know how to include the Sinatra::Application in the rake task... The only other way I could think of doing it is using net/http to access a URL that does all of that, but that seems like overkill. Any ideas how on to solve this?
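
    One way to approach this is to drive the Sinatra app in-process with Rack::MockRequest from a Rakefile and write each response body to disk. The following is only a sketch: the app.rb file name, the public/ output directory, and the hard-coded route list are assumptions.

        # Rakefile (sketch)
        require 'rack'
        require_relative 'app'   # defines the Sinatra routes (Sinatra::Application)

        namespace :sinatra do
          desc 'Render selected routes to static HTML files'
          task :generate_static_files do
            request = Rack::MockRequest.new(Sinatra::Application)

            %w[/ /about /contact].each do |path|
              response = request.get(path)
              outfile  = path == '/' ? 'index.html' : "#{path.sub('/', '')}.html"
              File.write(File.join('public', outfile), response.body)
            end
          end
        end

    Running rake sinatra:generate_static_files would then render the Haml templates through the normal Sinatra stack, with no running server and no net/http involved.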

    Read the article

  • Bulk Rename Tool is a Lightweight but Powerful File Renaming Tool

    - by Jason Fitzpatrick
    There’s no need to settle for overly simplistic file renaming tools as long as Bulk Rename Tool is around. It’s lightweight, insanely customizable, portable, and sure to make short work of any renaming task you throw at it. Bulk Rename Tool is a great portable application (available as an installed version if you crave context menu integration) that blasts through file renaming tasks. The main panel is intimidatingly packed with toggles and variables you can alter; this isn’t a one-click solution by any means. That said, once you get comfortable using the interface it’s lightning fast and extremely flexible. One tip that will save you an enormous amount of frustration when you get started: make sure to highlight the files you want to change in the file preview window (located in the upper right corner) or else you won’t see the preview and won’t know if the changes you’re making in the control panel are yielding the file names you desire. Hit up the link below to read more and grab a copy; Bulk Rename Tool is free, Windows only.

    Read the article

  • How do I create an encrypted file system inside a file?

    - by darent
    Recently I found this interesting tutorial: http://flossstuff.wordpress.com/2011/08/07/using-a-file-as-a-storage-device/ It explains how to create an empty file, format it as ext4, and mount it as a device. I'd like to know if it can be created as an encrypted ext4 file system. I've tried using palimpsest (the disk utility found in the System menu) to format the already-created file system, but it doesn't work because it detects the file system as being in use. Unmounting the file system doesn't work either, because the utility doesn't detect the device (since it's not a real device like a hard drive or a USB drive). So my question is: is there an option to create the file system encrypted from the beginning? I've used these commands:
    Create an empty file, 200 MB in size:
        dd if=/dev/zero of=/path/to/file bs=1M count=200
    Make it ext4:
        mkfs -t ext4 file
    Mount it in a folder inside my home:
        sudo mount -o loop file /path/to/mount_point
    Is there any way to have the mkfs step create the ext4 file system encrypted, asking for a decryption password? I'm planning to use this as a way to encrypt files inside Dropbox. Thanks for your time.
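
    One common way to do this (a sketch of the usual LUKS-in-a-file approach, with placeholder paths; it is not part of the tutorial linked above) is to put a LUKS container inside the file and format the mapped device as ext4, so the passphrase is requested whenever the container is opened:

        # Create the 200 MB container file, as in the question
        dd if=/dev/zero of=/path/to/file bs=1M count=200

        # Set up LUKS encryption inside the file; this asks you to choose a passphrase.
        # (Older cryptsetup versions may require attaching the file to a loop device
        # first with `sudo losetup -f /path/to/file` and formatting the loop device.)
        sudo cryptsetup luksFormat /path/to/file

        # Open the container (prompts for the passphrase) and create ext4 inside it
        sudo cryptsetup luksOpen /path/to/file securefile
        sudo mkfs -t ext4 /dev/mapper/securefile

        # Mount and use it
        sudo mount /dev/mapper/securefile /path/to/mount_point

        # When finished
        sudo umount /path/to/mount_point
        sudo cryptsetup luksClose securefile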

    Read the article

  • Why does my system slow down or freeze when there is heavy disk activity?

    - by user72270
    I'm a first-time user of Ubuntu 12.04 with a WUBI installation. My notebook is a Dell Vostro 3450: i5-2410M, 3 GB RAM, Intel HD 3000 / AMD 6630M hybrid graphics. Surfing and playing games work flawlessly; however, I'm having huge problems when installing applications and generally copying and moving files. When doing so, the system is significantly slower and freezes quite often (Firefox turns bluish, sometimes even black and white). I would say that Ubuntu allocates too many resources to file transfers and installs, yet even those tasks are very slow. Here is a very specific example: today I tried to move a 6 GB file from my Windows 7 installation. It was fine at first; I switched to Firefox, but after a while Firefox started to randomly turn bluish and the mouse would randomly stop working. It got gradually worse until Firefox went black and white and the mouse wasn't working at all. I gave up and went for a meal, and when I got back the screen was black. It had probably logged me out due to inactivity; when I pushed a random button to bring the screen back to life, I had to wait a few minutes for it to show only my desktop background. No login screen, just the background and a working mouse. The notebook fan was running at 100%, so I assumed the file transfer was still going on and left it to work. Nothing changed for a full hour, so I hard-rebooted. The file transfer was unsuccessful; it had transferred barely 2 GB. Is this normal? What should I do in these situations? It didn't let me load the system monitor or even a terminal. Thanks.

    Read the article

  • Why do I get an "EMCIDeviceError" when opening some wav files in my program?

    - by Roy
    Hey, I have a program that had been working fine until I tried to open this one wav file. I'm not sure what the problem is, or that I understand the error. Do I need to find a new component to use for this file, or what? I am using Delphi 4 Pro and the standard VCL Media Player component. I am also looking for a good new component that offers better support for wav and mp3 files, but I haven't found what I'm looking for yet.

    Read the article

  • How to lock files in a Tomcat web application?

    - by yankee
    The Java manual says: The locks held on a particular file by a single Java virtual machine do not overlap. The overlaps method may be used to test whether a candidate lock range overlaps an existing lock. I guess that if I lock a file in a Tomcat web application I can't be sure that every call to this application is done by a different JVM, can I? So how can I lock files within my Tomcat application in a reliable way?
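
    For locking against other processes, java.nio file locks work regardless of how many JVMs are involved; within a single Tomcat JVM (where all requests share one process), they must be combined with ordinary in-process synchronization. A minimal sketch, with hypothetical paths and no claim about the asker's actual setup:

        import java.io.IOException;
        import java.nio.ByteBuffer;
        import java.nio.channels.FileChannel;
        import java.nio.channels.FileLock;
        import java.nio.file.Path;
        import java.nio.file.StandardOpenOption;

        public class LockedFileWriter {

            // Guards against other threads in this JVM; the FileLock below guards
            // against other processes (for example, a second Tomcat instance).
            private static final Object IN_PROCESS_LOCK = new Object();

            public static void append(Path file, byte[] data) throws IOException {
                synchronized (IN_PROCESS_LOCK) {
                    try (FileChannel channel = FileChannel.open(file,
                            StandardOpenOption.CREATE,
                            StandardOpenOption.WRITE,
                            StandardOpenOption.APPEND)) {
                        try (FileLock lock = channel.lock()) {   // blocks until the OS grants the lock
                            channel.write(ByteBuffer.wrap(data));
                        }
                    }
                }
            }
        }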

    Read the article

  • How would I batch rename a lot of files using command-line?

    - by Whisperity
    I have a problem which I am unable to solve: I need to rename a great dump of files using patterns. I tried using this, but I always get an error. I have a folder with a lot of files inside. Running ls -1 | wc -l returns that I have about 160000 files inside. The problem is that I wish to move these files to a Windows system, but most of them have characters like : and ? in them, which makes the files inaccessible on said Windows-based systems. (As a "do not solve but deal with" method, I tried booting up a LiveCD on the Windows system and moving the files using the live OS. Under that Ubuntu, the files were readable and writable on the mounted NTFS partition, but when I booted back into Windows, it showed that the file was there but Windows was unable to access it in any fashion: rename, delete or open.) I tried running rename 's/\:/_' * inside the folder, but I got an "Argument list too long" error. Some searching revealed that this happens because I have so many files, and then I arrived here. The problem is that I don't know how to alter the command to suit my needs, as I always end up with various errors. Trying find -name '*:*' | xargs rename : _ gives: xargs: unmatched single quote; by default quotes are special to xargs unless you use the -0 option [\n] syntax error at (eval 1) line 1, near ":" [\n] xargs: rename: exited with status 255; aborting. Adding -0 after xargs turns the error message into: xargs: argument line too long. These files are archive files generated by various PHP scripts. The best solution would be a chance to rename them before they are moved to Windows, but if there is no way to do that, we might have a way to rename the files while they are moved to Windows. I use samba and proftpd to move the files. Unfortunately, graphical software is out of the question, as the server containing the files is what it is, a server, with only a command-line interface.
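
    For reference, a sketch of how this is commonly done (assuming the Perl-based rename utility that ships with Debian/Ubuntu, and files in the current directory only): quote the substitution expression once for the shell and let find feed the file list in batches, so no single argument list ever gets too long.

        # Replace every ':' and '?' with '_' in file names in the current directory
        find . -maxdepth 1 -type f -name '*[:?]*' -print0 |
            xargs -0 rename 's/[:?]/_/g'

        # Equivalent without xargs; find runs rename on batches of files itself
        find . -maxdepth 1 -type f -name '*[:?]*' -exec rename 's/[:?]/_/g' {} +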

    Read the article

  • Steps to Investigate Cause of Web.Config Duplicate Section

    - by pauly
    Symptoms: In IIS, with a .NET 2.0 Integrated app pool, double-clicking to view any web.config section results in the following error dialog: "There was an error while performing this operation.... Filename... web.config... Error: There is a duplicate..." Browsing to the URL displays an HTTP 500.19 internal server error: "There is a duplicate... 'system.web.extensions/scripting/scriptResourceHandler' section defined...." Running the app from VS 2008, an "Unable to start debugging on the web server..." dialog is displayed.
    Things tried: Looked at other application directories on the same IIS server; no problem viewing web.config contents or serving up the app. Removed and re-added the application in IIS. Checked out a fresh copy of the source code. Reverted to prior versions of the web.config file. Looked for web.config files that might have duplicate sections in the Inetpub root, in "C:\Windows\Microsoft.NET\Framework\v2.0.50727\CONFIG\machine.config", and in the "Views" subfolder of the ASP.NET MVC app. Checked out the source code to another dev machine and set up an IIS 7 app folder; no problem with web.config there.
    Question: If the reason for this error is another web.config file, where else should I look? Are there other reasons for these symptoms?
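
    One frequently reported cause, offered here as a hedged suggestion rather than a confirmed diagnosis: the configuration the site actually inherits (machine.config for a later framework version, or a web.config in a parent folder of the site) already declares the system.web.extensions section group, and the application's own web.config declares it again, which produces exactly this "duplicate section" error. In that case the fix is to delete the redundant declaration from the site's web.config, roughly:

        <!-- web.config (sketch): remove this whole block if the same sectionGroup
             is already declared in machine.config or an inherited parent web.config -->
        <configSections>
          <sectionGroup name="system.web.extensions" ...>
            <sectionGroup name="scripting" ...>
              <section name="scriptResourceHandler" ... />
              <!-- ... -->
            </sectionGroup>
          </sectionGroup>
        </configSections>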

    Read the article

  • Delete Duplicate records from large csv file C# .Net

    - by Sandhurst
    I have created a solution which reads a large csv file, currently 20-30 MB in size. I have tried to delete the duplicate rows based on certain column values that the user chooses at run time, using the usual technique of finding duplicate rows, but it's so slow that it seems the program is not working at all. What other technique can be applied to remove duplicate records from a csv file? Here's the code; I'm definitely doing something wrong:

        DataTable dtCSV = ReadCsv(file, columns); // columns is a list of string List column
        DataTable dt = RemoveDuplicateRecords(dtCSV, columns);

        private DataTable RemoveDuplicateRecords(DataTable dtCSV, List<string> columns)
        {
            DataView dv = dtCSV.DefaultView;
            string RowFilter = string.Empty;
            if (dt == null)
                dt = dv.ToTable().Clone();
            DataRow row = dtCSV.Rows[0];
            foreach (DataRow row in dtCSV.Rows)
            {
                try
                {
                    RowFilter = string.Empty;
                    foreach (string column in columns)
                    {
                        string col = column;
                        RowFilter += "[" + col + "]" + "='" + row[col].ToString().Replace("'", "''") + "' and ";
                    }
                    RowFilter = RowFilter.Substring(0, RowFilter.Length - 4);
                    dv.RowFilter = RowFilter;
                    DataRow dr = dt.NewRow();
                    bool result = RowExists(dt, RowFilter);
                    if (!result)
                    {
                        dr.ItemArray = dv.ToTable().Rows[0].ItemArray;
                        dt.Rows.Add(dr);
                    }
                }
                catch (Exception ex) { }
            }
            return dt;
        }
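
    A common faster alternative is a single pass with a HashSet keyed on the selected columns, which avoids re-filtering a DataView for every row. This is a sketch under the assumptions that "duplicate" means identical values in the chosen columns and that the rows fit in memory:

        using System;
        using System.Collections.Generic;
        using System.Data;
        using System.Linq;

        static class CsvDeduplicator
        {
            public static DataTable RemoveDuplicateRecords(DataTable source, List<string> columns)
            {
                var seen = new HashSet<string>();
                DataTable result = source.Clone();              // same schema, no rows

                foreach (DataRow row in source.Rows)
                {
                    // Composite key from the chosen columns; '\u0001' is an arbitrary
                    // separator assumed not to occur in the data.
                    string key = string.Join("\u0001",
                        columns.Select(c => Convert.ToString(row[c])));

                    if (seen.Add(key))                           // Add returns false for duplicates
                        result.ImportRow(row);
                }
                return result;
            }
        }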

    Read the article

  • MySQL Normalization stored procedure performance

    - by srkiNZ84
    Hi, I've written a stored procedure in MySQL to take values currently in a table and to "normalize" them. This means that for each value passed to the stored procedure, it checks whether the value is already in the table. If it is, then it stores the id of that row in a variable. If the value is not in the table, it stores the newly inserted value's id. The stored procedure then takes the ids and inserts them into a table which is equivalent to the original de-normalized table, but this table is fully normalized and consists mainly of foreign keys. My problem with this design is that the stored procedure takes approximately 10 ms or so to return, which is too long when you're trying to work through some 10 million records. My suspicion is that the performance has to do with the way in which I'm doing the inserts, i.e.:

        INSERT INTO TableA (first_value) VALUES (argument_from_sp)
          ON DUPLICATE KEY UPDATE id=LAST_INSERT_ID(id);
        SET @TableAId = LAST_INSERT_ID();

    The "ON DUPLICATE KEY UPDATE" is a bit of a hack, due to the fact that on a duplicate key I don't want to update anything but rather just return the id value of the row. If you miss this step, though, the LAST_INSERT_ID() function returns the wrong value when you're trying to run the "SET ..." statement. Does anyone know of a better way to do this in MySQL? Thank you
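
    If the 10 ms per call is dominated by per-row round trips rather than the ON DUPLICATE KEY trick itself, a set-based variant can help. The following is only a sketch, assuming the raw rows are first loaded into a staging table; the table and column names here are made up for illustration:

        -- 1. Insert only the values that are not yet in the lookup table.
        INSERT INTO TableA (first_value)
        SELECT DISTINCT s.first_value
        FROM   staging s
        LEFT JOIN TableA a ON a.first_value = s.first_value
        WHERE  a.id IS NULL;

        -- 2. Build the normalized rows by joining the staging data back to the
        --    lookup table, resolving every id in one pass instead of one call per row.
        INSERT INTO normalized_table (tablea_id, other_column)
        SELECT a.id, s.other_column
        FROM   staging s
        JOIN   TableA a ON a.first_value = s.first_value;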

    Read the article

  • Remove redundant SQL code

    - by Dave Jarvis
    Code: The following code calculates the slope and intercept for a linear regression against a slathering of data. It then applies the equation y = mx + b against the same result set to calculate the value of the regression line for each row. Can the two separate sub-selects be joined so that the data and its slope/intercept are calculated without executing the data gathering part of the query twice?

        SELECT AVG(D.AMOUNT) as AMOUNT,
               Y.YEAR * ymxb.SLOPE + ymxb.INTERCEPT as REGRESSION_LINE,
               Y.YEAR as YEAR,
               MAKEDATE(Y.YEAR,1) as AMOUNT_DATE
        FROM CITY C, STATION S, YEAR_REF Y, MONTH_REF M, DAILY D,
          (SELECT ((avg(t.AMOUNT * t.YEAR)) - avg(t.AMOUNT) * avg(t.YEAR)) /
                    (stddev( t.AMOUNT ) * stddev( t.YEAR )) as CORRELATION,
                  ((sum(t.YEAR) * sum(t.AMOUNT)) - (count(1) * sum(t.YEAR * t.AMOUNT))) /
                    (power(sum(t.YEAR), 2) - count(1) * sum(power(t.YEAR, 2))) as SLOPE,
                  ((sum( t.YEAR ) * sum( t.YEAR * t.AMOUNT )) - (sum( t.AMOUNT ) * sum(power(t.YEAR, 2)))) /
                    (power(sum(t.YEAR), 2) - count(1) * sum(power(t.YEAR, 2))) as INTERCEPT
           FROM ( SELECT AVG(D.AMOUNT) as AMOUNT,
                         Y.YEAR as YEAR,
                         MAKEDATE(Y.YEAR,1) as AMOUNT_DATE
                  FROM CITY C, STATION S, YEAR_REF Y, MONTH_REF M, DAILY D
                  WHERE $X{ IN, C.ID, CityCode }
                    AND SQRT( POW( C.LATITUDE - S.LATITUDE, 2 ) + POW( C.LONGITUDE - S.LONGITUDE, 2 ) ) < $P{Radius}
                    AND S.STATION_DISTRICT_ID = Y.STATION_DISTRICT_ID
                    AND Y.YEAR BETWEEN 1900 AND 2009
                    AND M.YEAR_REF_ID = Y.ID
                    AND M.CATEGORY_ID = $P{CategoryCode}
                    AND M.ID = D.MONTH_REF_ID
                    AND D.DAILY_FLAG_ID <> 'M'
                  GROUP BY Y.YEAR ) t
          ) ymxb
        WHERE $X{ IN, C.ID, CityCode }
          AND SQRT( POW( C.LATITUDE - S.LATITUDE, 2 ) + POW( C.LONGITUDE - S.LONGITUDE, 2 ) ) < $P{Radius}
          AND S.STATION_DISTRICT_ID = Y.STATION_DISTRICT_ID
          AND Y.YEAR BETWEEN 1900 AND 2009
          AND M.YEAR_REF_ID = Y.ID
          AND M.CATEGORY_ID = $P{CategoryCode}
          AND M.ID = D.MONTH_REF_ID
          AND D.DAILY_FLAG_ID <> 'M'
        GROUP BY Y.YEAR

    Question: How do I execute the duplicate bits only once per query, instead of twice? The duplicate bit is the WHERE clause:

        $X{ IN, C.ID, CityCode }
          AND SQRT( POW( C.LATITUDE - S.LATITUDE, 2 ) + POW( C.LONGITUDE - S.LONGITUDE, 2 ) ) < $P{Radius}
          AND S.STATION_DISTRICT_ID = Y.STATION_DISTRICT_ID
          AND Y.YEAR BETWEEN 1900 AND 2009
          AND M.YEAR_REF_ID = Y.ID
          AND M.CATEGORY_ID = $P{CategoryCode}
          AND M.ID = D.MONTH_REF_ID
          AND D.DAILY_FLAG_ID <> 'M'

    Related: http://stackoverflow.com/questions/1595659/how-to-eliminate-duplicate-calculation-in-sql Thank you!
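
    One way to avoid running the expensive filter twice is to materialize the per-year averages once and reuse them. The following is only a sketch: it keeps the original column names but abbreviates the shared WHERE clause, and it sidesteps MySQL's restriction on referencing a temporary table twice in one statement by capturing the slope and intercept in user variables.

        -- 1. Materialize the per-year averages once.
        CREATE TEMPORARY TABLE yearly AS
        SELECT Y.YEAR AS YEAR, AVG(D.AMOUNT) AS AMOUNT
        FROM   CITY C, STATION S, YEAR_REF Y, MONTH_REF M, DAILY D
        WHERE  /* ... the same filter conditions as the original query ... */
        GROUP BY Y.YEAR;

        -- 2. Compute the regression coefficients from the materialized rows.
        SELECT ((SUM(YEAR) * SUM(AMOUNT)) - (COUNT(1) * SUM(YEAR * AMOUNT)))
                 / (POW(SUM(YEAR), 2) - COUNT(1) * SUM(POW(YEAR, 2))),
               ((SUM(YEAR) * SUM(YEAR * AMOUNT)) - (SUM(AMOUNT) * SUM(POW(YEAR, 2))))
                 / (POW(SUM(YEAR), 2) - COUNT(1) * SUM(POW(YEAR, 2)))
        INTO   @slope, @intercept
        FROM   yearly;

        -- 3. Reuse the same rows for the final result.
        SELECT AMOUNT,
               YEAR * @slope + @intercept AS REGRESSION_LINE,
               YEAR,
               MAKEDATE(YEAR, 1) AS AMOUNT_DATE
        FROM   yearly;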

    Read the article

  • Duplicate Symbol Linker Error (C++ help)

    - by Vash265
    Hi. I'm learning some CSP (constraint satisfaction) theory stuff right now, and am using this library to parse XML files. I'm using Xcode as an IDE. My program compiles fine, but when it goes to link the files, I get a duplicate symbol error with the XMLParser_libxml2.hh file. My files are separated as such: a class header file that includes the XMLParser file above, a class implementation file that includes the class header file, and a main file that includes the class header file. The duplicate symbol is occurring in main.o and classfile.o, but as far as I can tell, I'm not actually adding that .hh file twice. Full error:

        ld: duplicate symbol bool CSPXMLParser::UTF8String::to<std::basic_string<char, std::char_traits<char>, std::allocator<char> > >(std::basic_string<char, std::char_traits<char>, std::allocator<char> >&) const
        in /Users/vash265/CSP/Untitled/build/Untitled.build/Debug/Untitled.build/Objects-normal/x86_64/dStructFill.o
        and /Users/vash265/CSP/Untitled/build/Untitled.build/Debug/Untitled.build/Objects-normal/x86_64/main.o

    Copying the implementation of the class into the main file and taking the class implementation file out of the compilation target alleviates the error, but it's a disorganized mess this way, and I'll be adding more classes very soon (and it would be nice to have them in separate files). As I've come to understand it, this is caused by the file (XMLParser_libxml2.hh) having both the class and function definition and implementation in one file (and it seems as though this might have been necessary due to the use of templates in that 'header' file). Any ideas on how to get around sticking all my class files in my main.cpp? (I've tried ifdefs, they don't work.)
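
    For context, a minimal illustration of what usually triggers this (a hypothetical header, not the parser's actual code): a non-template function, or an explicit template specialization such as UTF8String::to<std::string>, that is defined in a header included from several .cpp files gets emitted once per translation unit unless it is declared inline. Marking that definition inline in the library header (or moving it into a single .cpp file) typically resolves the duplicate-symbol link error without merging your class files into main.cpp.

        // util.hh (sketch)
        #ifndef UTIL_HH
        #define UTIL_HH
        #include <string>

        struct UTF8String {
            std::string data;

            template <typename T> bool to(T& out) const;   // primary template, declared only
        };

        // An explicit specialization defined in the header: without `inline`,
        // every .cpp that includes this header emits its own copy of the symbol
        // and the linker reports it as duplicated.
        template <>
        inline bool UTF8String::to<std::string>(std::string& out) const {
            out = data;
            return true;
        }

        #endif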

    Read the article

  • Is there a way to identify that a file has been modified and moved?

    - by Eric
    I'm writing an application that catalogs files and attributes them with extra metadata through separate "side-car" files. If changes to the files are made through my program, then it is able to keep everything in sync between them and their corresponding metadata files. However, I'm trying to figure out a way to deal with someone modifying the files manually while my program is not running. When my program starts up, it scans the file system and compares the files it finds to its previous record of what files it remembers being there. It's fairly straightforward to update after a file has been deleted or added. However, if a file was moved or renamed, then my program sees that as the old file being deleted and a new file being added. Yet I don't want to lose the association between the file and its metadata. I was thinking I could store a hash of each file so I could check whether newly found files were really previously known files that had been moved or renamed. However, if the file is both moved/renamed and modified, then the hash would not match either. So is there some other unique identifier of a file that I can track which stays with it even after it is renamed, moved, or modified?
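
    Most filesystems do expose such an identifier: on NTFS it is the file index (volume serial plus file ID), and on ext/POSIX filesystems it is the device and inode numbers; both survive renames, moves within the same volume, and content edits, unlike a content hash. If the application happens to run on the JVM, for example, java.nio can surface this directly; the following is only a sketch, and other platforms have equivalents such as GetFileInformationByHandle on Windows or stat() on POSIX.

        import java.io.IOException;
        import java.nio.file.Files;
        import java.nio.file.Path;
        import java.nio.file.Paths;
        import java.nio.file.attribute.BasicFileAttributes;

        public class FileIdentity {
            // Returns a filesystem-level key that follows the file across renames and
            // edits; may be null on filesystems that cannot provide a stable id.
            public static Object fileKey(Path p) throws IOException {
                return Files.readAttributes(p, BasicFileAttributes.class).fileKey();
            }

            public static void main(String[] args) throws IOException {
                System.out.println(fileKey(Paths.get(args[0])));
            }
        }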

    Read the article

  • Entityframework duplicate record on second insert

    - by Delysid
    I am building an application for recipe/meal planning, and I have come across a problem I can't seem to figure out. I have a table for units of measure, where I keep the units in use; I only want unique units in there (for grocery list calculation and so forth). But if I use a unit from the table on a recipe, the first time it is okay and nothing is inserted into units of measure, but the second time I get a "duplicate". I suspect it has something to do with the EntityKey, because the primary key is an identity column on the SQL Server (2008 R2). For some reason it works to change the object state on some objects (courses, see code) and that does not generate a duplicate, but that does not work on the unit of measure. My insert method looks like this:

        public recipe Create(recipe recipe)
        {
            using (RecipeDataContext ctx = new RecipeDataContext())
            {
                foreach (recipe_ingredient rec_ing in recipe.recipe_ingredient)
                {
                    if (rec_ing.ingredient.ingredient_id == 0)
                    {
                        ingredient ing = (from _ing in ctx.ingredients
                                          where _ing.name == rec_ing.ingredient.name
                                          select _ing).FirstOrDefault();
                        if (ing != null)
                        {
                            rec_ing.ingredient_id = ing.ingredient_id;
                            rec_ing.ingredient = null;
                        }
                    }
                    if (rec_ing.unit_of_measure.unit_of_measure_id == 0)
                    {
                        unit_of_measure _uom = (from dbUom in ctx.unit_of_measure
                                                where dbUom.unit == rec_ing.unit_of_measure.unit
                                                select dbUom).FirstOrDefault();
                        if (_uom != null)
                        {
                            rec_ing.unit_of_measure_id = _uom.unit_of_measure_id;
                            rec_ing.unit_of_measure = null;
                        }
                    }
                    ctx.Recipes.AddObject(recipe);
                    // for some reason it works to change the object state of this, and it doesn't generate a duplicate
                    ctx.ObjectStateManager.ChangeObjectState(recipe.courses[0], EntityState.Unchanged);
                }
                ctx.SaveChanges();
            }
            return recipe;
        }

    My data model looks like this: http://i.imgur.com/NMwZv.png
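
    One pattern worth trying, as a sketch based only on the names in the question and not a verified fix: instead of nulling the navigation property and setting the foreign key, point the graph at the entity the context has already loaded, and add the recipe graph once after the loop rather than inside it.

        // Sketch: re-use the tracked entity from the query instead of nulling the
        // navigation property; an entity materialized by the context is already in
        // the Unchanged state, so AddObject(recipe) should not re-insert it.
        unit_of_measure existing = (from dbUom in ctx.unit_of_measure
                                    where dbUom.unit == rec_ing.unit_of_measure.unit
                                    select dbUom).FirstOrDefault();
        if (existing != null)
        {
            rec_ing.unit_of_measure = existing;
        }

        // ... and add the recipe graph once, after the foreach loop rather than inside it:
        ctx.Recipes.AddObject(recipe);
        ctx.SaveChanges();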

    Read the article

  • Web Site Performance and Assembly Versioning – Part 3 Versioning Combined Files Using Mercurial

    - by capgpilk
    This series: Minification and Concatenation of JavaScript and CSS Files; Versioning Combined Files Using Subversion; Versioning Combined Files Using Mercurial – this post.
    I have worked on a project recently where there was a need to version the system (library dll, css and javascript files) by date and Mercurial revision number. This was in the format 0.12.524.407, i.e. {major}.{year}.{month}{date}.{mercurial revision}. Each time there is an internal build using the CI server, it would label the files using this format. When it came time to do a major release, it became v1.{year}.{month}{date}.{mercurial revision}, with each public release having a major version increment. Also as a requirement, each assembly had to have a new GUID on each build. So like in previous posts, we need to edit the csproj file and add a couple of default targets.

        <?xml version="1.0" encoding="utf-8"?>
        <Project ToolsVersion="4.0" DefaultTargets="Hg-Revision;AssemblyInfo;Build"
                 xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
          <PropertyGroup>

    Right below the closing tag of the entire project we add our two targets; the first is to get the Mercurial revision number. We first need to import the tasks for MSBuild, which can be downloaded from http://msbuildhg.codeplex.com/

        <Import Project="..\Tools\MSBuild.Mercurial\MSBuild.Mercurial.Tasks" />

        <Target Name="Hg-Revision">
          <HgVersion LocalPath="$(MSBuildProjectDirectory)" Timeout="5000"
                     LibraryLocation="C:\TortoiseHg\">
            <Output TaskParameter="Revision" PropertyName="Revision" />
          </HgVersion>
          <Message Text="Last revision from HG: $(Revision)" />
        </Target>

    with the main Mercurial files being located at C:\TortoiseHg. To get a valid GUID we need to escape from the csproj markup and call some C# code, which we put in a property group for later reference.

        <PropertyGroup>
          <GuidGenFunction>
            <![CDATA[
              public static string ScriptMain() {
                return System.Guid.NewGuid().ToString().ToUpper();
              }
            ]]>
          </GuidGenFunction>
        </PropertyGroup>

    Now we add in our target for generating the GUID.

        <Target Name="AssemblyInfo">
          <Script Language="C#" Code="$(GuidGenFunction)">
            <Output TaskParameter="ReturnValue" PropertyName="NewGuid" />
          </Script>
          <Time Format="yy">
            <Output TaskParameter="FormattedTime" PropertyName="year" />
          </Time>
          <Time Format="Mdd">
            <Output TaskParameter="FormattedTime" PropertyName="daymonth" />
          </Time>
          <AssemblyInfo CodeLanguage="CS" OutputFile="Properties\AssemblyInfo.cs"
                        AssemblyTitle="name" AssemblyDescription="description"
                        AssemblyCompany="none" AssemblyProduct="product"
                        AssemblyCopyright="Copyright ©"
                        ComVisible="false" CLSCompliant="true" Guid="$(NewGuid)"
                        AssemblyVersion="$(Major).$(year).$(daymonth).$(Revision)"
                        AssemblyFileVersion="$(Major).$(year).$(daymonth).$(Revision)" />
        </Target>

    So this will give us an AssemblyInfo.cs file like this just prior to calling the Build task:

        using System;
        using System.Reflection;
        using System.Runtime.CompilerServices;
        using System.Runtime.InteropServices;

        [assembly: AssemblyTitle("name")]
        [assembly: AssemblyDescription("description")]
        [assembly: AssemblyCompany("none")]
        [assembly: AssemblyProduct("product")]
        [assembly: AssemblyCopyright("Copyright ©")]
        [assembly: ComVisible(false)]
        [assembly: CLSCompliant(true)]
        [assembly: Guid("9C2C130E-40EF-4A20-B7AC-A23BA4B5F2B7")]
        [assembly: AssemblyVersion("0.12.524.407")]
        [assembly: AssemblyFileVersion("0.12.524.407")]

    Therefore giving us the correct version for the assembly. This can be referenced within your project, whether web or Windows based, like this:

        public static string AppVersion()
        {
            return Assembly.GetExecutingAssembly().GetName().Version.ToString();
        }

    As mentioned in previous posts in this series, you can label css and javascript files using this version number and the GetAssemblyIdentity task from the main MSBuild task library built into the .NET framework.

        <GetAssemblyIdentity AssemblyFiles="bin\TheAssemblyFile.dll">
          <Output TaskParameter="Assemblies" ItemName="MyAssemblyIdentities" />
        </GetAssemblyIdentity>

    Then use this to write out the files:

        <WriteLinestoFile
            File="Client\site-style-%(MyAssemblyIdentities.Version).combined.min.css"
            Lines="@(CSSLinesSite)" Overwrite="true" />

    Read the article
