Search Results

Search found 12992 results on 520 pages for 'box'.

Page 92/520 | < Previous Page | 88 89 90 91 92 93 94 95 96 97 98 99  | Next Page >

  • JQuery: addClass() not changing background on selector

    - by centr0
    I'm having a little trouble getting the background image to swap out on click(): $('.highlight-boxes li a[class!=selected-box]').click(function() { $('.highlight-content').hide(); $('.highlight-boxes li a').removeClass(); $(this).addClass('box-selected'); // problem here var selected = $(this).attr('href').substr(1); $('#' + selected).stop(true,true).fadeIn(); return false; }); console.log() in Firebug returns the correct element being clicked, but $(this).addClass('box-selected') does not change the background of the currently clicked element. Any ideas? TIA
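    One thing worth checking, purely as an observation on the snippet above: the click selector filters out anchors with the class "selected-box", but the code adds "box-selected", and removeClass() with no arguments strips every class from the anchors. If the stylesheet's background rule targets .selected-box, the added class will never match it. A minimal sketch using one consistent class name (the class name chosen here is an assumption):

        $('.highlight-boxes li a').click(function () {
            $('.highlight-content').hide();
            // remove only the highlight class, leaving any other classes intact
            $('.highlight-boxes li a').removeClass('box-selected');
            $(this).addClass('box-selected'); // the CSS background rule must target .box-selected
            var selected = $(this).attr('href').substr(1);
            $('#' + selected).stop(true, true).fadeIn();
            return false;
        });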

    Read the article

  • How to clear a textbox based on a checkbox using Javascript

    - by LoftyWofty
    I am trying to write JavaScript (to avoid validation triggers at the server) to clear a text box if the checkbox associated with it is unchecked. I have this code ... <input type="checkbox" id="chkOTHER" onclick="document.getElementById('txtOtherFlag').value='';" /> <asp:TextBox ID="txtOtherFlag" runat="server" AutoPostBack="True" CausesValidation="True" ValidationGroup="ValidationGroup1"></asp:TextBox> The problem is that the JavaScript inside the checkbox never fires to remove the value in the text box. Even if it did, it's incorrect, as it would blank out the text box every time the checkbox is clicked, whether it is checked or not. I need to resolve this on the client side only. Thank you
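    A sketch of one possible fix, assuming standard ASP.NET Web Forms ID mangling (not confirmed by the post): because txtOtherFlag is a server control, its rendered client ID is usually not the literal string 'txtOtherFlag', so getElementById returns null; emitting the ClientID and testing this.checked addresses both problems described above.

        <input type="checkbox" id="chkOTHER"
               onclick="if (!this.checked) { document.getElementById('<%= txtOtherFlag.ClientID %>').value = ''; }" />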

    Read the article

  • SSH error: "Corrupted MAC on input" or "Bad packet length"

    - by William Ting
    I have 3 boxes set up as shown: The DFW box can communicate with SFO / the internet just fine, and I can send files AUS - DFW. However, every time I try transferring DFW - AUS it fails over SSH (ssh client, rsync, scp, sftp, etc.) with the following error: Corrupted MAC on input. Disconnecting: Packet corrupt Occasionally I'll get a different error: Bad packet length 2097180. Disconnecting: Packet corrupt I've restarted the DFW box and replaced the network cable. I'm not sure what else might be causing problems. Right now, to get files from DFW I have to go DFW - SFO - AUS, which is not optimal.
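    Corrupted MAC / bad packet length errors usually point at data being mangled in transit (NIC, driver, cable, RAM) rather than at SSH itself. A few hedged diagnostics, assuming Linux hosts running OpenSSH; the hostname and interface name are placeholders:

        # try a different MAC/cipher pair to see whether the corruption follows the algorithm
        ssh -m hmac-sha1 -c aes128-ctr user@aus
        # rule out NIC checksum/segmentation offload bugs on the sending box
        sudo ethtool -K eth0 rx off tx off tso off
        # watch interface error counters while a transfer runs
        ifconfig eth0 | grep -i error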

    Read the article

  • How to Play FLAC Files in Windows 7 Media Center & Player

    - by Mysticgeek
    An annoyance for music lovers who enjoy FLAC format, is there’s no native support for WMP or WMC. If you’re a music enthusiast who prefers FLAC format, we’ll look at adding support to Windows 7 Media Center and Player. For the following article we are using Windows 7 Home Premium 32-bit edition. Download and Install madFLAC v1.8 The first thing we need to do is download and install the madFLAC v1.8 decoder (link below). Just unzip the file and run install.bat… You’ll get a message that it has been successfully registered, click Ok. To verify everything is working, open up one of your FLAC files with WMP, and you’ll get the following message. Check the box Don’t ask me again for this extension and click Yes. Now Media Player should play the track you’ve chosen.   Delete Current Music Library But what if you want to add your entire collection of FLAC files to the Library? If you already have it set up as your default music player, unfortunately we need to remove the current library and delete the database. The best way to manage the music library in Windows 7 is via WMP 12. Since we don’t want to delete songs from the computer we need to Open WMP, press “Alt+T” and navigate to Tools \ Options \ Library.   Now uncheck the box Delete files from computer when deleted from library and click Ok. Now in your Library click “Ctrl + A” to highlight all of the songs in the Library, then hit the “Delete” key. If you have a lot of songs in your library (like on our system) you’ll see the following dialog box while it collects all of the information.   After all of the data is collected, make sure the radio button next to Delete from library only is marked and click Ok. Again you’ll see the Working progress window while the songs are deleted. Deleting Current Database Now we need to make sure we’re starting out fresh. Close out of Media Player, then we’ll basically follow the same directions The Geek pointed out for fixing the WMP Library. Click on Start and type in services.msc into the search box and hit Enter. Now scroll down and stop the service named Windows Media Player Network Sharing Service. Now, navigate to the following directory and the main file to delete CurrentDatabase_372.wmdb %USERPROFILE%\Local Settings\Application Data\Microsoft\Media Player\ Again, the main file to delete is CurrentDatabase_372.wmdb, though if you want, you can delete them all. If you’re uneasy about deleting these files, make sure to back them up first. Now after you restart WMP you can begin adding your FLAC files. For those of us with large collections, it’s extremely annoying to see WMP try to pick up all of your media by default. To delete the other directories go to Organize \ Manage Libraries then open the directories you want to remove. For example here we’re removing the default libraries it tries to check for music. Remove the directories you don’t want it to gather contents from in each of the categories. We removed all of the other collections and only added the FLAC music directory from our home server. SoftPointer Tag Support Plugin Even though we were able to get FLAC files to play in WMP and WMC at this point, there’s another utility from SoftPointer to add. It enables FLAC (and other file formats) to be picked up in the library much easier. It has a long name but is effective –M4a/FLAC/Ogg/Ape/Mpc Tag Support Plugin for Media Player and Media Center (link below). Just install it by accepting the defaults, and you’ll be glad you did. 
After installing it and re-launching Media Player, give it some time to collect all of the data from your FLAC directory…it can take a while. In fact, if your collection is huge, just walk away and let it do its thing. If you try to use it right away, WMP slows down considerably while updating the library. Once the library is set up you'll be able to play your FLAC tunes in Windows 7 Media Center as well as in Windows Media Player 12. Album Art: One caveat is that some of our albums didn't show any cover art. But we were usually able to get it by right-clicking the album and selecting Find album info. Then confirming the album information is correct… Conclusion: Although this seems like several steps to go through to play FLAC files in Windows 7 Media Center and Player, it seems to work really well after it's set up. We haven't tried this with a 64-bit machine; the process should be similar, though you might want to make sure the codecs you use are 64-bit. We're sure there are other methods out there that some of you use, and if so leave us a comment and tell us about it. Download madFLAC v1.8 | M4a/FLAC/Ogg/Ape/Mpc Tag Support Plugin for Media Player and Media Center from SoftPointer
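    For reference, the service-stop and database-delete steps described above can also be done from an elevated command prompt. A minimal sketch, assuming a default Windows 7 install (the service name and path are the usual defaults, not taken from the article):

        net stop WMPNetworkSvc
        del "%LOCALAPPDATA%\Microsoft\Media Player\CurrentDatabase_372.wmdb"
        net start WMPNetworkSvc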

    Read the article

  • Approximating walking physics via simpler sliding physics

    - by Dave
    I am modeling walking insects. I implement them as cuboids and use forces (including friction and drag) to control motion. However, the movement characteristics of this 'sliding box' physics don't match those of a legged creature. For example, legged creatures accelerate to their top speed near-instantly, whereas applying a force to a box takes time to accelerate it. The applied force can be increased along with the counteracting drag, giving much quicker acceleration (via force) to a max speed (via drag). However, this also means the force the creatures can exert when pushing on other objects is increased. Does anyone know of any techniques for cheaply modeling walking creatures with a physics engine?
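    One common workaround (a suggestion, not something from the post) is to stop driving locomotion with forces altogether and instead steer the body's velocity directly each step, so acceleration is effectively instant while the push exerted on other objects stays whatever the contact solver allows. A rough C#-style sketch, where body, moveDirection and maxWalkSpeed are assumed to come from your physics engine and game code:

        // Drive the horizontal velocity toward the desired walking velocity each physics step.
        float dt = 1f / 60f;                    // physics step length (assumed)
        float accelSharpness = 15f;             // higher = snappier, more "leg-like" response
        Vector2 desired = moveDirection * maxWalkSpeed;   // where the insect wants to go
        Vector2 change  = desired - body.LinearVelocity;  // gap to close this step
        body.LinearVelocity += change * Math.Min(1f, accelSharpness * dt);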

    Read the article

  • Increase Font Size of CHM files

    - by Rohit Gupta
    This may be the way to do it:
    1. From the CHM menu, click Options.
    2. Click Internet Options.
    3. In the Internet Options window, click the Accessibility button.
    4. Check the box "Ignore font styles specified on Web pages".
    5. Check the box "Ignore font sizes specified on Web pages".
    6. Click OK.
    7. Click the Fonts button and select the font you want.
    8. Click OK.
    That's it.

    Read the article

  • Using the Static Code Analysis feature of Visual Studio (Premium/Ultimate) to find memory leakage problems

    - by terje
    Memory for managed code is handled by the garbage collector, but if you use any kind of unmanaged code, like native resources of any kind, open files, streams and window handles, your application may leak memory if these are not properly handled. To handle such resources, the classes that own them in your application should implement the IDisposable interface, and preferably implement it according to the pattern described for that interface. When you suspect a memory leak, the immediate impulse is to start up a memory profiler and start digging. However, before you follow that impulse, do a Static Code Analysis run with a ruleset tuned to finding possible memory leaks in your code. If you get any warnings from this, fix them before you go on with the profiling.
    How to use a ruleset: In Visual Studio 2010 (Premium and Ultimate editions) you can define your own rulesets containing a list of Static Code Analysis checks. I have defined the memory checks shown in the lists below as ruleset files, which can be downloaded (see the bottom of this post). When you get them, you can easily attach them to every project in your solution using the Solution Properties dialog. Right-click the solution and choose Properties at the bottom, or use the Analyze menu and choose "Configure Code Analysis for Solution". In this dialog you can now choose the Memorycheck ruleset for every project you want to investigate. Pressing Apply or OK opens every project file and changes the project's code analysis ruleset to the one we have specified here.
    How to define your own ruleset (skip this if you just download my predefined rulesets): If you want to define the ruleset yourself, open the properties on any project, choose the Code Analysis tab near the bottom, choose any ruleset in the drop box and press Open. Clear out all the rules by selecting "Source Rule Sets" in the Group By box and unselecting the box. Change the Group By box to ID, and select the checks you want to include from the lists below. Note that you can change the action for each check to either warning, error or none, none being the same as unchecking the check. Now go to the properties window and set a new name and description for your ruleset. Then save (File/Save As) the ruleset using the new name as its name, and use it for your projects as detailed above. It can also be wise to add the ruleset to your solution as a solution item. That way it's there if you want to enable Code Analysis in some of your TFS builds.
    Running the code analysis: In Visual Studio 2010 you can either run your code analysis project by project using the context menu in Solution Explorer and choosing "Run Code Analysis", or you can define a new solution configuration, call it for example Debug (Code Analysis), and for each project in it enable "Enable Code Analysis on Build". In Visual Studio Dev-11 it is all much simpler: just go to the solution root in Solution Explorer, right-click and choose "Run code analysis on solution".
    The ruleset checks: The following list is the essential and critical memory checks. For each check: the CheckID, the message, whether it can be ignored, and a link to the description with fix suggestions.
    CA1001: Types that own disposable fields should be disposable. Can be ignored: No. http://msdn.microsoft.com/en-us/library/ms182172.aspx
    CA1049: Types that own native resources should be disposable. Can be ignored: Only if the pointers assumed to point to unmanaged resources point to something else. http://msdn.microsoft.com/en-us/library/ms182173.aspx
    CA1063: Implement IDisposable correctly. Can be ignored: No. http://msdn.microsoft.com/en-us/library/ms244737.aspx
    CA2000: Dispose objects before losing scope. Can be ignored: No. http://msdn.microsoft.com/en-us/library/ms182289.aspx
    CA2115 (see note 1): Call GC.KeepAlive when using native resources. Can be ignored: See description. http://msdn.microsoft.com/en-us/library/ms182300.aspx
    CA2213: Disposable fields should be disposed. Can be ignored: If you are not responsible for release, or if Dispose occurs at a deeper level. http://msdn.microsoft.com/en-us/library/ms182328.aspx
    CA2215: Dispose methods should call base class dispose. Can be ignored: Only if the call to base happens at a deeper calling level. http://msdn.microsoft.com/en-us/library/ms182330.aspx
    CA2216: Disposable types should declare a finalizer. Can be ignored: Only if the type does not implement IDisposable for the purpose of releasing unmanaged resources. http://msdn.microsoft.com/en-us/library/ms182329.aspx
    CA2220: Finalizers should call base class finalizers. Can be ignored: No. http://msdn.microsoft.com/en-us/library/ms182341.aspx
    Note 1: Does not result in a memory leak, but may cause the application to crash.
    The list below is a set of optional checks that may be enabled for your ruleset, because the issues these point to often happen as a result of attempting to fix up the warnings from the first set. For each check: the ID, the message, the type of fault, whether it can be ignored, and a link to the description with fix suggestions.
    CA1060: Move P/Invokes to NativeMethods class. Type of fault: Security. Can be ignored: No. http://msdn.microsoft.com/en-us/library/ms182161.aspx
    CA1816: Call GC.SuppressFinalize correctly. Type of fault: Performance. Can be ignored: Sometimes, see description. http://msdn.microsoft.com/en-us/library/ms182269.aspx
    CA1821: Remove empty finalizers. Type of fault: Performance. Can be ignored: No. http://msdn.microsoft.com/en-us/library/bb264476.aspx
    CA2004: Remove calls to GC.KeepAlive. Type of fault: Performance and maintainability. Can be ignored: Only if it is not technically correct to convert to SafeHandle. http://msdn.microsoft.com/en-us/library/ms182293.aspx
    CA2006: Use SafeHandle to encapsulate native resources. Type of fault: Security. Can be ignored: No. http://msdn.microsoft.com/en-us/library/ms182294.aspx
    CA2202: Do not dispose of objects multiple times. Type of fault: Exception (System.ObjectDisposedException). Can be ignored: No. http://msdn.microsoft.com/en-us/library/ms182334.aspx
    CA2205: Use managed equivalents of Win32 API. Type of fault: Maintainability and complexity. Can be ignored: Only if the replacement doesn't provide the needed functionality. http://msdn.microsoft.com/en-us/library/ms182365.aspx
    CA2221: Finalizers should be protected. Type of fault: Incorrect implementation, only possible in MSIL coding. Can be ignored: No. http://msdn.microsoft.com/en-us/library/ms182340.aspx
    Downloadable ruleset definitions: I have defined three rulesets, one called Inmeta.Memorycheck with the rules in the first list above, Inmeta.Memorycheck.Optionals containing the rules in the second list, and the last one called Inmeta.Memorycheck.All containing the sum of the first two. All three rulesets can be found in the zip archive "Inmeta.Memorycheck" downloadable from here.
    Links to some other resources relevant to Static Code Analysis:
    - MSDN Magazine article by Mickey Gousset on Static Code Analysis in VS2010
    - MSDN: Analyzing Managed Code Quality by Using Code Analysis, the root of the documentation for this
    - Preventing generated code from being analyzed using attributes
    - Online training course on Using Code Analysis with VS2010
    - Blog post by Tatham Oddie on custom code analysis rules
    - How to write custom rules, from the Microsoft Code Analysis Team Blog
    - Microsoft Code Analysis Team Blog
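    As a reminder of what rules like CA1063 are checking for, here is a minimal sketch of the standard dispose pattern the article refers to (illustrative only, not taken from the post):

        public class NativeResourceHolder : IDisposable
        {
            private IntPtr _handle;          // some unmanaged resource
            private bool _disposed;

            public void Dispose()
            {
                Dispose(true);
                GC.SuppressFinalize(this);   // finalizer no longer needed once disposed
            }

            protected virtual void Dispose(bool disposing)
            {
                if (_disposed) return;
                if (disposing)
                {
                    // release managed resources (other IDisposable fields) here
                }
                // release unmanaged resources here
                _handle = IntPtr.Zero;
                _disposed = true;
            }

            ~NativeResourceHolder()
            {
                Dispose(false);
            }
        }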

    Read the article

  • BoundingBox created from mesh to origin, making it bigger

    - by Gunnar Södergren
    I'm working on a level-based survival game and I want to design my scenes in Maya and export them as a single model (with multiple meshes) into XNA. My problem is that when I try to create Bounding Boxes(for Collision purposes) for each of the meshes, the are calculated from origin to the far-end of the current mesh, so to speak. I'm thinking that it might have something to do with the position each mesh brings from Maya and that it's interpreted wrongly... or something. Here's the code for when I create the boxes: private static BoundingBox CreateBoundingBox(Model model, ModelMesh mesh) { Matrix[] boneTransforms = new Matrix[model.Bones.Count]; model.CopyAbsoluteBoneTransformsTo(boneTransforms); BoundingBox result = new BoundingBox(); foreach (ModelMeshPart meshPart in mesh.MeshParts) { BoundingBox? meshPartBoundingBox = GetBoundingBox(meshPart, boneTransforms[mesh.ParentBone.Index]); if (meshPartBoundingBox != null) result = BoundingBox.CreateMerged(result, meshPartBoundingBox.Value); } result = new BoundingBox(result.Min, result.Max); return result; } private static BoundingBox? GetBoundingBox(ModelMeshPart meshPart, Matrix transform) { if (meshPart.VertexBuffer == null) return null; Vector3[] positions = VertexElementExtractor.GetVertexElement(meshPart, VertexElementUsage.Position); if (positions == null) return null; Vector3[] transformedPositions = new Vector3[positions.Length]; Vector3.Transform(positions, ref transform, transformedPositions); for (int i = 0; i < transformedPositions.Length; i++) { Console.WriteLine(" " + transformedPositions[i]); } return BoundingBox.CreateFromPoints(transformedPositions); } public static class VertexElementExtractor { public static Vector3[] GetVertexElement(ModelMeshPart meshPart, VertexElementUsage usage) { VertexDeclaration vd = meshPart.VertexBuffer.VertexDeclaration; VertexElement[] elements = vd.GetVertexElements(); Func<VertexElement, bool> elementPredicate = ve => ve.VertexElementUsage == usage && ve.VertexElementFormat == VertexElementFormat.Vector3; if (!elements.Any(elementPredicate)) return null; VertexElement element = elements.First(elementPredicate); Vector3[] vertexData = new Vector3[meshPart.NumVertices]; meshPart.VertexBuffer.GetData((meshPart.VertexOffset * vd.VertexStride) + element.Offset, vertexData, 0, vertexData.Length, vd.VertexStride); return vertexData; } } Here's a link to the picture of the mesh(The model holds six meshes, but I'm only rendering one and it's bounding box to make it clearer: http://www.gsodergren.se/portfolio/wp-content/uploads/2011/10/Screen-shot-2011-10-24-at-1.16.37-AM.png The mesh that I'm refering to is the Cubelike one. The cylinder is a completely different model and not part of any bounding box calculation. I've double- (and tripple-)-checked that this mesh corresponds to this bounding box. Any thoughts on what I'm doing wrong?
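    One detail in CreateBoundingBox above stands out (an observation, not a confirmed diagnosis): result starts as new BoundingBox(), whose Min and Max are both at the origin, so every BoundingBox.CreateMerged(result, ...) call keeps the origin inside the merged box, which matches the "from origin to the far end of the mesh" symptom. A hedged sketch of the usual fix is to only start merging once the first part's box is known:

        BoundingBox? result = null;
        foreach (ModelMeshPart meshPart in mesh.MeshParts)
        {
            BoundingBox? partBox = GetBoundingBox(meshPart, boneTransforms[mesh.ParentBone.Index]);
            if (partBox == null)
                continue;
            // merge with previous parts only; never with the default origin-anchored box
            result = result.HasValue
                ? BoundingBox.CreateMerged(result.Value, partBox.Value)
                : partBox.Value;
        }
        return result ?? new BoundingBox();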

    Read the article

  • Copying content on webpages in safari. To HTML

    - by Carl Smith
    Hi, is there an easier way to copy and paste website content as HTML? I want the copied text to look like this: Product Information: Length: S / M / L Material: Polyester and Elasthane Brand: Roxana Exclusive Style: Basque But when I paste it into my content box it looks like this: Product Information Length: S / M / L Material: Polyester and Elasthane Brand: Roxana Exclusive Style: Basque Then I need to edit it in the HTML editor to rearrange it. Is there some sort of app or plugin I can get to turn the text of the page into HTML, so it looks right straight away when I copy it into my content box? If that makes any sense? Thanks, Carl Smith :-)

    Read the article

  • Installing pgAdmin III for postgreSQL 9.2

    - by Mikey
    I have a Windows server that runs PostgreSQL 9.2. I want to hit it using pgAdmin III from my Ubuntu Linux 12.10 workstation box. I installed pgAdmin III from Synaptic and also tried a direct download from the PostgreSQL site using the software installer. Regardless, I can only get pgAdmin III for PostgreSQL 9.1. When I run pgAdmin III and point it at my server I get an error message telling me that the database is 9.2, my pgAdmin III is for 9.1, and the two aren't compatible. I can access the server itself fine from the Ubuntu box - I have Python programs that hit the database with no problems - but I need pgAdmin III for 9.2 running under Ubuntu Linux 12.10. Is it available? Where do I get it?
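    One possible route, offered as a hedged suggestion rather than something from the post: the PostgreSQL project runs its own apt repository, which carries newer pgAdmin III builds than the stock Ubuntu archive. Roughly (the repository line assumes Ubuntu 12.10 "quantal"; adjust for your release):

        sudo sh -c 'echo "deb http://apt.postgresql.org/pub/repos/apt/ quantal-pgdg main" > /etc/apt/sources.list.d/pgdg.list'
        wget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | sudo apt-key add -
        sudo apt-get update
        sudo apt-get install pgadmin3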

    Read the article

  • PandoraBar Packs Pandora Radio Client into a Compact Case

    - by Jason Fitzpatrick
    This stylish and compact build makes it easy to enjoy streaming radio without the bulk and overhead of running your entire computer to do so. Check out the video to see the compact streaming radio box in action. Courtesy of tinker blog Engscope, we find this clean Pandora-client-in-box build. Currently the project blog has a cursory overview of the project with the demo video but promises future updates detailing the software and hardware components of the build. If you can’t wait that long, make sure to check out some of the previous Wi-Fi radio builds we’ve shared: DIY Wi-Fi Radio Brings Wireless Tunes Anywhere in Your House and Wi-Fi Speakers Stream Music Anywhere. Pandobar [via Hacked Gadgets]

    Read the article

  • SQLAuthority News – New Book Released – SQL Server Interview Questions And Answers

    - by pinaldave
    Two days ago, on the birthday of my blog, I asked a simple question – Guess! What is in this box? I have received lots of interesting comments on the blog about what is in it. Many of you got it absolutely wrong and many got close to the right answer, but no one got it 100% correct. Well, no issue at all – I am going to give away the prize to whoever had the closest answer first, in a personal email. Here is the answer to the question about what is in the box. Here it is – the box has my new book. In fact, I should say our new book, as I co-authored it with my very good friend Vinod Kumar. We had a real blast writing this book together and had lots of interesting conversations while writing it. This book has one simple goal – "master the basics." It is not only for people who are preparing for an interview; it is for everyone who wants to revisit the basics and prepare themselves for the technology. One always needs practical knowledge to do their duty efficiently, and this book talks about more than basics. There are multiple ways to present learning – either we can create a simple book or make it interesting. We decided the learning should be interactive and opted for an Interview Questions and Answers format. Here is a quick interview which we have done together. Details of the books are here. The core concept of this book will continue to evolve over time. I am sure many of you will come along with us on this journey and submit your suggestions to make this book a key reference for anybody who wants to start with SQL Server. Today we want to acknowledge that you will help us keep this book alive with the latest updates, and we want to thank everyone who participates in this journey with us. You can get the books from [Amazon] | [Flipkart]. Read Vinod's blog post. Do not forget to wish him happy birthday, as today is his birthday and also the book release day – two reasons to congratulate him. Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Read the article

  • Ubuntu Installation-Allocate drive space/Boot Loader

    - by user10134
    When I try to install Ubuntu 10.10 from the official live disc I got in the mail, I cannot get past the "Allocate Disk Space" step. I shrank my Win7 partition so I have unallocated space, and I also tried using the space while it is formatted as NTFS, but the partitions will not show up in the box. /dev/sda is selected under boot loader, and I can't select anything else, but the partition box is blank, so when I click "Install Ubuntu" it just says: "No root file system is defined. Please correct this from the partitioning menu." I am trying to dual-boot Win7 and Ubuntu, but I was never asked during the install process whether I would like to install just Ubuntu or dual-boot.

    Read the article

  • Finding Those Pesky Unicode Characters in Visual Studio

    - by fallen888
    Sometimes I’m handed HTML that I need to wire up and I find these characters.  Usually there are only a couple on the page and, while annoying to find, it’s not a big deal.  Recently I found dozens and dozens of these guys on a page and wasn’t very happy at the prospect of having to manually search them all out and remove/replace them.  That is, until I did some research and found this very  helpful article by Aaron Jensen - Finding Non-ASCII Characters with Visual Studio. Aaron’s wonderful solution: Try searching your code with the following regular expression: [^\x00-\x7f] Open any of Visual Studio’s find windows and enter the regular expression above into the “Find what:” text box. Click the “Find Options” plus sign to expand the list of options. Check the last box “Use:” and choose “Regular expressions” from the drop down menu. Easy and efficient.  Thanks, Aaron!
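    The same search works outside the IDE too. A hedged one-liner, assuming GNU grep (which supports Perl-style character classes via -P):

        # list every line containing a non-ASCII character, recursively, in HTML files
        grep -rnP '[^\x00-\x7f]' --include='*.html' .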

    Read the article

  • How can I get my dynamic site search results content indexed by Google?

    - by Kris
    I have a site that is simply a search box to search a cloud-hosted database of .tiff images, and then all of my content can only be accessed by entering a search term. So for example, you're on the home page www.example.com and you type in "search" to the box and hit submit. Then it takes you to www.example.com/?q=search, which is a page of all my .tiff images with "search" in the description. How can I get a page like www.example.com/?q=search indexed, WITHOUT making a humungous list of search terms that people might type in?? I know about mod_rewrite, but it seems like for that you need to know ahead of time which URLs you'll need to convert, which I don't. All of these pages will be dynamically user-generated by typing into the search field. Please help!

    Read the article

  • Technology/Programming mailing lists How do you manage?

    - by AdityaGameProgrammer
    Email alerts, blog/forum updates, discussion subscriptions, general programming/technology update emails that we often subscribe to - do you actually read them, or go direct to the source when you find time? Often a programmer's mailbox is filled with loads of unread subscription mail from technology they were previously following or worked on, or things they wish to follow. Some or most of this mail just keeps piling up. I personally have a few updates that I wish I read but constantly avoid, keep putting off for later, and finally delete in an effort to keep the inbox clean. A few questions come to mind regarding this: Do you keep such mail in separate accounts? Do you read all the mail you have subscribed to? Do you ever unsubscribe from such email if you aren't reading it? How much do you really value these emails? Lastly, do you keep your inbox clean? I wish to deal with this in a better way.

    Read the article

  • Before and After the Programmer Life [closed]

    - by Gio Borje
    What were people like before and after becoming programmers, viewed through a black box? In a black box, the implementation is irrelevant; therefore, we focus on the input and output: High School Nerd -> becomeProgrammer() -> Manager (Person Before) -> (Person as Programmer) -> (New Person) (Person Before) -> (Person as Programmer) What were the typical lives of people before they became programmers? Why do people pursue life as a programmer? (Person as Programmer) -> (New Person) How has becoming a programmer changed people afterwards? Why quit being a programmer? Anecdotes would be nice. If many programmers have similar backgrounds and fates, do you think there is some sort of stereotypical person who is destined to become a programmer?

    Read the article

  • Subterranean IL: Constructor constraints

    - by Simon Cooper
    The constructor generic constraint is a slightly wierd one. The ECMA specification simply states that it: constrains [the type] to being a concrete reference type (i.e., not abstract) that has a public constructor taking no arguments (the default constructor), or to being a value type. There seems to be no reference within the spec to how you actually create an instance of a generic type with such a constraint. In non-generic methods, the normal way of creating an instance of a class is quite different to initializing an instance of a value type. For a reference type, you use newobj: newobj instance void IncrementableClass::.ctor() and for value types, you need to use initobj: .locals init ( valuetype IncrementableStruct s1 ) ldloca 0 initobj IncrementableStruct But, for a generic method, we need a consistent method that would work equally well for reference or value types. Activator.CreateInstance<T> To solve this problem the CLR designers could have chosen to create something similar to the constrained. prefix; if T is a value type, call initobj, and if it is a reference type, call newobj instance void !!0::.ctor(). However, this solution is much more heavyweight than constrained callvirt. The newobj call is encoded in the assembly using a simple reference to a row in a metadata table. This encoding is no longer valid for a call to !!0::.ctor(), as different constructor methods occupy different rows in the metadata tables. Furthermore, constructors aren't virtual, so we would have to somehow do a dynamic lookup to the correct method at runtime without using a MethodTable, something which is completely new to the CLR. Trying to do this in IL results in the following verification error: newobj instance void !!0::.ctor() [IL]: Error: Unable to resolve token. This is where Activator.CreateInstance<T> comes in. We can call this method to return us a new T, and make the whole issue Somebody Else's Problem. CreateInstance does all the dynamic method lookup for us, and returns us a new instance of the correct reference or value type (strangely enough, Activator.CreateInstance<T> does not itself have a .ctor constraint on its generic parameter): .method private static !!0 CreateInstance<.ctor T>() { call !!0 [mscorlib]System.Activator::CreateInstance<!!0>() ret } Going further: compiler enhancements Although this method works perfectly well for solving the problem, the C# compiler goes one step further. If you decompile the C# version of the CreateInstance method above: private static T CreateInstance() where T : new() { return new T(); } what you actually get is this (edited slightly for space & clarity): .method private static !!T CreateInstance<.ctor T>() { .locals init ( [0] !!T CS$0$0000, [1] !!T CS$0$0001 ) DetectValueType: ldloca.s 0 initobj !!T ldloc.0 box !!T brfalse.s CreateInstance CreateValueType: ldloca.s 1 initobj !!T ldloc.1 ret CreateInstance: call !!0 [mscorlib]System.Activator::CreateInstance<T>() ret } What on earth is going on here? Looking closer, it's actually quite a clever performance optimization around value types. So, lets dissect this code to see what it does. The CreateValueType and CreateInstance sections should be fairly self-explanatory; using initobj for value types, and Activator.CreateInstance for reference types. How does the DetectValueType section work? 
First, the stack transition for value types: ldloca.s 0 // &[!!T(uninitialized)] initobj !!T // ldloc.0 // !!T box !!T // O[!!T] brfalse.s // branch not taken When the brfalse.s is hit, the top stack entry is a non-null reference to a boxed !!T, so execution continues to to the CreateValueType section. What about when !!T is a reference type? Remember, the 'default' value of an object reference (type O) is zero, or null. ldloca.s 0 // &[!!T(null)] initobj !!T // ldloc.0 // null box !!T // null brfalse.s // branch taken Because box on a reference type is a no-op, the top of the stack at the brfalse.s is null, and so the branch to CreateInstance is taken. For reference types, Activator.CreateInstance is called which does the full dynamic lookup using reflection. For value types, a simple initobj is called, which is far faster, and also eliminates the unboxing that Activator.CreateInstance has to perform for value types. However, this is strictly a performance optimization; Activator.CreateInstance<T> works for value types as well as reference types. Next... That concludes the initial premise of the Subterranean IL series; to cover the details of generic methods and generic code in IL. I've got a few other ideas about where to go next; however, if anyone has any itching questions, suggestions, or things you've always wondered about IL, do let me know.
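    For comparison with the IL above, a hedged C# sketch of how the .ctor constraint is normally consumed from the language side; the two instantiated types are just examples, not taken from the article:

        static T CreateInstance<T>() where T : new() => new T();

        static void Demo()
        {
            // value type: takes the fast initobj path in the compiler-generated IL
            var point = CreateInstance<System.Drawing.Point>();
            // reference type: falls through to Activator.CreateInstance<T>
            var list = CreateInstance<System.Collections.Generic.List<int>>();
        }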

    Read the article

  • Exposing an MVC Application Through SharePoint

    - by Damon
    Below you will find my presentation slides and demo files for my SharePoint TechFest 2010 presentation on Exposing an MVC Application through SharePoint.  One of the points I forgot to mention goes back to the performance and licensing benefits of this approach.  If you have a SharePoint box that is completely slammed, you can put the MVC application on a separate web server and essentially offload the application processing to another server.  In terms of licensing, you can leave SharePoint off that new server and just access SharePoint data via web services from the box.  This makes it a lot cheaper if you have MOSS - but if you're just running WSS then it may not have as many cost benefits.  Remember, programming against the web services is not always the easiest thing, so you have to weigh the cost/benefit ratio when making such a determination.

    Read the article

  • Hey You! Stop Using the Apply Button and Just Click OK! [Geek Rants]

    - by The Geek
    As a computer geek, I often find myself helping people, and watching them change settings on their PC… and they almost always click the Apply button, and then the OK button. Why? Whenever you encounter a dialog box in Windows, there are the standard OK, Cancel, Apply buttons—but you don’t actually have to click the Apply button first. The OK button does the same thing, saves the settings, and then closes the dialog box… saving you an extra click. Don’t believe me? Try it out for yourself. Only the worst possible application won’t behave that way, and you probably don’t want to use that type of application to begin with. The only exception to this rule is a multiple tab dialog box, on a badly written application. Sometimes… your settings on one tab won’t stick unless you click Apply. Note that in this particular case, you can make changes in any one of the tabs, and they will carry through without having to click Apply, because this dialog window is well written. We’re just using the screenshot as an example of a multiple tab setting interface. So now that you know better, you can tell us… do you always use the Apply button first? Have you ever found an instance where it behaves differently?

    Read the article

  • Towards Database Continuous Delivery – What Next after Continuous Integration? A Checklist

    - by Ben Rees
    Database delivery patterns & practices, STAGE 4: AUTOMATED DEPLOYMENT. If you've been fortunate enough to get to the stage where you've implemented some sort of continuous integration process for your database updates, then hopefully you're seeing the benefits of that investment – constant feedback on changes your devs are making, advanced warning of data loss (prior to the production release on Saturday night!), a nice suite of automated tests to check business logic, so you know it's going to work when it goes live, and so on. But what next? What can you do to improve your delivery process further, moving towards a full continuous delivery process for your database? In this article I describe some of the issues you might need to tackle on the next stage of this journey, and how to plan to overcome those obstacles before they appear. Our Database Delivery Learning Program consists of four stages, really three – source controlling a database, running continuous integration processes, then how to set up automated deployment (the middle stage is split in two – basic and advanced continuous integration, making four stages in total). If you've managed to work through the first three of these stages – source control, basic, then advanced CI – then you should have a solid change management process set up where, every time one of your team checks in a change to your database (whether schema or static reference data), this change gets fully tested automatically by your CI server. But this is only part of the story. Great, we know that our updates work, that the upgrade process works, that the upgrade isn't going to wipe our 4Tb of production data with a single DROP TABLE. But – how do you get this (fully tested) release live? Continuous delivery means being always ready to release your software at any point in time. There's a significant gap between your latest version being tested, and it being easily releasable. Just a quick note on terminology – there's a nice piece here from Atlassian on the difference between continuous integration, continuous delivery and continuous deployment. This piece also gives a nice description of the benefits of continuous delivery. These benefits have been summed up by Jez Humble at Thoughtworks as: "Continuous delivery is a set of principles and practices to reduce the cost, time, and risk of delivering incremental changes to users" There's another really useful piece here on Simple-Talk about the need for continuous delivery and how it applies to the database, written by Phil Factor – specifically the extra needs and complexities of implementing a full CD solution for the database (compared to just implementing CD for, say, a web app). So, hopefully you're convinced of moving on to the next stage! The next step after CI is to get some sort of automated deployment (or "release management") process set up. But what should I do next? What do I need to plan and think about for getting my automated database deployment process set up? Can't I just install one of the many release management tools available and hey presto, I'm ready! If only it were that simple. Below I list some of the areas that it's worth spending a little time on, where a little planning and prep could go a long way.
It’s also worth pointing out, that this should really be an evolving process. Depending on your starting point of course, it can be a long journey from your current setup to a full continuous delivery pipeline. If you’ve got a CI mechanism in place, you’re certainly a long way down that path. Nevertheless, we’d recommend evolving your process incrementally. Pages 157 and 129-141 of the book on Continuous Delivery (by Jez Humble and Dave Farley) have some great guidance on building up a pipeline incrementally: http://www.amazon.com/Continuous-Delivery-Deployment-Automation-Addison-Wesley/dp/0321601912 For now, in this post, we’ll look at the following areas for your checklist: You and Your Team Environments The Deployment Process Rollback and Recovery Development Practices You and Your Team It’s a cliché in the DevOps community that “It’s not all about processes and tools, really it’s all about a culture”. As stated in this DevOps report from Puppet Labs: “DevOps processes and tooling contribute to high performance, but these practices alone aren’t enough to achieve organizational success. The most common barriers to DevOps adoption are cultural: lack of manager or team buy-in, or the value of DevOps isn’t understood outside of a specific group”. Like most clichés, there’s truth in there – if you want to set up a database continuous delivery process, you need to get your boss, your department, your company (if relevant) onside. Why? Because it’s an investment with the benefits coming way down the line. But the benefits are huge – for HP, in the book A Practical Approach to Large-Scale Agile Development: How HP Transformed LaserJet FutureSmart Firmware, these are summarized as: -2008 to present: overall development costs reduced by 40% -Number of programs under development increased by 140% -Development costs per program down 78% -Firmware resources now driving innovation increased by a factor of 8 (from 5% working on new features to 40% But what does this mean? It means that, when moving to the next stage, to make that extra investment in automating your deployment process, it helps a lot if everyone is convinced that this is a good thing. That they understand the benefits of automated deployment and are willing to make the effort to transform to a new way of working. Incidentally, if you’re ever struggling to convince someone of the value I’d strongly recommend just buying them a copy of this book – a great read, and a very practical guide to how it can really work at a large org. I’ve spoken to many customers who have implemented database CI who describe their deployment process as “The point where automation breaks down. Up to that point, the CI process runs, untouched by human hand, but as soon as that’s finished we revert to manual.” This deployment process can involve, for example, a DBA manually comparing an environment (say, QA) to production, creating the upgrade scripts, reading through them, checking them against an Excel document emailed to him/her the night before, turning to page 29 in his/her notebook to double-check how replication is switched off and on for deployments, and so on and so on. Painful, error-prone and lengthy. But the point is, if this is something like your deployment process, telling your DBA “We’re changing everything you do and your toolset next week, to automate most of your role – that’s okay isn’t it?” isn’t likely to go down well. 
There’s some work here to bring him/her onside – to explain what you’re doing, why there will still be control of the deployment process and so on. Or of course, if you’re the DBA looking after this process, you have to do a similar job in reverse. You may have researched and worked out how you’d like to change your methodology to start automating your painful release process, but do the dev team know this? What if they have to start producing different artifacts for you? Will they be happy with this? Worth talking to them, to find out. As well as talking to your DBA/dev team, the other group to get involved before implementation is your manager. And possibly your manager’s manager too. As mentioned, unless there’s buy-in “from the top”, you’re going to hit problems when the implementation starts to get rocky (and what tool/process implementations don’t get rocky?!). You need to have support from someone senior in your organisation – someone you can turn to when you need help with a delayed implementation, lack of resources or lack of progress. Actions: Get your DBA involved (or whoever looks after live deployments) and discuss what you’re planning to do or, if you’re the DBA yourself, get the dev team up-to-speed with your plans, Get your boss involved too and make sure he/she is bought in to the investment. Environments Where are you going to deploy to? And really this question is – what environments do you want set up for your deployment pipeline? Assume everyone has “Production”, but do you have a QA environment? Dedicated development environments for each dev? Proper pre-production? I’ve seen every setup under the sun, and there is often a big difference between “What we want, to do continuous delivery properly” and “What we’re currently stuck with”. Some of these differences are: What we want What we’ve got Each developer with their own dedicated database environment A single shared “development” environment, used by everyone at once An Integration box used to test the integration of all check-ins via the CI process, along with a full suite of unit-tests running on that machine In fact if you have a CI process running, you’re likely to have some sort of integration server running (even if you don’t call it that!). Whether you have a full suite of unit tests running is a different question… Separate QA environment used explicitly for manual testing prior to release “We just test on the dev environments, or maybe pre-production” A proper pre-production (or “staging”) box that matches production as closely as possible Hopefully a pre-production box of some sort. But does it match production closely!? A production environment reproducible from source control A production box which has drifted significantly from anything in source control The big question is – how much time and effort are you going to invest in fixing these issues? In reality this just involves figuring out which new databases you’re going to create and where they’ll be hosted – VMs? Cloud-based? What about size/data issues – what data are you going to include on dev environments? Does it need to be masked to protect access to production data? And often the amount of work here really depends on whether you’re working on a new, greenfield project, or trying to update an existing, brownfield application. 
There’s a world if difference between starting from scratch with 4 or 5 clean environments (reproducible from source control of course!), and trying to re-purpose and tweak a set of existing databases, with all of their surrounding processes and quirks. But for a proper release management process, ideally you have: Dedicated development databases, An Integration server used for testing continuous integration and running unit tests. [NB: This is the point at which deployments are automatic, without human intervention. Each deployment after this point is a one-click (but human) action], QA – QA engineers use a one-click deployment process to automatically* deploy chosen releases to QA for testing, Pre-production. The environment you use to test the production release process, Production. * A note on the use of the word “automatic” – when carrying out automated deployments this does not mean that the deployment is happening without human intervention (i.e. that something is just deploying over and over again). It means that the process of carrying out the deployment is automatic in that it’s not a person manually running through a checklist or set of actions. The deployment still requires a single-click from a user. Actions: Get your environments set up and ready, Set access permissions appropriately, Make sure everyone understands what the environments will be used for (it’s not a “free-for-all” with all environments to be accessed, played with and changed by development). The Deployment Process As described earlier, most existing database deployment processes are pretty manual. The following is a description of a process we hear very often when we ask customers “How do your database changes get live? How does your manual process work?” Check pre-production matches production (use a schema compare tool, like SQL Compare). Sometimes done by taking a backup from production and restoring in to pre-prod, Again, use a schema compare tool to find the differences between the latest version of the database ready to go live (i.e. what the team have been developing). This generates a script, User (generally, the DBA), reviews the script. This often involves manually checking updates against a spreadsheet or similar, Run the script on pre-production, and check there are no errors (i.e. it upgrades pre-production to what you hoped), If all working, run the script on production.* * this assumes there’s no problem with production drifting away from pre-production in the interim time period (i.e. someone has hacked something in to the production box without going through the proper change management process). This difference could undermine the validity of your pre-production deployment test. Red Gate is currently working on a free tool to detect this problem – sign up here at www.sqllighthouse.com, if you’re interested in testing early versions. There are several variations on this process – some better, some much worse! How do you automate this? In particular, step 3 – surely you can’t automate a DBA checking through a script, that everything is in order!? The key point here is to plan what you want in your new deployment process. There are so many options. At one extreme, pure continuous deployment – whenever a dev checks something in to source control, the CI process runs (including extensive and thorough testing!), before the deployment process keys in and automatically deploys that change to the live box. Not for the faint hearted – and really not something we recommend. 
At the other extreme, you might be more comfortable with a semi-automated process – the pre-production/production matching process is automated (with an error thrown if these environments don’t match), followed by a manual intervention, allowing for script approval by the DBA. One he/she clicks “Okay, I’m happy for that to go live”, the latter stages automatically take the script through to live. And anything in between of course – and other variations. But we’d strongly recommended sitting down with a whiteboard and your team, and spending a couple of hours mapping out “What do we do now?”, “What do we actually want?”, “What will satisfy our needs for continuous delivery, but still maintaining some sort of continuous control over the process?” NB: Most of what we’re discussing here is about production deployments. It’s important to note that you will also need to map out a deployment process for earlier environments (for example QA). However, these are likely to be less onerous, and many customers opt for a much more automated process for these boxes. Actions: Sit down with your team and a whiteboard, and draw out the answers to the questions above for your production deployments – “What do we do now?”, “What do we actually want?”, “What will satisfy our needs for continuous delivery, but still maintaining some sort of continuous control over the process?” Repeat for earlier environments (QA and so on). Rollback and Recovery If only every deployment went according to plan! Unfortunately they don’t – and when things go wrong, you need a rollback or recovery plan for what you’re going to do in that situation. Once you move in to a more automated database deployment process, you’re far more likely to be deploying more frequently than before. No longer once every 6 months, maybe now once per week, or even daily. Hence the need for a quick rollback or recovery process becomes paramount, and should be planned for. NB: These are mainly scenarios for handling rollbacks after the transaction has been committed. If a failure is detected during the transaction, the whole transaction can just be rolled back, no problem. There are various options, which we’ll explore in subsequent articles, things like: Immediately restore from backup, Have a pre-tested rollback script (remembering that really this is a “roll-forward” script – there’s not really such a thing as a rollback script for a database!) Have fallback environments – for example, using a blue-green deployment pattern. Different options have pros and cons – some are easier to set up, some require more investment in infrastructure; and of course some work better than others (the key issue with using backups, is loss of the interim transaction data that has been added between the failed deployment and the restore). The best mechanism will be primarily dependent on how your application works and how much you need a cast-iron failsafe mechanism. Actions: Work out an appropriate rollback strategy based on how your application and business works, your appetite for investment and requirements for a completely failsafe process. Development Practices This is perhaps the more difficult area for people to tackle. The process by which you can deploy database updates is actually intrinsically linked with the patterns and practices used to develop that database and linked application. 
So you need to decide whether you want to implement some changes to the way your developers actually develop the database (particularly schema changes) to make the deployment process easier. A good example is the pattern “Branch by abstraction”. Explained nicely here, by Martin Fowler, this is a process that can be used to make significant database changes (e.g. splitting a table) in a step-wise manner so that you can always roll back, without data loss – by making incremental updates to the database backward compatible. Slides 103-108 of the following slidedeck, from Niek Bartholomeus explain the process: https://speakerdeck.com/niekbartho/orchestration-in-meatspace As these slides show, by making a significant schema change in multiple steps – where each step can be rolled back without any loss of new data – this affords the release team the opportunity to have zero-downtime deployments with considerably less stress (because if an increment goes wrong, they can roll back easily). There are plenty more great patterns that can be implemented – the book Refactoring Databases, by Scott Ambler and Pramod Sadalage is a great read, if this is a direction you want to go in: http://www.amazon.com/Refactoring-Databases-Evolutionary-paperback-Addison-Wesley/dp/0321774515 But the question is – how much of this investment are you willing to make? How often are you making significant schema changes that would require these best practices? Again, there’s a difference here between migrating old projects and starting afresh – with the latter it’s much easier to instigate best practice from the start. Actions: For your business, work out how far down the path you want to go, amending your database development patterns to “best practice”. It’s a trade-off between implementing quality processes, and the necessity to do so (depending on how often you make complex changes). Socialise these changes with your development group. No-one likes having “best practice” changes imposed on them, so good to introduce these ideas and the rationale behind them early.   Summary The next stages of implementing a continuous delivery pipeline for your database changes (once you have CI up and running) require a little pre-planning, if you want to get the most out of the work, and for the implementation to go smoothly. We’ve covered some of the checklist of areas to consider – mainly in the areas of “Getting the team ready for the changes that are coming” and “Planning our your pipeline, environments, patterns and practices for development”, though there will be more detail, depending on where you’re coming from – and where you want to get to. This article is part of our database delivery patterns & practices series on Simple Talk. Find more articles for version control, automated testing, continuous integration & deployment.

    Read the article

  • How can I change my cursor behavior?

    - by Doug Clement
    When I am typing, my mouse cursor, if left on the text, will eventually auto-click in whatever spot in the text box I happen to be, causing me to type in the middle of a sentence. Also, the cursor in the text box will frequently stop mid-word, and the screen will scroll down all of a sudden when I press the space bar while typing. My question is: how do I change this behavior? It is driving me absolutely bat-crap crazy. I have an Acer Aspire One D257 netbook. I am not sure if it's a Xubuntu problem, because it does this while I am using Windows 7 too. Any help would be nice, thanks!
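    Since this happens on both operating systems, it sounds like accidental touchpad taps from the palm while typing rather than an OS setting. On the Xubuntu side, a hedged workaround, assuming a Synaptics touchpad (which this netbook usually has), is to disable the touchpad while keys are being pressed:

        # disable the touchpad for 1 second after each keystroke; -d runs it as a daemon,
        # -t disables only tapping/scrolling so the pointer can still move, -K ignores modifier keys
        syndaemon -i 1 -d -t -K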

    Read the article

  • How to create projection/view matrix for hole in the monitor effect

    - by Mr Bell
    Let's say I have my XNA app window sized at 640 x 480 pixels. Now let's say I have a cube model with its polys facing inward to make a room. This cube is sized 640 units wide by 480 units high by 480 units deep. Let's say the camera is somewhere in front of the box, looking at it. How can I set up the view and projection matrices so that the front edge of the box lines up exactly with the edges of the application window? It seems like this should probably involve the Matrix.CreatePerspectiveOffCenter method, but I don't fully understand how the parameters translate onto the screen. For reference, the end result will be something like Johnny Lee's Wii head tracking demo: http://www.youtube.com/watch?v=Jd3-eiid-Uw&feature=player_embedded P.S. I realize that his source code is available, but I am afraid I haven't been able to make heads or tails of it.
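    A hedged sketch of one way to wire this up with Matrix.CreatePerspectiveOffCenter (the variable names are assumptions, not from the post): treat the front face of the box as the near-plane "window", keep the view direction perpendicular to that face, and scale the window edges by nearPlane divided by the eye-to-face distance so the frustum always passes exactly through the front edges of the box.

        // Box front face spans (0,0) to (640,480) at z = boxFrontZ; cameraPos is in the same units.
        float nearPlane  = 1f;
        float distToFace = cameraPos.Z - boxFrontZ;   // assumes the camera sits in front of the box

        float left   = (0f   - cameraPos.X) * nearPlane / distToFace;
        float right  = (640f - cameraPos.X) * nearPlane / distToFace;
        float bottom = (0f   - cameraPos.Y) * nearPlane / distToFace;
        float top    = (480f - cameraPos.Y) * nearPlane / distToFace;

        Matrix projection = Matrix.CreatePerspectiveOffCenter(left, right, bottom, top, nearPlane, 5000f);
        // Look straight ahead so the view plane stays parallel to the front face of the box.
        Matrix view = Matrix.CreateLookAt(cameraPos,
                                          new Vector3(cameraPos.X, cameraPos.Y, boxFrontZ),
                                          Vector3.Up);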

    Read the article

  • Should devs, testers and business users have one unified test script?

    - by Carlos Jaime C. De Leon
    In development, I would normally have my own test scripts that document the data, scenarios and execution steps I plan to test; this is my dev test plan. When the functionality has been deployed to Test, the testers test it using their own test script that they wrote. In UAT, the business user then tests using their own test plan. In retrospect, it looks like this provides better coverage, with dev tests having a mix of black and white box testing, while testers and business users focus on black box testing. On the other hand, it produces distinct test cases that are only executed at a particular stage (i.e. some cases the testers thought of are only executed at the Test stage), and it then looks like the dev missed them, which makes it a finding/bug. Is it worth consolidating the test scripts from the start, thus using one unified test script, or is it a bit difficult to do this upfront?

    Read the article
