Search Results

Search found 45804 results on 1833 pages for 'large files'.


  • How can I convince cowboy programmers to use source control?

    - by P.Brian.Mackey
    UPDATE I work on a small team of devs, 4 guys. They have all used source control. Most of them can't stand source control and instead choose not to use it. I strongly believe source control is a necessary part of professional development. Several issues make it very difficult to convince them to use source control: The team is not used to using TFS. I've had 2 training sessions, but was only allotted 1 hour, which is insufficient. Team members directly modify code on the server. This keeps code out of sync, requires comparisons just to be sure you are working with the latest code, and leads to complex merge problems. Time estimates offered by developers exclude the time required to fix any of these problems. So, if I say "no, it will take 10x longer", I have to constantly explain these issues and risk myself, because now management may perceive me as "slow". The physical files on the server differ in unknown ways across ~100 files. Merging requires knowledge of the project at hand and, therefore, developer cooperation, which I am not able to obtain. Other projects are falling out of sync. Developers continue to distrust source control and therefore compound the issue by not using it. Developers argue that using source control is wasteful because merging is error prone and difficult. This is a difficult point to argue, because when source control is being so badly misused and continually bypassed, it is error prone indeed. Therefore, the evidence "speaks for itself" in their view. Developers argue that directly modifying server code, bypassing TFS, saves time. This is also difficult to argue, because the merge required to synchronize the code in the first place is time consuming. Multiply this by the 10+ projects we manage. Permanent files are often stored in the same directory as the web project, so publishing (a full publish) erases these files that are not in source control. This also drives distrust of source control, because "publishing breaks the project". Fixing this (moving stored files out of the solution subfolders) takes a great deal of time and debugging, as these locations are not set in web.config and often exist across multiple code points. So the culture perpetuates itself. Bad practice begets more bad practice. Bad solutions drive new hacks to "fix" much deeper, much more time-consuming problems. Servers and hard drive space are extremely difficult to come by. Yet user expectations are rising. What can be done in this situation?

    Read the article

  • Hype and LINQ

    - by Tony Davis
    "Tired of querying in antiquated SQL?" I blinked in astonishment when I saw this headline on the LinqPad site. Warming to its theme, the site suggests that what we need is to "kiss goodbye to SSMS", and instead use LINQ, a modern query language! Elsewhere, there is an article entitled "Why LINQ beats SQL". The designers of LINQ, along with many DBAs, would, I'm sure, cringe with embarrassment at the suggestion that LINQ and SQL are, in any sense, competitive ways of doing the same thing. In fact what LINQ really is, at last, is an efficient, declarative language for C# and VB programmers to access or manipulate data in objects, local data stores, ORMs, web services, data repositories, and, yes, even relational databases. The fact is that LINQ is essentially declarative programming in a .NET language, and so in many ways encourages developers into a "SQL-like" mindset, even though they are not directly writing SQL. In place of imperative logic and loops, it uses various expressions, operators and declarative logic to build up an "expression tree" describing only what data is required, not the operations to be performed to get it. This expression tree is then parsed by the language compiler, and the result, when used against a relational database, is a SQL string that, while perhaps not always perfect, is often correctly parameterized and certainly no less "optimal" than what is achieved when a developer applies blunt, imperative logic to the SQL language. From a developer standpoint, it is a mistake to consider LINQ simply as a substitute means of querying SQL Server. The strength of LINQ is that that can be used to access any data source, for which a LINQ provider exists. Microsoft supplies built-in providers to access not just SQL Server, but also XML documents, .NET objects, ADO.NET datasets, and Entity Framework elements. LINQ-to-Objects is particularly interesting in that it allows a declarative means to access and manipulate arrays, collections and so on. Furthermore, as Michael Sorens points out in his excellent article on LINQ, there a whole host of third-party LINQ providers, that offers a simple way to get at data in Excel, Google, Flickr and much more, without having to learn a new interface or language. Of course, the need to be generic enough to deal with a range of data sources, from something as mundane as a text file to as esoteric as a relational database, means that LINQ is a compromise and so has inherent limitations. However, it is a powerful and beautifully compact language and one that, at least in its "query syntax" guise, is accessible to developers and DBAs alike. Perhaps there is still hope that LINQ can fulfill Phil Factor's lobster-induced fantasy of a language that will allow us to "treat all data objects, whether Word files, Excel files, XML, relational databases, text files, HTML files, registry files, LDAPs, Outlook and so on, in the same logical way, as linked databases, and extract the metadata, create the entities and relationships in the same way, and use the same SQL syntax to interrogate, create, read, write and update them." Cheers, Tony.

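    To make the declarative-versus-imperative point concrete, here is a minimal LINQ-to-Objects sketch in C#; the in-memory order data and names are invented for illustration and are not taken from the article. The same query syntax, pointed at a provider such as LINQ to SQL or the Entity Framework, would instead be translated into a parameterized SQL statement:

        using System;
        using System.Linq;

        class LinqToObjectsSketch
        {
            static void Main()
            {
                // Invented in-memory data; any IEnumerable<T> works with LINQ to Objects.
                var orders = new[]
                {
                    new { Customer = "Acme",    Total = 250m },
                    new { Customer = "Acme",    Total =  90m },
                    new { Customer = "Contoso", Total = 400m }
                };

                // Declarative query syntax: describe the result, not the loops that build it.
                var totalsByCustomer =
                    from o in orders
                    group o by o.Customer into g
                    orderby g.Key
                    select new { Customer = g.Key, Total = g.Sum(o => o.Total) };

                foreach (var row in totalsByCustomer)
                    Console.WriteLine("{0}: {1}", row.Customer, row.Total);
            }
        }
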
    Read the article

  • Gradle for NetBeans RCP

    - by Geertjan
    Start with the NetBeans Paint Application and do the following to build it via Gradle (i.e., no Gradle/NetBeans plugin is needed for the following steps), assuming you've set up Gradle. Do everything below in the Files or Favorites window, not in the Projects window. In the application directory "Paint Application", create a file named "settings.gradle" with this content: include 'ColorChooser', 'Paint' Create another file in the same location, named "build.gradle", with this content: subprojects { apply plugin: "announce" apply plugin: "java" sourceSets { main { java { srcDir 'src' } resources { srcDir 'src' } } } } In the module directory "Paint", create a file named "build.gradle" with this content: dependencies { compile fileTree("$rootDir/build/public-package-jars").matching { include '**/*.jar' } } task show << { configurations.compile.each { dep -> println "$dep ${dep.isFile()}" } } Note: the above is a temporary solution; as you can see, the expectation is that the JARs are in the 'build/public-package-jars' folder, which assumes an Ant build has been done prior to the Gradle build. Now run 'gradle classes' in the "Paint Application" folder and everything will compile correctly. So, this is how the Paint Application now looks: Preferable to the second 'build.gradle' would be this, which uses the JARs found in the NetBeans Platform... netbeansHome = '/home/geertjan/netbeans-dev-201111110600' dependencies { compile files("$rootDir/ColorChooser/release/modules/ext/ColorChooser.jar") def projectXml = new XmlParser().parse("nbproject/project.xml") projectXml.configuration.data."module-dependencies".dependency."code-name-base".each { if (it.text().equals('org.openide.filesystems')) { def dep = "$netbeansHome/platform/core/"+it.text().replace('.','-')+'.jar' compile files(dep) } else if (it.text().equals('org.openide.util.lookup') || it.text().equals('org.openide.util')) { def dep = "$netbeansHome/platform/lib/"+it.text().replace('.','-')+'.jar' compile files(dep) } else { def dep = "$netbeansHome/platform/modules/"+it.text().replace('.','-')+'.jar' compile files(dep) } } } task show << { configurations.compile.each { dep -> println "$dep ${dep.isFile()}" } } However, when you run 'gradle classes' with the above, you get an error like this: geertjan@geertjan:~/NetBeansProjects/PaintApp1/Paint$ gradle classes :Paint:compileJava [ant:javac] Note: Attempting to workaround javac bug #6512707 [ant:javac] [ant:javac] [ant:javac] An annotation processor threw an uncaught exception. [ant:javac] Consult the following stack trace for details. [ant:javac] java.lang.NullPointerException [ant:javac] at com.sun.tools.javac.util.DefaultFileManager.getFileForOutput(DefaultFileManager.java:1058) No idea why the above happens; still trying to figure it out. Once the above works, we can start figuring out how to use the NetBeans Maven repo instead, and then the user of the plugin will be able to select whether to use local JARs or JARs from the NetBeans Maven repo. Many thanks to Hans Dockter, who put the above together with me today via Skype!

    Read the article

  • What do you do when practical problems get in the way of practical goals?

    - by P.Brian.Mackey
    UPDATE Source control is good to use. Sometimes, real world issues make it impractical to use. For example: If the team is not used to using source control, training problems can arise. If a team member directly modifies code on the server, various issues can arise: merge problems, lack of history, etc. Let's say there's a project that is way out of sync. The physical files on the server differ in unknown ways across ~100 files. Merging would not only require great knowledge of the project, but is also well beyond what can be completed in the given time. Other projects are falling out of sync. Developers continue to distrust source control and therefore compound the issue by not using it. Developers argue that using source control is wasteful because merging is error prone and difficult. This is a difficult point to argue, because when source control is being so badly misused and continually bypassed, it is error prone indeed. Therefore, the evidence "speaks for itself" in their view. Developers argue that directly modifying code on the server, bypassing source control, saves time. This is also difficult to argue, because the merge required to synchronize the code in the first place is time consuming, across ~10 projects. Permanent files are often stored in the same directory as the web project, so publishing (a full publish) erases these files that are not in source control. This also drives distrust of source control, because "publishing breaks the project". Fixing this (moving stored files out of the solution subfolders) takes a great deal of time and debugging, as these locations are not set in web.config and often exist across multiple code points. So the culture perpetuates itself. Bad practice begets more bad practice. Bad solutions drive new hacks to "fix" much deeper, much more time-consuming problems. Servers and hard drive space are extremely difficult to come by. Yet user expectations are rising. What can be done in this situation?

    Read the article

  • laptop will not boot after attempting upgrade to 13.10

    - by naerwenya
    I wanted to update my OS to the new 14.04 LTS. I was running 12.04 LTS before and followed the advice that upgrades should be done in steps. I first upgraded to 12.10, which seemed to work fine, and later to 13.10. This was all done through the Software Manager. After the last upgrade, I am no longer able to boot up my computer. The GNU GRUB menu opens up, but after selecting Ubuntu, it just stalls with a blank purple screen. If I select one of the other kernels, it also stalls after "Loading initial ramdisk...". I can't get into the Recovery Menu, either. I'm still rather new to Linux and may have made the situation worse. Unfortunately, nothing has worked yet. I tried reinstalling from a flash drive, and on my first attempt the wizard recognised a previous installation. However, the wizard also didn't like how my partitions were set up (I didn't change anything) and gave an error before closing. I didn't write the error down, but it was about the boot partition. On the next attempt and ever since, the installation wizard has stated that "This computer currently has no detected operating systems." This is strange, because I could see the disk and even access my files when booting up from the USB. At this point, I decided to back up my important files using Dropbox. Before losing all my files, I wanted to try the Boot-Repair tool, which also produced no results, and the files are no longer visible when booting from USB. The link to the Boot-Repair log is at http://paste.ubuntu.com/7457249/. If I then proceed through to the "Something else" installation option, I can see that the partitions still exist. This is what they look like: /dev/sda free space (size indicated 1MB) /dev/sda1 efi (33MB of 98 MB used) /dev/sda2 efi (352634MB of 746330MB used) /dev/sda3 swap (3725MB, none used) free space (0MB) Is there any way I might be able to get my computer to work and preserve my files as well?

    Read the article

  • 10 Essential Tools for building ASP.NET Websites

    - by Stephen Walther
    I recently put together a simple public website created with ASP.NET for my company at Superexpert.com. I was surprised by the number of free tools that I ended up using to put together the website. Therefore, I thought it would be interesting to create a list of essential tools for building ASP.NET websites. These tools work equally well with both ASP.NET Web Forms and ASP.NET MVC. Performance Tools After reading Steve Souders two (very excellent) books on front-end website performance High Performance Web Sites and Even Faster Web Sites, I have been super sensitive to front-end website performance. According to Souders’ Performance Golden Rule: “Optimize front-end performance first, that's where 80% or more of the end-user response time is spent” You can use the tools below to reduce the size of the images, JavaScript files, and CSS files used by an ASP.NET application. 1. Sprite and Image Optimization Framework CSS sprites were first described in an article written for A List Apart entitled CSS sprites: Image Slicing’s Kiss of Death. When you use sprites, you combine multiple images used by a website into a single image. Next, you use CSS trickery to display particular sub-images from the combined image in a webpage. The primary advantage of sprites is that they reduce the number of requests required to display a webpage. Requesting a single large image is faster than requesting multiple small images. In general, the more resources – images, JavaScript files, CSS files – that must be moved across the wire, the slower your website. However, most people avoid using sprites because they require a lot of work. You need to combine all of the images and write just the right CSS rules to display the sub-images. The Microsoft Sprite and Image Optimization Framework enables you to avoid all of this work. The framework combines the images for you automatically. Furthermore, the framework includes an ASP.NET Web Forms control and an ASP.NET MVC helper that makes it easy to display the sub-images. You can download the Sprite and Image Optimization Framework from CodePlex at http://aspnet.codeplex.com/releases/view/50869. The Sprite and Image Optimization Framework was written by Morgan McClean who worked in the office next to mine at Microsoft. Morgan was a scary smart Intern from Canada and we discussed the Framework while he was building it (I was really excited to learn that he was working on it). Morgan added some great advanced features to this framework. For example, the Sprite and Image Optimization Framework supports something called image inlining. When you use image inlining, the actual image is stored in the CSS file. Here’s an example of what image inlining looks like: .Home_StephenWalther_small-jpg { width:75px; height:100px; background: url(data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAEsAAABkCAIAAABB1lpeAAAAB GdBTUEAALGOfPtRkwAAACBjSFJNAACHDwAAjA8AAP1SAACBQAAAfXkAAOmLAAA85QAAGcxzPIV3AAAKL s+zNfREAAAAASUVORK5CYII=) no-repeat 0% 0%; } The actual image (in this case a picture of me that is displayed on the home page of the Superexpert.com website) is stored in the CSS file. If you visit the Superexpert.com website then very few separate images are downloaded. For example, all of the images with a red border in the screenshot below take advantage of CSS sprites: Unfortunately, there are some significant Gotchas that you need to be aware of when using the Sprite and Image Optimization Framework. There are workarounds for these Gotchas. 
I plan to write about these Gotchas and workarounds in a future blog entry. 2. Microsoft Ajax Minifier Whenever possible you should combine, minify, compress, and cache with a far future header all of your JavaScript and CSS files. The Microsoft Ajax Minifier makes it easy to minify JavaScript and CSS files. Don’t confuse minification and compression. You need to do both. According to Souders, you can reduce the size of a JavaScript file by an additional 20% (on average) by minifying it in addition to compressing it. When you minify a JavaScript or CSS file, you use various tricks to reduce the size of the file before you compress the file. For example, you can minify a JavaScript file by replacing long JavaScript variable names with short variable names and removing unnecessary white space and comments. You can minify a CSS file by doing such things as replacing long color names such as #ffffff with shorter equivalents such as #fff. The Microsoft Ajax Minifier was created by Microsoft employee Ron Logan. Internally, this tool was being used by several large Microsoft websites. We also used the tool heavily on the ASP.NET team. I convinced Ron to publish the tool on CodePlex so that everyone in the world could take advantage of it. You can download the tool from the ASP.NET Ajax website and read documentation for the tool here. I created the installer for the Microsoft Ajax Minifier. When creating the installer, I also created a Visual Studio build task to make it easy to automatically minify all of your JavaScript and CSS files whenever you do a build within Visual Studio. Read the Ajax Minifier Quick Start to learn how to configure the build task. 3. ySlow The ySlow tool is a free add-on for Firefox created by Yahoo that enables you to test the front-end of your website. For example, here are the current test results for the Superexpert.com website: The Superexpert.com website has an overall score of B (not perfect but not bad). The ySlow tool is not perfect. For example, the Superexpert.com website received a failing grade of F for not using a Content Delivery Network even though the website uses the Microsoft Ajax Content Delivery Network for JavaScript files such as jQuery. Uptime After publishing a website live to the world, you want to ensure that the website does not encounter any issues and that it stays live. I use the following tools to monitor the Superexpert.com website now that it is live. 4. ELMAH ELMAH stands for Error Logging Modules and Handlers for ASP.NET. ELMAH enables you to record any errors that happen at your website so you can review them in the future. You can download ELMAH for free from the ELMAH project website. ELMAH works great with both ASP.NET Web Forms and ASP.NET MVC. You can configure ELMAH to store errors in a number of different stores including XML files, the Event Log, an Access database, a SQL database, an Oracle database, or in computer RAM. You also can configure ELMAH to email error messages to you when they happen. By default, you can access ELMAH by requesting the elmah.axd page from a website with ELMAH installed. Here’s what the elmah.axd page looks like from the Superexpert.com website (this page is password-protected because secret information can be revealed in an error message): If you click on a particular error message, you can view the original Yellow Screen ASP.NET error message (even when the error message was never displayed to the actual user).
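
As a small illustration of the kind of entry ELMAH records beyond unhandled exceptions, the sketch below uses ELMAH's ErrorSignal API to push a handled exception into the same log that backs elmah.axd. The controller and view names are assumptions made up for this example, not code from the article:

    using System;
    using System.Web.Mvc;
    using Elmah;

    public class OrdersController : Controller
    {
        public ActionResult Index()
        {
            try
            {
                // ... normal page logic ...
                return View();
            }
            catch (Exception ex)
            {
                // Unhandled exceptions are logged automatically by the ELMAH HTTP module;
                // handled ones can still be reported so they show up on elmah.axd.
                ErrorSignal.FromCurrentContext().Raise(ex);
                return View("Error");
            }
        }
    }
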
I installed ELMAH by taking advantage of the new package manager for ASP.NET named NuGet (originally named NuPack). You can read the details about NuGet in the following blog entry by Scott Guthrie. You can download NuGet from CodePlex. 5. Pingdom I use Pingdom to verify that the Superexpert.com website is always up. You can sign up for Pingdom by visiting Pingdom.com. You can use Pingdom to monitor a single website for free. At the Pingdom website, you configure the frequency that your website gets pinged. I verify that the Superexpert.com website is up every 5 minutes. I have the Pingdom service verify that it can retrieve the string “Contact Us” from the website homepage. If your website goes down, you can configure Pingdom so that it sends an email, Twitter, SMS, or iPhone alert. I use the Pingdom iPhone app which looks like this: 6. Host Tracker If your website does go down then you need some way of determining whether it is a problem with your local network or if your website is down for everyone. I use a website named Host-Tracker.com to check how badly a website is down. Here’s what the Host-Tracker website displays for the Superexpert.com website when the website can be successfully pinged from everywhere in the world: Notice that Host-Tracker pinged the Superexpert.com website from 68 locations including Roubaix, France and Scranton, PA. Debugging I mean debugging in the broadest possible sense. I use the following tools when building a website to verify that I have not made a mistake. 7. HTML Spell Checker Why doesn’t Visual Studio have a built-in spell checker? Don’t know – I’ve always found this mysterious. Fortunately, however, a former member of the ASP.NET team wrote a free spell checker that you can use with your ASP.NET pages. I find a spell checker indispensable. It is easy to delude yourself that you are capable of perfect spelling. I’m always super embarrassed when I actually run the spell checking tool and discover all of my spelling mistakes. The fastest way to add the HTML Spell Checker extension to Visual Studio is to select the menu option Tools, Extension Manager within Visual Studio. Click on Online Gallery and search for HTML Spell Checker: 8. IIS SEO Toolkit If people cannot find your website through Google then you should not even bother to create it. Microsoft has a great extension for IIS named the IIS Search Engine Optimization Toolkit that you can use to identify issues with your website that would hurt its page rank. You also can use this tool to quickly create a sitemap for your website that you can submit to Google or Bing. You can even generate the sitemap for an ASP.NET MVC website. Here’s what the report overview for the Superexpert.com website looks like: Notice that the Superexpert.com website had plenty of violations. For example, there are 65 cases in which a page has a broken hyperlink. You can drill into these violations to identify the exact page and location where these violations occur. 9. LinqPad If your ASP.NET website accesses a database then you should be using LINQ to Entities with the Entity Framework. Using LINQ involves some magic. LINQ queries written in C# get converted into SQL queries for you. If you are not careful about how you write your LINQ queries, you could unintentionally build a really badly performing website. LinqPad is a free tool that enables you to experiment with your LINQ queries. It even works with Microsoft SQL CE 4 and Azure. You can use LinqPad to execute a LINQ to Entities query and see the results.
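
Here is a sketch of the sort of LINQ to Entities query you might paste into LinqPad to look at both the results and the SQL it produces. The StoreContext, Customer and Order types are invented for illustration; they are not from the article or any real project:

    using System;
    using System.Data.Entity;   // EF Code First (DbContext/DbSet)
    using System.Linq;

    // Invented model, purely for illustration.
    public class Customer { public int Id { get; set; } public string Name { get; set; } }
    public class Order    { public int Id { get; set; } public int CustomerId { get; set; } public decimal Total { get; set; } }

    public class StoreContext : DbContext
    {
        public DbSet<Customer> Customers { get; set; }
        public DbSet<Order> Orders { get; set; }
    }

    class Program
    {
        static void Main()
        {
            using (var db = new StoreContext())
            {
                // Group orders by customer and keep only the big spenders.
                // Pasting a query like this into LinqPad shows the results
                // alongside the SQL the provider generates for it.
                var bigSpenders =
                    from o in db.Orders
                    group o by o.CustomerId into g
                    where g.Sum(x => x.Total) > 1000m
                    select new { CustomerId = g.Key, Total = g.Sum(x => x.Total) };

                foreach (var row in bigSpenders)
                    Console.WriteLine("{0}: {1}", row.CustomerId, row.Total);
            }
        }
    }
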
You also can use it to see the resulting SQL that gets executed against the database: 10. .NET Reflector I use .NET Reflector daily. The .NET Reflector tool enables you to take any assembly and disassemble the assembly into C# or VB.NET code. You can use .NET Reflector to see the “Source Code” of an assembly even when you do not have the actual source code. You can download a free version of .NET Reflector from the Redgate website. I use .NET Reflector primarily to help me understand what code is doing internally. For example, I used .NET Reflector with the Sprite and Image Optimization Framework to better understand how the MVC Image helper works. Here’s part of the disassembled code from the Image helper class: Summary In this blog entry, I’ve discussed several of the tools that I used to create the Superexpert.com website. These are tools that I use to improve the performance, improve the SEO, verify the uptime, or debug the Superexpert.com website. All of the tools discussed in this blog entry are free. Furthermore, all of these tools work with both ASP.NET Web Forms and ASP.NET MVC. Let me know if there are any tools that you use daily when building ASP.NET websites.

    Read the article

  • Load and Web Performance Testing using Visual Studio Ultimate 2010-Part 3

    - by Tarun Arora
    Welcome back once again. In Part 1 of Load and Web Performance Testing using Visual Studio 2010 I talked about why performance testing the application is important, the test tools available in Visual Studio Ultimate 2010 and various test rig topologies; in Part 2 of Load and Web Performance Testing using Visual Studio 2010 I discussed the details of web performance & load tests as well as why it’s important to follow a goal-based pattern while performance testing your application. In Part 3 I’ll be discussing Test Result Analysis, Test Result Drill-Through, Test Report Generation, Test Run Comparison, the ASP.NET Profiler and some closing thoughts. Test Results – I see some creepy worms! In Part 2 we put together a web performance test and a load test; let’s run the load test to see how the web site responds to the load simulation. While the load test is running you will be able to see close to real-time analysis in the Load Test Analyser window. You can use the Load Test Analyser to conduct load test analysis in three ways: Monitor a running load test - A condensed set of the performance counter data is maintained in memory. To prevent the memory requirements for results from growing unbounded, up to 200 samples for each performance counter are maintained. This includes 100 evenly spaced samples that span the current elapsed time of the run and the most recent 100 samples. After the load test run is completed - The test controller spools all collected performance counter data to a database while the test is running. Additional data, such as timing details and error details, is loaded into the database when the test completes. The performance data for a completed test is loaded from the database and analysed by the Load Test Analyser. Below you can see a screen shot of the summary view; this provides key results in a format that is compact and easy to read. You can also print the load test summary, which is generated after the test has completed or been stopped. Analyse the load test results of a previously run load test – We’ll see this in the section where I discuss the comparison between two test runs. The performance counters can be plotted on the graphs. You also have the option to highlight a selected part of the test and view details, or drill down to the user activity chart where you can hover over to see more details of the test run. Generate Report => Test Run Comparisons The level of reports you can generate using the Load Test Analyser is astonishing. You have the option to create Excel reports and conduct side-by-side analysis of two test results or to track trend analysis. The tool also allows you to export the graph data either to MS Excel or to a CSV file. You can view the ASP.NET profiler report to conduct further analysis as well. View Data and Diagnostic Attachments opens the Choose Diagnostic Data Adapter Attachment dialog box to select an adapter to analyse the result type. For example, you can select an IntelliTrace adapter, click OK and open the IntelliTrace summary for the test agent that was used in the load test. Compare results This creates a set of reports that compares the data from two load test results using tables and bar charts. I have taken these screen shots from the MSDN documentation; I would highly recommend exploring the wealth of knowledge available on MSDN.
Leaving Thoughts While load testing the application with an excessive load for a longer duration of time, I managed to bring IIS to its knees by piling up a huge queue of requests waiting to be processed. This clearly means that IIS had run out of threads, as all the threads were busy processing existing requests. One easy way of fixing this is to increase the default number of allocated threads, but this might escalate the problem. The better suggestion is to try and drill down to the actual root cause of the problem. Whenever the garbage collection runs it stops processing any pages, so all requests that come in during that period are queued up, but realistically the garbage collection completes in a fraction of a second. To understand this better let’s look at the .NET heap: it is divided into the large object heap and the small object heap. Anything greater than 85 KB in size will be allocated on the large object heap; the large object heap is non-compacting, and remember that large objects are expensive to move around, so if you are allocating something in the large object heap, make sure that you really need it! The small object heap, on the other hand, is divided into generations, so all objects that are supposed to be short-lived live in Gen-0, and the long-living objects eventually move to Gen-2 as garbage collection goes through. As you can see in the picture below, all objects smaller than 85 KB are first assigned to Gen-0. When a new object comes in and finds Gen-0 full, the garbage collection process is started; the process checks for all the dead objects, marks them as valid candidates for deletion to free up memory, and promotes all the remaining objects in Gen-0 to Gen-1. So in the future, whenever you clean up Gen-1 you have to clean up Gen-0 as well. When you fill up Gen-0 again, all of Gen-1's dead objects are collected and the rest are moved to Gen-2, and Gen-0 objects are moved to Gen-1 to free up Gen-0, but by this time your garbage collection process has started to take much more time than it usually takes. Now, as I mentioned earlier, when garbage collection is being run all page requests that come in during that period are queued up. This may explain why page requests are getting queued up; apart from this, it could also be the case that you are waiting for a long-running database process to complete. Let’s explore the heap a bit more… What is really a case of crisis is when objects live long enough to make it to Gen-2 and then die; this is definitely a high-cost operation. But sometimes you need objects in memory, for example when you cache data you hold on to the objects because you need to use them right across the user session, which is acceptable. But if you wanted to see what extreme caching can do to your server, then write a simple application that chucks a lot of data into the cache, and run a load test over it for about 10-15 minutes, forcing a lot of data into memory and causing the heap to run out of memory. If you get to such a state where you start running out of memory, IIS, as a mode of recovery, restarts the worker process. It is a great way to free up all your memory in the heap, but this would clear the cache. The problem with this is that if the customer had 10 items in their shopping basket and that data was stored in the application cache, the user's basket will now be empty, forcing them either to get frustrated and go to a competitor website or, if the customer is really patient, give it another try!
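
The generational behaviour described above is easy to observe from a few lines of C#. This is a stand-alone illustration, not code from the article: a small object starts in Gen-0 and is promoted as it survives collections, while an allocation over roughly 85 KB goes straight to the large object heap, which the runtime reports as generation 2 because the LOH is collected together with Gen-2:

    using System;

    class GenerationDemo
    {
        static void Main()
        {
            var small = new object();
            Console.WriteLine("small object: gen {0}", GC.GetGeneration(small));   // 0

            GC.Collect();   // survives one collection -> promoted to Gen-1
            Console.WriteLine("small object: gen {0}", GC.GetGeneration(small));   // 1

            GC.Collect();   // survives another -> promoted to Gen-2
            Console.WriteLine("small object: gen {0}", GC.GetGeneration(small));   // 2

            // ~100 KB: over the ~85 KB threshold, so it lands on the large object heap.
            var large = new byte[100 * 1024];
            Console.WriteLine("large object: gen {0}", GC.GetGeneration(large));   // 2 (LOH)

            GC.KeepAlive(small);
            GC.KeepAlive(large);
        }
    }
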
How can you address this? Well, there are two ways of addressing it. 1. Workaround – An x86 (32-bit) processor only allows a maximum of 4 GB of RAM; this means the machine effectively has around 3.4 GB of RAM available. The OS needs about 1.5 GB of RAM to run efficiently, and IIS and the .NET framework also need their share of memory, leaving you a heap of around 800 MB to play with. Because team builds by default build your application in ‘compile as Any CPU’ mode, the application is built such that it will run in x86 mode if run on an x86 processor and in x64 mode if run on an x64 processor. The problem with this is that not all applications are really x64 compatible, especially if you are using COM objects or external libraries. So, as a quick win, if you compiled your application in x86 mode by changing the ‘compile as any’ selection to ‘compile as x86’ in the team build, you will be able to run your application on an x64 machine in x86 mode (WOW64 – running 32-bit code on 64-bit Windows), and what that means is you could use 8 GB+ worth of RAM; if you take away everything else, your application will roughly get a heap size of at least 4 GB to play with, which is immense. If you need a heap size of more than 4 GB you have either built software for NASA or there is something fundamentally wrong in your application. 2. Solution – Now that you have put a workaround in place, IIS will not restart the worker process that regularly, which means you can take a breather and start working to get to the root cause of this memory leak. But this begs a question: “How do I identify possible memory leaks in my application?” Well, I won’t say that there is one single tool that can tell you where the memory leak is, but trust me, ‘Performance Profiling’ is a great starting point; it definitely gets you started in the right direction. Let’s have a look at how. Performance Wizard - Start the Performance Wizard and select Instrumentation; this lets you measure function call counts and timings. Before running the performance session, right-click the performance session settings and choose Properties from the context menu to bring up the Performance session properties page and, as shown in the screen shot below, check the check boxes in the group ‘.NET memory profiling collection’, namely ‘Collect .NET object allocation information’ and ‘Also collect the .NET Object lifetime information’. Now if you fire off the profiling session on your pages, you will notice that the results allow you to view ‘Object Lifetime’, which shows you the number of objects that made it to Gen-0, Gen-1, Gen-2, the large object heap, etc. Another great feature of the profiler is that if your application has > 5% of cases where objects die right after making it to Gen-2 storage, a threshold alert is generated to warn you. You also have the option to view the most expensive methods, and by capturing the IntelliTrace data you can drill in to narrow down to the line of code that is the root cause of the problem. So now we have seen how crucial memory management is and how easy Visual Studio Ultimate 2010 makes it for us to identify and reproduce the problem with the best-of-breed tools in the product. Caching One of the main ways to improve performance is caching, which basically means you tell the web server that instead of going to the database for each request you keep the data on the web server, and when the user asks for it you serve it from the web server itself. BUT that can have consequences!
Let’s look at some code; trust me, caching code is not very intuitive. I define a cache key for almost all searches made through the common search page and cache the results. The approach works fine: the first time I get the data from the database and the second time the data is served from the cache, a significant performance improvement, EXCEPT when two users try to do the same operation and run into each other. But it is easy to handle this by adding a lock, as you can see in the snippet below. So, as long as a user comes in and finds that the cache is empty, the user takes the lock and starts to build the cache: no more concurrency issues. But let’s say you are processing 10 requests per second; by the time I have locked the operation to get the results from the database, 9 other users came in and found that the cache key is null, so after I have come out and populated the cache they will still go in to get the results again. The application will still be faster because the next set of 10 users and so on would continue to get data from the cache. BUT if we added another null check after locking to build the cache and before the actual call to the db, then the 9 users who follow me would not make the extra trip to the database at all, and that would really increase the performance. But didn’t I say that the code won’t be very intuitive? Maybe you should leave a comment; you don’t want another developer to come in and think “what a fresher, why is he checking the cache key for null twice!” The downside of caching is that you are storing the data outside of the database, and the data could be wrong because updates applied to the database would make the data cached at the web server out of sync. So, how do you invalidate the cache? Well, if you only had one way of updating the data, let’s say only one entry point to the data update, you could write some logic to say that every time new data is entered, set the cache object to null. But this approach will not work as soon as you have several ways of feeding data to the system or your system is scaled out across a farm of web servers. The perfect solution to this is micro caching, which means you cache the query for a set time duration and invalidate the cache after that set duration. The advantage is that every time the user queries for that data within the time span for which you have cached the results, there are no calls made to the database and the data is served right from the server, which makes the response immensely quick. Now, figuring out the appropriate time span for which you micro cache the query results really depends on the application. Let’s say your website gets 10 requests per second; if you retain the cached results for even 1 minute you will have immense performance gains, reducing hits to the database for searching by well over 90%. Ever wondered why, when you go to ebookers.com or expedia.com or yatra.com to book a flight and you click on the book button because the fare seems too exciting, you get an error message telling you that the fare is not valid any more? Yes, exactly => that is a cache failure! These travel sites or price compare engines are not going to hit the database every time you hit the compare button; instead the results will be served from the cache, because the query results are micro cached. It’s a perfect trade-off: by micro caching the results the site gains 100% performance benefits but every once in a while annoys a customer because the fare has expired.
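
To make the ‘lock, check the cache key for null a second time, then micro-cache’ idea concrete, here is a minimal sketch. GetResults, QueryDatabase and the 60-second expiry window are assumptions chosen for illustration; the actual snippet the post refers to is only shown as a screenshot:

    using System;
    using System.Web;
    using System.Web.Caching;

    public static class SearchCache
    {
        private static readonly object _lock = new object();

        public static SearchResults GetResults(string query)
        {
            string key = "search:" + query;
            var cached = (SearchResults)HttpRuntime.Cache[key];
            if (cached != null)
                return cached;                        // served straight from memory

            lock (_lock)
            {
                // Second null check: anyone who queued up behind the lock while the
                // first caller was hitting the database takes the freshly cached copy
                // instead of making another trip.
                cached = (SearchResults)HttpRuntime.Cache[key];
                if (cached != null)
                    return cached;

                cached = QueryDatabase(query);        // the one expensive call
                HttpRuntime.Cache.Insert(
                    key, cached, null,
                    DateTime.UtcNow.AddSeconds(60),   // micro-cache: expires on its own
                    Cache.NoSlidingExpiration);
                return cached;
            }
        }

        private static SearchResults QueryDatabase(string query)
        {
            // Placeholder for the real database search.
            return new SearchResults();
        }
    }

    public class SearchResults { /* fields omitted */ }
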
But the trade-off works in the favour of these sites, as they are still able to process up to 30+ page requests per second, which means they cater to the site traffic while maybe losing 1 customer every once in a while to a competitor who is also using a similar caching technique; what are the odds that the user will not come back to their site sooner or later? Recap Resources Below are some key resources you might like to review. I would highly recommend the documentation, walkthroughs and videos available on MSDN. You can always make use of Fiddler to debug Web Performance Tests. Some community test extensions and plug-ins available on CodePlex might also be of interest to you. The Road Ahead Thank you for taking the time out to read this blog post; you may also want to read Part I and Part II if you haven’t so far. If you enjoyed the post, remember to subscribe to http://feeds.feedburner.com/TarunArora. Questions/feedback/suggestions? Please leave a comment. Next up is ‘Load Testing in the cloud’, where I’ll be exploring the possibilities of running test controllers/agents in the cloud. See you on the other side! Thank you!

    Read the article

  • 64-bit Archives Needed

    - by user9154181
    A little over a year ago, we received a question from someone who was trying to build software on Solaris. He was getting errors from the ar command when creating an archive. At that time, the ar command on Solaris was a 32-bit command. There was more than 2GB of data, and the ar command was hitting the file size limit for a 32-bit process that doesn't use the largefile APIs. Even in 2011, 2GB is a very large amount of code, so we had not heard this one before. Most of our toolchain was extended to handle 64-bit sized data back in the 1990's, but archives were not changed, presumably because there was no perceived need for it. Since then of course, programs have continued to get larger, and in 2010, the time had finally come to investigate the issue and find a way to provide for larger archives. As part of that process, I had to do a deep dive into the archive format, and also do some Unix archeology. I'm going to record what I learned here, to document what Solaris does, and in the hope that it might help someone else trying to solve the same problem for their platform. Archive Format Details Archives are hardly cutting edge technology. They are still used of course, but their basic form hasn't changed in decades. Other than to fix a bug, which is rare, we don't tend to touch that code much. The archive file format is described in /usr/include/ar.h, and I won't repeat the details here. Instead, here is a rough overview of the archive file format, implemented by System V Release 4 (SVR4) Unix systems such as Solaris: Every archive starts with a "magic number". This is a sequence of 8 characters: "!<arch>\n". The magic number is followed by 1 or more members. A member starts with a fixed header, defined by the ar_hdr structure in/usr/include/ar.h. Immediately following the header comes the data for the member. Members must be padded at the end with newline characters so that they have even length. The requirement to pad members to an even length is a dead giveaway as to the age of the archive format. It tells you that this format dates from the 1970's, and more specifically from the era of 16-bit systems such as the PDP-11 that Unix was originally developed on. A 32-bit system would have required 4 bytes, and 64-bit systems such as we use today would probably have required 8 bytes. 2 byte alignment is a poor choice for ELF object archive members. 32-bit objects require 4 byte alignment, and 64-bit objects require 64-bit alignment. The link-editor uses mmap() to process archives, and if the members have the wrong alignment, we have to slide (copy) them to the correct alignment before we can access the ELF data structures inside. The archive format requires 2 byte padding, but it doesn't prohibit more. The Solaris ar command takes advantage of this, and pads ELF object members to 8 byte boundaries. Anything else is padded to 2 as required by the format. The archive header (ar_hdr) represents all numeric values using an ASCII text representation rather than as binary integers. This means that an archive that contains only text members can be viewed using tools such as cat, more, or a text editor. The original designers of this format clearly thought that archives would be used for many file types, and not just for objects. Things didn't turn out that way of course — nearly all archives contain relocatable objects for a single operating system and machine, and are used primarily as input to the link-editor (ld). 
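
As a rough, stand-alone illustration of the layout just described (the 8-character magic string, a 60-byte ASCII member header holding name, date, uid, gid, mode and size fields, and even-length padding), here is a small sketch that lists member names and sizes. It is written in C# purely to keep the listing self-contained; it is not part of the Solaris tools, and it ignores the special members discussed next:

    using System;
    using System.IO;
    using System.Text;

    class ArchiveWalker
    {
        static void Main(string[] args)
        {
            using (var f = File.OpenRead(args[0]))
            {
                if (ReadAscii(f, 8) != "!<arch>\n")
                    throw new InvalidDataException("not an archive");

                while (f.Position < f.Length)
                {
                    // 60-byte member header, all fields stored as ASCII text.
                    string name = ReadAscii(f, 16).TrimEnd();      // SVR4 names carry a trailing '/'
                    ReadAscii(f, 12 + 6 + 6 + 8);                  // date, uid, gid, mode
                    long size = long.Parse(ReadAscii(f, 10).Trim());
                    ReadAscii(f, 2);                               // header terminator

                    Console.WriteLine("{0,-20} {1,12} bytes", name, size);

                    f.Seek(size + (size % 2), SeekOrigin.Current); // members are padded to even length
                }
            }
        }

        static string ReadAscii(Stream s, int count)
        {
            var buf = new byte[count];
            int read = 0;
            while (read < count)
            {
                int n = s.Read(buf, read, count - read);
                if (n == 0) throw new EndOfStreamException();
                read += n;
            }
            return Encoding.ASCII.GetString(buf);
        }
    }
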
Archives can have special members that are created by the ar command rather than being supplied by the user. These special members are all distinguished by having a name that starts with the slash (/) character. This is an unambiguous marker that says that the user could not have supplied it. The reason for this is that regular archive members are given the plain name of the file that was inserted to create them, and any path components are stripped off. Slash is the delimiter character used by Unix to separate path components, and as such cannot occur within a plain file name. The ar command hides the special members from you when you list the contents of an archive, so most users don't know that they exist. There are only two possible special members: A symbol table that maps ELF symbols to the object archive member that provides it, and a string table used to hold member names that exceed 15 characters. The '/' convention for tagging special members provides room for adding more such members should the need arise. As I will discuss below, we took advantage of this fact to add an alternate 64-bit symbol table special member which is used in archives that are larger than 4GB. When an archive contains ELF object members, the ar command builds a special archive member known as the symbol table that maps all ELF symbols in the object to the archive member that provides it. The link-editor uses this symbol table to determine which symbols are provided by the objects in that archive. If an archive has a symbol table, it will always be the first member in the archive, immediately following the magic number. Unlike member headers, symbol tables do use binary integers to represent offsets. These integers are always stored in big-endian format, even on a little endian host such as x86. The archive header (ar_hdr) provides 15 characters for representing the member name. If any member has a name that is longer than this, then the real name is written into a special archive member called the string table, and the member's name field instead contains a slash (/) character followed by a decimal representation of the offset of the real name within the string table. The string table is required to precede all normal archive members, so it will be the second member if the archive contains a symbol table, and the first member otherwise. The archive format is not designed to make finding a given member easy. Such operations move through the archive from front to back examining each member in turn, and run in O(n) time. This would be bad if archives were commonly used in that manner, but in general, they are not. Typically, the ar command is used to build an new archive from scratch, inserting all the objects in one operation, and then the link-editor accesses the members in the archive in constant time by using the offsets provided by the symbol table. Both of these operations are reasonably efficient. However, listing the contents of a large archive with the ar command can be rather slow. Factors That Limit Solaris Archive Size As is often the case, there was more than one limiting factor preventing Solaris archives from growing beyond the 32-bit limits of 2GB (32-bit signed) and 4GB (32-bit unsigned). These limits are listed in the order they are hit as archive size grows, so the earlier ones mask those that follow. The original Solaris archive file format can handle sizes up to 4GB without issue. However, the ar command was delivered as a 32-bit executable that did not use the largefile APIs. 
As such, the ar command itself could not create a file larger than 2GB. One can solve this by building ar with the largefile APIs which would allow it to reach 4GB, but a simpler and better answer is to deliver a 64-bit ar, which has the ability to scale well past 4GB. Symbol table offsets are stored as 32-bit big-endian binary integers, which limits the maximum archive size to 4GB. To get around this limit requires a different symbol table format, or an extension mechanism to the current one, similar in nature to the way member names longer than 15 characters are handled in member headers. The size field in the archive member header (ar_hdr) is an ASCII string capable of representing a 32-bit unsigned value. This places a 4GB size limit on the size of any individual member in an archive. In considering format extensions to get past these limits, it is important to remember that very few archives will require the ability to scale past 4GB for many years. The old format, while no beauty, continues to be sufficient for its purpose. This argues for a backward compatible fix that allows newer versions of Solaris to produce archives that are compatible with older versions of the system unless the size of the archive exceeds 4GB. Archive Format Differences Among Unix Variants While considering how to extend Solaris archives to scale to 64-bits, I wanted to know how similar archives from other Unix systems are to those produced by Solaris, and whether they had already solved the 64-bit issue. I've successfully moved archives between different Unix systems before with good luck, so I knew that there was some commonality. If it turned out that there was already a viable defacto standard for 64-bit archives, it would obviously be better to adopt that rather than invent something new. The archive file format is not formally standardized. However, the ar command and archive format were part of the original Unix from Bell Labs. Other systems started with that format, extending it in various often incompatible ways, but usually with the same common shared core. Most of these systems use the same magic number to identify their archives, despite the fact that their archives are not always fully compatible with each other. It is often true that archives can be copied between different Unix variants, and if the member names are short enough, the ar command from one system can often read archives produced on another. In practice, it is rare to find an archive containing anything other than objects for a single operating system and machine type. Such an archive is only of use on the type of system that created it, and is only used on that system. This is probably why cross platform compatibility of archives between Unix variants has never been an issue. Otherwise, the use of the same magic number in archives with incompatible formats would be a problem. I was able to find information for a number of Unix variants, described below. These can be divided roughly into three tribes, SVR4 Unix, BSD Unix, and IBM AIX. Solaris is a SVR4 Unix, and its archives are completely compatible with those from the other members of that group (GNU/Linux, HP-UX, and SGI IRIX). AIX AIX is an exception to rule that Unix archive formats are all based on the original Bell labs Unix format. It appears that AIX supports 2 formats (small and big), both of which differ in fundamental ways from other Unix systems: These formats use a different magic number than the standard one used by Solaris and other Unix variants. 
They include support for removing archive members from a file without reallocating the file, marking dead areas as unused, and reusing them when new archive items are inserted. They have a special table of contents member (File Member Header) which lets you find out everything that's in the archive without having to actually traverse the entire file. Their symbol table members are quite similar to those from other systems though. Their member headers are doubly linked, containing offsets to both the previous and next members. Of the Unix systems described here, AIX has the only format I saw that will have reasonable insert/delete performance for really large archives. Everyone else has O(n) performance, and are going to be slow to use with large archives. BSD BSD has gone through 4 versions of archive format, which are described in their manpage. They use the same member header as SVR4, but their symbol table format is different, and their scheme for long member names puts the name directly after the member header rather than into a string table. GNU/Linux The GNU toolchain uses the SVR4 format, and is compatible with Solaris. HP-UX HP-UX seems to follow the SVR4 model, and is compatible with Solaris. IRIX IRIX has 32 and 64-bit archives. The 32-bit format is the standard SVR4 format, and is compatible with Solaris. The 64-bit format is the same, except that the symbol table uses 64-bit integers. IRIX assumes that an archive contains objects of a single ELFCLASS/MACHINE, and any archive containing ELFCLASS64 objects receives a 64-bit symbol table. Although they only use it for 64-bit objects, nothing in the archive format limits it to ELFCLASS64. It would be perfectly valid to produce a 64-bit symbol table in an archive containing 32-bit objects, text files, or anything else. Tru64 Unix (Digital/Compaq/HP) Tru64 Unix uses a format much like ours, but their symbol table is a hash table, making specific symbol lookup much faster. The Solaris link-editor uses archives by examining the entire symbol table looking for unsatisfied symbols for the link, and not by looking up individual symbols, so there would be no benefit to Solaris from such a hash table. The Tru64 ld must use a different approach in which the hash table pays off for them. Widening the existing SVR4 archive symbol tables rather than inventing something new is the simplest path forward. There is ample precedent for this approach in the ELF world. When ELF was extended to support 64-bit objects, the approach was largely to take the existing data structures, and define 64-bit versions of them. We called the old set ELF32, and the new set ELF64. My guess is that there was no need to widen the archive format at that time, but had there been, it seems obvious that this is how it would have been done. The Implementation of 64-bit Solaris Archives As mentioned earlier, there was no desire to improve the fundamental nature of archives. They have always had O(n) insert/delete behavior, and for the most part it hasn't mattered. AIX made efforts to improve this, but those efforts did not find widespread adoption. For the purposes of link-editing, which is essentially the only thing that archives are used for, the existing format is adequate, and issues of backward compatibility trump the desire to do something technically better. Widening the existing symbol table format to 64-bits is therefore the obvious way to proceed. For Solaris 11, I implemented that, and I also updated the ar command so that a 64-bit version is run by default. 
This eliminates the 2 most significant limits to archive size, leaving only the limit on an individual archive member. We only generate a 64-bit symbol table if the archive exceeds 4GB, or when the new -S option to the ar command is used. This maximizes backward compatibility, as an archive produced by Solaris 11 is highly likely to be less than 4GB in size, and will therefore employ the same format understood by older versions of the system. The main reason for the existence of the -S option is to allow us to test the 64-bit format without having to construct huge archives to do so. I don't believe it will find much use outside of that. Other than the new ability to create and use extremely large archives, this change is largely invisible to the end user. When reading an archive, the ar command will transparently accept either form of symbol table. Similarly, the ELF library (libelf) has been updated to understand either format. Users of libelf (such as the link-editor ld) do not need to be modified to use the new format, because these changes are encapsulated behind the existing functions provided by libelf. As mentioned above, this work did not lift the limit on the maximum size of an individual archive member. That limit remains fixed at 4GB for now. This is not because we think objects will never get that large, for the history of computing says otherwise. Rather, this is based on an estimation that single relocatable objects of that size will not appear for a decade or two. A lot can change in that time, and it is better not to overengineer things by writing code that will sit and rot for years without being used. It is not too soon however to have a plan for that eventuality. When the time comes when this limit needs to be lifted, I believe that there is a simple solution that is consistent with the existing format. The archive member header size field is an ASCII string, like the name, and as such, the overflow scheme used for long names can also be used to handle the size. The size string would be placed into the archive string table, and its offset in the string table would then be written into the archive header size field using the same format "/ddd" used for overflowed names.
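
To round this out, here is a sketch of reading the 32-bit symbol table member described earlier. It assumes the conventional SVR4 layout for the special "/" member (a 4-byte big-endian symbol count, then one 4-byte big-endian member offset per symbol, then the NUL-terminated symbol names); the 64-bit variant described in the post widens those integers. This is illustrative C#, not the Solaris implementation:

    using System;
    using System.IO;
    using System.Text;

    class SymbolTableSketch
    {
        static void Main(string[] args)
        {
            using (var r = new BinaryReader(File.OpenRead(args[0])))
            {
                if (Encoding.ASCII.GetString(r.ReadBytes(8)) != "!<arch>\n")
                    throw new InvalidDataException("not an archive");

                // The symbol table, when present, is the first member and is named "/".
                string name = Encoding.ASCII.GetString(r.ReadBytes(16)).TrimEnd();
                r.ReadBytes(12 + 6 + 6 + 8);                                  // date, uid, gid, mode
                long size = long.Parse(Encoding.ASCII.GetString(r.ReadBytes(10)).Trim());
                r.ReadBytes(2);                                               // header terminator
                if (name != "/") { Console.WriteLine("no symbol table"); return; }

                byte[] data = r.ReadBytes((int)size);

                // Big-endian count, big-endian offsets, then the symbol names back to back.
                uint count = ReadBigEndian(data, 0);
                int namePos = 4 + (int)count * 4;
                for (int i = 0; i < count; i++)
                {
                    uint memberOffset = ReadBigEndian(data, 4 + i * 4);
                    int end = Array.IndexOf(data, (byte)0, namePos);
                    string symbol = Encoding.ASCII.GetString(data, namePos, end - namePos);
                    namePos = end + 1;
                    Console.WriteLine("{0} -> member at offset {1}", symbol, memberOffset);
                }
            }
        }

        static uint ReadBigEndian(byte[] b, int pos)
        {
            return ((uint)b[pos] << 24) | ((uint)b[pos + 1] << 16) | ((uint)b[pos + 2] << 8) | b[pos + 3];
        }
    }
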

    Read the article

  • CodePlex Daily Summary for Wednesday, November 09, 2011

    CodePlex Daily Summary for Wednesday, November 09, 2011Popular ReleasesMapWindow 4: MapWindow GIS v4.8.6 - Final release - 64Bit: What’s New in 4.8.6 (Final release)A few minor issues have been fixed What’s New in 4.8.5 (Beta release)Assign projection tool. (Sergei Leschinsky) Projection dialects. (Sergei Leschinsky) Projections database converted to SQLite format. (Sergei Leschinsky) Basic code for database support - will be developed further (ShapefileDataClient class, IDataProvider interface). (Sergei Leschinsky) 'Export shapefile to database' tool. (Sergei Leschinsky) Made the GEOS library static. geos.dl...NewLife XCode ??????: XCode v8.2.2011.1107、XCoder v4.5.2011.1108: v8.2.2011.1107 ?IEntityOperate.Create?Entity.CreateInstance??????forEdit,????????(FindByKeyForEdit)???,???false ??????Entity.CreateInstance,????forEdit,???????????????????? v8.2.2011.1103 ??MS????,??MaxMin??(????????)、NotIn??(????)、?Top??(??NotIn)、RowNumber??(?????) v8.2.2011.1101 SqlServer?????????DataPath,?????????????????????? Oracle?????????DllPath,????OCI??,???????????ORACLE_HOME?? Oracle?????XCode.Oracle.IsUseOwner,???????????Ow...Facebook C# SDK: v5.3.2: This is a RTW release which adds new features and bug fixes to v5.2.1. Query/QueryAsync methods uses graph api instead of legacy rest api. removed dependency from Code Contracts enabled Task Parallel Support in .NET 4.0+ (experimental) added support for early preview for .NET 4.5 (binaries not distributed in codeplex nor nuget.org, will need to manually build from Facebook-Net45.sln) added additional method overloads for .NET 4.5 to support IProgress<T> for upload progress added ne...Delete Inactive TS Ports: List and delete the Inactive TS Ports: List and delete the Inactive TS Ports - The InactiveTSPortList.EXE accepts command line arguments The InactiveTSPortList.Standalone.WithoutPrompt.exe runs as a standalone exe without the need for any command line arguments.ClosedXML - The easy way to OpenXML: ClosedXML 0.60.0: Added almost full support for auto filters (missing custom date filters). See examples Filter Values, Custom Filters Fixed issues 7016, 7391, 7388, 7389, 7198, 7196, 7194, 7186, 7067, 7115, 7144Microsoft Research Boogie: Nightly builds: This download category contains automatically released nightly builds, reflecting the current state of Boogie's development. We try to make sure each nightly build passes the test suite. If you suspect that was not the case, please try the previous nightly build to see if that really is the problem. Also, please see the installation instructions.GoogleMap Control: GoogleMap Control 6.0: Major design changes to the control in order to achieve better scalability and extensibility for the new features comming with GoogleMaps API. GoogleMap control switched to GoogleMaps API v3 and .NET 4.0. GoogleMap control is 100% ScriptControl now, it requires ScriptManager to be registered on the pages where and before it is used. Markers, polylines, polygons and directions were implemented as ExtenderControl, instead of being inner properties of GoogleMap control. Better perfomance. 
Better...WDTVHubGen - Adds Metadata, thumbnails and subtitles to WDTV Live Hubs: V2.1: Version 2.1 (click on the right) this uses V4.0 of .net Version 2.1 adds the following features: (apologize if I forget some, added a lot of little things) Manual Lookup with TV or Movie (finally huh!), you can look up a movie or TV episode directly, you can right click on anythign, and choose manual lookup, then will allow you to type anything you want to look up and it will assign it to the file you right clicked. No Rename: a very popular request, this is an option you can set so that t...SubExtractor: Release 1020: Feature: added "baseline double quotes" character to selector box Feature: added option to save SRT files as ANSI (instead of previous UTF-8 only) Feature: made "Save Sup files to Source directory" apply to both Sup and Idx source files. Fix: removed SDH text (...) or [...] that is split over 2 lines Fix: better decision-making in when to prefix a line with a '-' because SDH was removedAcDown????? - Anime&Comic Downloader: AcDown????? v3.6.1: ?? ● AcDown??????????、??????,??????????????????????,???????Acfun、Bilibili、???、???、???、Tucao.cc、SF???、?????80????,???????????、?????????。 ● AcDown???????????????????????????,???,???????????????????。 ● AcDown???????C#??,????.NET Framework 2.0??。?????"Acfun?????"。 ????32??64? Windows XP/Vista/7 ????????????? ??:????????Windows XP???,?????????.NET Framework 2.0???(x86)?.NET Framework 2.0???(x64),?????"?????????"??? ??????????????,??????????: ??"AcDown?????"????????? ?? v3.6.1?? ??.hlv...Track Folder Changes: Track Folder Changes 1.1: Fixed exception when right-clicking the root nodeKinect Paint: Kinect Paint 1.1: Updated for Kinect for Windows SDK v1.0 Beta 2!Kinect Mouse Cursor: Kinect Mouse Cursor 1.1: Updated for Kinect for Windows SDK v1.0 Beta 2!Coding4Fun Kinect Toolkit: Coding4Fun Kinect Toolkit 1.1: Updated for Kinect for Windows SDK v1.0 Beta 2!Async Executor: 1.0: Source code of the AsyncExecutorMedia Companion: MC 3.421b Weekly: Ensure .NET 4.0 Full Framework is installed. (Available from http://www.microsoft.com/download/en/details.aspx?id=17718) Ensure the NFO ID fix is applied when transitioning from versions prior to 3.416b. (Details here) TV Show Resolutions... Fix to show the season-specials.tbn when selecting an episode from season 00. Before, MC would try & load season00.tbn Fix for issue #197 - new show added by 'Manually Add Path' not being picked up. Also made non-visible the same thing in Root Folders...Nearforums - ASP.NET MVC forum engine: Nearforums v7.0: Version 7.0 of Nearforums, the ASP.NET MVC Forum Engine, containing new features: UI: Flexible layout to handle both list and table-like template layouts. Theming - Visual choice of themes: Deliver some templates on installation, export/import functionality, preview. Allow site owners to choose default list sort order for the forums. Forum latest activity. Visit the project Roadmap for more details. Webdeploy packages sha1 checksum: e6bb913e591543ab292a753d1a16cdb779488c10?????????? - ????????: All-In-One Code Framework ??? 
2011-11-02: http://download.codeplex.com/Project/Download/FileDownload.aspx?ProjectName=1codechs&DownloadId=216140 ??????,11??,?????20????Microsoft OneCode Sample,????6?Program Language Sample,2?Windows Base Sample,2?GDI+ Sample,4?Internet Explorer Sample?6?ASP.NET Sample。?????????????。 ????,?????。http://i3.codeplex.com/Project/Download/FileDownload.aspx?ProjectName=1code&DownloadId=128165 Program Language CSImageFullScreenSlideShow VBImageFullScreenSlideShow CSDynamicallyBuildLambdaExpressionWithFie...Microsoft Media Platform: Player Framework: MMP Player Framework 2.6 (Silverlight and WP7): Additional DownloadsSMFv2.6 Full Installer (MSI) - This will install everything you need in order to develop your own SMF player application, including the IIS Smooth Streaming Client. It only includes the assemblies. If you want the source code please follow the link above. Smooth Streaming Sample Player - This is a pre-built player that includes support for IIS Smooth Streaming. You can configure the player to playback your content by simplying editing a configuration file - no need to co...Python Tools for Visual Studio: 1.1 Alpha: We’re pleased to announce the release of Python Tools for Visual Studio 1.1 Alpha. Python Tools for Visual Studio (PTVS) is an open-source plug-in for Visual Studio which supports programming with the Python programming language. This release includes new core IDE features, a couple of new sample libraries for interacting with Kinect and Excel, and many bug fixes for issues reported since the release of 1.0. For the core IDE features we’ve added many new features which improve the basic edit...New ProjectsBBCode Orchard module: Makes the usage of various BBCodes available by adding a new text flavor (bbcode)Binary to Bytes: Covert a binary file to a .byte declaration for assembler systems.BrainfuckOS: -- English -- An operating system developed with Cosmos in C# and runnning special brainfuck-code. -- Deutsch -- Ein mit Cosmos in C# entwickeltes Betriebssystem, welches spezialisierten Brainfuck-Code ausführt.Cardiff WPUG: This is a Windows Phone 7 app, written using C# and using the MVVMLight framework. This app is intended as a training app for anyone interested in programming for Windows Phone and looking for a place to start. The app is used to show upcoming user group meetings, showcase members apps and show the latest news from the Windows Phone User Group in the UK. I'll be keeping this app up to date with the latest framework and will also be submitting versions to the marketplace (the current ve...Chapters2Markers: A simple utility to convert from the chapters format used by mkvextract to the marker format used by Expression Encoder.Confirm Leave Orchard Module: Add a content part that when attached to types, displays a confirm message when the user wants to leave the page despite having modified content in the editor.cp2011project: cp2011 projectCP3046 Sample Project in Beijing: This is a sample for CP3046 ICT Project in Beijing.Current Cost plug-in for HouseBot: This plug-in will interface with cc128 current cost module and display in HouseBot automation software. Please note that it requires the c# wrapper to work found here: http://www.housebot.com/forums/viewtopic.php?f=4&t=856395Delete Inactive TS Ports: The presence of Inactive TS Ports causes systems to hang or become sluggish and unresponsive. 
Printer redirection also suffers when there are a lot of Inactive TS Ports presents in the machine.Demo Team Exploer: demoDNN Quick Form: DNN Quick Form is a module that makes creating simple contact forms extremely easy. This is designed for those that extra flexibility when creating a simple contact form. Build your own form in the environment of your choice. Use all the standard ASP.NET Form controls to build the perfect form. The initial release only supports emailing the form to designated users. REQUIRES DotNetNuke 6.1Duet Enterprise: Create external list item workflow activity for SPD: This workflow activity for SharePoint designer creates item in exteral list and set list item property values from activity properties. Initially I develop this workflow activity to use it with Duet Enterprise. It creates new SAP entity. This activity utilize BCS object model.EntGP: Este es proyecto es para gestionar proyectosFly2Cloud: fly2cloud use for upload all file in folder to cloud blob storage with '/' delimiters. Usage 1. open _Fly2Cloud.exe.config_ for edit _ACCOUNTNAME_ & _ACCESSKEY_ 2. excute -> fly2cloud x:\xxx containerFolders size WPF: Simple WPF application to graphical show the occupied space by files and folders.Jogo: Jogo das coresLast Light: Last LightmEdit: simple text editor with notes organized with help of tree-view control.M-Files Autotask Connector: Connector project to read data from Autotask to M-Files using the connections to external databases. Using the connector you can e.g. replicate accounts, contacts and projects from Autotask to M-Files.My KB: Just my KBmysoccerdata: Allow owner to collect soccer schedule as matches going on around the worldNGS: Sistema ERP para agronegócio.NTerm: An open source .NET based terminal emulator.PC Trabalho 1 .Net: PC Trabalho 1 .NetPega Topeira: Jogo desenvolvido como projeto final do curso de C# da UFSCar Sorocaba (2011).Sequence Quality Control Studio (SeQCoS): Sequence Quality Control Studio (SeQCoS) is an open source .NET software suite designed to perform quality control (QC) of massively parallel sequencing reads. It includes tools for evaluating sequence and base quality of reads, as well as a set of basic post-QC sequence manipulation tools. SeQCoS was written in C# and integrates functionality provided by .NET Bio and Sho libraries.series operation system: series operation system. contact:694643393 ??SharePoint MUI Resource File Helper: This simple Windows Application allows you to easily compare SharePoint Resource files, allowing you to easily replicate keys across various files.SSIS Business Rules Component: A SSIS Custom Component that allows for the development and application of complex and hierarchical business rules within the SSIS data flow.TestBox: testbox

    Read the article

  • .exe File becomes corrupted when downloaded from server

    - by Kerri
    Firstly: I'm a lowly web designer who knows just enough PHP to be dangerous and just enough about server administration to be, well, nothing. I probably won't understand you unless you're very clear! The setup: I've set up a website where the client uploads files to a specific directory, and those files are made available, through PHP, for download by users. The files are generally executable files over 50MB. The client does not want them zipped, as they feel their users aren't savvy enough to unzip them. I'm using the PHP below to force a download dialogue box and hide the directory where the files are located. It's a Linux server, if that makes a difference. The problem: There is a certain file that becomes corrupt after the user tries to download it. It is an executable file, but when it's clicked on, a blank DOS window opens up. The original file, prior to download, opens perfectly. There are several other similar files that go through the same exact download procedure, and all of those work just fine. Things I've tried: I've tried uploading the file zipped, then unzipping it on the server to make sure it wasn't becoming corrupt during upload, and no luck. I've also compared the binary code of the original file to the downloaded file that doesn't work, and they're exactly the same (so the PHP isn't accidentally inserting anything extra into the file). Could it be an issue with the headers in my downloadFile function? I really am not sure how to troubleshoot this one… This is the download PHP, if it's relevant ($filenamereplace is defined elsewhere):

        downloadFile("../DOWNLOADS/files/$filenamereplace","$filenamereplace");

        function downloadFile($file,$filename){
            if(file_exists($file)) {
                header('Content-Description: File Transfer');
                header('Content-Type: application/octet-stream');
                header('Content-Disposition: attachment; filename="'.$filename.'"');
                header('Content-Transfer-Encoding: binary');
                header('Expires: 0');
                header('Cache-Control: must-revalidate, post-check=0, pre-check=0');
                header('Pragma: public');
                header('Content-Length: ' . filesize($file));
                @ flush();
                readfile($file);
                exit;
            }
        }

    Read the article

  • Silverlight ViewBase in separate assembly - possible?

    - by Mark
    I have all my views in a project inheriting from a ViewBase class that inherits from UserControl. In my XAML I reference it thus: <f:ViewBase x:Class="Forte.UI.Modules.Configure.Views.AddNewEmployeeView" xmlns:f="clr-namespace:Forte.UI.Modules.Configure.Views" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" It works fine. Now I have moved the ViewBase to another project (so I can refernce it from multiple projects) so I reference it like: <f:ViewBase x:Class="Forte.UI.Modules.Configure.Views.AddNewEmployeeView" xmlns:f="clr-namespace:Forte.UI.Modules.Common.Views;assembly=Forte.UI.Modules.Common" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" This works fine when I run from the IDE but when I run the same sln from MSBuild it gives a warning: "H:\dev\ExternalCopy\Code\UI\Modules\Configure\Forte.UI.Modules.Configure.csproj" (default target) (10:12) - (ValidateXaml target) - H:\dev\ExternalCopy\Code\UI\Modules\Configure\Views\AddNewEmployee\AddNewEmployeeView.xaml(1,2,1,2): warning : The tag 'ViewBase' does not exist in XML namespace 'clr-namespace:Forte.UI.Modules.Common.Views;assembly=Forte.UI.Modules.Common'. Then fails with: "H:\dev\ExternalCopy\Code\UI\Modules\Configure\Forte.UI.Modules.Configure.csproj" (default target) (10:12) - (ValidateXaml target) - C:\Program Files\MSBuild\Microsoft\Silverlight\v3.0\Microsoft.Silverlight.Common.targets(210,9): error MSB4018: The "ValidateXaml" task failed unexpectedly.\r C:\Program Files\MSBuild\Microsoft\Silverlight\v3.0\Microsoft.Silverlight.Common.targets(210,9): er ror MSB4018: System.NullReferenceException: Object reference not set to an instance of an object.\r C:\Program Files\MSBuild\Microsoft\Silverlight\v3.0\Microsoft.Silverlight.Common.targets(210,9): er ror MSB4018: at MS.MarkupCompiler.ValidationPass.ValidateXaml(String fileName, Assembly[] assemb lies, Assembly callingAssembly, TaskLoggingHelper log, Boolean shouldThrow)\r C:\Program Files\MSBuild\Microsoft\Silverlight\v3.0\Microsoft.Silverlight.Common.targets(210,9): er ror MSB4018: at Microsoft.Silverlight.Build.Tasks.ValidateXaml.XamlValidator.Execute()\r C:\Program Files\MSBuild\Microsoft\Silverlight\v3.0\Microsoft.Silverlight.Common.targets(210,9): er ror MSB4018: at Microsoft.Silverlight.Build.Tasks.ValidateXaml.XamlValidator.Execute()\r C:\Program Files\MSBuild\Microsoft\Silverlight\v3.0\Microsoft.Silverlight.Common.targets(210,9): er ror MSB4018: at Microsoft.Silverlight.Build.Tasks.ValidateXaml.Execute()\r C:\Program Files\MSBuild\Microsoft\Silverlight\v3.0\Microsoft.Silverlight.Common.targets(210,9): er ror MSB4018: at Microsoft.Build.BuildEngine.TaskEngine.ExecuteInstantiatedTask(EngineProxy engin eProxy, ItemBucket bucket, TaskExecutionMode howToExecuteTask, ITask task, Boolean& taskResult) Any ideas what might be causing this behaviour? Using Silverlight 3 Here is a cut down version of the MSBuild file that fails to build the sln that builds fine in the IDE (sorry couldn't get it to format here): <?xml version="1.0" encoding="utf-8" ?> <Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003" DefaultTargets="Compile"> <ItemGroup> <ProjectToBuild Include="..\UI\Forte.UI.sln"> <Properties>Configuration=Debug</Properties> </ProjectToBuild> </ItemGroup> <Target Name="Compile"> <MSBuild Projects="@(ProjectToBuild)"></MSBuild> </Target> </Project> Thanks for any help!

    Read the article

  • Django, want to upload either image (ImageField) or file (FileField)

    - by Serg
    I have a form in my html page, that prompts user to upload File or Image to the server. I want to be able to upload ether file or image. Let's say if user choose file, image should be null, and vice verso. Right now I can only upload both of them, without error. But If I choose to upload only one of them (let's say I choose image) I will get an error: "Key 'attachment' not found in <MultiValueDict: {u'image': [<InMemoryUploadedFile: police.jpg (image/jpeg)>]}>" models.py: #Description of the file class FileDescription(models.Model): TYPE_CHOICES = ( ('homework', 'Homework'), ('class', 'Class Papers'), ('random', 'Random Papers') ) subject = models.ForeignKey('Subjects', null=True, blank=True) subject_name = models.CharField(max_length=100, unique=False) category = models.CharField(max_length=100, unique=False, blank=True, null=True) file_type = models.CharField(max_length=100, choices=TYPE_CHOICES, unique=False) file_uploaded_by = models.CharField(max_length=100, unique=False) file_name = models.CharField(max_length=100, unique=False) file_description = models.TextField(unique=False, blank=True, null=True) file_creation_time = models.DateTimeField(editable=False) file_modified_time = models.DateTimeField() attachment = models.FileField(upload_to='files', blank=True, null=True, max_length=255) image = models.ImageField(upload_to='files', blank=True, null=True, max_length=255) def __unicode__(self): return u'%s' % (self.file_name) def get_fields(self): return [(field, field.value_to_string(self)) for field in FileDescription._meta.fields] def filename(self): return os.path.basename(self.image.name) def category_update(self): category = self.file_name return category def save(self, *args, **kwargs): if self.category is None: self.category = FileDescription.category_update(self) for field in self._meta.fields: if field.name == 'image' or field.name == 'attachment': field.upload_to = 'files/%s/%s/' % (self.file_uploaded_by, self.file_type) if not self.id: self.file_creation_time = datetime.now() self.file_modified_time = datetime.now() super(FileDescription, self).save(*args, **kwargs) forms.py class ContentForm(forms.ModelForm): file_name =forms.CharField(max_length=255, widget=forms.TextInput(attrs={'size':20})) file_description = forms.CharField(widget=forms.Textarea(attrs={'rows':4, 'cols':25})) class Meta: model = FileDescription exclude = ('subject', 'subject_name', 'file_uploaded_by', 'file_creation_time', 'file_modified_time', 'vote') def clean_file_name(self): name = self.cleaned_data['file_name'] # check the length of the file name if len(name) < 2: raise forms.ValidationError('File name is too short') # check if file with same name is already exists if FileDescription.objects.filter(file_name = name).exists(): raise forms.ValidationError('File with this name already exists') else: return name views.py if request.method == "POST": if "upload-b" in request.POST: form = ContentForm(request.POST, request.FILES, instance=subject_id) if form.is_valid(): # need to add some clean functions # handle_uploaded_file(request.FILES['attachment'], # request.user.username, # request.POST['file_type']) form.save() up_f = FileDescription.objects.get_or_create( subject=subject_id, subject_name=subject_name, category = request.POST['category'], file_type=request.POST['file_type'], file_uploaded_by = username, file_name=form.cleaned_data['file_name'], file_description=request.POST['file_description'], image = request.FILES['image'], attachment = request.FILES['attachment'], ) return 
HttpResponseRedirect(".")

    Read the article

  • Beginner - C# iteration through directory to produce a file list

    - by dassouki
    The end goal is to have some form of a data structure that stores a hierarchal structure of a directory to be stored in a txt file. I'm using the following code and so far, and I'm struggling with combining dirs, subdirs, and files. /// <summary> /// code based on http://msdn.microsoft.com/en-us/library/bb513869.aspx /// </summary> /// <param name="strFolder"></param> public static void TraverseTree ( string strFolder ) { // Data structure to hold names of subfolders to be // examined for files. Stack<string> dirs = new Stack<string>( 20 ); if ( !System.IO.Directory.Exists( strFolder ) ) { throw new ArgumentException(); } dirs.Push( strFolder ); while ( dirs.Count > 0 ) { string currentDir = dirs.Pop(); string[] subDirs; try { subDirs = System.IO.Directory.GetDirectories( currentDir ); } catch ( UnauthorizedAccessException e ) { MessageBox.Show( "Error: " + e.Message ); continue; } catch ( System.IO.DirectoryNotFoundException e ) { MessageBox.Show( "Error: " + e.Message ); continue; } string[] files = null; try { files = System.IO.Directory.GetFiles( currentDir ); } catch ( UnauthorizedAccessException e ) { MessageBox.Show( "Error: " + e.Message ); continue; } catch ( System.IO.DirectoryNotFoundException e ) { MessageBox.Show( "Error: " + e.Message ); continue; } // Perform the required action on each file here. // Modify this block to perform your required task. /* foreach ( string file in files ) { try { // Perform whatever action is required in your scenario. System.IO.FileInfo fi = new System.IO.FileInfo( file ); Console.WriteLine( "{0}: {1}, {2}", fi.Name, fi.Length, fi.CreationTime ); } catch ( System.IO.FileNotFoundException e ) { // If file was deleted by a separate application // or thread since the call to TraverseTree() // then just continue. MessageBox.Show( "Error: " + e.Message ); continue; } } */ // Push the subdirectories onto the stack for traversal. // This could also be done before handing the files. foreach ( string str in subDirs ) dirs.Push( str ); foreach ( string str in files ) MessageBox.Show( str ); }
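    The stack-based walk above is a reasonable pattern; for the stated goal of writing a hierarchical listing to a text file, a small recursive helper may be easier to follow. The sketch below is illustrative only and untested against the original project; the output path and indentation style are assumptions.

        using System;
        using System.IO;
        using System.Text;

        static class DirectoryTreeWriter
        {
            // Writes an indented listing of the folder hierarchy (directories and files)
            // to a plain text file.
            public static void WriteTree(string strFolder, string outputPath)
            {
                var sb = new StringBuilder();
                AppendTree(strFolder, 0, sb);
                File.WriteAllText(outputPath, sb.ToString());
            }

            private static void AppendTree(string dir, int depth, StringBuilder sb)
            {
                string indent = new string(' ', depth * 2);
                sb.AppendLine(indent + Path.GetFileName(dir) + Path.DirectorySeparatorChar);

                string[] subDirs, files;
                try
                {
                    subDirs = Directory.GetDirectories(dir);
                    files = Directory.GetFiles(dir);
                }
                catch (UnauthorizedAccessException) { return; }   // skip folders we cannot read
                catch (DirectoryNotFoundException) { return; }    // folder vanished mid-walk

                foreach (string file in files)
                    sb.AppendLine(indent + "  " + Path.GetFileName(file));

                foreach (string sub in subDirs)
                    AppendTree(sub, depth + 1, sb);
            }
        }

    A call such as DirectoryTreeWriter.WriteTree(@"C:\data", @"C:\data-tree.txt") then produces one line per directory and file, indented by depth.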

    Read the article

  • Visual Studio Express 2012 debug mode doesn't work

    - by user2350086
    I have a project in Visual Studio that I have been working on for a while, and I have used the debugger extensively. Recently I changed some settings and I have lost the ability to stop the program and step through code. I can't figure out what I had changed that might have affected this. If I put a breakpoint in my code and try to have the program stop there, it doesn't. The break point shows up white with a red outline. If I hover the mouse over it, it says "The breakpoint will not currently be hit. No executable code of the debugger's target code type is associated with this line. Possible causes include: conditional compilation, compiler optimizations, or the target architecture of this line is not supported by the current debugger code type." I know for a fact that the program executes the code where the breakpoint is because I put the breakpoint in the beginning of the InitializeComponent method. The program displays the window fine, but does not stop at the breakpoint. Yes, I am running in debug mode. It seems as though there is a disconnect between the compiled code and the source code displayed. Does anyone know what that would be, or know which compiler settings I should check to re-enable debugging? Here are the compiler options: /GS /analyze- /W3 /Zc:wchar_t /I"D:\dev\libcurl-7.19.3-win32-ssl-msvc\include" /Zi /Od /sdl /Fd"Debug\vc110.pdb" /fp:precise /D "WIN32" /D "_DEBUG" /D "_UNICODE" /D "UNICODE" /errorReport:prompt /WX- /Zc:forScope /Oy- /clr /FU"C:\Program Files (x86)\Reference Assemblies\Microsoft\Framework.NETFramework\v4.5\mscorlib.dll" /FU"C:\Program Files (x86)\Reference Assemblies\Microsoft\Framework.NETFramework\v4.5\System.Data.dll" /FU"C:\Program Files (x86)\Reference Assemblies\Microsoft\Framework.NETFramework\v4.5\System.dll" /FU"C:\Program Files (x86)\Reference Assemblies\Microsoft\Framework.NETFramework\v4.5\System.Drawing.dll" /FU"C:\Program Files (x86)\Reference Assemblies\Microsoft\Framework.NETFramework\v4.5\System.Windows.Forms.DataVisualization.dll" /FU"C:\Program Files (x86)\Reference Assemblies\Microsoft\Framework.NETFramework\v4.5\System.Windows.Forms.dll" /FU"C:\Program Files (x86)\Reference Assemblies\Microsoft\Framework.NETFramework\v4.5\System.Xml.dll" /MDd /Fa"Debug\" /EHa /nologo /Fo"Debug\" /Fp"Debug\Prog.pch" The linker options are: /OUT:"D:\dev\Prog\Debug\Prog.exe" /MANIFEST /NXCOMPAT /PDB:"D:\dev\Prog\Debug\Prog.pdb" /DYNAMICBASE "curllib.lib" "winmm.lib" "kernel32.lib" "user32.lib" "gdi32.lib" "winspool.lib" "comdlg32.lib" "advapi32.lib" "shell32.lib" "ole32.lib" "oleaut32.lib" "uuid.lib" "odbc32.lib" "odbccp32.lib" /FIXED:NO /DEBUG /MACHINE:X86 /ENTRY:"Main" /INCREMENTAL /PGD:"D:\dev\Prog\Debug\Prog.pgd" /SUBSYSTEM:WINDOWS /MANIFESTUAC:"level='asInvoker' uiAccess='false'" /ManifestFile:"Debug\Prog.exe.intermediate.manifest" /ERRORREPORT:PROMPT /NOLOGO /LIBPATH:"D:\dev\libcurl-7.19.3-win32-ssl-msvc\lib\Debug" /ASSEMBLYDEBUG /TLBID:1

    Read the article

  • How do gitignore exclusion rules actually work?

    - by meowsqueak
    I'm trying to solve a gitignore problem on a large directory structure, but to simplify my question I have reduced it to the following. I have the following directory structure of two files (foo, bar) in a brand new git repository (no commits so far): a/b/c/foo a/b/c/bar Obviously, a 'git status -u' shows: # Untracked files: ... # a/b/c/bar # a/b/c/foo What I want to do is create a .gitignore file that ignores everything inside a/b/c but does not ignore the file 'foo'. If I create a .gitignore thus: c/ Then a 'git status -u' shows both foo and bar as ignored: # Untracked files: ... # .gitignore Which is as I expect. Now if I add an exclusion rule for foo, thus: c/ !foo According to the gitignore manpage, I'd expect this to to work. But it doesn't - it still ignores foo: # Untracked files: ... # .gitignore This doesn't work either: c/ !a/b/c/foo Neither does this: c/* !foo Gives: # Untracked files: ... # .gitignore # a/b/c/bar # a/b/c/foo In that case, although foo is no longer ignored, bar is also not ignored. The order of the rules in .gitignore doesn't seem to matter either. This also doesn't do what I'd expect: a/b/c/ !a/b/c/foo That one ignores both foo and bar. One situation that does work is if I create the file a/b/c/.gitignore and put in there: * !foo But the problem with this is that eventually there will be other subdirectories under a/b/c and I don't want to have to put a separate .gitignore into every single one - I was hoping to create 'project-based' .gitignore files that can sit in the top directory of each project, and cover all the 'standard' subdirectory structure. This also seems to be equivalent: a/b/c/* !a/b/c/foo This might be the closest thing to "working" that I can achieve, but the full relative paths and explicit exceptions need to be stated, which is going to be a pain if I have a lot of files of name 'foo' in different levels of the subdirectory tree. Anyway, either I don't quite understand how exclusion rules work, or they don't work at all when directories (rather than wildcards) are ignored - by a rule ending in a / Can anyone please shed some light on this? Is there a way to make gitignore use something sensible like regular expressions instead of this clumsy shell-based syntax? I'm using and observe this with git-1.6.6.1 on Cygwin/bash3 and git-1.7.1 on Ubuntu/bash3.
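    For reference, the behaviour described above matches the documented gitignore rule that it is not possible to re-include a file if a parent directory of that file is excluded: a pattern like "c/" excludes the directory itself, so nothing beneath it can be whitelisted again. Excluding the directory's contents instead leaves the directory open for re-inclusion, which is why the last two variants come closest to working. A minimal .gitignore for the exact layout in the question (paths written out in full, as the asker found necessary) would be:

        # ignore everything inside a/b/c, but not the directory entry itself
        a/b/c/*
        # re-inclusion is allowed because only the contents, not a/b/c, were excluded
        !a/b/c/foo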

    Read the article

  • How to build Lucene / Solr from source code in windows environment in order to add patches

    - by Simon
    I have successfully implemented Apache's Solr for free-text searching on a database-driven web site built for Windows platforms using Visual Studio in C#. I am trying to get a version of Solr working with field collapsing (which is not in the release version). There are patches available from Apache, and discussions on the web of people successfully doing this for the version I am using, but my problem is that I cannot get the build to work. I am a C# coder on Windows platforms, so Java development is new to me. I understand I need to get the correct source code (and revision) from SVN, add the appropriate patches, then build the war file to deploy to my system. I cannot seem to get the source to build and produce the deployment code, including the jar (and subsequent war) files. My system is:
    - Windows 7 Ultimate for development
    - Visual Studio 2010 for C# / JavaScript development
    - MyEclipse 8.6 / Eclipse 3.5 for the Java build from source
    - Subclipse 1.6.x SVN plugin to get the source from Apache's SVN
    - Apache Solr 1.4.1
    So far I have:
    - Found the right patches for the function I need: https://issues.apache.org/jira/browse/SOLR-236 Specifically I need to patch field_collapsing_1.1.0.patch (https://issues.apache.org/jira/secure/attachment/12357681/field_collapsing_1.1.0.patch) and SOLR-236-1_4_1.patch (https://issues.apache.org/jira/secure/attachment/12448216/SOLR-236-1_4_1.patch).
    - Downloaded the Lucene trunk version from the day before the patch was released (revision 958303 from 28/6/10) via Subclipse into a Java package in MyEclipse from https://svn.apache.org/repos/asf/lucene/dev/trunk (Solr is the web implementation of Lucene and is in the subfolder solr/).
    I can apply patches to the solr directory once it has downloaded, but the parent Lucene project doesn't build the war files or copy the jar or other files into the bin folder (it stays empty). The build process starts, but doesn't do anything apart from creating the folders bin and src. I am building the whole Lucene project, which contains Solr. I have tried building the source without patching and the same happens. If I copy out the Solr directory into a new project, it runs the build and copies all the related files, tests, etc., but fails with 4,500 errors and does not produce the jar files or war file, which I assume is because it can't find the Lucene trunk files on which it depends. I have two interrelated problems: 1) I can't get the downloaded Lucene trunk to build, and 2) the jar, war and associated files are not created. Can anyone help with what I am missing to build the war file? I have spent 2 days to get this far, as the help online is extremely patchy and I can't find a walk-through tutorial on building a Java war file from source in a Windows environment. Any help will be much appreciated. Simon

    Read the article

  • Using jQuery validation plugin with tabbed navigation

    - by user3438917
    I have a tabbed navigation wizard, for which the first section needs to be validated before proceeding to the next tab. The validation should trigger when the user hits the "next" button. I am unable to get the validation to trigger though: <form id="target-group" novalidate="novalidate"> <div class="box"> <div class='box-header-main'><h2><img src="assets/img/list.png" /> Target Group Information</h2></div> <br /> <div class='box'> <div class='box-header-property'><h2><span data-bind="text:Name">New Target Group</span> | <i class='fa fa-file'></i></h2></div> <br /> <div class='row'> <div id='flight-wizard'> <div id='content' class='col-lg-12'> <div class='col-lg-12'> <div id='tabs'> <ul> <li id="targetgroup-info-tab"><a href='#tabs-1'><i class="fa fa-info-circle"></i>Target Group Info</a></li> <li id="zone-tab"><a href='#tabs-2'><i class="fa fa-map-marker"></i>Zones</a></li> </ul> <div id='tabs-1'> <div class='row'> <div class='col-xs-6'> <div class='form-group'> Name<sup>*</sup> <input id="selectError0" name="name" class='form-control col-xs-12' data-bind="value: asdf" placeholder='Enter Name ...' /> </div> <form class='form-horizontal'> <div class='form-group'> Product(s)<sup>*</sup> <div class='controls' id='products'> <select id='selectError3' class='form-control' data-bind="options:test, optionsText: 'Name', optionsValue : 'test', value: test, optionsCaption: 'Choose Product...'"></select> </div> </div> </form> </div> <!--RIGHT PANE--> <div class='col-xs-6'> <div class='form-group'> Platform<sup>*</sup> <div class='controls'> <select id="selectError2" class='form-control' data-bind="options:test, optionsText: 'Name', optionsValue: 'test', value : test, optionsCaption: 'Choose Platform...'"></select> </div> </div> <form class='form-horizontal'> <div class='form-group'> AdTypes(s)<sup>*</sup> <div class='controls' id='adtypes'> <select multiple="" id='adtypesselect' class='form-control' data-rel="chosen" data-bind="options:test, optionsText: 'Name', optionsValue : 'test', selectedOptions: test, optionsCaption: 'test...'"></select> </div> </div> </form> <button id="btn_cancel_large" class='btn btn-large btn-primary btn-round'><i class='fa fa-ban' /></i> Cancel</button> <button id="btn-next-large" class='btn btn-large btn-primary btn-round'>Next <i class='fa fa-arrow-circle-right'></i></button> </div> <!--end of right pane--> </div> </div> <div id='tabs-2'> <div class='row'> <div class='col-lg-12'> <div class='row'> <div class='col-lg-12'> <div id='zones_list' class='box-content'> <div id='add-new-targetgroupzone' class='add-new'><i class='fa fa-plus-circle'></i><a href='/#/inventory/targeting/' onclick="return false;">Add Zone</a></div> <table id="results" width="100%"> <thead> <tr> <th>Publisher</th> <th>Property</th> <th>Zone</th> <th>AdTypes</th> <th width='10%'>Quick&nbsp;Actions</th> </tr> </thead> </table> </div> </div> </div> </div> </div> <br /> <div class="btn_row"> <button id="btn_cancel_large2" class='btn btn-large btn-primary btn-round'><i class='fa fa-ban' /></i> Cancel</button> <button id="btn-submit-large" class='btn btn-large btn-primary btn-round'>Submit <i class='fa fa-arrow-circle-down'></i></button> </div> </div> </div> </div> </div> </div> </div> </div> </div> </form> <form id="zones-form" style="display: none;" novalidate="novalidate" class="slideup-form"> <div class="box"> <div class="box-header-panel"> <h2>Add Target Group Zone</h2> <div class="box-icon" id="zones-form-close"> <i class="fa fa-arrow-circle-down"></i> </div> </div> <div class="box-content 
clearfix"> <div class="box-content"> <table id="zones-list" width="100%"> <thead> <tr> <th>Publisher</th> <th>Property</th> <th>Zone</th> <th>AdTypes</th> <th width='10%'>Quick&nbsp;Actions</th> </tr> </thead> </table> </div> </div> </div> </div> </form> jQuery: $("#target-group").validate({ rules: { name: { required: true } }, messages: { name: "Name required", } }); $('#btn-next-large').click(function () { if ($('#target-group').valid()) $tabs.tabs('select', $(this).attr("rel")); });

    Read the article

  • git filter-branch chmod

    - by Evan Purkhiser
    I accidentally had my umask set incorrectly for the past few months and somehow didn't notice. One of my git repositories has many files marked as executable that should be just 644. This repo has one main master branch, and about 4 private feature branches (that I keep rebased on top of the master). I've corrected the files in my master branch by running find -type f -exec chmod 644 {} \; and committing the changes. I then rebased my feature branches onto master. The problem is there are newly created files in the feature branches that exist only in that branch, so they weren't corrected by my massive chmod commit. I didn't want to create a new commit for each feature branch that does the same thing as the commit I made on master. So I decided it would be best to go back through each commit where a file was added and set the permissions. This is what I tried:

        git filter-branch -f --tree-filter 'chmod 644 `git show --diff-filter=ACR --pretty="format:" --name-only $GIT_COMMIT`; git add .' master..

    It looked like this worked, but upon further inspection I noticed that every commit after a commit containing a new file with the proper permissions of 644 would actually revert the change with something like:

        diff --git a b
        old mode 100644
        new mode 100755

    I can't for the life of me figure out why this is happening. I think I must be misunderstanding how git filter-branch works. My Solution: I've managed to fix my problem using this command:

        git filter-branch -f --tree-filter 'FILES="$FILES "`git show --diff-filter=ACMR --pretty="format:" --name-only $GIT_COMMIT`; chmod 644 $FILES; true' development..

    I keep adding onto the FILES variable to ensure that in each commit any file created at some point has the proper mode. However, I'm still not sure I really understand why git tracks the file mode for each commit. I had thought that since I had fixed the mode of the file when it was first created, it would stay that mode unless one of my other commits explicitly changed it to something else. That did not appear to be the case. The reason I thought this would work is from my understanding of rebase. If I go back to HEAD~5 and change a line of code, that change is propagated through; it doesn't just get changed back in HEAD~4.

    Read the article

  • How to insert sub root node in xml file

    - by pravakar
    Hi guys hope all are doing good. I want to create one sub root node in my xml file like, <CapitalJobsList xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"> <JobAds> -- element to create <JobAd> <AdvertiserDetails> <AdvertiserId>718508549</AdvertiserId> <AdvertiserName>ABC</AdvertiserName> </AdvertiserDetails> <ConsultantDetails> <ContactName>Naga Divakar</ContactName> <ContactPhone>6239 7755</ContactPhone> <ContactEmail>[email protected]</ContactEmail> <ContactFax>12345678912</ContactFax> </ConsultantDetails> <JobAdDetails> <DateEntered>2009-10-03T21:09:35.500</DateEntered> <AdvertiserJobRef>83754865</AdvertiserJobRef> <Title>IT Operations Manager</Title> <DescriptionShort>Large scale/exciting projects Mentor and manage o...</DescriptionShort> <Description>Large scale/exciting projects Mentor and manage others Management/technical mix This is a fantastic opportunity to join a high profile client who is active across both the commercial and Government domain. As the IT Operations Manager you will be responsible for leading and mentoring a small team of Infrastructure Engineers to ensure the availability and performance of the IT infrastructure. You w</Description> <SalaryMin>0.00</SalaryMin> <SalaryMax>0.00</SalaryMax> <WorkType xsi:nil="true" /> <Location>) as [JobAd/JobAdDetails/Bullets], isnull(Job</Location> <PostCode>2600</PostCode> <ClosingDate>2009-11-01T00:00:00</ClosingDate> <Keywords xsi:nil="true" /> <ApplyEmail xsi:nil="true" /> <ApplyURL>http://jobview.careerone.com.au/GetJob.aspx?JobID=83754865</ApplyURL> </JobAdDetails> <JobAdOptions> <BlindPost xsi:nil="true" /> <AdFormatType xsi:nil="true" /> <AdTemplateName xsi:nil="true" /> <ShowContactDetails xsi:nil="true" /> <ShowSalary xsi:nil="true" /> <HasVideo xsi:nil="true" /> <ResumeRequired>1</ResumeRequired> <ResidentsOnly>0</ResidentsOnly> </JobAdOptions> <CategoryList> <Category xsi:nil="true" /> </CategoryList> <RegionsList> <Region>ACT</Region> </RegionsList> <LevelsList> <Level xsi:nil="true" /> </LevelsList> </JobAd> <JobAd> <AdvertiserDetails> <AdvertiserId>718508549</AdvertiserId> <AdvertiserName>ABC</AdvertiserName> </AdvertiserDetails> <ConsultantDetails> <ContactName>Naga Divakar</ContactName> <ContactPhone>6239 7755</ContactPhone> <ContactEmail>[email protected]</ContactEmail> <ContactFax>12345678912</ContactFax> </ConsultantDetails> <JobAdDetails> <DateEntered>2009-10-03T21:09:35.530</DateEntered> <AdvertiserJobRef>83731488</AdvertiserJobRef> <Title>SAP Developers Required in Canberra - 12 month contract</Title> <DescriptionShort>My client, a large government department in Canbe...</DescriptionShort> <Description>My client, a large government department in Canberra, seeks two SAP Developers for 12 month ongoing contracts. Two SAP Developers Required Expert level ABAP programming skills Large SAP landscape - SAP R/3, SAP Web, SAP BI, SAP ITS My client, a large government department in Canberra, seeks two SAP Developers for 12 month ongoing contracts. 
My client is a large government department in Canberra, a</Description> <SalaryMin>0.00</SalaryMin> <SalaryMax>0.00</SalaryMax> <WorkType xsi:nil="true" /> <Location>) as [JobAd/JobAdDetails/Bullets], isnull(Job</Location> <PostCode>2600</PostCode> <ClosingDate>2009-11-01T00:00:00</ClosingDate> <Keywords xsi:nil="true" /> <ApplyEmail xsi:nil="true" /> <ApplyURL>http://jobview.careerone.com.au/GetJob.aspx?JobID=83731488</ApplyURL> </JobAdDetails> <JobAdOptions> <BlindPost xsi:nil="true" /> <AdFormatType xsi:nil="true" /> <AdTemplateName xsi:nil="true" /> <ShowContactDetails xsi:nil="true" /> <ShowSalary xsi:nil="true" /> <HasVideo xsi:nil="true" /> <ResumeRequired>1</ResumeRequired> <ResidentsOnly>0</ResidentsOnly> </JobAdOptions> <CategoryList> <Category xsi:nil="true" /> </CategoryList> <RegionsList> <Region>ACT</Region> </RegionsList> <LevelsList> <Level xsi:nil="true" /> </LevelsList> </JobAd> </JobAds> </CapitalJobsList> I have used the sql query for xml path like: select r.advid as [JobAd/AdvertiserDetails/AdvertiserId], CompanyName as [JobAd/AdvertiserDetails/AdvertiserName], firstname +'' ''+ lastname as [JobAd/ConsultantDetails/ContactName], WorkPhone as [JobAd/ConsultantDetails/ContactPhone], AdvEmail as [JobAd/ConsultantDetails/ContactEmail], FaxNo as [JobAd/ConsultantDetails/ContactFax], Job_CreatedDate as [JobAd/JobAdDetails/DateEntered], Job_Id as [JobAd/JobAdDetails/AdvertiserJobRef], Job_Title as [JobAd/JobAdDetails/Title], substring(Job_Description,0,50)+''...'' as [JobAd/JobAdDetails/DescriptionShort], Job_Description as [JobAd/JobAdDetails/Description], CONVERT(DECIMAL(10,2),MinSalary) as [JobAd/JobAdDetails/SalaryMin], CONVERT(DECIMAL(10,2),MaxSalary) as [JobAd/JobAdDetails/SalaryMax], Job_Type as [JobAd/JobAdDetails/WorkType], isnull(Job_Bullets,'') as [JobAd/JobAdDetails/Bullets], isnull(Job_Location,'') as [JobAd/JobAdDetails/Location], Job_PostCode as [JobAd/JobAdDetails/PostCode], Job_ExpireDate as [JobAd/JobAdDetails/ClosingDate], Job_Keywords as [JobAd/JobAdDetails/Keywords], ApplyEmail as [JobAd/JobAdDetails/ApplyEmail], Job_BrandURL+Job_Id as [JobAd/JobAdDetails/ApplyURL], BlindPost as [JobAd/JobAdOptions/BlindPost], AdFormatType as [JobAd/JobAdOptions/AdFormatType], AdTemplateName as [JobAd/JobAdOptions/AdTemplateName], ShowContactDetails as [JobAd/JobAdOptions/ShowContactDetails], ShowSalary as [JobAd/JobAdOptions/ShowSalary], HasVideo as [JobAd/JobAdOptions/HasVideo], ResumeRequired as [JobAd/JobAdOptions/ResumeRequired], ResidentsOnly as [JobAd/JobAdOptions/ResidentsOnly], Job_Category as [JobAd/CategoryList/Category], Job_Location_State as [JobAd/RegionsList/Region], [Level] as [JobAd/LevelsList/Level] from DR_Adv_Registration r, DR_CareerOne_ACTJobs j where r.Advid = j.Advid and job_location_city like(''%'+''+ @City +''+'%'') and job_location_state in('''+ @State +''') and job_status=1 for xml path(''''), Root(''CapitalJobsList''),ELEMENTS XSINIL So, suggest me how to get the sub root node. Thanks in advance

    Read the article

  • node.js callback getting unexpected value for variable

    - by defrex
    I have a for loop, and inside it a variable is assigned with var. Also inside the loop a method is called which requires a callback. Inside the callback function I'm using the variable from the loop. I would expect that it's value, inside the callback function, would be the same as it was outside the callback during that iteration of the loop. However, it always seems to be the value from the last iteration of the loop. Am I misunderstanding scope in JavaScript, or is there something else wrong? The program in question here is a node.js app that will monitor a working directory for changes and restart the server when it finds one. I'll include all of the code for the curious, but the important bit is the parse_file_list function. var posix = require('posix'); var sys = require('sys'); var server; var child_js_file = process.ARGV[2]; var current_dir = __filename.split('/'); current_dir = current_dir.slice(0, current_dir.length-1).join('/'); var start_server = function(){ server = process.createChildProcess('node', [child_js_file]); server.addListener("output", function(data){sys.puts(data);}); }; var restart_server = function(){ sys.puts('change discovered, restarting server'); server.close(); start_server(); }; var parse_file_list = function(dir, files){ for (var i=0;i<files.length;i++){ var file = dir+'/'+files[i]; sys.puts('file assigned: '+file); posix.stat(file).addCallback(function(stats){ sys.puts('stats returned: '+file); if (stats.isDirectory()) posix.readdir(file).addCallback(function(files){ parse_file_list(file, files); }); else if (stats.isFile()) process.watchFile(file, restart_server); }); } }; posix.readdir(current_dir).addCallback(function(files){ parse_file_list(current_dir, files); }); start_server(); The output from this is: file assigned: /home/defrex/code/node/ejs.js file assigned: /home/defrex/code/node/templates file assigned: /home/defrex/code/node/web file assigned: /home/defrex/code/node/server.js file assigned: /home/defrex/code/node/settings.js file assigned: /home/defrex/code/node/apps file assigned: /home/defrex/code/node/dev_server.js file assigned: /home/defrex/code/node/main_urls.js stats returned: /home/defrex/code/node/main_urls.js stats returned: /home/defrex/code/node/main_urls.js stats returned: /home/defrex/code/node/main_urls.js stats returned: /home/defrex/code/node/main_urls.js stats returned: /home/defrex/code/node/main_urls.js stats returned: /home/defrex/code/node/main_urls.js stats returned: /home/defrex/code/node/main_urls.js stats returned: /home/defrex/code/node/main_urls.js For those from the future: node.devserver.js
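    This is the classic closure-over-a-loop-variable problem rather than a node.js quirk: every callback closes over the same 'file' binding, and by the time the asynchronous callbacks run, that single binding holds the value from the last iteration. The same capture behaviour can be seen in a C# for loop, shown below purely as an analogy (it is not a fix for the node code itself):

        using System;
        using System.Collections.Generic;

        class ClosureCapture
        {
            static void Main()
            {
                var actions = new List<Action>();

                for (int i = 0; i < 3; i++)
                    actions.Add(() => Console.WriteLine(i));   // all three lambdas share one 'i'

                foreach (var a in actions) a();                // prints 3, 3, 3

                actions.Clear();
                for (int i = 0; i < 3; i++)
                {
                    int copy = i;                              // a fresh variable per iteration
                    actions.Add(() => Console.WriteLine(copy));
                }

                foreach (var a in actions) a();                // prints 0, 1, 2
            }
        }

    The usual JavaScript equivalent of the 'copy' trick is to give each iteration its own scope, for example by moving the loop body into a function that receives the file path as a parameter.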

    Read the article

  • Parallel programming in C#

    - by Alxandr
    I'm interested in learning about parallel programming in C#.NET (not everything there is to know, but the basics and maybe some good practices), therefore I've decided to reprogram an old program of mine called ImageSyncer. ImageSyncer is a really simple program; all it does is scan through a folder and find all files ending with .jpg, then it calculates the new position of the files based on the date they were taken (parsing of EXIF data, or whatever it's called). After a location has been generated, the program checks for any existing files at that location, and if one exists it looks at the last write-time of both the file to copy and the file "in its way". If those are equal the file is skipped. If not, an MD5 checksum of both files is created and matched. If there is no match the file to be copied is given a new location to be copied to (for instance, if it was to be copied to "C:\test.jpg" it's copied to "C:\test(1).jpg" instead). The result of this operation is populated into a queue of a struct type that contains two strings, the original file and the position to copy it to. Then that queue is iterated over until it is empty and the files are copied. In other words there are 4 operations:
    1. Scan directory for jpegs
    2. Parse files for EXIF and generate copy location
    3. Check for file existence and if needed generate new path
    4. Copy files
    And so I want to rewrite this program to make it parallel and be able to perform several of the operations at the same time, and I was wondering what the best way to achieve that would be. I've come up with two different models, but neither one of them might be any good at all. The first one is to parallelize the 4 steps of the old program, so that when step one is to be executed it's done on several threads, and when the whole of step 1 is finished step 2 is begun. The other one (which I find more interesting because I have no idea how to do it) is to create a sort of worker and consumer model, so when a thread is finished with step 1 another one takes over and performs step 2 on that object (or something like that). But as I said, I don't know if either of these is any good. Also, I don't know much about parallel programming at all. I know how to make a thread, and how to make it perform a function taking in an object as its only parameter, and I've also used the BackgroundWorker class on one occasion, but I'm not that familiar with any of them. Any input would be appreciated.
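    The second model described above maps naturally onto a producer/consumer pipeline, which in .NET 4.0 can be built on BlockingCollection from System.Collections.Concurrent. The sketch below is illustrative only: the paths, the degree of parallelism and the placeholder comments stand in for the real ImageSyncer logic rather than reproducing it.

        using System;
        using System.Collections.Concurrent;
        using System.IO;
        using System.Threading.Tasks;

        class CopyJob
        {
            public string Source;
            public string Destination;
        }

        class Pipeline
        {
            static void Main()
            {
                var jobs = new BlockingCollection<CopyJob>(boundedCapacity: 100);

                // Producer: scan, work out target paths and resolve collisions (steps 1-3).
                var producer = Task.Factory.StartNew(() =>
                {
                    foreach (var file in Directory.EnumerateFiles(@"C:\photos", "*.jpg",
                                                                  SearchOption.AllDirectories))
                    {
                        // Placeholder for the EXIF parsing and collision handling described above.
                        var target = Path.Combine(@"D:\sorted", Path.GetFileName(file));
                        jobs.Add(new CopyJob { Source = file, Destination = target });
                    }
                    jobs.CompleteAdding();   // tell the consumers no more work is coming
                });

                // Consumers: copy files as soon as jobs appear (step 4), a few in parallel.
                var consumers = new Task[2];
                for (int i = 0; i < consumers.Length; i++)
                    consumers[i] = Task.Factory.StartNew(() =>
                    {
                        foreach (var job in jobs.GetConsumingEnumerable())
                            File.Copy(job.Source, job.Destination, overwrite: false);
                    });

                Task.WaitAll(consumers);
                producer.Wait();
            }
        }

    The bounded capacity keeps the scanner from racing too far ahead of the copier, and because the consumers start immediately, copying overlaps with scanning instead of waiting for each step to finish completely, which is the main advantage over parallelizing the four steps one at a time.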

    Read the article

  • How do I add Objective C code to a FireBreath Project?

    - by jmort253
    I am writing a browser plugin for Mac OS that will place a status bar icon in the status bar, which users can use to interface with the browser plugin. I've successfully built a FireBreath 1.6 project in XCode 4.4.1, and can install it in the browser. However, FireBreath uses C++, whereas a large majority of the existing libraries for Mac OS are written in Objective C. In the /Mac/projectDef.make file, I added the Cocoa Framework and Foundation Framework, as suggested here and in other resources I've found on the Internet: target_link_libraries(${PROJECT_NAME} ${PLUGIN_INTERNAL_DEPS} ${Cocoa.framework} # added line ${Foundation.framework} # added line ) I reran prepmac.sh, expecting a new project to be created in XCode with my .mm files, and .m files; however, it seems that they're being ignored. I only see the .cpp and .h files. I added rules for those in the projectDef.make file, but it doesn't seem to make a difference: file (GLOB PLATFORM RELATIVE ${CMAKE_CURRENT_SOURCE_DIR} Mac/[^.]*.cpp Mac/[^.]*.h Mac/[^.]*.m #added by me Mac/[^.]*.mm #added by me Mac/[^.]*.cmake ) Even if I add the files in manually, I get a series of compilation errors. There are about 20 of them, all related to the file NSObjRuntime.h file: Parse Issue - Expected unqualified-id Parse Issue - Unknown type name 'NSString' Semantic Issue - Use of undeclared identifier 'NSString' Parse Issue - Unknown type name 'NSString' ... ... Semantic Issue - Use of undeclared identifier 'aSelectorName' ... ... Semantic Issue - Use of undeclared identifier 'aClassName' ... It continues like this for some time with similar errors... From what I've read, these errors appear because of dependencies on the Foundation Framework, which I believe I've included in the project. I also tried clicking the project in XCode I'm to the point now where I'm not sure what to try next. People say it's not hard to use Objective C in C/C++ code, but being new to XCode and Objective C might contribute to my confusion. This is only day 4 for me in this new platform. What do I need to do to get XCode to compile the Objective C code? Please remember that I'm a little new to this, so I'd appreciate it if you leave detailed answers as opposed to the vague one-liners that are common in the firebreath tag. I'm just a little in over my head, but if you can get me past this hurdle I'm certain I'll be good to go from there. UPDATE: I edited projects/MyPlugin/CMakeLists.txt and added in the .m and .mm rules there too. after running prepmac.sh, the files are included in the project, but I still get the same compile errors. I moved all the .h files and .mm files from the Obj C code to the MyPlugin root folder and reran the prepmac.sh file. Problem still exists. Same compile errors.

    Read the article

  • get renamed file names of multiple upload form [js array] in codeigniter

    - by artmania
    Hi friends, I use codeigniter. I have a multiple image upload form. The code below is working well for uploading, but I also need to save file names to database. How can I get the names in here? I spent hours & hours :/ but could not sort it :/ Appreciate helps!!! uploadform.php echo form_open_multipart('gallery/upload'); <input type="file" name="photo" size="50" /> <input type="file" name="thumb" size="50" /> <input type="submit" value="Upload" /> </form> I have a controller between form view and model load model (of course : )) but didnt post here because of no need. gallery_model.php function multiple_upload($upload_dir = 'uploads/', $config = array()) { /* Upload */ $CI =& get_instance(); $files = array(); if(empty($config)) { $config['upload_path'] = realpath($upload_dir); $config['allowed_types'] = 'gif|jpg|jpeg|jpe|png'; $config['max_size'] = '2048'; } $CI->load->library('upload', $config); $errors = FALSE; foreach($_FILES as $key => $value) { if( ! empty($value['name'])) { if( ! $CI->upload->do_upload($key)) { $data['upload_message'] = $CI->upload->display_errors(ERR_OPEN, ERR_CLOSE); // ERR_OPEN and ERR_CLOSE are error delimiters defined in a config file $CI->load->vars($data); $errors = TRUE; } else { // Build a file array from all uploaded files $files[] = $CI->upload->data(); } } } // There was errors, we have to delete the uploaded files if($errors) { foreach($files as $key => $file) { @unlink($file['full_path']); } } elseif(empty($files) AND empty($data['upload_message'])) { $CI->lang->load('upload'); $data['upload_message'] = ERR_OPEN.$CI->lang->line('upload_no_file_selected').ERR_CLOSE; $CI->load->vars($data); } else { return $files; } /* ------------------------------- Insert to database */ // problem is here, i need file names to add db. // if there is already same names file at the folder, it rename file itself. so in such case, I need renamed file name :/ } }

    Read the article

  • html5 uploader + jquery drag & drop: how to store file data with FormData?

    - by lauthiamkok
    I am making a html5 drag and drop uploader with jquery, below is my code so far, the problem is that I get an empty array without any data. Is this line incorrect to store the file data - fd.append('file', $thisfile);? $('#div').on( 'dragover', function(e) { e.preventDefault(); e.stopPropagation(); } ); $('#div').on( 'dragenter', function(e) { e.preventDefault(); e.stopPropagation(); } ); $('#div').on( 'drop', function(e){ if(e.originalEvent.dataTransfer){ if(e.originalEvent.dataTransfer.files.length) { e.preventDefault(); e.stopPropagation(); // The file list. var fileList = e.originalEvent.dataTransfer.files; //console.log(fileList); // Loop the ajax post. for (var i = 0; i < fileList.length; i++) { var $thisfile = fileList[i]; console.log($thisfile); // HTML5 form data object. var fd = new FormData(); //console.log(fd); fd.append('file', $thisfile); /* var file = {name: fileList[i].name, type: fileList[i].type, size:fileList[i].size}; $.each(file, function(key, value) { fd.append('file['+key+']', value); }) */ $.ajax({ url: "upload.php", type: "POST", data: fd, processData: false, contentType: false, success: function(response) { // .. do something }, error: function(jqXHR, textStatus, errorMessage) { console.log(errorMessage); // Optional } }); } /*UPLOAD FILES HERE*/ upload(e.originalEvent.dataTransfer.files); } } } ); function upload(files){ console.log('Upload '+files.length+' File(s).'); }; then if I use another method is that to make the file data into an array inside the jquery code, var file = {name: fileList[i].name, type: fileList[i].type, size:fileList[i].size}; $.each(file, function(key, value) { fd.append('file['+key+']', value); }); but where is the tmp_name data inside e.originalEvent.dataTransfer.files[i]? php, print_r($_POST); $uploaddir = './uploads/'; $file = $uploaddir . basename($_POST['file']['name']); if (move_uploaded_file($_POST['file']['tmp_name'], $file)) { echo "success"; } else { echo "error"; } as you can see that tmp_name is needed to upload the file via php... html, <div id="div">Drop here</div>

    Read the article

  • Where do you put your unit test?

    - by soulmerge
    I have found several conventions to housekeeping unit tests in a project and I'm not sure which approach would be suitable for our next PHP project. I am trying to find the best convention to encourage easy development and accessibility of the tests when reviewing the source code. I would be very interested in your experience/opinion regarding each: One folder for productive code, another for unit tests: This separates unit tests from the logic files of the project. This separation of concerns is as much a nuisance as it is an advantage: Someone looking into the source code of the project will - so I suppose - either browse the implementation or the unit tests (or more commonly: the implementation only). The advantage of unit tests being another viewpoint to your classes is lost - those two viewpoints are just too far apart IMO. Annotated test methods: Any modern unit testing framework I know allows developers to create dedicated test methods, annotating them (@test) and embedding them in the project code. The big drawback I see here is that the project files get cluttered. Even if these methods are separated using a comment header (like UNIT TESTS below this line) it just bloats the class unnecessarily. Test files within the same folders as the implementation files: Our file naming convention dictates that PHP files containing classes (one class per file) should end with .class.php. I could imagine that putting unit tests regarding a class file into another one ending on .test.php would render the tests much more present to other developers without tainting the class. Although it bloats the project folders, instead of the implementation files, this is my favorite so far, but I have my doubts: I would think others have come up with this already, and discarded this option for some reason (i.e. I have not seen a java project with the files Foo.java and FooTest.java within the same folder.) Maybe it's because java developers make heavier use of IDEs that allow them easier access to the tests, whereas in PHP no big editors have emerged (like eclipse for java) - many devs I know use vim/emacs or similar editors with little support for PHP development per se. What is your experience with any of these unit test placements? Do you have another convention I haven't listed here? Or am I just overrating unit test accessibility to reviewers?

    Read the article
