Search Results

Search found 27989 results on 1120 pages for 'junior software developer'.


  • How can a Linux Administrator improve their shell scripting and automation skills?

    - by ewwhite
    In my organization, I work with a group of NOC staff, budding junior engineers and a handful of senior engineers, all with a focus on Linux. One interesting step in the way the company grows talent is that there's a path from the NOC to the senior engineering ranks. Viewing the talent pool as a relative newcomer, I see that there's a split in the skill sets that tends to grow over time... There are engineers who know one or several particular technologies well and are constantly immersed in them... e.g. MySQL, firewalls, SAN storage, load balancers... There are others who are generalists and can navigate multiple technologies. All learn enough Linux (commands, processes) to do what they need and use on a daily basis. A differentiating factor between some of the staff is how well they embrace scripting, automation and configuration management methodologies. For instance, we have two engineers who do the bulk of the Amazon AWS CloudFormation work, and another who handles most of the Puppet infrastructure. Perhaps a quarter of the engineers are adept at Bash shell scripting. Looking at this in the context of the incredibly high demand for DevOps skills in the job market, I'm curious how other organizations foster the development of these skills and grow their internal talent. Scripting doesn't seem like a particularly teachable concept. How does a sysadmin improve their shell scripting? Is there still a place for engineers who do not or cannot keep up in the DevOps paradigm? Are we simply to assume that some people will be left behind as these technologies evolve? Is that okay?


  • Windows Azure Evolution - Web Sites (aka Antares) Part 1

    - by Shaun
    This is the third post in my Windows Azure Evolution series, focusing on the new features and enhancements that came along with the Windows Azure Platform Upgrade June 2012, announced at the MEET Windows Azure event on 7th June. In the first post I introduced the new preview developer portal and how it works with the existing features such as cloud services, storage and SQL databases. In the second one I talked about the Windows Azure .NET SDK 1.7 on the latest Visual Studio 2012 RC on Windows 8. From this post on I will begin to introduce some new features. Let's have a look at the first of them: Windows Azure Web Sites.

    Overview

    Windows Azure Web Sites (WAWS), also known as Antares, is a new feature that was still in the preview stage at the time of this upgrade. It allows people to quickly and easily deploy websites to a highly scalable cloud environment, using the languages and open source apps of their choice, and deploying through FTP, Git or TFS. It can also be integrated easily with Windows Azure services like SQL Database, Caching, CDN and Storage. After reading its introduction we may have a question: since we can deploy a website from both a cloud service web role and a web site, what is the difference between them? Let's have a quick comparison (Cloud Service | Web Site):

    OS: Windows Server | Windows Server
    Virtualization: Windows Azure Virtual Machine | Windows Azure Virtual Machine
    Host: IIS | IIS
    Platform: ASP.NET WebForm, ASP.NET MVC, WCF | ASP.NET WebForm, ASP.NET MVC, PHP
    Language: C#, VB.NET | C#, VB.NET, PHP
    Database: SQL Database | SQL Database, MySQL
    Architecture: Multi-layered, background worker, message queuing, etc. | Simple website with backend database
    VS Project: Windows Azure Cloud Service | ASP.NET Web Form, ASP.NET MVC, etc.
    Out-of-box Gallery: (none) | Drupal, DotNetNuke, WordPress, etc.
    Deployment: Package upload, Visual Studio publish | FTP, Git, TFS, WebMatrix
    Compute Mode: Dedicated VM | Shared across VMs, Dedicated VM
    Scale: Scale up, scale out | Scale up, scale out

    As you can see, there are many differences between a cloud service and a web site, but the main point is that the cloud service targets web applications with a complex architecture. For example, if you want to build a website with a frontend layer, a middle business layer and a data access layer, with some background worker processes connected through a message queue, then you had better use a cloud service, since it provides full control of your code and application. But if you just want to build a personal blog or a business portal, then you can use a web site. Since web sites come with many galleries, you can create them without any coding or configuration at all. David Pallmann has an awesome figure that explains the benefits of the cloud service, web site and virtual machine options.

    Create a Personal Blog in Web Site from Gallery

    As I mentioned above, one of the big features of WAWS is building a website from an existing gallery, which means we don't need to code or configure anything. All we need to do is open the Windows Azure developer portal, click the NEW button, and select WEB SITE and FROM GALLERY. In the window that pops up there are many websites we can choose to use. For example, for a personal blog there are Orchard CMS and WordPress; for a CMS there are DotNetNuke, Drupal 7 and mojoPortal. Let's select WordPress and click the next button. The next step is to configure the web site. We need to specify the DNS name and select the subscription and region. Since WordPress uses MySQL as its backend database, we also need to create a MySQL database.
    Windows Azure Web Sites utilizes ClearDB to host the MySQL databases; you cannot create a MySQL database directly from the SQL Databases section. Finally, since we selected to create a new MySQL database, we need to specify the database name and region in the last step, and we need to accept ClearDB's terms as well. The Windows Azure platform will then download the WordPress code and deploy the MySQL database and the website, and it will be ready to use. Select the website and click the BROWSE button, and the WordPress administration page will be shown. After configuring WordPress, here is my personal blog on the cloud. It took me no more than 10 minutes to establish, without any coding.

    Monitor, Configure, Scale and Linked Resources

    Let's click into the website I have just created in the portal and have a look at what we can do. In the website details page there are five sections.

    - Dashboard: the overall information about this website, such as the basic usage status, public URL, compute mode, FTP address and subscription, plus links where we can specify the deployment credentials, TFS and Git publish settings, etc.

    - Monitor: status information such as CPU usage, memory usage and errors. We can add more metrics by clicking the ADD METRICS button at the bottom as well.

    - Configure: here we can set the configuration of our website, such as the .NET and PHP runtime versions, diagnostics settings, application settings and the IIS default documents.

    - Scale: this is something interesting. In WAWS there are two compute modes, also called web site modes. One is "shared", which means our website will be shared with other web sites on a group of Windows Azure virtual machines. Each web site has its own process (w3wp.exe), with some sandbox technology to isolate it from the others. When we scale out a web site in shared mode, we actually increase the worker process count. Hence in shared mode we cannot specify the virtual machine size, since the machines are shared across all web sites. This is a little bit different from the scaling mode of the cloud service (hosted service web role and worker role). The other mode is called "dedicated", which means our web site will use the whole Windows Azure virtual machine. This is the same hosting behavior as the cloud service web role: in a web role the application is deployed on the virtual machines we specified and all of them are used only by us, and in web site dedicated mode it's the same. In this mode, when we scale out our web site we use more virtual machines, each of which hosts only our own website, and we can specify the virtual machine size as well. In the developer portal we can select which mode we are using from the Scale section. In shared mode we can only specify the instance count, but in dedicated mode we can specify the instance size as well as the instance count.

    - Linked Resources: the MySQL database created along with the creation of our WordPress web site is a linked resource. We can add more linked resources in this section.

    Pricing

    For the web site itself, since this feature is in its preview period, you get up to 10 web sites free if you are using shared mode. If you are using dedicated mode, the price is that of the virtual machines you are using. For example, if you use dedicated mode and configure two medium-size virtual machines, you will pay $230.40 per month (that is 2 instances x $0.16 per hour x 720 hours). If there is a SQL Database linked to your web site, it will be charged separately based on the Pay-As-You-Go price.
    For example, a 1 GB web edition database costs $9.99 per month. Bandwidth is charged as well; for example, 10 GB of outbound data transfer costs $1.20 per month. For more information about the pricing, please have a look at the Windows Azure pricing page.

    Summary

    Windows Azure Web Sites gives us an easier and quicker way to create, develop and deploy websites to the Windows Azure platform. Compared with the cloud service web role, WAWS has many out-of-box galleries we can use directly. So if you just want to build a blog, CMS or business portal, you don't need to learn ASP.NET, you don't need to learn how to configure DotNetNuke, and you don't need to learn how to prepare PHP and MySQL. By using a WAWS gallery you can establish a website within 10 minutes without a single line of code. But in some cases we do need to code ourselves. We may need to tweak the layout of our pages, or we may have a traditional ASP.NET or PHP web application that needs to be migrated to the cloud. Besides the gallery, WAWS also provides many features to download and upload code. It provides integration with version control services such as TFS and Git, and it provides deployment through FTP and Web Deploy. In the next post I will demonstrate how to use WebMatrix to download and modify the website, and how to use TFS and Git to deploy automatically when our code changes are committed.

    Hope this helps, Shaun

    All documents and related graphics and code are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.


  • Integrate flex 3.5 projects in flash builder 4 beta 2

    - by Cyrill Zadra
    Hi, I'm currently using Flex Builder 3 and Flex SDK 3.5 for my projects, but I'd like to try out the new Flash Builder 4. So I downloaded and installed the new software, configured all the additional software like Subversion and the server adapter, and finally imported my two projects:
    1) Main Project (includes a SWC generated by the Library Project) (Flex SDK 3.5)
    2) Library Project (Flex SDK 3.4)
    After the import and a project cleanup, the project runs perfectly. But as soon as I replace the existing LibraryProject.swc with a new one (compiled with Flash Builder 4 beta 2, SDK 3.4) I get:
    VerifyError: Error #1014: class mx.containers::Canvas not found.
    VerifyError: Error #1014: class mx.containers::HBox not found.
    VerifyError: Error #1014: class IWatcherSetupUtil not found.
    ...and several other "not found" errors. Has anyone had the same error? How can I get my project running again? thanks & regards cyrill


  • ~/.irbrc not executed when starting irb or script/console

    - by Patrick Klingemann
    Here's what I've tried:
    1. gem install awesome_print
    2. echo "require 'ap'" >> ~/.irbrc
    3. chmod u+x ~/.irbrc
    4. script/console
    5. ap { :test => 'value' }
    Result: NameError: undefined local variable or method `ap' for #
    Some additional info: Fedora 13 (I observed this issue in prior versions of Fedora also). bash --version produces: GNU bash, version 4.1.2(1)-release (x86_64-redhat-linux-gnu) Copyright (C) 2009 Free Software Foundation, Inc. License GPLv3+: GNU GPL version 3 or later. This is free software; you are free to change and redistribute it. There is NO WARRANTY, to the extent permitted by law.


  • C or C++ to write a compiler?

    - by H.Josef
    I want to write a compiler for a custom markup language, and I want to get optimum performance as well as a good, scalable design. A multi-paradigm programming language (C++) is more suitable for implementing modern design patterns, but I think that will degrade performance a little bit (think of RTTI, for example), which more or less might make C a better choice. I wonder what the best language is (C, C++ or even Objective-C) if someone wants to create a modern compiler (in the sense of complying with modern software engineering principles) that is fast, efficient, and well designed.


  • Disable debug output in libxml2 and xmlsec

    - by ereOn
    Hi, in my software I use libxml2 and xmlsec to manipulate (obviously) XML data structures. I mainly use XSD schema validation, and so far it works well. When the data structure input by the client doesn't match the XSD schema, libxml2 (or xmlsec) outputs some debug strings to the console. Here is an example: Entity: line 1: parser error : Start tag expected, '<' not found DUMMY<?xml ^ While those strings are useful for debugging purposes, I don't want them to appear and pollute the console output in the released software. So far I couldn't find an official way of doing this. Do you know how to suppress the debug output or (even better) redirect it to a custom function? Many thanks.


  • How to get users to read error messages?

    - by FX
    If you program for a nontechnical audience, you find yourself at high risk that users will not read your carefully worded and enlightening error messages, but just click on the first button available with a shrug of frustration. So I'm wondering what good practices you can recommend to help users actually read your error message, instead of simply waving it aside. Ideas I can think of would fall along these lines:
    - Formatting of course helps; maybe a simple, short message, with a "learn more" button that leads to the longer, more detailed error message
    - Have all error messages link to some section of the user guide (somewhat difficult to achieve)
    - Just don't issue error messages, and simply refuse to perform the task (a somewhat "Apple" way of handling user input)
    Edit: the audience I have in mind is a rather broad user base that doesn't use the software too often and is not captive (i.e., not in-house software or a narrow community). A more generic form of this question was asked on Slashdot, so you may want to check there for some of the answers.


  • Installing Mapnik 2.2.0 in windows 7 with Python 2.7

    - by Joan Natalie
    I've been trying to install Mapnik on my computer for hours, but what I always get when I import mapnik is: ImportError: DLL load failed: The specified procedure could not be found. I'm using Windows 7. The currently installed software is GeoServer from the OpenGeo Suite. Here is my PATH: %SystemRoot%\system32;%SystemRoot%;%SystemRoot%\System32\Wbem;%SYSTEMROOT%\System32\WindowsPowerShell\v1.0\;C:\Program Files\WIDCOMM\Bluetooth Software\;C:\Program Files\Java\jre7\bin;C:\Program Files\Java\jdk1.7.0_45\bin;C:\Python27;C:\mapnik-v2.2.0\lib My PYTHONPATH: C:\Python27;C:\Python27\Lib;C:\Python27\DLLs;C:\Python27\Lib\lib-tk;C:\Program Files\ArcGIS\bin;C:\\mapnik-v2.2.0\python\2.7\site-packages\;C:\mapnik-v2.2.0\bin\;


  • How can I reprogram a USB "easy button"?

    - by Shawn Simon
    I have a USB "easy button" - it's a USB cable attached to a large button. It appears as a keyboard to the computer. When I push the button, it sends the keys Start+R and then quickly types in a pre-configured URL. I am fairly certain that the company that produces these buttons sets the URL via some sort of software over USB. How could I reprogram the button myself? What sort of software would I need? Here is a link to the website: http://www.usbsmartbuttons.com/


  • Are there any famous one-man-army programmers?

    - by DFectuoso
    Lately I have been learning of more and more programmers who think that if they were working alone, they would be faster and would deliver more quality. Usually that feeling comes attached to a feeling that they do the best programming on their team, and at the end of the day the idea is quite plausible: if they ARE doing the best programming and worked alone (and maybe more), the final result would be a better piece of software. I know this idea would only work if you were passionate enough to work 24/7, on a deadline, with great discipline. So after considering the idea and trying to learn a little more, I wonder: are there famous one-man-army programmers who have delivered any (useful) software in the past?


  • Testing harness for online teaching?

    - by candeira
    I have been asked to teach an online programming course, and I am looking for a test harness especially geared to education. Some students will have significant coding experience, but others will be total newbies. The course is an introduction to software development, mostly taught in C with some C++ and Java thrown in. In any case, I would like to read their source code only after a test suite has made sure that it compiles and executes properly. The students will also benefit from having a tool they can check their code against before submitting it. However, the Learning Management System my employer is using doesn't have such a system. Do you know of any LMS software that includes this feature? Which testing harness would you recommend in case I have to roll my own?
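    For what it's worth, if no LMS turns out to support this, the pre-screening gate described above is small enough to roll by hand. Below is a minimal sketch in C# (assuming gcc is installed and on the PATH; the file names sample_input.txt and expected_output.txt are illustrative): it refuses to pass a submission on for human review unless the code compiles and produces the expected output for a sample input.

        using System;
        using System.Diagnostics;
        using System.IO;

        // Minimal pre-screening gate: only accept a submission for human review
        // if it compiles and produces the expected output for a sample input.
        static class SubmissionGate
        {
            static int RunProcess(string fileName, string arguments, string stdin, out string stdout)
            {
                var psi = new ProcessStartInfo(fileName, arguments)
                {
                    RedirectStandardInput = true,
                    RedirectStandardOutput = true,
                    RedirectStandardError = true,
                    UseShellExecute = false
                };
                using (var p = Process.Start(psi))
                {
                    p.StandardInput.Write(stdin);
                    p.StandardInput.Close();
                    stdout = p.StandardOutput.ReadToEnd();
                    p.WaitForExit();
                    return p.ExitCode;
                }
            }

            static void Main(string[] args)
            {
                string source = args[0];                       // e.g. student1.c
                string exe = Path.ChangeExtension(source, ".out");

                // Step 1: does it compile?
                string compilerOutput;
                if (RunProcess("gcc", $"-Wall -o {exe} {source}", "", out compilerOutput) != 0)
                {
                    Console.WriteLine("REJECTED: does not compile.");
                    return;
                }

                // Step 2: does it behave? Compare actual output with the expected answer.
                string actual;
                RunProcess(exe, "", File.ReadAllText("sample_input.txt"), out actual);
                string expected = File.ReadAllText("expected_output.txt");
                Console.WriteLine(actual.Trim() == expected.Trim()
                    ? "ACCEPTED: ready for human review."
                    : "REJECTED: wrong output for sample input.");
            }
        }

    A real harness would also sandbox and time-limit the student process, but the shape is the same, and the students can run the identical gate themselves before submitting.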


  • Do you know any alternative to NDepend for architects?

    - by ifesdjeen
    Hi! Do you know any software similar to NDepend? I got it just recently and found it very useful; it helped me a lot. But for now I don't have the possibility to buy the professional version. So, is there any alternative (maybe open-source)? Preferably free, but not necessarily; maybe just with a price more fitting for a single developer, not a team. Requirements for this software:
    - Build dependency diagrams
    - Retrieve code metrics
    - Display comment coverage
    (so far)


  • Table Variables: an empirical approach.

    - by Phil Factor
    It isn’t entirely a pleasant experience to publish an article only to have it described on Twitter as ‘Horrible’, and to have it criticized on the MVP forum. When this happened to me in the aftermath of publishing my article on Temporary tables recently, I was taken aback, because these critics were experts whose views I respect. What was my crime? It was, I think, to suggest that, despite the obvious quirks, it was best to use Table Variables as a first choice, and to use local Temporary Tables if you hit problems due to these quirks, or if you were doing complex joins using a large number of rows. What are these quirks? Well, table variables have advantages if they are used sensibly, but this requires some awareness by the developer about the potential hazards and how to avoid them. You can be hit by a badly-performing join involving a table variable. Table Variables are a compromise, and this compromise doesn’t always work out well. Explicit indexes aren’t allowed on Table Variables, so one cannot use covering indexes or non-unique indexes. The query optimizer has to make assumptions about the data rather than using column distribution statistics when a table variable is involved in a join, because there aren’t any column-based distribution statistics on a table variable. It assumes a reasonably even distribution of data, and is likely to have little idea of the number of rows in the table variables that are involved in queries. However complex the heuristics that are used might be in determining the best way of executing a SQL query, and they most certainly are, the Query Optimizer is likely to fail occasionally with table variables, under certain circumstances, and produce a Query Execution Plan that is frightful. The experienced developer or DBA will be on the lookout for this sort of problem. In this blog, I’ll be expanding on some of the tests I used when writing my article to illustrate the quirks, and include a subsequent example supplied by Kevin Boles. A simplified example. We’ll start out by illustrating a simple example that shows some of these characteristics. We’ll create two tables filled with random numbers and then see how many matches we get between the two tables. We’ll forget indexes altogether for this example, and use heaps. We’ll try the same Join with two table variables, two table variables with OPTION (RECOMPILE) in the JOIN clause, and with two temporary tables. It is all a bit jerky because of the granularity of the timing that isn’t actually happening at the millisecond level (I used DATETIME). However, you’ll see that the table variable is outperforming the local temporary table up to 10,000 rows. Actually, even without a use of the OPTION (RECOMPILE) hint, it is doing well. What happens when your table size increases? The table variable is, from around 30,000 rows, locked into a very bad execution plan unless you use OPTION (RECOMPILE) to provide the Query Analyser with a decent estimation of the size of the table. However, if it has the OPTION (RECOMPILE), then it is smokin’. Well, up to 120,000 rows, at least. It is performing better than a Temporary table, and in a good linear fashion. What about mixed table joins, where you are joining a temporary table to a table variable? You’d probably expect that the query analyzer would throw up its hands and produce a bad execution plan as if it were a table variable. After all, it knows nothing about the statistics in one of the tables so how could it do any better? 
    Well, it behaves as if it were doing a recompile, and an explicit recompile adds no value at all. (We just go up to 45,000 rows, since we know the bigger picture now.) Now, if you were new to this, you might be tempted to start drawing conclusions. Beware! We're dealing with a very complex beast: the Query Optimizer. It can come up with surprises. What if we change the query very slightly to insert the results into a Table Variable? We change nothing else and just measure the execution time of the statement as before. Suddenly, the table variable isn't looking so much better, even taking into account the time involved in doing the table insert. OK, if you haven't used OPTION (RECOMPILE) then you're toast. Otherwise, there isn't much in it between the Table variable and the temporary table. The table variable is faster up to 8,000 rows, and then there is not much in it up to 100,000 rows. Past the 8,000-row mark, we've lost the advantage of the table variable's speed. Any general rule you may be formulating has just gone for a walk. What we can conclude from this experiment is that if you join two table variables, and can't use constraints, you're going to need that OPTION (RECOMPILE) hint. Count Dracula and the Horror Join. These tables of integers provide a rather unreal example, so let's try a rather different example, and get stuck into some implicit indexing, by using constraints. What unusual words are contained in the book 'Dracula' by Bram Stoker? Here we get a table of all the common words in the English language (60,387 of them) and put them in a table. We put them in a Table Variable with the word as a primary key, a Table Variable heap and a Table Variable with a primary key. We then take all the distinct words used in the book 'Dracula' (7,558 of them). We then create a table variable and insert into it all those uncommon words that are in 'Dracula', i.e. all the words in Dracula that aren't matched in the list of common words. To do this we use a left outer join, where the right-hand value is null. The results show a huge variation, between the sublime and the gorblimey:
    If both tables contain a Primary Key on the columns we join on, and both are Table Variables, it took 33 ms.
    If one table contains a Primary Key, and the other is a heap, and both are Table Variables, it took 46 ms.
    If both Table Variables use a unique constraint, the query takes 36 ms.
    If neither table contains a Primary Key and both are Table Variables, it took 116,383 ms. Yes, nearly two minutes!!
    If both tables contain a Primary Key, one is a Table Variable and the other is a temporary table, it took 113 ms.
    If one table contains a Primary Key, and both are Temporary Tables, it took 56 ms.
    If both tables are temporary tables and both have primary keys, it took 46 ms.
    Here we see table variables which are joined on their primary key again enjoying a slight performance advantage over temporary tables. Where both tables are table variables and both are heaps, the query suddenly takes nearly two minutes! So what if you have two heaps and you use OPTION (RECOMPILE)? If you take the rogue query and add the hint, then suddenly the query drops its time down to 76 ms. If you add unique indexes, then you've done even better, down to half that time. Here are the text execution plans. So where have we got to? Without drilling down into the minutiae of the execution plans we can begin to create a hypothesis.
    If you are using table variables, and your tables are relatively small, they are faster than temporary tables, but as the number of rows increases you need to do one of two things: either have a primary key on the column you are using to join on, or else use OPTION (RECOMPILE). If you try to execute a query that is a join, and both tables are table variable heaps, you are asking for trouble - well, slow queries - unless you give the hint once the number of rows has risen past a point (30,000 in our first example, but this varies considerably according to context). Kevin's Skew In describing the table size, I used the term 'relatively small'. Kevin Boles produced an interesting case where a single-row table variable produces a very poor execution plan when joined to a very, very skewed table. In the original, pasted into my article as a comment, a column consisted of 100,000 rows in which the key column was one number (1). To this were added eight rows with sequential numbers up to 9. When this was joined to a single-row Table Variable with a key of 2, it produced a bad plan. This problem is unlikely to occur in real usage, and the Query Optimiser team probably never set up a test for it. Actually, the skew can be slightly less extreme than Kevin made it. The following test showed that once the table had 54 sequential rows in it, it adopted exactly the same execution plan as for the temporary table, and then all was well. Undeniably, real data does occasionally cause problems for the performance of joins in Table Variables due to the extreme skew of the distribution. We've all experienced Perfectly Poisonous Table Variables in real live data. As in Kevin's example, indexes merely make matters worse, and the OPTION (RECOMPILE) trick does nothing to help. In this case, there is no option but to use a temporary table. However, one has to note that once the slight de-skew had taken place, the plans were identical across a huge range. Conclusions Where you need to hold intermediate results as part of a process, Table Variables offer a good alternative to temporary tables when used wisely. They can perform faster than a temporary table when the number of rows is not great. For some processing with huge tables, they can perform well when only a clustered index is required, and when the nature of the processing makes an index seek very effective. Table Variables are scoped to the batch or procedure and are unlikely to hang about in TempDB when they are no longer required; they require no explicit cleanup. Where the number of rows in the table is moderate, you can even use them in joins as 'heaps', unindexed. Beware, however: as the number of rows increases, joins on Table Variable heaps can easily become saddled with very poor execution plans, and this must be cured either by adding constraints (UNIQUE or PRIMARY KEY) or by adding the OPTION (RECOMPILE) hint if that is impossible. Occasionally, the way that the data is distributed prevents the efficient use of Table Variables, and this will require using a temporary table instead. Table Variables require some awareness by the developer of the potential hazards and how to avoid them. If you are not prepared to do any performance monitoring of your code or fine-tuning, and just want to pummel out stuff that 'just runs' without considering namby-pamby stuff such as indexes, then stick to Temporary tables.
If you are likely to slosh about large numbers of rows in temporary tables without considering the niceties of processing just what is required and no more, then temporary tables provide a safer and less fragile means-to-an-end for you.


  • JQGrid PDF Export

    - by thanigai
    Originally posted on: http://geekswithblogs.net/thanigai/archive/2013/06/17/jqgrdi-pdf-export.aspx

    JQGrid PDF Export

    The aim of this article is to address PDF export from client-side grid frameworks. The solution is built using ASP.NET MVC 4 and Visual Studio 2012. The article assumes the developer has a fair amount of knowledge of ASP.NET MVC and C#.

    Tools used: Visual Studio 2012, ASP.NET MVC 4, NuGet Package Manager.

    JQGrid is one of the client grid frameworks built on top of the jQuery framework. It helps in building a beautiful grid with paging, sorting and editing options. There are also other features available as extension plugins, and developers can write their own if needed. You can download JQGrid from the JQGrid homepage or as a NuGet package. Installing the package through the Package Manager Console (from the Tools menu select "Library Package Manager" and then "Package Manager Console") will pull down the latest JQGrid package and add the files under the Scripts folder. Once the script is downloaded and referenced in the project, update the bundle config file to add the script reference to the pages. BundleConfig can be found in the App_Start folder in the project structure.

        bundles.Add(new StyleBundle("~/Content/jqgrid").Include("~/Content/ui.jqgrid.css"));
        bundles.Add(new ScriptBundle("~/bundles/jquerygrid").Include("~/Scripts/jqGrid/jquery.jqGrid*"));

    Once the configs are added, refer to the bundles in Views/Shared/LayoutPage.cshtml. Add the following line to the head section of the page:

        @Styles.Render("~/Content/jqgrid")

    Add the following lines to the end of the page, before the closing html tags:

        @Scripts.Render("~/bundles/jquery")
        @Scripts.Render("~/bundles/jqueryui")
        @Scripts.Render("~/bundles/jquerygrid")

    That's all to be done from the view perspective. Once these steps are done, the developer can start coding against JQGrid. In this example we will modify the HomeController for the demo. The Index action will be the default action; we will add an argument to it, a nullable bool, just to mark a PDF request. In Index.cshtml we will add a table tag with the id "gridTable", which we will use for the grid. Since JQGrid is an extension of jQuery, we initialize the grid settings in the script section of the page. This script section is placed at the end of the page, just below the bundle references for jQuery and jQueryUI, to improve performance (one of the improvement factors from Yahoo's "why slow" guidance).

        <table id="gridTable" class="scroll"></table>
        <input type="button" value="Export PDF" onclick="exportPDF();" />

        @section scripts {
        <script type="text/javascript">
            $(document).ready(function () {
                $("#gridTable").jqGrid({
                    datatype: "json",
                    url: '@Url.Action("GetCustomerDetails")',
                    mtype: 'GET',
                    colNames: ["CustomerID", "CustomerName", "Location", "PrimaryBusiness"],
                    colModel: [
                        { name: "CustomerID", width: 40, index: "CustomerID", align: "center" },
                        { name: "CustomerName", width: 40, index: "CustomerName", align: "center" },
                        { name: "Location", width: 40, index: "Location", align: "center" },
                        { name: "PrimaryBusiness", width: 40, index: "PrimaryBusiness", align: "center" },
                    ],
                    height: 250,
                    autowidth: true,
                    sortorder: "asc",
                    rowNum: 10,
                    rowList: [5, 10, 15, 20],
                    sortname: "CustomerID",
                    viewrecords: true
                });
            });

            function exportPDF() {
                document.location = '@Url.Action("Index")?pdf=true';
            }
        </script>
        }

    The exportPDF method just sets the document location to the Index action method with the pdf Boolean set to true, to mark the request as a PDF download. An in-memory list collection is used for demo purposes. The GetCustomerDetails method is the server-side action method that provides the data as a JSON list:

        [HttpGet]
        public JsonResult GetCustomerDetails()
        {
            var result = new
            {
                total = 1,
                page = 1,
                records = customerList.Count(),
                rows = (customerList.Select(e => new
                {
                    id = e.CustomerID,
                    cell = new string[]
                    {
                        e.CustomerID.ToString(),
                        e.CustomerName,
                        e.Location,
                        e.PrimaryBusiness
                    }
                })).ToArray()
            };
            return Json(result, JsonRequestBehavior.AllowGet);
        }

    JQGrid expects the response data from the server in a certain format, and the server method shown above takes care of formatting the response so that JQGrid understands the data properly. The response data should contain the total pages, the current page, the full record count, and the rows of data, each with an id and the remaining columns as a string array. The response is built using an anonymous object and is sent as an MVC JsonResult. Since we are using HTTP GET, it's better to mark the action with the HttpGet attribute and set the JSON request behavior to AllowGet. The in-memory list is initialized in the HomeController constructor:

        public class HomeController : Controller
        {
            private readonly IList<CustomerViewModel> customerList;

            public HomeController()
            {
                customerList = new List<CustomerViewModel>()
                {
                    new CustomerViewModel { CustomerID = 100, CustomerName = "Sundar", Location = "Chennai", PrimaryBusiness = "Teaching" },
                    new CustomerViewModel { CustomerID = 101, CustomerName = "Sudhagar", Location = "Chennai", PrimaryBusiness = "Software" },
                    new CustomerViewModel { CustomerID = 102, CustomerName = "Thivagar", Location = "China", PrimaryBusiness = "SAP" },
                };
            }

            public ActionResult Index(bool? pdf)
            {
                if (!pdf.HasValue)
                {
                    return View(customerList);
                }
                else
                {
                    string filePath = Server.MapPath("Content") + "Sample.pdf";
                    ExportPDF(customerList, new string[] { "CustomerID", "CustomerName", "Location", "PrimaryBusiness" }, filePath);
                    return File(filePath, "application/pdf", "list.pdf");
                }
            }
        }

    The Index action method has a Boolean argument named "pdf", used to indicate a PDF download. When the application starts, this method is hit first for the initial page request. For the PDF operation a file name is generated and passed to the ExportPDF method, which takes care of generating the PDF from the data source. The ExportPDF method is listed below.

        private static void ExportPDF<TSource>(IList<TSource> customerList, string[] columns, string filePath)
        {
            Font headerFont = FontFactory.GetFont("Verdana", 10, Color.WHITE);
            Font rowfont = FontFactory.GetFont("Verdana", 10, Color.BLUE);
            Document document = new Document(PageSize.A4);
            PdfWriter writer = PdfWriter.GetInstance(document, new FileStream(filePath, FileMode.OpenOrCreate));
            document.Open();
            PdfPTable table = new PdfPTable(columns.Length);
            foreach (var column in columns)
            {
                PdfPCell cell = new PdfPCell(new Phrase(column, headerFont));
                cell.BackgroundColor = Color.BLACK;
                table.AddCell(cell);
            }
            foreach (var item in customerList)
            {
                foreach (var column in columns)
                {
                    string value = item.GetType().GetProperty(column).GetValue(item).ToString();
                    PdfPCell cell5 = new PdfPCell(new Phrase(value, rowfont));
                    table.AddCell(cell5);
                }
            }
            document.Add(table);
            document.Close();
        }

    iTextSharp is one of the pioneers in PDF export. It's an open-source library readily available as a NuGet package; the NuGet command will pull down the latest available library. I am using version 4.1.2.0; the latest version may have changed. There are three main things in this library.

    Document: the class which takes care of creating the document sheet with a particular size. We have used A4, and there is also an option to define a rectangle size. This document instance is used in the subsequent calls for reference.

    PdfWriter: takes the file stream and the document as references. This class enables the document class to generate the PDF content and save it to a file.

    Font: using the Font class the developer can control the font features. Since I need a nice-looking font I am using Verdana.

    Following this, PdfPTable and PdfPCell are used for generating the normal table layout. We created two fonts, for the header and the rows:

        Font headerFont = FontFactory.GetFont("Verdana", 10, Color.WHITE);
        Font rowfont = FontFactory.GetFont("Verdana", 10, Color.BLUE);

    We receive the header columns as a string array. The columns array is looped over and the header row is generated using the header font:

        PdfWriter writer = PdfWriter.GetInstance(document, new FileStream(filePath, FileMode.OpenOrCreate));
        document.Open();
        PdfPTable table = new PdfPTable(columns.Length);
        foreach (var column in columns)
        {
            PdfPCell cell = new PdfPCell(new Phrase(column, headerFont));
            cell.BackgroundColor = Color.BLACK;
            table.AddCell(cell);
        }

    Then reflection is used to generate the row-wise details and fill the grid:

        foreach (var item in customerList)
        {
            foreach (var column in columns)
            {
                string value = item.GetType().GetProperty(column).GetValue(item).ToString();
                PdfPCell cell5 = new PdfPCell(new Phrase(value, rowfont));
                table.AddCell(cell5);
            }
        }
        document.Add(table);
        document.Close();

    Once the process is done, the PDF table is added to the document and the document is closed to write all the changes to the given file path. Control then moves back to the controller, which takes care of sending the response as a file result with a file name. If the file name is not given, the PDF will open in the same page; otherwise a popup will open asking whether to save or open the file.

        return File(filePath, "application/pdf", "list.pdf");

    The final result screen shows the PDF file opened to display the output.

    Conclusion: This is how PDF export is done for JQGrid. The problem addressed here is that client-side grid frameworks don't support PDF export themselves; in that situation it's better to have fine-grained control over the data and the generated PDF. iTextSharp helped us achieve that goal.


  • CodePlex Daily Summary for Wednesday, August 06, 2014

    Popular Releases

    Recaptcha for .NET: Recaptcha for .NET v1.6.0: What's new? Bug fixes; optimized code.
    Math.NET Numerics: Math.NET Numerics v3.2.0: Linear Algebra: Vector.Map2 (map2 in F#), storage-optimized. Linear Algebra: fix RemoveColumn/Row early index bound check (was not strict enough). Statistics: Entropy ~Jeff Mastry. Interpolation: use Array.BinarySearch instead of local implementation ~Candy Chiu. Resources: fix a corrupted exception message string. Portable Build: support .Net 4.0 as well by using profile 328 instead of 344. .Net 3.5: F# extensions now support .Net 3.5 as well. .Net 3.5: NuGet package now contains pro...
    Lib.Web.Mvc & Yet another developer blog: Lib.Web.Mvc 6.4.2: Lib.Web.Mvc is a library which contains some helper classes for ASP.NET MVC such as a strongly typed jqGrid helper, XSL transformation HtmlHelper/ActionResult, FileResult with range request support, custom attributes and more. Release contains: Lib.Web.Mvc.dll with xml documentation file; standalone documentation in chm file and change log; library source code. Sample application for the strongly typed jqGrid helper is available here. Sample application for XSL transformation HtmlHelper/ActionRe...
    Virto Commerce Enterprise Open Source eCommerce Platform (asp.net mvc): Virto Commerce 1.11: Virto Commerce Community Edition version 1.11. To install the SDK package, please refer to the SDK getting started documentation. To configure the source code package, please refer to the source code getting started documentation. This release includes many bug fixes and minor improvements. More details about this release can be found on our blog at http://blog.virtocommerce.com.
    Json.NET: Json.NET 6.0 Release 4: New feature - Added Merge to LINQ to JSON. New feature - Added JValue.CreateNull and JValue.CreateUndefined. New feature - Added Windows Phone 8.1 support to .NET 4.0 portable assembly. New feature - Added OverrideCreator to JsonObjectContract. New feature - Added support for overriding the creation of interfaces and abstract types. New feature - Added support for reading UUID BSON binary values as a Guid. New feature - Added MetadataPropertyHandling.Ignore. New feature - Improv...
    VidCoder: 1.5.24 Beta: Added NL-Means denoiser. Updated HandBrake core to SVN 6254. Added extra error handling to DVD player code to avoid a crash when the player was moved.
    PowerShell App Deployment Toolkit: PowerShell App Deployment Toolkit v3.1.5: Added Send-Keys function to send a sequence of keys to an application window (Thanks to mmashwani). Added 3 optimization/stability improvements to Execute-Process following MS best practice (Thanks to mmashwani). Fixed issue where Execute-MSI did not use the value from the XML file for uninstall but instead ran all uninstalls silently by default. Fixed error on 1641 exit code (should be a success like 3010). Fixed issue with error handling in Invoke-SCCMTask. Fixed issue with deferral dates where th...
    AutoUpdater.NET (auto update library for VB.NET and C# developers): AutoUpdater.NET 1.3: Fixed problem in DownloadUpdateDialog where download continues even if you close the dialog. Added support for a new url field for 64-bit application setup; AutoUpdater.NET will decide which download url to use by looking at the value of IntPtr.Size. Added German translation provided by Rene Kannegiesser. Developers can now handle update logic themselves using the event suggested by ricorx7. Added Italian translation provided by Gianluca Mariani. Fixed bug that prevents Application from exiti...
    SEToolbox: SEToolbox 01.041.012 Release 1: Added voxel material textures to read in with mods. Fixed missing texture replacements for mods. Fixed rounding issue in raytrace code. Fixed repair issue with corrupt checkpoint file. Fixed issue with updated SE binaries 01.041.012 using new container configuration.
    Magick.NET: Magick.NET 6.8.9.601: Magick.NET linked with ImageMagick 6.8.9.6. Breaking changes: changed arguments for the Map method of MagickImage; QuantizeSettings uses Riemersma by default.
    Version Control Guide (ex-Branching & Merging): v3.0 - Visual Studio 2013 (Spanish): Important: This download has been created using ALM Ranger bits by the community, for the community. Although ALM Rangers were involved in the process, the content has not been through their quality review. Please post your candid feedback and improvement suggestions to the Community tab of this CodePlex project. Translated by: Juan María Laó Ramos. See ¿Habla Español? … Testing Unitario con Microsoft® Fakes http://blogs.msdn.com/b/willy-peter_schaub/archive/2013/08/22/191-habla-espa-241-ol...
    Windows forms generator: Windows forms generator beta 2: Second beta release of Windows forms generator. Has some basic configuration and can handle basic types. Supported types: int, string, double, float, long, decimal, short, bool, List<E>, Vector2 (Microsoft.Xna.Framework, new in beta 2). Fixed bugs in beta 2: problem with nested classes and list. Known bugs: form height sometimes gets weird (fix it by using a form attribute on the class); can't create a form with just a list.
    SharePoint Real Time Log Viewer: SharePoint Real Time Log Viewer - Source: Source code.
    Modern Audio Tagger: Modern Audio Tagger 1.0.0.0: Modern Audio Tagger is born.
    QuickMon: Version 3.20: Added a 'Directory Services Query' collector agent. This collector allows for querying Active Directory using simple DirectorySearcher queries. Note: the current implementation only supports 'LDAP://' related path queries. In a future release support for other 'Providers' will be added as well.
    Grunndatakvalitet: Initial working: Show Altinn metadata in Excel. To get a live list you need to run the SQL script on a server and update the connection string in Excel.
    Multiple Threads TCP Server: Project: this project is based on VS 2013, .NET Framework 4.0; you can open it with VS 2010 or later.
    Aricie Shared: Aricie.Shared Version 1.8.00: Version 1.8.0 - Release Notes. New: Expression Builder to design Flee Expressions. New: Cryptographic helpers and configuration classes. Improvement: many fixes and improvements with the property editor. Improvement: Token Replace Property explorer now has a restricted mode for additional security. Improvement: better variables, types and object manipulation. Fixed: smart file and Flee bugs. Fixed: removed exception while trying to read unsupported files. Improvement: several performance twe...
    DbEntry.Net (Leafing Framework): DbEntry.Net 4.2: DbEntry.Net is a lightweight Object Relational Mapping (ORM) database access component for .NET 4.0+. It has a clear and easy programming interface for ORM and SQL directly, and supports Access, SQL Server, MySQL, SQLite, Firebird, PostgreSQL and Oracle. It also provides a Ruby on Rails style MVC framework, an ASP.NET DataSource and a simple IoC. DbEntry.Net.v4.2.Setup.zip includes the setup package. DbEntry.Net.v4.2.Src.zip includes source files and unit tests. DbEntry.Net.v4.2.Samples.zip ...
    Drop and Create indexes SSIS Task: Assembly and Setup files: This zip contains the task assembly and setup executables. Click here for the Installation Guide.

    New Projects

    Destiny of an Emperor: game cocos2d-x sanguo
    GameCastleVania: Game th?y dung
    Orchard Application Host: The Orchard Application Host is a portable environment that lets you run your application (not just web apps) inside Orchard (http://orchardproject.net).
    Orchard Application Host Sample: Sample project for the Orchard Application Host (https://orchardapphost.codeplex.com/).
    Orchard SSEO Module: Orchard Social & Search Engine Optimization Module
    Phoenix Office 365 User Group Website: The Phoenix Office 365 User Group meets once per month in the Phoenix area and facilitates networking and learning around the Office 365 platform.
    Thingtory (Inventory of Things): The Thingtory Framework allows collecting data (inventory) from all kinds of "things" (a computer, a phone, or your fridge).
    xRM CI Framework: The xRM Continuous Integration (CI) Framework is a set of tools that makes it easy and quick to automate the builds and deployment of your CRM components.


  • Best practices for managing limited client licenses/login

    - by MicSim
    I have a multi-user software solution (containing different applications, i.e. EXEs) that should allow only a limited number of concurrent users. It's designed to run on an intranet. I don't have a really good, satisfactory solution to the problem of counting the client licences yet. The key requirements are:
    - Multiple instances (starts) of the same application (= process) should count as only one client licence
    - Starting different applications of the software solution should also count as only one (the same) client licence
    - An application crash should not lead to orphaned used licences
    - The above should also work in Terminal Server environments (all clients have the same IP, but different install folders)
    I'm looking for established patterns, solutions and tips for managing used client licences. Specific hints for the above situation are also welcome.
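    One established pattern that meets all four requirements is a server-side seat lease with a client heartbeat: the licence is keyed on machine + user (so any number of processes, and any mix of your EXEs, counts as one seat, and Terminal Server sessions stay distinct even though they share an IP), and a seat whose heartbeats stop simply expires, so crashes cannot orphan licences. A minimal sketch of the server side in C#; the class and field names are illustrative:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        // Server-side licence table: one lease per (machine, user), regardless
        // of how many processes or distinct applications that user starts.
        class LicenseServer
        {
            private readonly int maxSeats;
            private readonly TimeSpan leaseTime = TimeSpan.FromSeconds(90);
            private readonly Dictionary<string, DateTime> leases = new Dictionary<string, DateTime>();
            private readonly object sync = new object();

            public LicenseServer(int maxSeats) { this.maxSeats = maxSeats; }

            // Called by every client at startup and then periodically (e.g. every 30 s).
            public bool AcquireOrRenew(string machineName, string userName)
            {
                string key = machineName + "\\" + userName;  // one seat per machine+user
                lock (sync)
                {
                    // Drop leases whose heartbeats stopped (crashed or killed clients).
                    foreach (var expired in leases.Where(l => l.Value < DateTime.UtcNow)
                                                  .Select(l => l.Key).ToList())
                        leases.Remove(expired);

                    if (leases.ContainsKey(key) || leases.Count < maxSeats)
                    {
                        leases[key] = DateTime.UtcNow + leaseTime;  // grant or renew
                        return true;
                    }
                    return false;  // all seats in use
                }
            }
        }

    Each client would call AcquireOrRenew at startup and then from a timer, passing Environment.MachineName and Environment.UserName; a crashed client misses its renewals and frees its seat within the lease time, with no cleanup code needed anywhere.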


  • Stress Test tool for Password Protected Website

    - by Jason
    We need to run a stress test on a password-protected section of a website we host. What tool (paid or free) would be best for us to use for this? We'd like to be able to create several 'scripts' and then have the stress test simulate X number of users. Each script will log in as a specific user and then click on some links and submit forms to simulate an actual user. Ideally the software would also create some nice data exports/charts. The server is a Linux web server, but we could run the test from Linux or Windows, so software that runs on either is fine.
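    Dedicated tools cover this well (Apache JMeter is a common free choice that runs on both Linux and Windows and supports scripted logins and charting), but for a sense of what such a 'script' boils down to, here is a hedged sketch in C#; the URLs and form-field names are placeholders for your site's own:

        using System;
        using System.Collections.Generic;
        using System.Net;
        using System.Net.Http;
        using System.Threading.Tasks;

        // One "script": log in as a specific user, then browse a few pages,
        // the way a real user would. Run N copies concurrently to simulate N users.
        class LoadScript
        {
            const string BaseUrl = "https://example.com";   // placeholder

            static async Task RunUserAsync(string user, string password)
            {
                var handler = new HttpClientHandler { CookieContainer = new CookieContainer() };
                using (var client = new HttpClient(handler) { BaseAddress = new Uri(BaseUrl) })
                {
                    // Step 1: log in; the session cookie is kept by the CookieContainer.
                    var form = new FormUrlEncodedContent(new Dictionary<string, string>
                    {
                        ["username"] = user,
                        ["password"] = password
                    });
                    var login = await client.PostAsync("/login", form);
                    login.EnsureSuccessStatusCode();

                    // Step 2: click around like a real user, timing each request.
                    foreach (var path in new[] { "/account", "/reports", "/search?q=test" })
                    {
                        var started = DateTime.UtcNow;
                        var response = await client.GetAsync(path);
                        Console.WriteLine($"{user} {path} {(int)response.StatusCode} " +
                                          $"{(DateTime.UtcNow - started).TotalMilliseconds:F0} ms");
                    }
                }
            }

            static async Task Main()
            {
                // Simulate 50 concurrent users running the same script.
                var tasks = new List<Task>();
                for (int i = 0; i < 50; i++)
                    tasks.Add(RunUserAsync($"testuser{i}", "secret"));
                await Task.WhenAll(tasks);
            }
        }

    Each simulated user keeps its own cookie container, so every login behaves like a separate browser session; a real tool adds ramp-up, think times and the charting on top of this.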


  • WAN Optimization Resources

    - by Paul
    I'm looking for resources on writing software to do WAN optimization. I've googled this and searched SO, but there doesn't seem to be much around. The only things I've found are high-level articles in tech magazines, and info for network admins on how to make use of existing WAN optimization products. I'm looking for something on the techniques etc. used to write WAN optimization software. It seems to be a dark art, and the people who know how to do it guard their secrets closely. Any suggestions?
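    For what it's worth, the published techniques mostly boil down to three layers: stream compression, data deduplication (never resend a byte range the other side already has; send a short reference instead) and protocol/latency tricks such as read-ahead and local acknowledgement. A toy sketch of the deduplication layer, assuming both endpoints keep an identical block cache:

        using System;
        using System.Collections.Generic;
        using System.Security.Cryptography;

        // Toy block-level deduplication: the sender replaces blocks it has already
        // sent with a short hash reference. The receiver keeps the same cache, so
        // it can expand references back into the original bytes.
        static class Dedup
        {
            const int BlockSize = 4096;

            // Encodes a buffer as a sequence of (isReference, payload) records.
            public static List<Tuple<bool, byte[]>> Encode(byte[] data, HashSet<string> sentBlocks)
            {
                var output = new List<Tuple<bool, byte[]>>();
                using (var sha = SHA256.Create())
                {
                    for (int offset = 0; offset < data.Length; offset += BlockSize)
                    {
                        int len = Math.Min(BlockSize, data.Length - offset);
                        byte[] block = new byte[len];
                        Array.Copy(data, offset, block, 0, len);

                        byte[] hash = sha.ComputeHash(block);
                        string key = Convert.ToBase64String(hash);

                        if (sentBlocks.Contains(key))
                            output.Add(Tuple.Create(true, hash));     // 32 bytes instead of 4096
                        else
                        {
                            sentBlocks.Add(key);
                            output.Add(Tuple.Create(false, block));   // first sighting: send literal
                        }
                    }
                }
                return output;
            }
        }

    Real implementations use content-defined chunking (rolling hashes, as in the rsync algorithm) rather than fixed 4 KB offsets, so an insertion near the start of a stream doesn't shift every later block. The rsync/rdiff papers and the literature on 'data deduplication' and 'delta encoding' are the closest thing to a written-down version of this dark art.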


  • .NET StandardInput Sending Modifiers

    - by Paul Oakham
    We have some legacy software which depends on sending keystrokes to a DOS window and then scraping the screen. I am trying to re-create the software by redirecting the input and output streams of the process directly to my application. This part I have managed fine using:

        _Process = new Process();
        _Process.StartInfo.FileName = APPLICATION;
        _Process.StartInfo.RedirectStandardOutput = true;
        _Process.StartInfo.RedirectStandardInput = true;
        _Process.StartInfo.RedirectStandardError = true;
        _Process.StartInfo.UseShellExecute = false;
        _Process.StartInfo.CreateNoWindow = true;
        _Process.OutputDataReceived += new DataReceivedEventHandler(_Process_OutputDataReceived);
        _Process.ErrorDataReceived += new DataReceivedEventHandler(_Process_ErrorDataReceived);

    My problem is that I need to send some command modifiers such as Ctrl, Alt and Space, as well as F1-F12, to this process, but I am unsure how. I can send basic text and I receive responses fine. I just need to emulate these modifiers.
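    A caution on the modifier question: a redirected StandardInput is a plain character stream, so only keystrokes that map to characters can travel through it. Ctrl+letter combinations have control-character equivalents, but bare Ctrl, Alt and F1-F12 are key events with no character representation at all. A hedged sketch of both halves:

        // Characters with a control-code representation can go through stdin:
        _Process.Start();
        _Process.BeginOutputReadLine();
        var stdin = _Process.StandardInput;
        stdin.Write("dir\r\n");     // plain text plus Enter
        stdin.Write('\x03');        // Ctrl+C is a real character (ETX)...
        stdin.Flush();

        // ...but Alt, F1-F12 and bare Ctrl are key *events*, not characters, so
        // they cannot travel through a redirected stream at all. To emulate them
        // you have to inject input at the keyboard level instead, e.g.:
        //   System.Windows.Forms.SendKeys.SendWait("%{F4}");   // Alt+F4 to the focused window
        // or P/Invoke the Win32 SendInput/WriteConsoleInput APIs against the
        // target's window or console handle.

    Note the SendKeys route requires the target to have a real, focusable window, which conflicts with CreateNoWindow = true, so screen-scraping setups like this usually end up choosing one model or the other.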


  • Handling User Authentication in C#.NET?

    - by Daniel
    I am new to .NET and don't have much experience in programming. What is the standard way of handling user authentication in .NET in the following situation?
    1. In Process A, the user inputs an ID/password.
    2. Process A sends the ID/password to Process B over a nonsecure public channel.
    3. Process B authenticates the user with the received ID/password.
    What are some of the standard cryptographic algorithms I can use in the above model? The users (customers that bought my company's software) will be running the software (Process A) locally on their computers (connected to the internet). I need to authenticate the users so that only registered users can run the program. Thank you!
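    A standard shape for this: protect the channel with TLS rather than hand-rolled crypto (for example, make Process B an HTTPS endpoint), and on the server store only a salted, iterated hash of each password, never the password itself. A minimal sketch of the server-side piece using the framework's built-in PBKDF2 implementation (Rfc2898DeriveBytes); the iteration count and sizes here are illustrative:

        using System;
        using System.Linq;
        using System.Security.Cryptography;

        static class PasswordHasher
        {
            const int SaltSize = 16, HashSize = 32, Iterations = 100000;

            // Called once at registration: returns the salt and hash to store for the user.
            public static void HashPassword(string password, out byte[] salt, out byte[] hash)
            {
                salt = new byte[SaltSize];
                using (var rng = RandomNumberGenerator.Create())
                    rng.GetBytes(salt);
                using (var kdf = new Rfc2898DeriveBytes(password, salt, Iterations))
                    hash = kdf.GetBytes(HashSize);
            }

            // Called by Process B at login: recompute and compare; the stored
            // values alone are useless to an attacker who steals the database.
            public static bool Verify(string password, byte[] salt, byte[] expectedHash)
            {
                using (var kdf = new Rfc2898DeriveBytes(password, salt, Iterations))
                {
                    byte[] actual = kdf.GetBytes(expectedHash.Length);
                    return actual.SequenceEqual(expectedHash);  // a constant-time compare is better in production
                }
            }
        }

    If TLS really isn't available, look at challenge-response schemes (for example an HMAC computed over a server-supplied nonce) so the password itself never crosses the wire; a plain hash without salt and iteration is not enough.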


  • Code Reuse is (Damn) Hard

    - by James Michael Hare
    Being a development team lead, the task of interviewing new candidates was part of my job. Like any typical interview, we started with some easy questions to get them warmed up and help calm their nerves before hitting the hard stuff. One of those easier questions was almost always: "Name some benefits of object-oriented development." Nearly every time, the candidate would chime in with a plethora of canned answers which typically included: "it helps ease code reuse." Of course, this is a gross oversimplification. Tools only ease reuse; it's developers that ultimately can cause code to be reusable or not, regardless of the language or methodology. But it did get me thinking... we always used to say that as part of our mantra as to why Object-Oriented Programming was so great. With polymorphism, inheritance, encapsulation, etc. we in essence set up the concepts to help facilitate reuse as much as possible. And yes, as a developer now of many years, I unquestionably held that belief for ages before it really struck me how my views on reuse have jaded over the years. In fact, in many ways Agile rightly eschews reuse as taking a backseat to developing what's needed for the here and now. It used to be I was in complete opposition to that view, but more and more I've come to see the logic in it. Too many times I've seen developers (myself included) get lost in design paralysis trying to come up with the perfect abstraction that would stand for all time. Nearly without fail, all of these pieces of code become obsolete in a matter of months or years.

    It's not that I don't like reuse – it's just that reuse is hard. In fact, reuse is DAMN hard. Many times it is just a distraction that eats up architect and developer time, and worse yet can be counter-productive and force wrong decisions. Now don't get me wrong, I love the idea of reusable code when it makes sense. These are the few cases where you are designing something that is inherently reusable. The problem is, most business-class code is inherently unfit for reuse! Furthermore, the code that is reusable will often fail to be reused if you don't have the proper framework in place for effective reuse, one that includes standardized versioning, building, releasing, and documenting of the components. That should always be standard across the board when promoting reusable code. All of this is hard, and it should only be done when you have code that is truly reusable, or you will be exerting a large amount of development effort for very little bang for your buck.

    But my goal here is not to get into how to reuse (that is a topic unto itself) but what should be reused. First, let's look at an extension method. There are many times when I want to kick off a thread to handle a task, and then when I want to rein that thread in, of course I want to do a Join on it. But what if I only want to wait a limited amount of time and then Abort? Well, I could of course write that logic out by hand each time, but it seemed like a great extension method:

        public static class ThreadExtensions
        {
            public static bool JoinOrAbort(this Thread thread, TimeSpan timeToWait)
            {
                bool isJoined = false;

                if (thread != null)
                {
                    isJoined = thread.Join(timeToWait);

                    if (!isJoined)
                    {
                        thread.Abort();
                    }
                }
                return isJoined;
            }
        }
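    For context, here is a quick usage sketch of the extension method above (the worker method is hypothetical):

        // Give a worker thread five seconds to finish cleanly, then force it down.
        var worker = new Thread(ProcessQueue);   // ProcessQueue: any ThreadStart-compatible method
        worker.Start();
        // ... do other work while it runs ...
        if (!worker.JoinOrAbort(TimeSpan.FromSeconds(5)))
        {
            Console.WriteLine("Worker did not stop in time and was aborted.");
        }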
    When I look at this code, I can immediately see things that jump out at me as reasons why this code is very reusable. Some of them are standard OO principles, and some are kind-of home-grown litmus tests:

    Single Responsibility Principle (SRP) – The only reason this extension method need change is if the Thread class itself changes (one responsibility).
    Stable Dependencies Principle (SDP) – This method only depends on classes that are more stable than it is (System.Threading.Thread), and in itself is very stable, hence other classes may safely depend on it. It is also not dependent on any business domain, and thus isn't subject to changes as the business itself changes.
    Open-Closed Principle (OCP) – This class is inherently closed to change.
    Small and Stable Problem Domain – This method only cares about System.Threading.Thread.
    All-or-None Usage – A user of a reusable class should want the functionality of that class, not parts of that functionality. That's not to say they must use every method, but they shouldn't be using a method just to get half of its result.
    Cost of Reuse vs. Cost to Recreate – Since this class is highly stable and minimally complex, we can offer it up for reuse very cheaply by promoting it as "ready-to-go" and already unit tested (important!) and available through a standard release cycle (very important!).

    Okay, all seems good there. Now let's look at an entity and DAO. I don't know about you all, but there have been times I've been in organizations that get the grand idea that all DAOs and entities should be standardized and shared. While this may work for small or static organizations, it's near ludicrous for anything large or volatile.

        namespace Shared.Entities
        {
            public class Account
            {
                public int Id { get; set; }
                public string Name { get; set; }
                public Address HomeAddress { get; set; }
                public int Age { get; set; }
                public DateTime LastUsed { get; set; }
                // etc, etc, etc...
            }
        }

        ...

        namespace Shared.DataAccess
        {
            public class AccountDao
            {
                public Account FindAccount(int id)
                {
                    // dao logic to query and return account
                }

                ...
            }
        }

    Now to be fair, I'm not saying there doesn't exist an organization where some entities may be extremely static and unchanging. But at best such entities and DAOs will be problematic cases of reuse. Let's examine those same tests:

    Single Responsibility Principle (SRP) – The reasons to change for these classes will be strongly dependent on what the definition of an account is, which can change over time and may have multiple influences depending on the number of systems an account can cover.
    Stable Dependencies Principle (SDP) – This method depends on the data model beneath itself, which is also largely dependent on the business definition of an account, which can be inherently very unstable.
    Open-Closed Principle (OCP) – This class is not really closed for modification. Every time the account definition changes, you'd need to modify this class.
    Small and Stable Problem Domain – The definition of an account is inherently unstable and in fact may be very large. What if you are designing a system that aggregates account information from several sources?
    All-or-None Usage – What if your view of the account encompasses data from 3 different sources but you only care about one of those sources or one piece of data? Should you have to take the hit of looking up all the other data? On the other hand, should you have ten different methods returning portions of data in chunks people tend to ask for? Neither is really a great solution.
It's no accident that my case for reuse was a utility class and my case against reuse was an entity/DAO. In general, the smaller and more stable an abstraction is, the higher its level of reuse. When I became the lead of the Shared Components Committee at my workplace, one of the original goals we looked at satisfying was to find (or create), version, release, and promote a shared library of common utility classes, frameworks, and data access objects. Now, of course, many of you will point to NHibernate and Entity Framework for the latter, but we were looking at larger, macro collections of data that span multiple data sources of varying types (databases, web services, etc.).

As we got deeper and deeper into the details of how to manage and release these items, it quickly became apparent that while the case for reuse was typically a slam dunk for utilities and frameworks, the data access objects just didn't “smell” right. We ended up having session after session of design meetings trying to find the right way to share those data access components. When someone asked me why it was taking so long to iron out the shared entities, my response was quite simple: “Reuse is hard...” And that's when I realized that while reuse is an awesome goal, and we should strive to make code maintainable, you often end up creating far more work for yourself than necessary by trying to force code to be reusable that inherently isn't.

Think about the times you've worked at a company where people fight in design sessions over the best way to implement a class to make it maximally reusable, extensible, and any-other-buzzword-able. Then think about how quickly that design became obsolete. Many times I have set out on a project thinking, “yes, this is the best design, I can extend it easily!” only to find the business requirements change COMPLETELY, in a way that renders the design invalid. Code, in general, tends to rust and age over time. As such, writing reusable code can be difficult, often ends up being a futile exercise, and worse yet sometimes makes the code harder to maintain because it obfuscates the design in the name of extensibility or reusability.

So what do I think are reusable components?

Generic Utility classes – small classes that assist in a task and have no business context whatsoever.

Implementation Abstraction Frameworks – home-grown frameworks that isolate you from changes in the third-party products you depend on, like a messaging abstraction layer for publishing/subscribing that is independent of whether you use JMS, MSMQ, etc. (a sketch follows this list).

Simplification and Uniformity Frameworks – to some extent similar to an abstraction framework, but here there may be a single chosen provider plus a development-shop mandate to perform certain complex tasks in a certain way, or perhaps to simplify and dumb down a complex task for the average developer (such as implementing the shop's standard method of encryption).
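To make that messaging example concrete, here is the shape such a layer might take (an illustrative sketch only – the interface and class names are invented, not from any real product):

using System;

namespace Shared.Messaging
{
    // Application code depends only on this small, stable abstraction.
    public interface IMessagePublisher
    {
        void Publish(string topic, string message);
    }

    // One adapter per transport; swapping MSMQ for JMS (or anything else)
    // means writing a new adapter, not touching any calling code.
    public class MsmqPublisher : IMessagePublisher
    {
        public void Publish(string topic, string message)
        {
            // Would wrap the System.Messaging.MessageQueue calls here.
            throw new NotImplementedException();
        }
    }
}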
And what are less reusable?

Application and Business Layers – these tend to fluctuate a lot as requirements change and new features are added, so they make for unstable dependencies. They may be reused across applications, but they are also very volatile.

Entities and Data Access Layers – these tend to be tuned to the scope of the application, so reusing them is hard unless the abstraction is very stable.

So what's the big lesson? Reuse is hard. In fact, it's damn hard. And much of the time I'm not convinced we should focus on it too hard. If you're designing a utility or framework, then by all means design it for reuse – but you must also set down a good versioning, release, and documentation process to maximize your chances. For anything else, design it to be maintainable and extendable, but don't waste effort on reusability for something that will most likely be obsolete in a year or two anyway.

    Read the article

  • Subclipse plugin doesn't work in Eclipse?

    - by blackicecube
Hi, even though there were no errors when installing Subclipse in Eclipse, I don't see the SVN perspective at all. I have tried with "Eclipse Classic 3.5.1" and with "Eclipse for PHP Developers". After downloading and unzipping the packages, I used Eclipse's "Install Software" mechanism to install Subclipse 1.6.x, following the steps described here: http://www3.math.tu-berlin.de/jreality/mediawiki/index.php/Subclipse_installation_in_eclipse_galileo. But after Eclipse restarts, I don't get any SVN Repository perspective. I have uninstalled and reinstalled all the software components many times now. Finally, after 3 hours of trying, I am giving up. Does anyone have a hint about what I am missing? Thanks! Peter

    Read the article

  • XNA, WPF light show visualizer

    - by Bgnt44
Hi all, I'm developing software to control light shows (through the DMX protocol). I use C# and WPF for my main application (.NET 4.0). To help people preview their show, I would like to build a live 3D visualizer. At first I thought I could use WPF 3D for the visualizer, but I need to work with light. My main application should send properties (beam angle, orientation (X,Y), position (X,Y), brush (color, shape, effect)) to the 3D visualizer, and I would also like to be able to move lights (their position in the scene) with the mouse during execution and get the values back in return. So, is XNA the easiest way to do that? Can you help me with the following:

Generating light (orientation, and a bitmap acting like a filter in front of the light)

Dynamically moving an object with the mouse and getting its position in return

Dynamically adding or removing fixtures

All advice, samples, and examples are very welcome. I don't expect a perfect result on the first try, but I need to understand the main concepts for doing this. Thank you!
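Edit: for the mouse part, my rough idea (just a guess on my side – the method and variable names below are placeholders) is to unproject the mouse position into a pick ray and test it against each light's bounding sphere:

using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Graphics;

public static class Picking
{
    // Build a pick ray from the 2D mouse position into the 3D scene.
    public static Ray GetPickRay(int mouseX, int mouseY, Viewport viewport,
                                 Matrix view, Matrix projection)
    {
        // Unproject the cursor at the near and far clip planes...
        Vector3 nearPoint = viewport.Unproject(new Vector3(mouseX, mouseY, 0f),
                                               projection, view, Matrix.Identity);
        Vector3 farPoint = viewport.Unproject(new Vector3(mouseX, mouseY, 1f),
                                              projection, view, Matrix.Identity);

        // ...and the ray between them passes under the cursor.
        return new Ray(nearPoint, Vector3.Normalize(farPoint - nearPoint));
    }
}

// Each frame: if GetPickRay(...).Intersects(light.BoundingSphere) != null, that
// light is under the cursor and can be dragged; intersecting the same ray with
// a ground plane would give the new world position to send back.

Is that roughly the right approach?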

    Read the article

  • Passenger not booting Rails App

    - by firecall
I'm at the end of my abilities, so it's time to ask for help. My hosting company is moving me to a new server. I've got my own VPS: a fresh CentOS 5 install with Plesk 9.5.2. Essentially, Passenger just doesn't seem to be booting the Rails app – it's as if it doesn't recognize it as a Rails app that needs booting. I've got Rails 3.0 installed with Ruby 1.9.2 built from source, and I can run bundle install successfully. I've currently got Passenger 3 RC1 installed as per here, but have tried v2 as well. My conf/vhost.conf file looks like this:

DocumentRoot /var/www/vhosts/foosite.com.au/httpdocs/public/
RackEnv development
#Options Indexes

I've got an /etc/httpd/conf.d/passenger.conf file which looks like this:

LoadModule passenger_module /usr/local/lib/ruby/gems/1.9.1/gems/passenger-3.0.0.pre4/ext/apache2/mod_passenger.so
PassengerRoot /usr/local/lib/ruby/gems/1.9.1/gems/passenger-3.0.0.pre4
PassengerRuby /usr/local/bin/ruby
PassengerLogLevel 2

All I get is a 403 Forbidden, or the directory listing if I enable Indexes. I don't know what else to do! Yikes. There's nothing in the Apache error log that I can see. The new server admin isn't much help – I think he's a bit junior, and he says he doesn't know about Rails... sigh :/ I'm a programmer and server admin isn't my bag :(
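Edit: I'm wondering if I'm missing a <Directory> block for the app's public folder – would something like this help? (Just a guess on my part:)

# Guess: explicitly let Apache serve the app's public directory
<Directory /var/www/vhosts/foosite.com.au/httpdocs/public>
    Options -MultiViews
    AllowOverride all
    Order allow,deny
    Allow from all
</Directory>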

    Read the article

< Previous Page | 432 433 434 435 436 437 438 439 440 441 442 443  | Next Page >