Search Results

Search found 88928 results on 3558 pages for 'code scenarios'.


  • Do I suffer from encapsulation overuse?

    - by Florenc
    I have noticed something in my code across various projects that seems like a code smell to me, but I can't seem to get rid of it. While trying to write "clean code" I tend to overuse private methods in order to make my code easier to read. The problem is that the code is indeed cleaner, but it's also more difficult to test (yes, I know I can test private methods...), and in general it seems like a bad habit to me. Here's an example of a class that reads some data from a .csv file and returns a group of customers (another object with various fields and attributes).

        public class GroupOfCustomersImporter {
            //... class fields ...

            public GroupOfCustomersImporter(String filePath) {
                this.filePath = filePath;
                customers = new HashSet<Customer>();
                createCSVReader();
                read();
                constructGroupOfCustomers();
            }

            private void createCSVReader() {
                //....
            }

            private void read() {
                //.... Reads the file and initializes the class attributes
            }

            private void readFirstLine(String[] inputLine) {
                //.... Used by the read() method
            }

            private void readSecondLine(String[] inputLine) {
                //.... Used by the read() method
            }

            private void readCustomerLine(String[] inputLine) {
                //.... Used by the read() method
            }

            private void constructGroupOfCustomers() {
                //this.groupOfCustomers = new GroupOfCustomers(/* attributes of the class */);
            }

            public GroupOfCustomers getConstructedGroupOfCustomers() {
                return this.groupOfCustomers;
            }
        }

    As you can see, the class has only a constructor, which calls some private methods to get the job done. I know that doing the work in the constructor is not good practice in general, but I prefer to encapsulate all the functionality in the class instead of making the methods public, in which case a client would have to work this way:

        GroupOfCustomersImporter importer = new GroupOfCustomersImporter(filePath);
        importer.createCSVReader();
        importer.read();
        GroupOfCustomers group = importer.getConstructedGroupOfCustomers();

    I prefer the first form because I don't want to clutter the client's code with useless lines that expose implementation details. So, is this actually a bad habit? If yes, how can I avoid it? Please note that the above is just a simple example; imagine the same situation happening in something a little more complex.

    Read the article

  • Code refactoring with Visual Studio 2010 - Part 3

    - by Jalpesh P. Vadgama
    I have been writing a few posts about the code refactoring features of Visual Studio 2010, and this blog post is one of them. In this post I am going to show you the Reorder Parameters feature in Visual Studio 2010. As a developer you might need to reorder the parameters of a method or procedure for better readability of the code, and doing this task manually is a tedious job. But the Visual Studio Reorder Parameters refactoring can do this stuff within a minute. So let's see how it works. For this I have created a simple console application, which I have used in earlier posts. Following is the code for that.

        using System;

        namespace CodeRefractoring
        {
            class Program
            {
                static void Main(string[] args)
                {
                    string firstName = "Jalpesh";
                    string lastName = "Vadgama";
                    PrintMyName(firstName, lastName);
                }

                private static void PrintMyName(string firstName, string lastName)
                {
                    Console.WriteLine(string.Format("FirstName:{0}", firstName));
                    Console.WriteLine(string.Format("LastName:{0}", lastName));
                    Console.ReadLine();
                }
            }
        }

    The above code is very simple. It just prints a first name and last name via the PrintMyName method. Now I want to reorder the firstName and lastName parameters of PrintMyName. For that, I first select the method and then click the Refactor menu -> Reorder Parameters. Once you click it, a dialog box appears with options to move a parameter up or down with arrow navigation. Now I move the lastName parameter to be the first parameter. Once you click OK it shows a preview where you can see the effects of the changes. Once I clicked Apply, my code was changed as follows.

        using System;

        namespace CodeRefractoring
        {
            class Program
            {
                static void Main(string[] args)
                {
                    string firstName = "Jalpesh";
                    string lastName = "Vadgama";
                    PrintMyName(lastName, firstName);
                }

                private static void PrintMyName(string lastName, string firstName)
                {
                    Console.WriteLine(string.Format("FirstName:{0}", firstName));
                    Console.WriteLine(string.Format("LastName:{0}", lastName));
                    Console.ReadLine();
                }
            }
        }

    As you can see, it's very easy to use this feature. Hope you liked it. Stay tuned for more. Till then, happy programming.

    Read the article

  • Version control and data provenance in charts, slides, and marketing materials that derive from code output

    - by EMS
    I develop as part of a small team that mostly does research and statistics work, but other teams often create promotional materials, slides, and presentations from the output of our code. We run into a big problem because the marketing team (non-programmers) tend to use Excel, Adobe products, or other tools to carry out their work, and just want easy-to-use data formats from us. This leads to data provenance problems. We see email chains with attachments from 6 months ago and someone is saying "Hey, who generated this data? Can you generate more of it with the most recent 6 months of results added in?" I want to help the other teams use version control effectively (my team uses it reasonably well for the code, but every other team classically comes up with many excuses to avoid it). For version controlling a software project where the participants are coders, I have a reasonable understanding of best practices and what to do. But for getting a team of marketing professionals to version control marketing materials and associate metadata about the software used to generate the data for the charts, I'm a bit at a loss. Some of the goals I'd like to achieve:

      - Data that supports a material should never be associated with a person. It should never be the case that someone says "Hey Person XYZ, I see you sent me this data as an attachment 6 months ago, can you update it for me?" Rather, data should be associated with the code (and code version) that was used to generate it, and perhaps the team of people who maintain that code. Then requests for data updates are about executing a specific piece of code, with a known version number.
      - The process should work easily with the tech that the marketing team already uses (e.g. Excel files, Adobe files, whatever). I don't want to burden them with learning a bunch of new stuff just to use version control. They are capable folks, so learning something is fine. Ideally they could use our existing version control framework, but there are some issues around that. I think knowing some general best practices will be enough, and I can handle patching that into the way our stuff works now.

    Are there any goals I am failing to think about? What are the time-tested ways to do something like this?
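    One pattern that gets at the first goal (associating data with code and a revision rather than with a person) is to have the generating code stamp its own identity into every file it exports. The sketch below is illustrative only, with invented names; the revision string would come from whatever version control tooling the team already runs:

        using System;
        using System.IO;

        class ProvenanceStamp
        {
            // Write exported rows with a small provenance header, so any
            // attachment floating around in email identifies the generator
            // and revision that produced it rather than a person.
            static void WriteCsvWithProvenance(string path, string[] rows,
                                               string generator, string revision)
            {
                using (var w = new StreamWriter(path))
                {
                    w.WriteLine("# generated-by: " + generator);
                    w.WriteLine("# revision: " + revision);
                    w.WriteLine("# generated-at: " + DateTime.UtcNow.ToString("O"));
                    foreach (var row in rows)
                        w.WriteLine(row);
                }
            }
        }

    A request to refresh the data then reads "re-run the exporter at revision abc123 with 6 more months of input" instead of "ask whoever sent the attachment".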

    Read the article

  • Using runtime checking of code contracts in Visual Studio 2010

    - by DigiMortal
    In my last posting about code contracts I introduced how to check the input parameters of a randomizer using static contract checking. But you can also compile code contracts into your assemblies and use them at runtime. In this posting I will show you a simple example of runtime checking of code contracts. NB! If you want to play with the code and try out the things described here, feel free to download the example solution. If you are a speaker and want to use this solution as part of your sessions, feel free to do so, but don't forget to refer to me and this blog as the source of the solution. And please let me know about your session; as a speaker I am very interested in it. :) To see how code contracts are checked at runtime, we have to enable runtime checking in the project properties. Make sure you have checked the box "Perform Runtime Contract Checking" and make sure you select "Full" from the dropdown. [Screenshot: Visual Studio 2010 settings for code contracts; runtime checking is turned on and checks are made only on the public surface.] Save the project settings, then compile the code and run it. As soon as code execution hits the call to GetRandomFromRangeContracted(), an exception is thrown. [Screenshot: runtime checking of code contracts; an exception of type ContractException is thrown when a contract is violated.] The exact type of the exception is ContractException, and it is defined in the System.Diagnostics.Contracts.__ContractsRuntime namespace. In our example the message of the exception is the following: "Precondition failed: min < max  Min must be less than max". Besides the description we provided for the contract violation, the message also contains the violated contract type; in this case the type of contract is Precondition. Conclusion: using runtime checking of code contracts enables you to take your contracts with your code and have them checked every time your methods are called. This way you can be sure that either all conditions are met to run the method, or an exception is thrown and the calling system has to handle the situation.
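    For reference, here is a minimal sketch of what a method guarded by such a precondition can look like. The method name matches the one mentioned above, but the body and message text are assumptions, not the downloadable solution's actual code:

        using System;
        using System.Diagnostics.Contracts;

        public static class Randomizer
        {
            // Sketch only: with runtime contract checking enabled, calling
            // this with min >= max throws a ContractException at runtime.
            public static int GetRandomFromRangeContracted(int min, int max)
            {
                Contract.Requires(min < max, "Min must be less than max");
                return new Random().Next(min, max);
            }
        }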

    Read the article

  • Code Coverage for Maven Integrated in NetBeans IDE 7.2

    - by Geertjan
    In NetBeans IDE 7.2, JaCoCo is supported natively, i.e., out of the box, as a code coverage engine for Maven projects, since Cobertura does not work with JDK 7 language constructs. (Note, though, that Cobertura is supported as well in NetBeans IDE 7.2.) It isn't part of NetBeans IDE 7.2 Beta, so don't even try there; you need a development build from after that. I downloaded the latest development build today. To enable JaCoCo features in NetBeans IDE, you need to do nothing different from what you'd do when enabling JaCoCo in Maven itself, which is rather wonderful. In both cases, all you need to do is add this to the "plugins" section of your POM:

        <plugin>
            <groupId>org.jacoco</groupId>
            <artifactId>jacoco-maven-plugin</artifactId>
            <version>0.5.7.201204190339</version>
            <executions>
                <execution>
                    <goals>
                        <goal>prepare-agent</goal>
                    </goals>
                </execution>
                <execution>
                    <id>report</id>
                    <phase>prepare-package</phase>
                    <goals>
                        <goal>report</goal>
                    </goals>
                </execution>
            </executions>
        </plugin>

    Now you're done and ready to examine the code coverage of your tests, whether they are JUnit or TestNG. At this point, i.e., for no other reason than that you added the above snippet to your POM, you will have a new Code Coverage menu when you right-click on the project node. If you click Show Report there, the Code Coverage Report window opens. Here, once you've run your tests, you can see how many classes have been covered by your tests, which is pretty useful since 100% of tests passing doesn't mean much when you've only tested one class. Then, when you click the bars in the Code Coverage Report window, the class under test is shown, with the methods for which tests exist highlighted in green and those that haven't been covered in red. (Note: of course, striving for 100% code coverage is a bit nonsensical. For example, writing tests for your getters and setters may not be the most useful way to spend one's time. But being able to measure, and visualize, code coverage is certainly useful regardless of the percentage you're striving to achieve.) Best of all, everything described above is available out of the box in NetBeans IDE 7.2. Take a look at what else NetBeans IDE 7.2 brings for the first time to the world of Maven: http://wiki.netbeans.org/NewAndNoteworthyNB72#Maven

    Read the article

  • Google I/O 2012 - Optimizing Your Code Using Features of Google APIs

    Session by Sven Mawson. Google APIs support a variety of features designed to enable state-of-the-art development. In this session, you will learn how to create applications that use performance-enhancing features to make your code run faster and use fewer resources. Some features described include batching, requests for partial response, and efficient ways to handle media. For all I/O 2012 sessions, go to developers.google.com. (From: GoogleDevelopers; length: 44:50)
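    As a taste of one of those features: partial response lets you ask an API to return only the fields you need by adding a fields parameter to the request. The sketch below is illustrative, not taken from the session; the URL, video id, and key are placeholders:

        using System;
        using System.Net.Http;
        using System.Threading.Tasks;

        class PartialResponseDemo
        {
            static async Task Main()
            {
                using (var http = new HttpClient())
                {
                    // fields=items(id,snippet/title) trims the response down to
                    // just the id and title of each item, saving bandwidth.
                    var url = "https://www.googleapis.com/youtube/v3/videos" +
                              "?part=snippet&id=VIDEO_ID&key=API_KEY" +
                              "&fields=items(id,snippet/title)";
                    Console.WriteLine(await http.GetStringAsync(url));
                }
            }
        }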

    Read the article

  • VS2010 Code Analysis, any way to automatically fix certain warnings?

    - by JL
    I must say I really like the new code analysis in VS 2010. I have a lot of places in my code where I am not using CultureInfo.InvariantCulture, and code analysis is warning me about this. I am pretty sure I want to use CultureInfo.InvariantCulture wherever code analysis has detected it is missing on Convert.ToString operations. Is there any way to get VS to automatically fix warnings of this type?
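    For context, the fix the warning asks for is mechanical: pass an IFormatProvider explicitly. A minimal before/after sketch (the surrounding class is invented for illustration):

        using System;
        using System.Globalization;

        class Example
        {
            static string Render(int value)
            {
                // Before: flagged by code analysis (no IFormatProvider given)
                // return Convert.ToString(value);

                // After: culture-invariant, which satisfies the warning
                return Convert.ToString(value, CultureInfo.InvariantCulture);
            }
        }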

    Read the article

  • Google Code Jam 2009. C. Welcome to Code Jam. Can't understand dynamic programming

    - by vibneiro
    The original link of the problem is here: https://code.google.com/codejam/contest/90101/dashboard#s=p2&a=2 In simple words, we need to find how many times the string S = "welcome to code jam" appears as a subsequence of a given string T, e.g. S = "welcome to code jam", T = "wweellccoommee to code qps jam". I know the theory but am not good at DP in practice. Would you please explain the step-by-step process of solving this DP problem on an example, and why it works?
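    For reference, here is a minimal sketch of the standard counting DP for this problem (my own illustration, not the contest's reference solution). dp[j] holds the number of ways the first j characters of S appear as a subsequence of the prefix of T read so far; the contest asks for the last four digits of the count, hence the modulo 10000:

        using System;

        class SubsequenceCount
        {
            static int CountOccurrences(string t, string s)
            {
                var dp = new int[s.Length + 1];
                dp[0] = 1; // the empty prefix of s always matches exactly once
                foreach (char c in t)
                {
                    // Walk j downwards so this character of t only extends
                    // matches computed before it was read (used at most once).
                    for (int j = s.Length; j >= 1; j--)
                    {
                        if (s[j - 1] == c)
                            dp[j] = (dp[j] + dp[j - 1]) % 10000;
                    }
                }
                return dp[s.Length];
            }

            static void Main()
            {
                Console.WriteLine(
                    CountOccurrences("wweellccoommee to code qps jam",
                                     "welcome to code jam"));
            }
        }

    Each update says: a new occurrence of the j-th character of S can be appended to every way of matching the first j-1 characters seen so far, which is why the counts add.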

    Read the article

  • Algorithmia Source Code released on CodePlex

    - by FransBouma
    Following the release of our BCL Extensions Library on CodePlex, we have now released the source code of Algorithmia on CodePlex! Algorithmia is an algorithm and data-structures library for .NET 3.5 or higher and is one of the pillars LLBLGen Pro v3's designer is built on. The library contains many data structures and algorithms, and the source code is well documented and commented, often with links to official descriptions and papers of the algorithms and data structures implemented. The source code is shared using Mercurial on CodePlex and is licensed under the friendly BSD2 license. User documentation is not available at the moment but will be added soon. One of the main design goals of Algorithmia was to create a library which contains implementations of well-known algorithms which weren't already implemented in .NET itself. This way, more developers out there can enjoy the results of many years of what the field of Computer Science research has delivered. Some algorithms and data structures are known in .NET but are re-implemented because the implementation in .NET isn't efficient for many situations or lacks features. An example is the linked list in .NET: it doesn't have an O(1) concat operation, as every node refers to the containing LinkedList object it's stored in. This is bad for algorithms which rely on O(1) concat operations, like the Fibonacci heap implementation in Algorithmia. Algorithmia therefore contains a linked list with an O(1) concat feature. The following functionality is available in Algorithmia:

      - Command, Command management. This system is usable to build a fully undo/redo-aware system by building your object graph using command-aware classes. The Command pattern is implemented using a system which allows transparent undo-redo and command grouping, so you can use it to make a class undo/redo-aware and set properties and use its contents without using commands at all. The Commands namespace is the place to start. Classes you'd want to look at are CommandifiedMember, CommandifiedList and KeyedCommandifiedList. See the CommandQueueTests in the test project for examples.
      - Graphs, Graph algorithms. Algorithmia contains a sophisticated graph class hierarchy and algorithms implemented on top of it: non-directed and directed graphs, as well as a subgraph view class, which can be used to create a view onto an existing graph class and can be self-maintaining. Algorithms include transitive closure, topological sorting and others. A feature-rich depth-first search (DFS) crawler is available, so DFS-based algorithms can be implemented quickly. All graph classes are undo/redo-aware, as they can be set to be 'commandified'. When a graph is 'commandified' it does its housekeeping through commands, which makes it fully undo/redo-aware, so you can remove, add and manipulate the graph and undo/redo the activity automatically without any extra code. If you define the properties of the class you use as the vertex type using CommandifiedMember, you can manipulate the properties of vertices and the graph contents with full undo/redo functionality without any extra code.
      - Heaps. Heaps are data structures which always have the largest or smallest item stored in them as the 'root'. Extracting the root from the heap makes the heap determine the next in line to be the 'maximum' or 'minimum' (max-heap vs. min-heap; all heaps in Algorithmia can do both). Algorithmia contains various heaps, among them an implementation of the Fibonacci heap, one of the most efficient heap data structures known today, especially when you want to merge different instances into one.
      - Priority queues. Priority queues are specializations of heaps. Algorithmia contains a couple of them.
      - Sorting. What's an algorithm library without sort algorithms? Algorithmia implements a couple of sort algorithms which sort the data in-place. This aspect is important in situations where you want to sort the elements in a buffer/list/ICollection in-place, so all data stays in the data structure it is already stored in.
      - PropertyBag. Re-implements Tony Allowatt's original idea in .NET 3.5-specific syntax, which is to have a generic property bag and to be able to build an object in code at runtime which can be bound to a property grid for editing. This is handy for when you have data or settings stored in XML or another format, and want to create an editable form of it without creating many editors.
      - IEditableObject/IDataErrorInfo implementations. Contains default implementations for IEditableObject and IDataErrorInfo (EditableObjectDataContainer for IEditableObject and ErrorContainer for IDataErrorInfo), which make it very easy to implement these interfaces (just a few lines of code) without having to worry about bookkeeping during databinding. They work seamlessly with CommandifiedMember as well, so your undo/redo-aware code can use them out of the box.
      - EventThrottler. An event throttler, which can be used to filter out duplicate events in an event stream coming into an observer from an event. This can greatly enhance performance in your UI without needing to do anything other than hooking it up so it's placed between the event source and your real handler. If your UI is flooded with events from data structures observed by your UI or a middle tier, you can use this class to filter out duplicates, to avoid redundant updates to UI elements or to avoid having observers choke on many redundant events.
      - Small, handy stuff. A MultiValueDictionary, which can store multiple unique values per key instead of one as with the default Dictionary, and is also merge-aware so you can merge two into one. A Pair class, to quickly group two elements together. Multiple interfaces for helping with building a de-coupled, observer-based system, and some utility extension methods for the defined data structures.

    We regularly update the library with new code. If you have ideas for new algorithms or want to share your contribution, feel free to discuss it on the project's Discussions page or send us a pull request. Enjoy!
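    The linked-list remark above is worth a concrete illustration. A minimal sketch (mine, not Algorithmia's actual implementation) of why dropping the per-node back-reference enables O(1) concat:

        // .NET's LinkedListNode<T> carries a List back-reference, so concatenating
        // two LinkedList<T> instances means re-owning every node: O(n).
        // A node without that back-reference can be spliced in O(1):
        class Node<T> { public T Value; public Node<T> Next; }

        class ConcatList<T>
        {
            Node<T> head, tail;

            public void Add(T value)
            {
                var n = new Node<T> { Value = value };
                if (head == null) { head = tail = n; }
                else { tail.Next = n; tail = n; }
            }

            // O(1): splice the other list onto this one; there is no
            // per-node ownership to fix up.
            public void Concat(ConcatList<T> other)
            {
                if (other.head == null) return;
                if (head == null) { head = other.head; }
                else { tail.Next = other.head; }
                tail = other.tail;
            }
        }

    A Fibonacci heap merges root lists constantly, which is exactly why it needs a concat operation this cheap.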

    Read the article

  • Weekend reading: Microsoft/Oracle and a SkyDrive-based code editor

    - by jamiet
    A couple of news items caught my eye this weekend that I think are worthy of comment.

    Microsoft/Oracle partnership to be announced tomorrow (24/06/2013)

    According to many news sites, Microsoft and Oracle are about to announce a partnership (Oracle set for major Microsoft, Salesforce, Netsuite partnerships) and they all seem to be assuming that it will be something to do with "the cloud". I wouldn't disagree with that assessment; Microsoft are heavily pushing Azure, and Oracle seem (to me anyway) to be rather lagging behind in the cloud game. More specifically, folks seem to be assuming that Oracle's forthcoming 12c database release will be offered on Azure. I did a bit of reading about Oracle 12c, and one of its key pillars appears to be that it supports multi-tenant topologies; multi-tenancy is a common usage scenario for databases in the cloud. I'm left wondering, then: if Microsoft are willing to push a rival's multi-tenant solution, what is happening to their own cloud-based multi-tenant offering, SQL Azure Federations? We haven't heard anything about federations for what now seems to be a long time, and moreover the main Program Manager behind the technology, Cihan Biyikoglu, recently left Microsoft to join Twitter. Furthermore, a Principal Architect for SQL Server, Conor Cunningham, recently presented the opening keynote at SQLBits 11, where he talked about multi-tenant solutions on SQL Azure and not once did he mention federations. All in all I don't have a warm fuzzy feeling about the future of SQL Azure Federations, so I hope that question gets asked at some point following the Microsoft/Oracle announcement.

    Text editor on SkyDrive with coding-specific features

    Liveside.net got a bit of a scoop this weekend with the news (Exclusive: SkyDrive.com to get web-based text file editing features) that Microsoft's consumer-facing file storage service is going to get a new feature: a web-based code editor. I've long had a passing interest in online code editors; indeed, back in December 2009 I wondered out loud on this blog site: "I started to wonder when the development tools that we use would also become cloud-based. After all, if we're using cloud-based services does it not make sense to have cloud-based tools that work with them? I think it does." (Project Houston) Since then the world has moved on. Cloud 9 IDE (https://c9.io/) have blazed a trail in the fledgling world of online code editors, and I have been wondering when Microsoft were going to start playing catch-up. I had no doubt that an online code editor was in Microsoft's future; it's an obvious future direction. Why would I want to download and install a bloated text editor (which, arguably, is exactly what Visual Studio amounts to) and have to continually update it, when I can simply open a web browser and have ready access to all of my code from wherever I am? There are signs that Microsoft is already making moves in this direction; after all, the URL for their new offering, Team Foundation Service, doesn't mention TFS at all (my own personalised URL for Team Foundation Service is http://jamiet.visualstudio.com), and using "Visual Studio" as the domain name for a service that isn't strictly speaking part of Visual Studio leads me to think that there's a much bigger play here and that one day http://visualstudio.com will house an online code editor. With that in mind I find Liveside's revelation rather intriguing: why would a code editing tool show up in SkyDrive? Perhaps SkyDrive is going to get integrated more tightly into TFS; I'm very interested to see where this goes. The larger question playing on my mind, though, is whether an online code editor from Microsoft will support SQL Server developers. I have opined before (see The SQL developer gap) about the shoddy treatment that SQL Server developers experience from Microsoft, and I haven't seen any change in Microsoft's attitude in the three and a half years since I wrote that post. I'm constantly bewildered by the lack of investment in SQL Server developer productivity compared to the riches that are lavished upon our appdev brethren. When you consider that SQL Server is Microsoft's third biggest revenue stream it is, frankly, rather insulting. SSDT was a step in the right direction, but the hushed noises I hear coming out of Microsoft of late in regard to SSDT don't bode fantastically well for its future. So, will an online code editor from Microsoft support T-SQL development? I have to assume not, given the paucity of investment in us lowly SQL Server developers over the last few years, but I live in hope! Your thoughts in the comments section please; I would be very interested in reading them. @Jamiet

    Read the article

  • Measuring Code Quality

    - by DotNetBlues
    Several months back, I was tasked with measuring the quality of code in my organization. Foolishly, I said, "No problem." I figured that Visual Studio has a built-in code metrics tool (Analyze -> Calculate Code Metrics) and that would be a fine place to start. I was right, but also very wrong. Visual Studio calculates five primary metrics: Maintainability Index, Cyclomatic Complexity, Depth of Inheritance, Class Coupling, and Lines of Code. The first two are figured at the method level, the next two (primarily) at the class level, and the last is a simple count. The first question any reasonable person should ask is "Which one do I look at first?" The first question any manager is going to ask is "What one number tells me about the whole application?" My answer to both, in a way, was "Maintainability Index." Why? Because each of the other numbers represents one element of quality, while MI is a composite number that includes Cyclomatic Complexity. I'd be lying if I said no consideration was given to the fact that it's abstract enough that it's harder for some surly developer (I've been known to resemble that remark) to start arguing why high coupling or deep inheritance is no big deal, or how complex requirements are to blame for complex code. I should also note that I don't think there is one magic-bullet metric that will tell you objectively how good a code base is. There are a ton of different metrics out there, and each one was created with a specific purpose in mind and has a pet theory behind it. When you've got a group of developers who aren't accustomed to measuring code quality, picking a 0-100 scale, non-controversial metric that can be easily generated by tools you already own really isn't a bad place to start. That sort of answers the question a developer would ask, but what about the management question: how do you dashboard this stuff when Visual Studio doesn't roll up the numbers to the solution level? Since VS does roll up the MI to the project level, I thought I could just figure out what sort of weighting Microsoft used to roll method scores up to the class level and then to the namespace and project levels. I was a bit surprised by the answer: there is no weighting. That means that a class with one 1300-line method (which will score a 0 MI) and one empty constructor (which will score a 100 MI) will have an overall MI of a respectable 50. Throw in a couple of DTOs that are nothing more than getters and setters (which tend to score 95 or better) and the project ends up looking really, really healthy. The next poor bastard who has to work on the application is probably not going to be singing the praises of its maintainability, though. For the record, that 1300-line method isn't hypothetical, either. So, what does one do with that? Well, I decided to weight the average by the Lines of Code per method. For the above example, the formula for the class's MI becomes ((1300 * 0) + (1 * 100)) / 1301 = 0.077, rounded to 0. Sounds about right. Continue the pattern for namespace, project, solution, and even multi-solution application MI scores. This can be done relatively easily by using the "export to Excel" button and running a quick formula against the data. On the short list of follow-up questions would be "How do I improve my application's score?" That's an answer for another time, though.
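    A minimal sketch of that LOC-weighted roll-up, assuming you've exported per-method (MI, LOC) pairs from the Calculate Code Metrics window; the types and names here are invented for illustration:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        class WeightedMaintainability
        {
            // Weight each method's Maintainability Index by its Lines of Code,
            // so one enormous unmaintainable method can't hide behind a crowd
            // of trivial ones.
            static double WeightedMi(IEnumerable<(int Mi, int Loc)> methods)
            {
                return (double)methods.Sum(m => (long)m.Mi * m.Loc)
                       / methods.Sum(m => m.Loc);
            }

            static void Main()
            {
                // The example above: a 1300-line method scoring 0 and an
                // empty constructor scoring 100.
                var methods = new[] { (Mi: 0, Loc: 1300), (Mi: 100, Loc: 1) };
                Console.WriteLine(WeightedMi(methods)); // 0.0769..., vs. 50 unweighted
            }
        }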

    Read the article

  • Method flags as arguments or as member variables?

    - by Martin
    I think the title "Method flags as arguments or as member variables?" may be suboptimal, but as I'm missing any better terminology at the moment, here goes: I'm currently trying to get my head around the problem of whether flags for a given class's (private) method should be passed as function arguments or via a member variable, and/or whether there is some pattern or name that covers this aspect, and/or whether this hints at some other design problem. By example (the language could be C++, Java, C#, it doesn't really matter IMHO):

        class Thingamajig {
            private ResultType DoInternalStuff(FlagType calcSelect) {
                ResultType res;
                for (... some loop condition ...) {
                    ...
                    if (calcSelect == typeA) {
                        ...
                    } else if (calcSelect == typeX) {
                        ...
                    } else if ...
                }
                ...
                return res;
            }

            private void InternalStuffInvoker(FlagType calcSelect) {
                ...
                DoInternalStuff(calcSelect);
                ...
            }

            public void DoThisStuff() {
                ... some code ...
                InternalStuffInvoker(typeA);
                ... some more code ...
            }

            public ResultType DoThatStuff() {
                ... some code ...
                ResultType x = DoInternalStuff(typeX);
                ... some more code ... further process x ...
                return x;
            }
        }

    What we see above is that the method InternalStuffInvoker takes an argument that is not used inside the function at all but is only forwarded to the other private method, DoInternalStuff. (DoInternalStuff is used privately at other places in this class, e.g. in the public DoThatStuff method.) An alternative solution would be to add a member variable that carries this information:

        class Thingamajig {
            private ResultType DoInternalStuff() {
                ResultType res;
                for (... some loop condition ...) {
                    ...
                    if (m_calcSelect == typeA) {
                        ...
                    }
                    ...
                }
                ...
                return res;
            }

            private void InternalStuffInvoker() {
                ...
                DoInternalStuff();
                ...
            }

            public void DoThisStuff() {
                ... some code ...
                m_calcSelect = typeA;
                InternalStuffInvoker();
                ... some more code ...
            }

            public ResultType DoThatStuff() {
                ... some code ...
                m_calcSelect = typeX;
                ResultType x = DoInternalStuff();
                ... some more code ... further process x ...
                return x;
            }
        }

    Especially for deep call chains where the selector flag for the inner method is chosen outside, using a member variable can make the intermediate functions cleaner, as they don't need to carry a pass-through parameter. On the other hand, this member variable isn't really representing any object state (as it's neither set nor available outside), but is really a hidden additional argument for the "inner" private method. What are the pros and cons of each approach?

    Read the article

  • Solaris 11.1 changes building of code past the point of __NORETURN

    - by alanc
    While Solaris 11.1 was under development, we started seeing some errors in the builds of the upstream X.Org git master sources, such as:

        "Display.c", line 65: Function has no return statement : x_io_error_handler
        "hostx.c", line 341: Function has no return statement : x_io_error_handler

    from functions that were defined to match a specific callback definition that declared them as returning an int if they did return; but these were calling exit() instead of returning, so they hadn't listed a return value. These had been generating warnings for years, which we'd been ignoring, but X.Org has made enough progress in cleaning up code for compiler warnings and static analysis issues lately that the community turned up the default error levels, including the gcc flag -Werror=return-type and the equivalent Solaris Studio cc flags -v -errwarn=E_FUNC_HAS_NO_RETURN_STMT, so now these became errors that stopped the build. Yet on Solaris, gcc built this code fine, while Studio errored out. Investigation showed this was due to the Solaris headers, which during Solaris 10 development added a number of annotations when gcc was being used for the amd64 kernel bringup, before the Studio amd64 port was ready. Since Studio did not support the inline form of these annotations at the time, but instead used #pragma for them, the definitions were only present for gcc. To resolve this, I fixed both sides of the problem, so that it would work for building new X.Org sources on older Solaris releases or with older Studio compilers, as well as fixing the general problem before it broke more software building on Solaris. To the X.Org sources, I added the traditional Studio #pragma does_not_return to declare that functions like exit() don't ever return, in patches such as this Xserver patch. Adding a dummy return statement was ruled out, as that introduced unreachable-code errors from compilers and analyzers that correctly realized you couldn't reach that code after a return statement. And on the Solaris 11.1 side, I updated the annotation definitions in <sys/ccompile.h> to enable, for Studio 12.0 and later compilers, the annotations already existing in a number of system headers for functions like exit() and abort(). If you look in that file you'll see the annotations we currently use, though the forms there haven't gone through review to become a Committed interface, so they may change in the future. Actually getting this integrated into Solaris took a bit more work than just editing one header file. Our ELF binary build comparison tool, wsdiff, showed a large number of differences in the resulting binaries, due to the compiler using this information for branch prediction, code path analysis, and other possible optimizations. So after comparing enough of the disassembly output to be comfortable with the changes, we also made sure to get this in early enough in the release cycle that it would get plenty of test exposure before the release. It also required updating quite a bit of code to avoid introducing new lint or compiler warnings or errors, and people building applications on top of Solaris 11.1 and later may need to make similar changes if they want to keep their build logs similarly clean. Previously, if you had a function that was declared with a non-void return type, lint and cc would warn if you didn't return a value, even if you called a function like exit() or panic() that ended execution. For instance:

        #include <stdlib.h>

        int callback(int status)
        {
            if (status == 0)
                return status;
            exit(status);
        }

    would previously require a never-executed return 0; after the exit() to avoid the lint warning "function falls off bottom without returning value". Now the compiler and lint will both issue "statement not reached" warnings for a return 0; after the final exit(), allowing (or in some cases, requiring) it to be removed. However, if there is no return statement anywhere in the function, lint will warn that you've declared a function returning a value that never does so, suggesting you can declare it as void. Unfortunately, if your function signature is required to match a certain form, such as in a callback, you may not be able to do so, and will need to add a /* LINTED */ to the end of the function. If you need your code to build on both a newer and an older release, then you will either need to #ifdef these unreachable statements, or, to keep your sources common across releases, add to your sources the corresponding #pragma recognized by both current and older compiler versions, such as:

        #pragma does_not_return(exit)
        #pragma does_not_return(panic)

    Hopefully this little extra work is paid for by the compilers and code analyzers being able to better understand your code paths, giving you better optimizations and more accurate errors and warning messages.

    Read the article

  • Creating New Scripts Dynamically in Lua

    - by bazola
    Right now this is just a crazy idea that I had, but I was able to implement the code and get it working properly. I am not entirely sure what the use cases would be just yet. What this code does is create a new Lua script file in the project directory. The ScriptWriter takes as arguments the file name, a table containing any arguments that the script should take when created, and a table containing any instance variables to create by default. My plan is to extend this code to create new functions based on inputs sent in during its creation as well. What makes this cool is that the new file is both generated and loaded dynamically on the fly. Theoretically you could get this code to generate and load any script imaginable. One use case I can think of is an AI that creates scripts to map out its functions, and creates new scripts for new situations or environments. At this point, this is all theoretical, though. Here is the test code that creates the new script and then immediately loads it and calls functions from it:

        function Card:doScriptWriterThing()
            local scriptName = "ScriptIAmMaking"
            local scripter = scriptWriter:new(scriptName, {"argumentName"}, {name = "'test'", one = 1})
            scripter:makeFileForLoadedSettings()
            local loadedScript = require (scriptName)
            local scriptInstance = loadedScript:new("sayThis")
            print(scriptInstance:get_name()) -- will print test
            print(scriptInstance:get_one()) -- will print 1
            scriptInstance:set_one(10000)
            print(scriptInstance:get_one()) -- will print 10000
            print(scriptInstance:get_argumentName()) -- will print sayThis
            scriptInstance:set_argumentName("saySomethingElse")
            print(scriptInstance:get_argumentName()) -- will print saySomethingElse
        end

    Here is ScriptWriter.lua:

        local ScriptWriter = {}

        local twoSpaceIndent = "  "
        local equalsWithSpaces = " = "
        local newLine = "\n"

        --scriptNameToCreate must be a string
        --argumentsForNew and instanceVariablesToCreate must be tables and not nil
        function ScriptWriter:new(scriptNameToCreate, argumentsForNew, instanceVariablesToCreate)
            local instance = setmetatable({}, { __index = self })
            instance.name = scriptNameToCreate
            instance.newArguments = argumentsForNew
            instance.instanceVariables = instanceVariablesToCreate
            instance.stringList = {}
            return instance
        end

        function ScriptWriter:makeFileForLoadedSettings()
            self:buildInstanceMetatable()
            self:buildInstanceCreationMethod()
            self:buildSettersAndGetters()
            self:buildReturn()
            self:writeStringsToFile()
        end

        --very first line of any script that will have instances
        function ScriptWriter:buildInstanceMetatable()
            table.insert(self.stringList, "local " .. self.name .. " = {}" .. newLine)
            table.insert(self.stringList, newLine)
        end

        --every script made this way needs a new method to create its instances
        function ScriptWriter:buildInstanceCreationMethod()
            --new() function declaration
            table.insert(self.stringList, ("function " .. self.name .. ":new("))
            self:buildNewArguments()
            table.insert(self.stringList, ")" .. newLine)
            --first line inside :new() function
            table.insert(self.stringList, twoSpaceIndent .. "local instance = setmetatable({}, { __index = self })" .. newLine)
            --add designated arguments inside :new()
            self:buildNewArgumentVariables()
            --create the instance variables with the loaded values
            for key,value in pairs(self.instanceVariables) do
                table.insert(self.stringList, twoSpaceIndent .. "instance." .. key .. equalsWithSpaces .. value .. newLine)
            end
            --close the :new() function
            table.insert(self.stringList, twoSpaceIndent .. "return instance" .. newLine)
            table.insert(self.stringList, "end" .. newLine)
            table.insert(self.stringList, newLine)
        end

        function ScriptWriter:buildNewArguments()
            --if there are arguments for :new(), add them
            for key,value in ipairs(self.newArguments) do
                table.insert(self.stringList, value)
                table.insert(self.stringList, ", ")
            end
            if next(self.newArguments) ~= nil then --makes sure the table is not empty first
                table.remove(self.stringList) --remove the very last element, which will be the extra ", "
            end
        end

        function ScriptWriter:buildNewArgumentVariables()
            --add the designated arguments to :new()
            for key, value in ipairs(self.newArguments) do
                table.insert(self.stringList, twoSpaceIndent .. "instance." .. value .. equalsWithSpaces .. value .. newLine)
            end
        end

        --the instance variables need separate code because their names have to be the key and not the argument name
        function ScriptWriter:buildSettersAndGetters()
            for key,value in ipairs(self.newArguments) do
                self:buildArgumentSetter(value)
                self:buildArgumentGetter(value)
                table.insert(self.stringList, newLine)
            end
            for key,value in pairs(self.instanceVariables) do
                self:buildInstanceVariableSetter(key, value)
                self:buildInstanceVariableGetter(key, value)
                table.insert(self.stringList, newLine)
            end
        end

        --code for arguments passed in
        function ScriptWriter:buildArgumentSetter(variable)
            table.insert(self.stringList, "function " .. self.name .. ":set_" .. variable .. "(newValue)" .. newLine)
            table.insert(self.stringList, twoSpaceIndent .. "self." .. variable .. equalsWithSpaces .. "newValue" .. newLine)
            table.insert(self.stringList, "end" .. newLine)
        end

        function ScriptWriter:buildArgumentGetter(variable)
            table.insert(self.stringList, "function " .. self.name .. ":get_" .. variable .. "()" .. newLine)
            table.insert(self.stringList, twoSpaceIndent .. "return " .. "self." .. variable .. newLine)
            table.insert(self.stringList, "end" .. newLine)
        end

        --code for instance variable values passed in
        function ScriptWriter:buildInstanceVariableSetter(key, variable)
            table.insert(self.stringList, "function " .. self.name .. ":set_" .. key .. "(newValue)" .. newLine)
            table.insert(self.stringList, twoSpaceIndent .. "self." .. key .. equalsWithSpaces .. "newValue" .. newLine)
            table.insert(self.stringList, "end" .. newLine)
        end

        function ScriptWriter:buildInstanceVariableGetter(key, variable)
            table.insert(self.stringList, "function " .. self.name .. ":get_" .. key .. "()" .. newLine)
            table.insert(self.stringList, twoSpaceIndent .. "return " .. "self." .. key .. newLine)
            table.insert(self.stringList, "end" .. newLine)
        end

        --last line of any script that will have instances
        function ScriptWriter:buildReturn()
            table.insert(self.stringList, "return " .. self.name)
        end

        function ScriptWriter:writeStringsToFile()
            local fileName = (self.name .. ".lua")
            file = io.open(fileName, 'w')
            for key,value in ipairs(self.stringList) do
                file:write(value)
            end
            file:close()
        end

        return ScriptWriter

    And here is what the code provided will generate:

        local ScriptIAmMaking = {}

        function ScriptIAmMaking:new(argumentName)
          local instance = setmetatable({}, { __index = self })
          instance.argumentName = argumentName
          instance.name = 'test'
          instance.one = 1
          return instance
        end

        function ScriptIAmMaking:set_argumentName(newValue)
          self.argumentName = newValue
        end
        function ScriptIAmMaking:get_argumentName()
          return self.argumentName
        end

        function ScriptIAmMaking:set_name(newValue)
          self.name = newValue
        end
        function ScriptIAmMaking:get_name()
          return self.name
        end

        function ScriptIAmMaking:set_one(newValue)
          self.one = newValue
        end
        function ScriptIAmMaking:get_one()
          return self.one
        end

        return ScriptIAmMaking

    All of this is generated with these calls:

        local scripter = scriptWriter:new(scriptName, {"argumentName"}, {name = "'test'", one = 1})
        scripter:makeFileForLoadedSettings()

    I am not sure if I am correct that this could be useful in certain situations. What I am looking for is feedback on the readability of the code and on whether it follows Lua best practices. I would also love to hear whether this approach is a valid one, and whether the way that I have done things will be extensible.

    Read the article

  • HttpWebRequest: How to find a postal code at Canada Post through a WebRequest with x-www-form-enclosed

    - by Will Marcouiller
    I'm currently writing some tests so that I may improve my skills with Internet interaction through Windows Forms. One of those tests is to find a postal code, which should be returned by the Canada Post website. My default URL setting is: http://www.canadapost.ca/cpotools/apps/fpc/personal/findByCity?execution=e4s1 The required form fields are: streetNumber, streetName, city, province. The contentType is "application/x-www-form-enclosed". EDIT: Please consider the value "application/x-www-form-encoded" instead of the above as the contentType. (Thanks EricLaw-MSFT!) The result I get is not the result expected: I get the HTML source code of the page where I could manually enter the information to find the postal code, but not the HTML source code with the found postal code. Any idea what I'm doing wrong? Shall I consider going the XML way? Is it, first of all, possible to search on Canada Post anonymously? Here's a code sample for a better description:

        public static string FindPostalCode(ICanadadianAddress address) {
            var postData = string.Concat(string.Format("&streetNumber={0}", address.StreetNumber)
                , string.Format("&streetName={0}", address.StreetName)
                , string.Format("&city={0}", address.City)
                , string.Format("&province={0}", address.Province));

            var encoding = new ASCIIEncoding();
            byte[] postDataBytes = encoding.GetBytes(postData);

            request = (HttpWebRequest)WebRequest.Create(DefaultUrlSettings);
            request.ImpersonationLevel = System.Security.Principal.TokenImpersonationLevel.Anonymous;
            request.Container = new CookieContainer();
            request.Timeout = 10000;
            request.ContentType = contentType;
            request.ContentLength = postDataBytes.LongLength;
            request.Method = @"post";

            var senderStream = new StreamWriter(request.GetRequestStream());
            senderStream.Write(postDataBytes, 0, postDataBytes.Length);
            senderStream.Close();

            string htmlResponse = new StreamReader(request.GetResponse().GetResponseStream()).ReadToEnd();

            return processedResult(htmlResponse); // Processing the HTML source code parsing, etc.
        }

    I seem to be stuck in a bottleneck; I can find no way to the desired result. EDIT: There seem to be two candidates for the ContentType of this site. Let me explain. There's one in the "meta" tag, which stipulates the following:

        meta http-equiv="Content-Type" content="application/xhtml+xml, text/xml, text/html; charset=utf-8"

    And another one later down the page, which reads:

        form id="fpcByAdvancedSearch:fpcSearch" name="fpcByAdvancedSearch:fpcSearch" method="post" action="/cpotools/apps/fpc/personal/findByCity?execution=e1s1" enctype="application/x-www-form-urlencoded"

    My question is the following: which one do I have to stick with? Let me guess: the first ContentType is to be considered, and the second is only for the request made when the form data is posted? EDIT: As per request, the closest I have come to the solution is listed under this question: WebRequest: How to find a postal code using a WebRequest against this ContentType="application/xhtml+xml, text/xml, text/html; charset=utf-8"? Thanks for any help! :-)

    Read the article

  • Refactor This (Ugly Code)!

    - by Alois Kraus
    Ayende has put on his blog some ugly code to refactor. First and foremost it is nearly impossible to reason about other peoples code without knowing the driving forces behind the current code. It is certainly possible to make it much cleaner when potential sources of errors cannot happen in the first place due to good design. I can see what the intention of the code is but I do not know about every brittle detail if I am allowed to reorder things here and there to simplify things. So I decided to make it much simpler by identifying the different responsibilities of the methods and encapsulate it in different classes. The code we need to refactor seems to deal with a handler after a message has been sent to a message queue. The handler does complete the current transaction if there is any and does handle any errors happening there. If during the the completion of the transaction errors occur the transaction is at least disposed. We can enter the handler already in a faulty state where we try to deliver the complete event in any case and signal a failure event and try to resend the message again to the queue if it was not inside a transaction. All is decorated with many try/catch blocks, duplicated code and some state variables to route the program flow. It is hard to understand and difficult to reason about. In other words: This code is a mess and could be written by me if I was under pressure. Here comes to code we want to refactor:         private void HandleMessageCompletion(                                      Message message,                                      TransactionScope tx,                                      OpenedQueue messageQueue,                                      Exception exception,                                      Action<CurrentMessageInformation, Exception> messageCompleted,                                      Action<CurrentMessageInformation> beforeTransactionCommit)         {             var txDisposed = false;             if (exception == null)             {                 try                 {                     if (tx != null)                     {                         if (beforeTransactionCommit != null)                             beforeTransactionCommit(currentMessageInformation);                         tx.Complete();                         tx.Dispose();                         txDisposed = true;                     }                     try                     {                         if (messageCompleted != null)                             messageCompleted(currentMessageInformation, exception);                     }                     catch (Exception e)                     {                         Trace.TraceError("An error occured when raising the MessageCompleted event, the error will NOT affect the message processing"+ e);                     }                     return;                 }                 catch (Exception e)                 {                     Trace.TraceWarning("Failed to complete transaction, moving to error mode"+ e);                     exception = e;                 }             }             try             {                 if (txDisposed == false && tx != null)                 {                     Trace.TraceWarning("Disposing transaction in error mode");                     tx.Dispose();                 }             }             catch (Exception e)             {                 Trace.TraceWarning("Failed to dispose of transaction in error mode."+ e);             }             if (message == null)    
             return;                 try             {                 if (messageCompleted != null)                     messageCompleted(currentMessageInformation, exception);             }             catch (Exception e)             {                 Trace.TraceError("An error occured when raising the MessageCompleted event, the error will NOT affect the message processing"+ e);             }               try             {                 var copy = MessageProcessingFailure;                 if (copy != null)                     copy(currentMessageInformation, exception);             }             catch (Exception moduleException)             {                 Trace.TraceError("Module failed to process message failure: " + exception.Message+                                              moduleException);             }               if (messageQueue.IsTransactional == false)// put the item back in the queue             {                 messageQueue.Send(message);             }         }     You can see quite some processing and handling going on there. Yes this looks like real world code one did put together to make things work and he does not trust his callbacks. I guess these are event handlers which are optional and the delegates were extracted from an event to call them back later when necessary.  Lets see what the author of this code did intend:          private void HandleMessageCompletion(             TransactionHandler transactionHandler,             MessageCompletionHandler handler,             CurrentMessageInformation messageInfo,             ErrorCollector errors             )         {               // commit current pending transaction             transactionHandler.CallHandlerAndCommit(messageInfo, errors);               // We have an error for a null message do not send completion event             if (messageInfo.CurrentMessage == null)                 return;               // Send completion event in any case regardless of errors             handler.OnMessageCompleted(messageInfo, errors);               // put message back if queue is not transactional             transactionHandler.ResendMessageOnError(messageInfo.CurrentMessage, errors);         }   I did not bother to write the intention here again since the code should be pretty self explaining by now. I have used comments to explain the still nontrivial procedure step by step revealing the real intention about all this complex program flow. The original complexity of the problem domain does not go away but by applying the techniques of SRP (Single Responsibility Principle) and some functional style but we can abstract the necessary complexity away in useful abstractions which make it much easier to reason about it. Since most of the method seems to deal with errors I thought it was a good idea to encapsulate the error state of our current message in an ErrorCollector object which stores all exceptions in a list along with a description what the error all was about in the exception itself. We can log it later or not depending on the log level or whatever. It is really just a simple list that encapsulates the current error state.          
class ErrorCollector          {              List<Exception> _Errors = new List<Exception>();                public void Add(Exception ex, string description)              {                  ex.Data["Description"] = description;                  _Errors.Add(ex);              }                public Exception Last              {                  get                  {                      return _Errors.LastOrDefault();                  }              }                public bool HasError              {                  get                  {                      return _Errors.Count > 0;                  }              }          }   Since the error state is global we have two choices to store a reference in the other helper objects (TransactionHandler and MessageCompletionHandler)or pass it to the method calls when necessary. I did chose the latter one because a second argument does not hurt and makes it easier to reason about the overall state while the helper objects remain stateless and immutable which makes the helper objects much easier to understand and as a bonus thread safe as well. This does not mean that the stored member variables are stateless or thread safe as well but at least our helper classes are it. Most of the complexity is located the transaction handling I consider as a separate responsibility that I delegate to the TransactionHandler which does nothing if there is no transaction or Call the Before Commit Handler Commit Transaction Dispose Transaction if commit did throw In fact it has a second responsibility to resend the message if the transaction did fail. I did see a good fit there since it deals with transaction failures.          class TransactionHandler          {              TransactionScope _Tx;              Action<CurrentMessageInformation> _BeforeCommit;              OpenedQueue _MessageQueue;                public TransactionHandler(TransactionScope tx, Action<CurrentMessageInformation> beforeCommit, OpenedQueue messageQueue)              {                  _Tx = tx;                  _BeforeCommit = beforeCommit;                  _MessageQueue = messageQueue;              }                public void CallHandlerAndCommit(CurrentMessageInformation currentMessageInfo, ErrorCollector errors)              {                  if (_Tx != null && !errors.HasError)                  {                      try                      {                          if (_BeforeCommit != null)                          {                              _BeforeCommit(currentMessageInfo);                          }                            _Tx.Complete();                          _Tx.Dispose();                      }                      catch (Exception ex)                      {                          errors.Add(ex, "Failed to complete transaction, moving to error mode");                          Trace.TraceWarning("Disposing transaction in error mode");                          try                          {                              _Tx.Dispose();                          }                          catch (Exception ex2)                          {                              errors.Add(ex2, "Failed to dispose of transaction in error mode.");                          }                      }                  }              }                public void ResendMessageOnError(Message message, ErrorCollector errors)              {                  if (errors.HasError && !_MessageQueue.IsTransactional)                  {                      _MessageQueue.Send(message);         
        }
    }
}

If we need to change the handling in the future, we will have a much easier time reasoning about our application flow than before. After we have completed our transaction and called our callback, we can call the completion handler, which is the main purpose of the HandleMessageCompletion method after all. The responsibility of the MessageCompletionHandler is to call the completion callback and, when some error has occurred, the failure callback.

class MessageCompletionHandler
{
    Action<CurrentMessageInformation, Exception> _MessageCompletedHandler;
    Action<CurrentMessageInformation, Exception> _MessageProcessingFailure;

    public MessageCompletionHandler(Action<CurrentMessageInformation, Exception> messageCompletedHandler,
                                    Action<CurrentMessageInformation, Exception> messageProcessingFailure)
    {
        _MessageCompletedHandler = messageCompletedHandler;
        _MessageProcessingFailure = messageProcessingFailure;
    }

    public void OnMessageCompleted(CurrentMessageInformation currentMessageInfo, ErrorCollector errors)
    {
        try
        {
            if (_MessageCompletedHandler != null)
            {
                _MessageCompletedHandler(currentMessageInfo, errors.Last);
            }
        }
        catch (Exception ex)
        {
            errors.Add(ex, "An error occurred when raising the MessageCompleted event, the error will NOT affect the message processing");
        }

        if (errors.HasError)
        {
            SignalFailedMessage(currentMessageInfo, errors);
        }
    }

    void SignalFailedMessage(CurrentMessageInformation currentMessageInfo, ErrorCollector errors)
    {
        try
        {
            if (_MessageProcessingFailure != null)
                _MessageProcessingFailure(currentMessageInfo, errors.Last);
        }
        catch (Exception moduleException)
        {
            errors.Add(moduleException, "Module failed to process message failure");
        }
    }
}

If for some reason I screwed up the logic and we do need to call the completion handler from our TransactionHandler, we can simply pass the MessageCompletionHandler as a third argument to the CallHandlerAndCommit method and we are fine again. If the logic becomes even more complex and we need to ensure that the completed event is triggered only once, we now have one place, the completion handler, to capture that state. During this refactoring I simply put things together that belong together and came up with useful abstractions.
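To illustrate the "triggered only once" point: the guard can live entirely inside MessageCompletionHandler. Here is a minimal sketch of that idea; the _AlreadyCompleted flag is my illustration and is not part of the class above:

// Hypothetical once-only guard inside MessageCompletionHandler.
// _AlreadyCompleted is an assumed field, not in the original code.
bool _AlreadyCompleted;

public void OnMessageCompleted(CurrentMessageInformation currentMessageInfo, ErrorCollector errors)
{
    if (_AlreadyCompleted)
    {
        return; // completion was already signaled once
    }
    _AlreadyCompleted = true;

    // ... existing completion and failure logic from above ...
}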
If you look at the original argument list of the HandleMessageCompletion method, you can see how many things I have put together:

  - Message message is now encapsulated by CurrentMessageInformation messageInfo.
  - TransactionScope tx, Action<CurrentMessageInformation> beforeTransactionCommit and OpenedQueue messageQueue are now encapsulated by TransactionHandler transactionHandler.
  - Exception exception has become ErrorCollector errors.
  - Action<CurrentMessageInformation, Exception> messageCompleted and Action<CurrentMessageInformation, Exception> messageProcessingFailure are now encapsulated by MessageCompletionHandler handler.

The reason is simple: put the things that have relationships together and you will find useful abstractions almost automatically. I hope this makes sense to you. If you see a way to make it even simpler, you can show Ayende your improved version as well.
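To see how the pieces fit together, here is a minimal sketch of what the refactored HandleMessageCompletion could look like with this argument list. This is my reconstruction for illustration only; the actual method body is not shown in this post and may order things differently:

// Hedged sketch of the refactored method. It assumes CurrentMessageInformation
// exposes the underlying Message (an assumption, not shown above).
private void HandleMessageCompletion(
    CurrentMessageInformation messageInfo,
    TransactionHandler transactionHandler,
    MessageCompletionHandler handler,
    ErrorCollector errors)
{
    // Commit the transaction (a no-op without one); failures land in errors.
    transactionHandler.CallHandlerAndCommit(messageInfo, errors);

    // Raise the completed callback and, on error, the failure callback.
    handler.OnMessageCompleted(messageInfo, errors);

    // For a non-transactional queue, put the message back on failure.
    transactionHandler.ResendMessageOnError(messageInfo.Message, errors);
}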

    Read the article

  • Building Publishing Pages in Code

    - by David Jacobus
Originally posted on: http://geekswithblogs.net/djacobus/archive/2013/10/27/154478.aspx

One of the mantras we developers try to follow: ensure that the solution package we deliver to the client is complete. We build Web Parts, Master Pages, images, CSS files and other artifacts that we push to the client with a WSP (Solution Package), and then we have them finish the solution by building their site pages, adding the web parts to the site pages. I am a proponent that we, the developers, should minimize this time-consuming work and build these site pages in code. I found a few blogs and some MSDN documentation, but not really a complete solution that has all these artifacts working together. What I will discuss, and provide a solution for, is a package that has:

1.  Master Page
2.  Page Layout
3.  Page Web Parts
4.  Site Pages

Almost all of it is done in code, without the development team having to finish up the site-building process by spending a few hours or days completing the site! I am not implying that we skip this in development. In fact, we build these pages incrementally, testing our web parts, etc. I am saying that the final action in our solution is that we take all these artifacts and add them to the site pages in code; the client then only needs to activate a few features and voila, their site appears! I had a project that had me build 8 pages like this as part of the solution.

In this blog post, I am taking a master page solution that I have called DJGreenMaster. On my Office 365 development site it looks like this:

It is a generic master page for a SharePoint 2010 site, along with a three-column layout, centered, with a footer that uses a SharePoint list and web part for the footer links. I use this master page a lot in my site development! It is easy to change the color and site logo with a little CSS. I am going to add a few web parts for discussion purposes and then add these web parts to a site page in code. Let's look at the solution package for DJGreenMaster, as that will be the basis project for building the site pages:

What you are seeing is a complete solution to add a Master Page to a site collection, which contains:

1.  A Master Page module which contains the Master Page and Page Layout
2.  The Footer module to add the Footer Web Part
3.  Miscellaneous modules to add images, jQuery, CSS and a subsite page
4.  Three features and two feature event receivers:
    a.  DJGreenCSS, used to add the master page CSS file to the Style Sheet Library, with an event receiver to check it in
    b.  DJGreenMaster, used to add the Master Page and Page Layout; in an event receiver it changes the master page to DJGreenMaster, creates the footer list and checks the files in
    c.  DJGreenMasterWebParts, which adds the Footer Web Part to the site collection

I won't go over the code for this, as I will give it to you at the end of this blog post. I have discussed creating a list in code in a previous post. So what we have is the basis to begin what is germane to this discussion. I have the first two requirements completed. I now need to add the page web parts and then build the pages in code. For the page web parts, I will use one downloaded from CodePlex which, for simplicity, does not use a SharePoint custom list: the Weather Web Part. The other, downloaded from MSDN, is a SharePoint custom calendar web part; I had to add some functionality with jQuery to make the events color-coded, to exceed the built-in 10 overlays!
Here is the solution with the added projects:

Here is a screen shot of the Weather Web Part deployed:

Here is a screen shot of the Site Calendar with jQuery:

Okay, now we get to the final item: creating the publishing pages. We need to add a feature receiver to the DJGreenMaster project. I will name it DJSitePages and also add an event receiver:

We will build the page at the site collection level, and all of the code necessary will be contained in the event receiver. Add a reference to Microsoft.SharePoint.Publishing.dll, contained in the ISAPI folder of the 14 Hive.

First we will add some static methods that we will call from our event receiver:

private static void checkOut(string pagename, PublishingPage p)
{
    if (p.Name.Equals(pagename, StringComparison.InvariantCultureIgnoreCase))
    {
        if (p.ListItem.File.CheckOutType == SPFile.SPCheckOutType.None)
        {
            p.CheckOut();
        }

        if (p.ListItem.File.CheckOutType == SPFile.SPCheckOutType.Online)
        {
            p.CheckIn("initial");
            p.CheckOut();
        }
    }
}

private static void checkin(PublishingPage p, PublishingWeb pw)
{
    SPFile publishFile = p.ListItem.File;

    if (publishFile.CheckOutType != SPFile.SPCheckOutType.None)
    {
        publishFile.CheckIn("CheckedIn");
        publishFile.Publish("published");
    }

    // In case of content approval, approve the file (needed in a publishing site)
    if (pw.PagesList.EnableModeration)
    {
        publishFile.Approve("Initial");
    }
    publishFile.Update();
}

In a publishing site, CheckIn and CheckOut are required when dealing with pages. Okay, let's look at the FeatureActivated event receiver:

public override void FeatureActivated(SPFeatureReceiverProperties properties)
{
    object oParent = properties.Feature.Parent;

    if (properties.Feature.Parent is SPWeb)
    {
        currentWeb = (SPWeb)oParent;
        currentSite = currentWeb.Site;
    }
    else
    {
        currentSite = (SPSite)oParent;
        currentWeb = currentSite.RootWeb;
    }

    // Create the publishing pages
    CreatePublishingPage(currentWeb, "Home.aspx", "ThreeColumnLayout.aspx", "Home");
    //CreatePublishingPage(currentWeb, "Dummy.aspx", "ThreeColumnLayout.aspx", "Dummy");
}

Basically we are calling the CreatePublishingPage method with these parameters: the current web, the name of the page, the page layout, and the title of the page.
Let's look at the CreatePublishingPage method:

private void CreatePublishingPage(SPWeb site, string pageName, string pageLayoutName, string title)
{
    PublishingSite pubSiteCollection = new PublishingSite(site.Site);
    PublishingWeb pubSite = null;
    if (pubSiteCollection != null)
    {
        // Assign an object to the pubSite variable
        if (PublishingWeb.IsPublishingWeb(site))
        {
            pubSite = PublishingWeb.GetPublishingWeb(site);
        }
    }

    // Search for the page layout for creating the new page
    PageLayout currentPageLayout = FindPageLayout(pubSiteCollection, pageLayoutName);
    // Check whether the page layout could be found in the collection;
    // if not (== null), return, because the page has to be based on
    // an existing page layout
    if (currentPageLayout == null)
    {
        return;
    }

    PublishingPageCollection pages = pubSite.GetPublishingPages();
    foreach (PublishingPage p in pages)
    {
        // The page already exists
        if ((p.Name == pageName)) return;
    }

    PublishingPage newPage = pages.Add(pageName, currentPageLayout);
    newPage.Description = pageName.Replace(".aspx", "");
    // Here you can set some properties like:
    newPage.IncludeInCurrentNavigation = true;
    newPage.IncludeInGlobalNavigation = true;
    newPage.Title = title;

    // Build the page. Note: the case label must match the name the page
    // was created with above ("Home.aspx").
    switch (pageName)
    {
        case "Home.aspx":
            checkOut("Home.aspx", newPage);
            BuildHomePage(site, newPage);
            break;

        default:
            break;
    }

    // Now we can check the newly created page in to the "Pages" library
    checkin(newPage, pubSite);
}

The narrative of what is going on here:

1.  We need to find out if we are dealing with a publishing web.
2.  Get the page layout.
3.  Create the page in the Pages list.
4.  Based on the page name, we build that page. (Here is where we can add the methods to build multiple pages.)

In the switch we call BuildHomePage, where all the work is done to add the web parts. Prior to adding the web parts, we need to add references to the two web part projects in the solution:

using WeatherWebPart.WeatherWebPart;
using CSSharePointCustomCalendar.CustomCalendarWebPart;

We can then reference them in the BuildHomePage method.
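One thing to note before moving on: CreatePublishingPage calls a FindPageLayout helper that is not shown in the post. A minimal sketch of what it might look like follows; the method name matches the call above, but the body is my assumption (a simple case-insensitive name match over the site collection's page layouts):

// Hypothetical implementation of the FindPageLayout helper referenced above.
// Assumes a plain name match is sufficient to locate the layout.
private static PageLayout FindPageLayout(PublishingSite pubSiteCollection, string pageLayoutName)
{
    foreach (PageLayout layout in pubSiteCollection.GetPageLayouts(true))
    {
        if (layout.Name.Equals(pageLayoutName, StringComparison.InvariantCultureIgnoreCase))
        {
            return layout;
        }
    }
    // Returning null signals the caller to abort page creation
    return null;
}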
Let's look at BuildHomePage:

private static void BuildHomePage(SPWeb web, PublishingPage pubPage)
{
    // Get the web part manager for the page. To build other pages, repeat
    // the code below with the web parts for each page.
    SPLimitedWebPartManager mgr = web.GetLimitedWebPartManager(
        web.Url + "/Pages/Home.aspx",
        System.Web.UI.WebControls.WebParts.PersonalizationScope.Shared);

    WeatherWebPart.WeatherWebPart.WeatherWebPart wwp = new WeatherWebPart.WeatherWebPart.WeatherWebPart()
    {
        ChromeType = PartChromeType.None,
        Title = "Todays Weather",
        AreaCode = "2504627"
    };

    // Add the web part to a page layout web part zone
    mgr.AddWebPart(wwp, "g_685594D193AA4BBFABEF2FB0C8A6C1DD", 1);

    CSSharePointCustomCalendar.CustomCalendarWebPart.CustomCalendarWebPart cwp = new CustomCalendarWebPart()
    {
        ChromeType = PartChromeType.None,
        Title = "Corporate Calendar",
        listName = "CorporateCalendar"
    };

    mgr.AddWebPart(cwp, "g_20CBAA1DF45949CDA5D351350462E4C6", 1);

    pubPage.Update();
}

Here is what we are doing:

1.  We get a reference to the SharePoint limited web part manager, pointed at Home.aspx.
2.  We instantiate a new Weather Web Part and use the manager to add it to the page in a web part zone identified by ID, hence the need for a page layout whose zone IDs the developer knows.
3.  We instantiate the Calendar Web Part and use the manager to add it to the page.
4.  We then call the publishing page's Update method.
5.  Lastly, the CreatePublishingPage method checks in the page just created.

Here is a screen shot of the page right after a deploy!

Okay! I know we could make the home page look much better! However, I built this whole integrated solution in less than a day, with the caveat that the Green Master was already built! So what am I saying? Build your web parts, master pages, etc., and at the very end of the engagement build the pages. The client will be very happy! Here is the code for this solution.

    Read the article

  • How do I generate C# code from WADL files?

    - by Anders Sandvig
I am looking for a code generator that can generate C# code to access RESTful web services described by WADL files, in a way similar to how wadl2java works. Doing some searching I came across the rest-api-code-gen project on Google Code, but although the latest source does in fact support C#, the REST Describe & Compile demo site does not. (The C# button is there, but it's disabled.) I realize I could download the source and set up my own server with the latest version, but I would prefer not to, as what I need is a command line tool and not a web application with dependencies on Google Web Toolkit. I guess I could write my own command line tool based on the same source code, but if it has already been done, or other tools can do the job, I'd rather avoid it. So, I'm wondering, are there any tools like that out there?

    Read the article

  • Is there a way to mark up code to tell ReSharper not to format it?

    - by adrianbanks
I quite often use the ReSharper "Clean Up Code" command to format my code to our coding style before checking it into source control. This works well in general, but some bits of code are better formatted manually (e.g. because of the indenting rules in ReSharper, things like chained LINQ methods or multi-line ternary operators get a strange indent that pushes them way to the right). Is there any way to mark up parts of a file to tell ReSharper not to format that area? I'm hoping for some kind of markup similar to how ReSharper suppresses other warnings/features. If not, is there some way of changing a combination of settings to get ReSharper to format the indenting correctly? EDIT: I have found this post from the ReSharper forums that says that generated code sections (as defined in the ReSharper options page) are ignored in code cleanup. Having tried it though, it doesn't seem to get ignored.
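For what it's worth, newer ReSharper builds (well after this question was asked) added formatter on/off comment markers, which may need to be enabled in the formatting options; whether your version supports them is something to verify. A sketch of the idea:

// Sketch only: @formatter markers are honored by newer ReSharper versions
// and may need to be switched on in the formatting options.
class FormatterMarkerExample
{
    string Describe(bool isAdmin, bool isEditor)
    {
        // @formatter:off -- keep this hand-aligned ternary as-is
        return isAdmin  ? "Administrator"
             : isEditor ? "Editor"
             :            "Visitor";
        // @formatter:on
    }
}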

    Read the article

  • How to properly rewrite ASSERT code to pass /analyze in msvc?

    - by Sorin Sbarnea
Visual Studio added code analysis (/analyze) for C/C++ in order to help identify bad code. This is quite a nice feature, but when you deal with an old project you may be overwhelmed by the number of warnings. Most of the problems are generated because the old code does some ASSERT at the beginning of the method or function. I think this is the ASSERT definition used in the code (from afx.h):

#define ASSERT(f) DEBUG_ONLY((void) ((f) || !::AfxAssertFailedLine(THIS_FILE, __LINE__) || (AfxDebugBreak(), 0)))

Example code:

ASSERT(pBytes != NULL);
*pBytes = 0; // <- warning C6011: Dereferencing NULL pointer 'pBytes'

I'm looking for an easy and safe solution to resolve these warnings that does not imply disabling them. Did I mention that there are lots of occurrences in the current codebase?
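One common approach, offered here as a sketch rather than the definitive answer: fold __analysis_assume into the assertion macro so the analyzer learns that the asserted condition holds. _PREFAST_ is defined when the compiler runs under /analyze; the MY_ASSERT name is made up for illustration:

// Sketch: wrap the existing ASSERT so /analyze builds also inform the analyzer.
#ifdef _PREFAST_
    #define MY_ASSERT(f) do { ASSERT(f); __analysis_assume(!!(f)); } while (0)
#else
    #define MY_ASSERT(f) ASSERT(f)
#endif

// Usage: the analyzer now knows pBytes cannot be NULL afterwards.
// MY_ASSERT(pBytes != NULL);
// *pBytes = 0;  // warning C6011 no longer fires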

    Read the article

  • Typical size of unit tests compared to test code

    - by Frank Schwieterman
I'm curious what a reasonable / typical value is for the ratio of test code to production code when people are doing TDD. Looking at one component, I have 530 lines of test code for 130 lines of production code. Another component has 1000 lines of test code for 360 lines of production code. So the unit tests are requiring roughly 3x to 4x as much code. This is for JavaScript code. I don't have much tested C# code handy, but I think for another project I was looking at 2x to 3x as much test code as production code. It would seem to me that the lower that value is, assuming the tests are sufficient, the higher the quality of the tests. Pure speculation, I just wonder what ratios other people see. I know lines of code is a loose metric, but since I code in the same style for both test and production (same spacing format, same amount of comments, etc.) the values are comparable.

    Read the article

  • How can I reuse my javascript code between client and server?

    - by Chris Farmer
I have some JavaScript code that includes an ANTLR-generated lexer and parser, and some associated syntax tree evaluation functionality. This code runs in the browser in my web app to support users who author code snippets which process scientific data. Now I'd like to do some additional background processing on the server using the same generated parser. I would prefer not to have to re-implement this stuff in C# and end up with multiple bits of code that do the exact same thing. Performance isn't as critical to me as eliminating duplication, since this is a background process. So, how can I call into my JavaScript code from C#? And how can I format my script so that it plays nicely with my .NET web app?
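One direction worth sketching (an assumption about the setup, not the asker's actual solution): host a JavaScript engine inside the .NET process so the exact same script file runs on the server. The example below uses the open-source Jint interpreter; the file name and function name are placeholders:

using System;
using System.IO;
using Jint;

class SharedScriptRunner
{
    static void Main()
    {
        // Load the same .js file the browser uses (path is illustrative).
        string script = File.ReadAllText("parser.js");

        // Run the script so its functions are defined in the engine.
        var engine = new Engine();
        engine.Execute(script);

        // Call a function assumed to be defined in the shared script.
        var result = engine.Invoke("parseSnippet", "2 + 2 * sample");
        Console.WriteLine(result);
    }
}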

    Read the article

  • C# Code Smells. What are the most common and how to fix them?

    - by Carlos Loth
Hi, one thing I'd like to hear from you is which code smells you see most often in C# code and how to fix them. I'm working on defining a code-review process for my team, and I'd like to include a section with the most common C# code smells and a possible fix for each of them. I'd like the assistance of the community on that, so others can reuse this list for further reference. What are the most common smells you have seen in C# code? How did you fix them?
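To make the kind of entry such a list might contain concrete, here is one illustrative example of my own (not from the question): the classic "switch on type code" smell and its usual fix, replacing the conditional with polymorphism:

// Smell: behavior chosen by switching on a type code.
//
//   switch (employee.Type)
//   {
//       case EmployeeType.Manager:  return employee.BaseSalary + employee.Bonus;
//       case EmployeeType.Engineer: return employee.BaseSalary;
//       default: throw new ArgumentOutOfRangeException();
//   }
//
// Fix: push the varying behavior into subclasses, so adding a new kind
// of employee no longer means editing every switch in the codebase.
abstract class Employee
{
    public decimal BaseSalary { get; set; }
    public abstract decimal GetPay();
}

class Manager : Employee
{
    public decimal Bonus { get; set; }
    public override decimal GetPay() { return BaseSalary + Bonus; }
}

class Engineer : Employee
{
    public override decimal GetPay() { return BaseSalary; }
}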

    Read the article
