Search Results

Search found 2916 results on 117 pages for 'prototype chain'.


  • elffile: ELF Specific File Identification Utility

    - by user9154181
    Solaris 11 has a new standard user-level command, /usr/bin/elffile. elffile is a variant of the file utility that is focused exclusively on linker-related files: ELF objects, archives, and runtime linker configuration files. All other files are simply identified as "non-ELF". The primary advantage of elffile over the existing file utility is in the area of archives: elffile examines the archive members and can produce a summary of the contents, or per-member details.

    The impetus to add elffile to Solaris came from the effort to extend the format of Solaris archives so that they could grow beyond their previous 32-bit file limits. That work introduced a new archive symbol table format. Now that there was more than one possible format, I thought it would be useful if the file utility could identify which format a given archive is using, leading me to extend the file utility:

        % cc -c ~/hello.c
        % ar r foo.a hello.o
        % file foo.a
        foo.a: current ar archive, 32-bit symbol table
        % ar r -S foo.a hello.o
        % file foo.a
        foo.a: current ar archive, 64-bit symbol table

    In turn, this caused me to think about all the things that I would like the file utility to be able to tell me about an archive. In particular, I'd like to be able to know what's inside without having to unpack it. The end result of that train of thought was elffile. Much of the discussion in this article is adapted from the PSARC case I filed for elffile in December 2010: PSARC 2010/432 elffile.

    Why file Is No Good For Archives And Yet Should Not Be Fixed

    The standard /usr/bin/file utility is not very useful when applied to archives. When identifying an archive, a user typically wants to know two things:

    1. Is this an archive?
    2. Presupposing that the archive contains objects, which is by far the most common use for archives, what platform are the objects for? Are they for sparc or x86? 32 or 64-bit? Some confusing combination from varying platforms?

    The file utility provides a quick answer to question (1), as it identifies all archives as "current ar archive". It does nothing to answer the more interesting question (2). Answering that question requires a multi-step process:

    1. Extract all archive members.
    2. Use the file utility on the extracted files, examine the output for each file in turn, and compare the results to generate a suitable summary description.
    3. Remove the extracted files.

    It should be easier and more efficient to answer such an obvious question. It would be reasonable to extend the file utility to examine archive contents in place and produce a description. However, there are several reasons why I decided not to do so:

    - The correct design for this feature within the file utility would have file examine each archive member in turn, applying its full abilities to each member. This would be elegant, but also represents a rather dramatic redesign and re-implementation of file. Archives nearly always contain nothing but ELF objects for a single platform, so such generality in the file utility would be of little practical benefit.
    - It is best to avoid adding new options to standard utilities for which other implementations of interest exist. In the case of the file utility, one concern is that we might add an option which later appears in the GNU version of file with a different and incompatible meaning. Indeed, there have been discussions about replacing the Solaris file with the GNU version in the past. This may or may not be desirable, and may or may not ever happen. Either way, I don't want to preclude it.
    - Examining archive members is an O(n) operation, and can be relatively slow with large archives. The file utility is supposed to be a very fast operation.

    I decided that extending file in this way is overkill, and that an investment in the file utility for better archive support would not be worth the cost. A solution that is more narrowly focused on ELF and other linker-related files is really all that we need. The necessary code for doing this already exists within libelf. All that is missing is a small user-level wrapper to make that functionality available at the command line.

    In that vein, I considered adding an option for this to the elfdump utility. I examined elfdump carefully, and even wrote a prototype implementation. The added code is small and simple, but the conceptual fit with the rest of elfdump is poor. The result complicates elfdump syntax and documentation, definite signs that this functionality does not belong there. And so, I added this functionality as a new user-level command.

    The elffile Command

    The syntax for this new command is:

        elffile [-s basic | detail | summary] filename...

    Please see the elffile(1) manpage for additional details. To demonstrate how output from elffile looks, I will use the following files:

        File          Description
        config        A runtime linker configuration file produced with crle
        dwarf.o       An ELF object
        /etc/passwd   A text file
        mixed.a       Archive containing a mixture of ELF and non-ELF members
        mixed_elf.a   Archive containing ELF objects for different machines
        not_elf.a     Archive containing no ELF objects
        same_elf.a    Archive containing a collection of ELF objects for the same machine. This is the most common type of archive.

    The file utility identifies these files as follows:

        % file config dwarf.o /etc/passwd mixed.a mixed_elf.a not_elf.a same_elf.a
        config: Runtime Linking Configuration 64-bit MSB SPARCV9
        dwarf.o: ELF 64-bit LSB relocatable AMD64 Version 1
        /etc/passwd: ascii text
        mixed.a: current ar archive, 32-bit symbol table
        mixed_elf.a: current ar archive, 32-bit symbol table
        not_elf.a: current ar archive
        same_elf.a: current ar archive, 32-bit symbol table

    By default, elffile uses its "summary" output style. This output differs from the output of the file utility in two significant ways:

    - Files that are not an ELF object, archive, or runtime linker configuration file are identified as "non-ELF", whereas the file utility attempts further identification for such files.
    - When applied to an archive, the elffile output includes a description of the archive's contents, without requiring member extraction or other additional steps.

    Applying elffile to the above files:

        % elffile config dwarf.o /etc/passwd mixed.a mixed_elf.a not_elf.a same_elf.a
        config: Runtime Linking Configuration 64-bit MSB SPARCV9
        dwarf.o: ELF 64-bit LSB relocatable AMD64 Version 1
        /etc/passwd: non-ELF
        mixed.a: current ar archive, 32-bit symbol table, mixed ELF and non-ELF content
        mixed_elf.a: current ar archive, 32-bit symbol table, mixed ELF content
        not_elf.a: current ar archive, non-ELF content
        same_elf.a: current ar archive, 32-bit symbol table, ELF 64-bit LSB relocatable AMD64 Version 1

    The output for same_elf.a is of particular interest: the vast majority of archives contain only ELF objects for a single platform, and in this case, the default output from elffile answers both of the questions about archives posed at the beginning of this discussion, in a single efficient step. This makes elffile considerably more useful than file, within the realm of linker-related files.

    elffile can produce output in two other styles, "basic" and "detail". The basic style produces output that is the same as that from file, for linker-related files. The detail style produces per-member identification of archive contents. This can be useful when the archive contents are not homogeneous ELF objects, and more information is desired than the summary output provides:

        % elffile -s detail mixed.a
        mixed.a: current ar archive, 32-bit symbol table
        mixed.a(dwarf.o): ELF 32-bit LSB relocatable 80386 Version 1
        mixed.a(main.c): non-ELF content
        mixed.a(main.o): ELF 64-bit LSB relocatable AMD64 Version 1 [SSE]
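    As an aside, the identification step these utilities start from rests on well-known magic numbers: ELF objects begin with the four bytes 0x7F 'E' 'L' 'F', and ar archives begin with the string "!<arch>\n". The following is a minimal, purely illustrative sketch in Java of that first check only; it is not the elffile implementation, which is built on libelf and goes on to inspect archive members and symbol tables:

        import java.io.IOException;
        import java.io.RandomAccessFile;
        import java.util.Arrays;

        // Minimal illustration of magic-number-based file identification.
        // Illustrative only -- elffile itself is built on libelf.
        public class MagicCheck {
            private static final byte[] ELF_MAGIC = {0x7F, 'E', 'L', 'F'};
            private static final byte[] AR_MAGIC = "!<arch>\n".getBytes();

            public static String identify(String path) throws IOException {
                try (RandomAccessFile f = new RandomAccessFile(path, "r")) {
                    byte[] head = new byte[AR_MAGIC.length]; // 8 bytes, the longest magic
                    int n = f.read(head);
                    if (n >= ELF_MAGIC.length
                            && Arrays.equals(Arrays.copyOf(head, 4), ELF_MAGIC)) {
                        return "ELF object";
                    }
                    if (n >= AR_MAGIC.length && Arrays.equals(head, AR_MAGIC)) {
                        return "ar archive"; // member inspection would start here
                    }
                    return "non-ELF";
                }
            }

            public static void main(String[] args) throws IOException {
                for (String path : args) {
                    System.out.println(path + ": " + identify(path));
                }
            }
        }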

    Read the article

  • Why do we (really) program to interfaces?

    - by Kyle Burns
    One of the earliest lessons I was taught in Enterprise development was "always program against an interface". This was back in the VB6 days and I quickly learned that no code would be allowed to move to the QA server unless my business objects and data access objects were each defined as an interface with a matching implementation class. Why? "It's more reusable" was one answer. "It doesn't tie you to a specific implementation" a slightly more knowing answer. And let's not forget the discussion-ending "it's a standard". The problem with these responses was that senior people didn't really understand the reason we were doing the things we were doing and because of that, we were entirely unable to realize the intent behind the practice - we simply used interfaces and had a bunch of extra code to maintain to show for it.

    It wasn't until a few years later that I finally heard the term "Inversion of Control". Simply put, "Inversion of Control" takes the creation of objects that used to be within the control (and therefore a responsibility) of your component and moves it to some outside force. For example, consider the following code which follows the old "always program against an interface" rule in the manner of many corporate development shops:

        ICatalog catalog = new Catalog();
        Category[] categories = catalog.GetCategories();

    In this example, I met the requirement of the rule by declaring the variable as ICatalog, but I didn't hit "it doesn't tie you to a specific implementation" because I explicitly created an instance of the concrete Catalog object. If I want to test the functionality of the code I just wrote, I have to have an environment in which Catalog can be created along with any of the resources upon which it depends (e.g. configuration files, database connections, etc.) in order to test my functionality. That's a lot of setup work, and one of the things that I think ultimately discourages real buy-in of unit testing in many development shops.

    So how do I test my code without needing Catalog to work? A very primitive approach I've seen is to change the line that instantiates catalog to read:

        ICatalog catalog = new FakeCatalog();

    Once the test is run and passes, the code is switched back to the real thing. This obviously poses a huge risk of introducing test code into production, and in my opinion is worse than just keeping the dependency and its associated setup work. Another popular approach is to make use of Factory methods, which use an object whose "job" is to know how to obtain a valid instance of the object. Using this approach, the code may look something like this:

        ICatalog catalog = CatalogFactory.GetCatalog();

    The code inside the factory is responsible for deciding "what kind" of catalog is needed. This is a far better approach than the previous one, but it does make projects grow considerably, because now, in addition to the interface, the real implementation, and the fake implementation(s) for testing, you have added a minimum of one factory (or at least a factory method) for each of your interfaces. Once again, developers say "that's too complicated and has me writing a bunch of useless code" and quietly slip back into just creating a new Catalog and chalking any test failures up to "it will probably work on the server". This is where software intended specifically to facilitate Inversion of Control comes into play.

    There are many libraries that take on the Inversion of Control responsibilities in .Net, and most of them have their own pros and cons. From this point forward I'll discuss concepts from the standpoint of the Unity framework produced by Microsoft's Patterns and Practices team. I'm primarily focusing on this library because questions about it inspired this posting.

    At Unity's core, and that of most any IoC framework, is a catalog or registry of components. This registry can be configured either through code or using the application's configuration file, and in the simplest terms says "interface X maps to concrete implementation Y". It can get much more complicated, but I want to keep things at the "what does it do" level instead of "how does it do it". The object that exposes most of the Unity functionality is the UnityContainer. This object exposes methods to configure the catalog as well as the Resolve<T> method, which is used to obtain an instance of the type represented by T. When using the Resolve<T> method, Unity does not necessarily have to just "new up" the requested object; it can also track dependencies of that object and ensure that the entire dependency chain is satisfied.

    There are three basic ways that I have seen Unity used within projects: through classes directly using the Unity container, classes requiring injection of dependencies, and classes making use of the Service Locator pattern.

    The first usage of Unity is when classes are aware of the Unity container and directly call its Resolve method whenever they need the services advertised by an interface. The upside of this approach is that IoC is utilized, but the downside is that every class has to be aware that Unity is being used and is tied directly to that implementation.

    Many developers don't like the idea of as close a tie to a specific IoC implementation as is represented by using Unity within all of your classes, and for the most part I agree that this isn't a good idea. As an alternative, classes can be designed for Dependency Injection. Dependency Injection is where a force outside the class itself manipulates the object to provide implementations of the interfaces that the class needs to interact with the outside world. This is typically done either through constructor injection, where the object has a constructor that accepts an instance of each interface it requires, or through property setters accepting the service providers. When using dependency injection, I lean toward the use of constructor injection because I view the constructor as being a much better way to "discover" what is required for the instance to be ready for use. During resolution, Unity looks for an injection constructor and will attempt to resolve instances of each interface required by the constructor, throwing an exception if unable to meet the advertised needs of the class. The upside of this approach is that the needs of the class are very clearly advertised and the class is unaware of which IoC container (if any) is being used. The downside is that you're required to maintain the objects passed to the constructor as instance variables throughout the life of your object, and objects which coordinate with many external services require a lot of additional constructor arguments (this gets ugly and may indicate a need for refactoring).
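    The snippets in this post are C#, but constructor injection itself is language-neutral. Here is a minimal, container-free sketch in plain Java of what "advertising needs through the constructor" looks like; the names mirror the Catalog examples above, everything else is hypothetical, and no IoC framework is involved:

        // A minimal, container-free sketch of constructor injection.
        // Names mirror the C# examples above; illustrative only.
        interface Catalog {
            String[] getCategories();
        }

        // Hypothetical production implementation.
        class DatabaseCatalog implements Catalog {
            public String[] getCategories() {
                return new String[] { "Books", "Music" }; // stand-in for a real query
            }
        }

        // The class advertises its needs through its constructor
        // instead of constructing a concrete Catalog itself.
        class CategoryReport {
            private final Catalog catalog;

            CategoryReport(Catalog catalog) {
                this.catalog = catalog;
            }

            int categoryCount() {
                return catalog.getCategories().length;
            }
        }

        public class Demo {
            public static void main(String[] args) {
                // In production an IoC container would resolve this chain;
                // in a unit test you hand in a fake with no setup work at all.
                Catalog fake = () -> new String[] { "TestOnly" };
                System.out.println(new CategoryReport(fake).categoryCount()); // prints 1
            }
        }

    Note how the test swaps in a fake with a one-line lambda, which is exactly the setup-cost reduction described above.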
    The final way that I've seen and used Unity is to make use of the ServiceLocator pattern, of which the Patterns and Practices team has also provided a Unity-compatible implementation. When using the ServiceLocator, your class calls ServiceLocator.Retrieve in places where it would have called Resolve on the Unity container. Like using Unity directly, this ties you directly to the ServiceLocator implementation and makes your code aware that dependency injection is taking place, but it does have the upside of giving you the freedom to swap out the underlying IoC container if necessary. I'm not hugely concerned with hiding IoC entirely from the class (I view this as a "nice to have"), so the single biggest problem that I see with the ServiceLocator approach is that it provides no way to proactively advertise needs in the way that constructor injection does, allowing more opportunity for difficult-to-track runtime errors.

    This blog entry has not been intended in any way to be a definitive work on IoC, but rather as something to spur thought about why we program to interfaces and some ways to reach the intended value of the practice instead of having it just complicate your code. I hope that it helps somebody begin or continue a journey away from being a "Cargo Cult Programmer".

    Read the article

  • Projected Results

    - by Sylvie MacKenzie, PMP
    Excerpt from PROFIT - ORACLE - by Monica Mehta

    Yasser Mahmud has seen a revolution in project management over the past decade. During that time, the former Primavera product strategist (who joined Oracle when his company was acquired in 2008) has observed a transformation not only in the way IT systems support corporate projects but in the role project portfolio management (PPM) plays in the enterprise. "Fifteen years ago project management was the domain of the project management office (PMO)," Mahmud recalls of earlier days. "But over the course of the past decade, we've seen it transform into a mission-critical enterprise discipline that has made Primavera indispensable in the boardroom. Now, as a senior manager, a board member, or a C-level executive you have direct and complete visibility into what's going on in the organization—at a level of detail at which you're able to consume that information."

    Now serving as Oracle's vice president of product strategy and industry marketing, Mahmud shares his thoughts on how Oracle's Primavera solutions have evolved and how best-in-class project portfolio management systems can help businesses stay competitive.

    Profit: What do you feel are the market dynamics that are changing project management today?

    Mahmud: First, the data explosion. We're generating data at twice the rate at which we can actually store it. The same concept applies to project-intensive organizations. A lot of data is gathered, but what are we really doing with it? Are we turning data into insight? Are we using that insight and turning it into foresight with analytics tools? This is a key driver that will separate the very good companies—the very competitive companies—from those that are not as competitive.

    Another trend is centered on the explosion of mobile computing. By the year 2013, an estimated 35 percent of the world's workforce is going to be mobile. That's one billion people. So the question is not if you're going to go mobile, it's how fast you are going to go mobile. What kind of impact does that have on how the workforce participates in projects? What worked ten to fifteen years ago is not going to work today. It requires a real rethink around the interfaces and how data is actually presented.

    Profit: What is the role of project management in this new landscape?

    Mahmud: We recently conducted a PPM study with the Economist Intelligence Unit to determine how important project management is considered within organizations. Our target was primarily CFOs, CIOs, and senior managers, and we discovered that while 95 percent of participants believed it critical to their business, only six percent were confident that projects were delivered on time and on budget. That's a huge gap.

    Most organizations are looking for efficiency, especially in these volatile financial times. But senior management can't keep track of every project in a large organization. As a result, executives are attempting to inventory the work being conducted under their watch. What is often needed is a very high-level assessment conducted at the board level to say, "Here are the 50 initiatives that we have underway. How do they line up with our strategic drivers?" This line of questioning can provide early warning that work and strategy are out of alignment, finding the gap between what the business needs to do and the actual performance scorecard. That's low-hanging fruit for any executive looking to increase efficiency and save money. But it can only be obtained through proper assessment of existing projects—and you need a project system of record to get that done.

    Over the next decade or so, project management is going to transform into holistic work management. Business leaders will want to make sure key projects align with corporate strategy, but they will also want the ability to drill down into daily activity and smaller projects to make sure those line up as well. Keeping employees from working on tasks—even for a few hours—that don't line up with corporate goals will, in many ways, become a competitive differentiator.

    Profit: How do all of these market challenges and shifting trends impact Oracle's Primavera solutions and meeting customers' needs?

    Mahmud: For Primavera, it's a transformation from being a project management application to a PPM system in the enterprise, and also making that system a mission-critical application by connecting it to other key applications within the ecosystem, such as the enterprise resource planning (ERP), supply chain, and CRM systems. Analytics have also become a huge component. Business analytics have made Oracle's Primavera applications pertinent in the boardroom. Now, as a senior manager, a board member, a CXO, CIO, or CEO, you have direct visibility into what's going on in the organization at a level at which you're able to consume that information. In addition, all of this information pairs up really well with your financials and other data. Certainly, when you're an Oracle shop, you have visibility that you didn't have before from a project execution perspective.

    Profit: What new strategies and tools are being implemented to create a more efficient workplace for users?

    Mahmud: We believe very strongly that just because you call something an enterprise project portfolio management system doesn't make it so—you have to get people to want to participate in the system. This can't be mandated down from the top. It simply doesn't work that way. A truly adoptable solution is one that makes it super easy for all types of users to participate, by providing them interfaces where they live. Keeping that in mind, a major area of development has been alternative user interfaces. This is increasingly resulting in the creation of lighter-weight, targeted interfaces, such as native smartphone applications for the iPhone and Android platforms.

    Profit: How does this translate into the development of Oracle's Primavera solutions?

    Mahmud: Let me give you a few examples. We recently announced the launch of our Primavera P6 Team Member application, which is a native iOS application for the iPhone. This interface makes it easier for team members to do their jobs quickly and effectively. Similarly, we introduced the Primavera analytics application, which can be consumed via mobile devices, and when married with Oracle Spatial capabilities, users can get a geographical view of what's going on and which projects are occurring in various locations around the world. Lastly, we introduced advanced email integration that allows project team members to status work via e-mail. This functionality leverages the fact that users are in their e-mail system throughout the day and allows them to status their work without the need to launch the Primavera application.

    It comes back to a mantra: provide as many alternative user interfaces as possible, so you can give people the ability to work, to participate, to raise issues, to create projects, in the places where they live. Do it in such a way that it's non-intrusive, easy, and intuitive, and they can get it done in a short amount of time. If you do that, workers can get back to doing what they're actually getting paid for.

    Read the article

  • BI Applications overview

    - by sv744
    Welcome to the Oracle BI applications blog! This blog will cover various features, the general roadmap, descriptions of functionality, and implementation steps related to Oracle BI applications. In this first post we start with an overview of the BI apps and will delve deeper into some of the topics below in the upcoming weeks and months. If there are other topics you would like us to talk about, please feel free to provide feedback.

    The Oracle BI applications are a set of pre-built applications that enable pervasive BI by providing role-based insight for each functional area, including sales, service, marketing, contact center, finance, supplier/supply chain, HR/workforce, and executive management. For example, Sales Analytics includes role-based applications for sales executives, sales management, as well as front-line sales reps, each of whom have different needs. The applications integrate and transform data from a range of enterprise sources—including Siebel, Oracle, PeopleSoft, SAP, and others—into actionable intelligence for each business function and user role. This post starts with the key benefits and characteristics of Oracle BI applications; in a series of subsequent posts, each of these points will be explained in detail.

    Why BI apps?
    - Demonstrate the value of BI to a business user: show reports, dashboards, and models that can answer their business questions as part of the sales cycle.
    - Demonstrate the technical feasibility of a BI project, significantly lower risk, and improve the odds of success.
    - Build-vs-buy benefit: you don't have to start with a blank sheet of paper.
    - Help consolidate disparate systems; data integration in M&A situations.
    - Insulate BI consumers from changes in the OLTP systems.
    - Present OLTP data and highlight issues of poor or missing data—and improve data quality and accuracy.

    Prebuilt integrations
    - BI apps support prebuilt integrations against leading ERP sources: Fusion Applications, E-Business Suite, PeopleSoft, JD Edwards, Siebel, SAP.
    - Co-developed with input from functional experts on the BI and Applications teams.
    - Out-of-the-box dimensional-model-to-source-model mappings.
    - Multi-source and multi-instance support.

    Rich data model
    - BI apps have a very rich dimensional data model, built over 10 years, that incorporates best practices from a BI modeling perspective as well as reflecting source system complexities.
    - Conformed dimensional model across all business subject areas allows cross-functional reporting, e.g. customer/supplier 360.
    - Over 360 fact tables across 7 product areas: CRM – 145, SCM – 47, Financials – 28, Procurement – 20, HCM – 27, Projects – 18, Campus Solutions – 21, PLM – 56.
    - Supported by 300 physical dimensions.
    - Support for extensive calendars: Gregorian, enterprise, and ledger based.
    - Conformed data model and metrics for real-time vs. warehouse-based reporting.
    - Multi-tenant enabled.

    Extensive BI-related transformations
    BI apps ETL and data integration support the various transformations required for dimensional models and reporting requirements. All of these have been distilled into common patterns and abstracted logic which can be readily reused across different modules:
    - Slowly Changing Dimension support
    - Hierarchy flattening support
    - Row/column hybrid hierarchy flattening
    - As Is vs. As Was hierarchy support
    - Currency conversion: support for 3 corporate, CRM, ledger, and transaction currencies
    - UOM conversion
    - Internationalization/localization; dynamic data translations
    - Code standardization (domains)
    - Historical snapshots
    - Cycle and process lifecycle computations
    - Balance facts
    - Equalization of GL accounting chartfields/segments
    - Standardized values for categorizing GL accounts
    - Reconciliation between GL and subledgers to track accounted/transferred/posted transactions to GL
    - Materialization of data only available through costly and complex APIs, e.g. Fusion Payroll, EBS/Fusion accruals
    - Complex interpretation of source events, e.g.: what constitutes a transfer; deriving supervisors via position hierarchy; deriving primary assignment in PSFT; categorizing and transposing payroll balances into specific metrics to support side-by-side comparison of, for example, fixed salary, variable salary, tax, bonus, and overtime payments; counting of events, e.g. converting events to fact counters so that the number of hires can easily be added up and compared alongside total transfers and terminations
    - Multi-pass processing of multiple sources, e.g. headcount, salary, promotion, performance, to allow side-by-side comparison
    - Adding value to data to aid analysis through banding, additional domain classifications, and groupings to allow higher-level analytical reporting and data discovery
    - Calculation of complex measures, for example: COGS, DSO, DPO, inventory turns, etc.; transfers within a hierarchy, or out of/into a hierarchy relative to a viewpoint in the hierarchy

    Configurability and extensibility support
    BI apps offer support for extensibility for various entities, either automated or as part of the extension methodology:
    - Key flexfield and descriptive flexfield support
    - Extensible attribute support (JDE)
    - Conformed domains

    ETL architecture
    - A modular adapter architecture which allows support of multiple product lines in a single conformed model.
    - Multi-source, multi-technology.
    - Orchestration: creates a load plan that takes into account task dependencies and the customer's deployment, generating a plan from a set of complex ETL tasks.
    - Plan optimization allowing parallel ETL tasks.
    - Oracle: bitmap indexes and partition management.
    - High availability support; follow-the-sun support.

    TCO
    BI apps provide several utilities and capabilities that help with overall total cost of ownership and ensure a rapid implementation:
    - Improved cost of ownership: lower cost to deploy
    - On-going support for new versions of the source application
    - Task-based setup flows
    - Data lineage
    - Functional setup performed in a Web UI by a functional person
    - Configuration Test-to-Production support

    Security
    BI apps support both data and object security, enabling implementations to quickly configure the application to match their reporting security needs:
    - Fine-grained object security at the report/dashboard and presentation catalog level
    - Data security integration with source systems
    - Extensible to support external data security rules

    Extensive set of KPIs
    - Over 7000 base and derived metrics across all modules
    - Time series calculations (YoY, % growth, etc.)
    - Common currency and UOM reporting
    - Cross-subject-area KPIs (analyzing HR vs. GL data, drilling from GL to AP/AR, etc.)

    Prebuilt reports and dashboards
    - 3000+ prebuilt reports supporting a large number of industries
    - Hundreds of role-based dashboards
    - Dynamic currency conversion at the dashboard level

    Highly tuned performance
    The BI apps have been tuned over the years for both highly performant ETL and dashboard performance. The applications use best practices and advanced database features to enable the best possible performance:
    - Optimized data model for BI and analytic queries
    - Prebuilt aggregates, and the ability for customers to easily create their own aggregates on warehouse facts, allowing for scalable end-user performance
    - Incremental extracts and loads
    - Incremental aggregate builds
    - Automatic table index and statistics management
    - Parallel ETL loads
    - Source system delete handling
    - Low-latency extract with GoldenGate
    - Micro-ETL support
    - Bitmap indexes
    - Partitioning support
    - Modularized deployment: start small and add other subject areas seamlessly

    Source-specific staging and real-time schema
    - Support for a source-specific operational reporting schema for EBS, PSFT, Siebel, and JDE

    Application integrations
    The BI apps also allow for integration with source systems as well as other applications that add value through BI and enable BI consumption during operational decision making:
    - Embedded dashboards for Fusion, EBS, and Siebel applications
    - Action Link support
    - Marketing segmentation
    - Sales Predictor Dashboard
    - Territory management

    External integrations
    The BI apps data integration choices include support for loading external data:
    - External data enrichment choices: UNSPSC, item class, etc.
    - Extensible spend classification

    Broad deployment choices
    - Exalytics support
    - Databases: Oracle, Exadata, Teradata, DB2, MSSQL
    - ETL tool of choice: ODI (coming), Informatica
    - Extensible and customizable: extensible architecture and methodology to add custom and external content
    - Upgradable across releases

    Thanks for reading this long post, and be on the lookout for future posts. We look forward to your valuable feedback on these topics, as well as suggestions on what other topics you would like us to cover.

    Read the article

  • Effectiveness and Efficiency

    - by Daniel Moth
    In the professional environment, i.e. at work, I am always seeking personal growth and to be challenged. The result is that my assignments, my work list, my tasks, my goals, my commitments, my [insert whatever word resonates with you] keep growing (in scope and desired impact). Which in turn means I have to keep finding new ways to deliver more value, while not falling into the trap of working more hours. To do that I continuously evaluate both my effectiveness and my efficiency.

    EFFECTIVENESS

    The first thing I check is my effectiveness: Am I doing the right things? Am I focusing too much on unimportant things? Am I spending more time doing stuff that is important to my team/org/division/business/company, or am I spending it on stuff that is important to me and that I enjoy doing? Am I valuing activities that maybe I have outgrown and should be delegated to others who are at a stage I have surpassed (in Microsoft speak: is the work I am doing level appropriate or am I still operating at the previous level)?

    Notice how the answers to those questions change over time and due to certain events, so I have to remind myself to revisit them frequently. Events that force me to re-examine them are: change of role; change of team/org/etc; change of direction of team/org/etc; re-org; new hires on the team that take on some of the work I did; personal promotion; change of manager... and if none of those events has occurred since the last annual review, I ask myself those questions at each annual review anyway.

    If you think you are not being effective at work, make a list of the stuff that you do and start tracking where your time goes. In parallel, have a discussion with your manager about where they think your time should go. Ultimately your time is finite and hence it is your most precious investment; don't waste it. If your management doesn't value as highly what you spend your time on, then either convince your management, or stop spending your time on it, or find different management: Lead, Follow, or get out of the way!

    That's my view on effectiveness. You have to fix that before moving on to being efficient, or you may end up being very efficient at stuff that nobody wants you to be doing in the first place. For example, you may be spending your time writing blog posts and becoming better and faster at it all the time. If your manager thinks that is not even part of your job description, you are wasting your time to satisfy your inner desires. Nobody can help you with your effectiveness other than your management chain and your management peers - they are the judges of it.

    EFFICIENCY

    The second thing I check is my efficiency: Am I doing things right? For me, doing things right means that I deliver the same quality of work faster [than what I used to, and than my peers, and than expected of me]. The result is that I can achieve more [than what I used to, and than my peers, and than expected of me]. Notice how the efficiency goal is a more portable one. If, by whatever criteria, you think you are the best at [insert your own skill here], this can change at two events: because you have new colleagues (who are potentially better than your older ones), and with a change of manager (who has potentially higher expectations). That's about it. Once you are efficient at something, you carry that with you... All you need to really be doing here is, when taking on new kinds of work that you haven't done before, try a few approaches and devise a system so that you can become efficient at this new activity too... Just keep "collecting" stuff that you are efficient at.

    If you think you are not being efficient at something, break it down: What are the steps you take to complete that task? How long do you spend on each step? Talk to others about what steps they take, to see if you can optimize some steps away or trade them for better steps, or just learn how to complete a step faster. Have a system for every task you take on so that you can have repeatable success.

    That's my view on efficiency. You have to fix it so that you can free up time to do more. When you plan a route from A to B - all else being equal - you try to get there as fast as possible, so why would you not want to do that with your everyday work? For example, imagine you are inefficient at processing email: You spend more time than necessary dealing with email, and you still end up with dropped email threads and with slower response times than others. How can you improve? Talk to someone that you think is good at this, understand their system (e.g. here is my email processing system) and come up with one that works for you.

    Parting Thoughts

    Are you considered, by your colleagues and manager, an effective and efficient person at your workplace? If you are, what would you change if you were asked by your management to do the job of two people? Seriously, think about that! Your immediate reaction may be "that is not possible", but it actually is. You just have to re-assess which things that were previously important will now stop being important, by discussing them with your management and reaching agreement on relative priorities. For example, stuff that was previously on your plate may now have to be delegated or dropped. Where you thought you were efficient, maybe now you have to find an even faster path to completion, perhaps keeping in mind that Perfect is the Enemy of "Good Enough".

    My personal experience (from both observing others and from my own reflection) is that when folks are struggling to keep up at work it is because of two reasons:
    1. They are investing energy in stuff that they enjoy doing which the business regards as having a lower priority than a lot of other things on their plate.
    2. They are completing tasks to a level of quality higher than what is required (due to personal pride), missing the big picture, which almost always mandates completing three tasks at good-enough quality rather than knocking only one of them out of the park while the other two come in late or not at all.

    There is a lot of content on the web, so I strongly encourage you to use your favorite search engine to read other views on effectiveness and efficiency (Bing, Google). Comments about this post by Daniel Moth are welcome at the original blog.

    Read the article

  • ANTS CLR and Memory Profiler In Depth Review (Part 2 of 2 – Memory Profiler)

    - by ToStringTheory
    One of the things that people might not know about me is my obsession with making my code as efficient as possible. Many people might not realize how much of an undertaking this can be; it is surely a task as monumental as climbing Mount Everest, except this time it is a challenge for the mind… In trying to make code efficient, there are many different factors that play a part: size of project or solution, tiers, language used, experience and training of the programmer, technologies used, maintainability of the code - the list can go on for quite some time. I spend quite a bit of time when developing trying to determine the best way to implement a feature to achieve the efficiency I am looking for. One program that I have recently come to learn about - Red Gate ANTS Performance (CLR) and Memory Profiler - gives me tools to accomplish that job more efficiently as well. In this review, I am going to cover some of the features of the ANTS Memory Profiler by compiling some hideous example code to test against.

    Notice: As a member of the Geeks With Blogs Influencers program, one of the perks is the ability to review products in exchange for a free license to the program. I have not let this affect my opinion of the product in any way, and neither Red Gate nor Geeks With Blogs has tried to influence my opinion regarding this product in any way.

    Introduction – Part 2

    In my last post, I reviewed the feature-packed Red Gate ANTS Performance Profiler. Separate from the Performance Profiler is the Red Gate ANTS Memory Profiler - a simple, easy-to-use utility for checking how your application is handling memory management… a tool that I wish I had had many times in the past. This post will be focusing on the ANTS Memory Profiler and its tool set. The Memory Profiler has a large assortment of features just like the Performance Profiler, with the new-session screen looking nearly identical.

    ANTS Memory Profiler

    Memory profiling is not something that I have to do very often… In the past, in the few cases where I've had to find a memory leak in an application, I have usually just traced the code of the operations being performed to look for oddities… Sadly, I have come across more undisposed/non-using'ed IDisposable objects, usually from ADO.NET, than I would ever like to see. Support is not fun; however, using ANTS Memory Profiler makes this task easier. For this round of testing, I am going to use the same code from my previous example, using the WPF application.

    This time, I will choose the 'Profile Memory' option from the ANTS menu in Visual Studio, which launches the solution in its currently configured state/start-up project, and then launches the ANTS Memory Profiler to help. It pre-populates all of the fields with the current project information, and all I have to do is select the 'Start Profiling' option.

    When the window comes up, it is actually quite barren, just giving ideas on how to work the profiler. You start by getting to the point in your application that you want to profile, and then taking a 'Memory Snapshot'. This performs a full garbage collection and snapshots the managed heap. Using the same WPF app as before, I will go ahead and take a snapshot now.

    As you can see, ANTS is already giving me lots of information regarding the snapshot; however, this is just a snapshot. The whole point of the profiler is to perform an action, usually one where a memory problem is being noticed, and then take another snapshot and perform a diff between them to see what has changed. I am going to go ahead and generate 5000 primes, and then take another snapshot.

    As you can see, ANTS is already giving me a lot of new information about this snapshot compared to the last: differences in memory usage, fragmentation, class usage, etc… If you take more snapshots, you can use the dropdown at the top to set your actual comparison snapshots.

    If you look beneath the timeline, you will see a breadcrumb trail showing how best to approach profiling memory using ANTS. When you first do the comparison, you start on the Summary screen. You can either use the charts at the bottom, or switch to the class list screen to get to the next step. Here is the class list screen:

    As you can see, it lists information about all of the instances between the snapshots, and at the bottom gives you a way to filter by telling ANTS what your problem is. I am going to go ahead and select the Int16[] to look at the Instance Categorizer.

    Using the Instance Categorizer, you can travel backwards to see where all of the instances are coming from. It may be hard to see in this image, but hopefully the lightbox (click on it) will help: I can see that all of these instances are rooted to the application through the UI TextBlock control. The next image will probably be even harder to see; however, using the 'Instance Retention Graph', you can trace an object's memory inheritance up the chain to see its roots as well. This is a simple example, as this is a known element. Usually you would be profiling an actual problem and comparing those differences. I know in the past I have spotted a problem where a new context was created per page load, and it was rooted into the application through an event. As the application began to grow, performance and reliability problems started to emerge. A tool like this would have been a great way to identify the problem quickly (a minimal sketch of this kind of event-rooted leak appears at the end of this post).

    Overview

    Overall, I think that the Red Gate ANTS Memory Profiler is a great utility for debugging those pesky leaks.

    3 Biggest Pros:
    - Easy-to-use interface with lots of options for configuring the profiling session
    - Intuitive and helpful interface for drilling down from summary, to instance, to root graphs
    - ANTS provides an API for controlling the profiler. Not many options, but still helpful.

    2 Biggest Cons:
    - Inability to automatically snapshot the memory by interval
    - Lack of complete integration with Visual Studio via an extension panel

    Ratings

    Ease of Use (9/10) - I really do believe that they have brought simplicity to the once difficult task of memory profiling. I especially liked how it stepped you further into the drilldown by directing you towards the best options.
    Effectiveness (10/10) - I believe that the profiler does EXACTLY what it purports to do.
    Features (7/10) - A really great set of features all around in the application; however, I would like to see some ability to automatically trigger snapshots based on intervals or framework-level items such as events.
    Customer Service (10/10) - My entire experience with Red Gate personnel has been nothing but good. Their people are friendly, helpful, and happy!
    UI / UX (9/10) - The interface is very easy to get around, and all of the options are easy to find. With a little bit of poking around, you'll be optimizing Hello World in no time flat!
    Overall (9/10) - Overall, I am happy with the Memory Profiler and its features, as well as with the service I received when working with the Red Gate personnel.

    Thank you for reading up to here, or skipping ahead - I told you it would be shorter! Please, if you do try the product, drop me a message and let me know what you think! I would love to hear any opinions you may have on the product.

    Code

    Feel free to download the code I used above - download via DropBox
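    The event-rooting problem described above is not specific to .NET. As a language-neutral illustration, here is a minimal sketch in Java (with entirely hypothetical names, unrelated to the reviewed code) of how registering a listener can keep otherwise dead objects reachable from a long-lived root, which is exactly the shape a retention graph exposes:

        import java.util.ArrayList;
        import java.util.List;

        // Long-lived publisher: anything it references stays reachable.
        class EventBus {
            static final List<Runnable> LISTENERS = new ArrayList<>();
            static void subscribe(Runnable listener) { LISTENERS.add(listener); }
        }

        // Created once per "page load" in this sketch.
        class PageContext {
            private final byte[] buffer = new byte[1_000_000]; // stand-in for real state

            PageContext() {
                // The lambda captures 'this', so every PageContext ever created
                // stays rooted through the static listener list and is never
                // collected -- the kind of leak a snapshot diff would reveal.
                EventBus.subscribe(() -> System.out.println("refresh " + buffer.length));
            }
        }

        public class LeakDemo {
            public static void main(String[] args) {
                for (int i = 0; i < 100; i++) {
                    new PageContext(); // grows the retained heap by ~1 MB per iteration
                }
                System.out.println("listeners retained: " + EventBus.LISTENERS.size());
            }
        }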

    Read the article

  • Introduction to Human Workflow 11g

    - by agiovannetti
    Human Workflow is a component of SOA Suite, just like BPEL, Mediator, Business Rules, etc. The Human Workflow component allows you to incorporate human intervention in a business process. You can use Human Workflow to create a business process that requires a manager to approve purchase orders greater than $10,000, or a business process that handles article reviews in which a group of reviewers needs to vote/approve an article before it gets published. Human Workflow can handle the task assignment and routing as well as the generation of notifications to the participants.

    There are three common patterns or usages of Human Workflow:
    1. Approval scenarios: manage documents and other transactional data through approval chains. For example: expense report approval, vacation approval, hiring approval, etc.
    2. Reviews by multiple users or groups: group collaboration and review of documents or proposals. For example, processing a sales quote which is subject to review by multiple people.
    3. Case management: workflows around work management or case management. For example, processing a service request. This could be routed to various people who all need to modify the task. It may also incorporate ad hoc routing which is unknown at design time.

    SOA 11g Human Workflow includes the following features:
    - Assignment and routing of tasks to the correct users or groups.
    - Deadlines, escalations, notifications, and other features required for ensuring the timely performance of a task.
    - Presentation of tasks to end users through a variety of mechanisms, including a Worklist application.
    - Organization, filtering, prioritization, and other features required for end users to productively perform their tasks.
    - Reports, reassignments, load balancing, and other features required by supervisors and business owners to manage the performance of tasks.

    Human Workflow Architecture

    The Human Workflow component is divided into three modules: the service interface, the task definition, and the client interface. The service interface handles the interaction with BPEL and other components. The client interface handles the presentation of task data through clients like the Worklist application, portals, and notification channels. The task definition module is in charge of managing the lifecycle of a task: Who should get the task assigned? What should happen next with the task? When must the task be completed? Should the task be escalated? And so on.

    Stages and Participants

    When you create a Human Task you need to specify how the task is assigned and routed. The first step is to define the stages and participants. A stage is just a logical group. A participant can be a user, a group of users, or an application role. The participants indicate the type of assignment and routing that will be performed. Stages can be sequential or in parallel, and you can combine them to create any usage you require. See the diagram below.

    Assignment and Routing

    There are different ways a task can be assigned and routed:
    - Single approver: the task is assigned to a single user, group, or role. For example, a vacation request is assigned to a manager. If the manager approves or rejects the request, the employee is notified of the decision. If the task is assigned to a group, then once one of the managers acts on it, the task is completed.
    - Parallel: the task is assigned to a set of people who must work in parallel. This is commonly used for voting. For example, a task gets approved once 50% of the participants approve it. You can also set it up to be a unanimous vote. (A small sketch of this voting rule appears just before the references below.)
    - Serial: participants must work in sequence. The most common scenario for this is management chain escalation.
    - FYI (For Your Information): the task is assigned to participants who can view it and add comments and attachments, but cannot modify or complete the task.

    Task Actions

    The following is the list of actions that can be performed on a task:
    - Claim: if a task is assigned to a group or multiple users, the task must be claimed first before anyone can act on it.
    - Escalate: if the participant is not able to complete a task, he/she can escalate it. The task is reassigned to his/her manager (up one level in the hierarchy).
    - Pushback: the task is sent back to the previous assignee.
    - Reassign: if the participant is a manager, he/she can delegate a task to his/her reports.
    - Release: if a task is assigned to a group or multiple users, it can be released if the user who claimed the task cannot complete it. Any of the other assignees can then claim and complete the task.
    - Request Information and Submit Information: used when the participant needs to supply more information, or to request more information from the task creator or any of the previous assignees.
    - Suspend and Resume: if a task is not currently relevant, it can be suspended. A suspension is indefinite; it does not expire until Resume is used to resume working on the task.
    - Withdraw: if the creator of a task does not want to continue with it, for example because he wants to cancel a vacation request, he can withdraw the task. The business process determines what happens next.
    - Renew: if a task is about to expire, the participant can renew it. The task expiration date is extended one week.

    Notifications

    Human Workflow provides a mechanism for sending notifications to participants to alert them of changes to a task. Notifications can be sent via email, telephone voice message, instant messaging (IM), or short message service (SMS). Notifications can be sent when the task status changes to any of the following:
    - Assigned/renewed/delegated/reassigned/escalated
    - Completed
    - Error
    - Expired
    - Request Info
    - Resume
    - Suspended
    - Added/updated comments and/or attachments
    - Updated outcome
    - Withdrawn
    - Other actions (e.g. acquiring a task)

    Here is an example of an email notification.

    Worklist Application

    The Oracle BPM Worklist application is the default user interface included in SOA Suite. It allows users to access and act on tasks that have been assigned to them. For example, from the Worklist application a loan agent can review loan applications, or a manager can approve employee vacation requests. Through the Worklist application users can:
    - Perform authorized actions on tasks, acquire and check out shared tasks, define personal to-do tasks, and define subtasks.
    - Filter the task view based on various criteria.
    - Work with standard work queues, such as high-priority tasks, tasks due soon, and so on. Work queues allow users to create a custom view to group a subset of tasks in the worklist, for example high-priority tasks, tasks due in 24 hours, expense approval tasks, and more.
    - Define custom work queues.
    - Gain proxy access to part of another user's tasks.
    - Define custom vacation rules and delegation rules.
    - Enable group owners to define task dispatching rules for shared tasks.
    - Collect a complete workflow history and audit trail.
    - Use digital signatures for tasks.
    - Run reports such as Unattended Tasks, Task Productivity, etc.

    Here is a screenshot of what the Worklist application looks like. On the right-hand side you can see the tasks that have been assigned to the user and the task's detail.
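    To make the parallel-voting rule described above concrete, here is a minimal, language-neutral sketch in Java of evaluating an outcome once a required percentage of participants has approved. This is purely illustrative; it is not the Human Workflow API, and all names are hypothetical:

        import java.util.List;

        // Illustrative only: evaluates a parallel-voting outcome such as
        // "approved once 50% of participants approve" or a unanimous vote.
        public class ParallelVote {
            enum Vote { APPROVE, REJECT, PENDING }

            // Returns true once the approval fraction meets the threshold.
            static boolean approved(List<Vote> votes, double threshold) {
                long approvals = votes.stream().filter(v -> v == Vote.APPROVE).count();
                return (double) approvals / votes.size() >= threshold;
            }

            public static void main(String[] args) {
                List<Vote> votes = List.of(Vote.APPROVE, Vote.APPROVE, Vote.REJECT, Vote.PENDING);
                System.out.println(approved(votes, 0.5)); // 50% rule: true (2 of 4)
                System.out.println(approved(votes, 1.0)); // unanimous: false
            }
        }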
    References
    - Introduction to SOA Suite 11g Human Workflow Webcast
    - Note 1452937.2 Human Workflow Information Center
    - Using the Human Workflow Service Component (11.1.1.6)
    - Human Workflow Samples
    - Human Workflow APIs Java Docs

    Read the article

  • Box2D blocky map: bodies, fixtures, a huge map, and performance

    - by Solom
    Right now I'm still in the planning phase of my very first game. I'm creating a "Minecraft"-like game in 2D that features blocks that can be destroyed as well as players moving around the map. For creating the map I chose a 2D array of integers that represent the block ID. For testing purposes I created a huge map (16348 * 256), and in my prototype that didn't use Box2D everything worked like a charm: I only rendered those blocks that were within the bounds of my camera and got a steady 60 fps.

    The problem started when I decided to use an existing physics solution rather than implementing my own. What I had before was basically a simple hitbox around each block, and I manually checked whether the player collided with any of those in his neighborhood. For more advanced physics as well as the collision detection I want to switch over to Box2D. The problem I have right now is... how to go about the bodies? I mean, the blocks are of a static body type. They don't move on their own; they are just there to be collided with. But as far as I can see it, every block needs its own body with a rectangular fixture attached to it, so as to be destroyable. For a huge map such as mine, this turns out to be a real performance bottleneck. (In fact even a rather small map [compared to the other one] of 1024*256 is unplayable.) I mean, I create thousands and thousands of blocks. Even if I just render those that are in my immediate neighborhood, there are hundreds of them, and (at least with the debugRenderer) I drop to 1 fps really quickly (on my own "monster machine"). I thought about strategies like creating just one body, attaching multiple fixtures, and only when a fixture gets hit, separating it from the body, creating a new one, and destroying the old, but this didn't turn out quite as successful as hoped. (In fact the core just dumps. Ah hello C! I really missed you :X) Here is the code:

        public class Box2DGameScreen implements Screen {

            private World world;
            private Box2DDebugRenderer debugRenderer;
            private OrthographicCamera camera;

            private final float TIMESTEP = 1 / 60f; // 1/60 of a second -> 1 frame per second
            private final int VELOCITYITERATIONS = 8;
            private final int POSITIONITERATIONS = 3;

            private Map map;

            private BodyDef blockBodyDef;
            private FixtureDef blockFixtureDef;
            private BodyDef groundDef;
            private Body ground;
            private PolygonShape rectangleShape;

            @Override
            public void show() {
                world = new World(new Vector2(0, -9.81f), true);
                debugRenderer = new Box2DDebugRenderer();
                camera = new OrthographicCamera();
                // Pixel:Meter = 16:1

                // Body definition
                BodyDef ballDef = new BodyDef();
                ballDef.type = BodyDef.BodyType.DynamicBody;
                ballDef.position.set(0, 1);

                // Fixture definition
                FixtureDef ballFixtureDef = new FixtureDef();
                ballFixtureDef.shape = new CircleShape();
                ballFixtureDef.shape.setRadius(.5f); // 0.5 meter
                ballFixtureDef.restitution = 0.75f; // between 0 (not jumping up at all) and 1 (jumping up the same amount as it fell down)
                ballFixtureDef.density = 2.5f;      // kg / m²
                ballFixtureDef.friction = 0.25f;    // between 0 (sliding like ice) and 1 (not sliding)

                // world.createBody(ballDef).createFixture(ballFixtureDef);

                groundDef = new BodyDef();
                groundDef.type = BodyDef.BodyType.StaticBody;
                groundDef.position.set(0, 0);
                ground = world.createBody(groundDef);

                this.map = new Map(20, 20);

                rectangleShape = new PolygonShape();
                // rectangleShape.setAsBox(1, 1);

                blockFixtureDef = new FixtureDef();
                // blockFixtureDef.shape = rectangleShape;
                blockFixtureDef.restitution = 0.1f;
                blockFixtureDef.density = 10f;
                blockFixtureDef.friction = 0.9f;
            }

            @Override
            public void render(float delta) {
                Gdx.gl.glClearColor(1, 1, 1, 1);
                Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);

                debugRenderer.render(world, camera.combined);
                drawMap();

                world.step(TIMESTEP, VELOCITYITERATIONS, POSITIONITERATIONS);
            }

            private void drawMap() {
                for (int a = 0; a < map.getHeight(); a++) {
                    /*
                    if (camera.position.y - (camera.viewportHeight / 2) > a)
                        continue;
                    if (camera.position.y - (camera.viewportHeight / 2) < a)
                        break;
                    */
                    for (int b = 0; b < map.getWidth(); b++) {
                        /*
                        if (camera.position.x - (camera.viewportWidth / 2) > b)
                            continue;
                        if (camera.position.x - (camera.viewportWidth / 2) < b)
                            break;
                        */
                        /*
                        blockBodyDef = new BodyDef();
                        blockBodyDef.type = BodyDef.BodyType.StaticBody;
                        blockBodyDef.position.set(b, a);
                        world.createBody(blockBodyDef).createFixture(blockFixtureDef);
                        */
                        PolygonShape rectangleShape = new PolygonShape();
                        rectangleShape.setAsBox(1, 1, new Vector2(b, a), 0);
                        blockFixtureDef.shape = rectangleShape;
                        ground.createFixture(blockFixtureDef);
                        rectangleShape.dispose();
                    }
                }
            }

            @Override
            public void resize(int width, int height) {
                camera.viewportWidth = width / 16;
                camera.viewportHeight = height / 16;
                camera.update();
            }

            @Override
            public void hide() {
                dispose();
            }

            @Override
            public void pause() {
            }

            @Override
            public void resume() {
            }

            @Override
            public void dispose() {
                world.dispose();
                debugRenderer.dispose();
            }
        }

    As you can see, I'm facing multiple problems here. I'm not quite sure how to check for the camera bounds, and if the map is bigger than 24*24 - like 1024*256 - Java just crashes -.-. And with 24*24 I get about 9 fps. So it seems I'm doing something really terrible here, and I assume there must be a (much more performant) way, even with Box2D's awesome physics. Any other ideas? Thanks in advance!
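    A side note on the "one static body, many fixtures" idea described above: in the posted code, drawMap() runs on every render() call and creates a brand-new fixture per block each frame, so the world grows without bound. Below is a minimal sketch of creating the fixtures once, up front. It assumes the standard libGDX Box2D API and the question's own Map dimensions (passed here as plain ints to stay self-contained); it is illustrative, not a drop-in fix:

        import com.badlogic.gdx.math.Vector2;
        import com.badlogic.gdx.physics.box2d.Body;
        import com.badlogic.gdx.physics.box2d.BodyDef;
        import com.badlogic.gdx.physics.box2d.FixtureDef;
        import com.badlogic.gdx.physics.box2d.PolygonShape;
        import com.badlogic.gdx.physics.box2d.World;

        // Sketch: build block fixtures ONCE at load time, not inside
        // render()/drawMap(), which adds new fixtures every frame.
        public class BlockBodyBuilder {

            // Creates one static body carrying one fixture per block cell.
            public static Body buildBlocks(World world, int width, int height) {
                BodyDef def = new BodyDef();
                def.type = BodyDef.BodyType.StaticBody;
                Body ground = world.createBody(def);

                FixtureDef fixtureDef = new FixtureDef();
                fixtureDef.restitution = 0.1f;
                fixtureDef.density = 10f;
                fixtureDef.friction = 0.9f;

                PolygonShape shape = new PolygonShape();
                for (int y = 0; y < height; y++) {
                    for (int x = 0; x < width; x++) {
                        // setAsBox takes HALF extents: 0.5f gives a 1x1 meter block
                        // (the posted code's setAsBox(1, 1, ...) makes 2x2 blocks).
                        shape.setAsBox(0.5f, 0.5f, new Vector2(x, y), 0);
                        fixtureDef.shape = shape;
                        ground.createFixture(fixtureDef); // createFixture copies the shape
                    }
                }
                shape.dispose(); // safe to dispose after all fixtures are created
                return ground;
            }
        }

    Restricting the same loop to the currently visible block range, and rebuilding only when the camera crosses a block boundary, would give camera culling; destroying a single block then becomes a ground.destroyFixture(...) call on the fixture stored for that cell.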

    Read the article

  • Right-Time Retail Part 1

    - by David Dorf
    This is the first in a three-part series. Right-Time Revolution Technology enables some amazing feats in retail. I can order flowers for my wife while flying 30,000 feet in the air. I can order my groceries in the subway and have them delivered later that day. I can even see how clothes look on me without setting foot in a store. Who knew that a TV, diamond necklace, or even a car would someday be as easy to purchase as a candy bar? Can technology make a mattress an impulse item? Wake up and your back is hurting, so you roll over, grab your iPad, and a new mattress is delivered the next day. Behind the scenes, many processes are being choreographed to make the sale happen. This includes moving data between systems with the least amount of friction, which in some cases is near real-time. But real-time isn't appropriate for all the integrations. Think about what a completely real-time retailer would look like. A consumer grabs toothpaste off the shelf, and all systems are immediately notified so that the backroom clerk comes running out and pushes the consumer aside so he can replace the toothpaste on the shelf. Such a system is not only cost prohibitive, but it's also very inefficient and ineffectual. Retailers must balance the realities of people, processes, and systems to find the right speed of execution. That's what "right-time retail" means. Retailers used to sell during the day and count the money and restock at night, but global expansion and the Web have complicated that simplistic viewpoint. Our 24-hour society demands not only access but also speed, which constantly pushes the boundaries of our IT systems. In the last twenty years, there have been three major technology advancements that have moved us closer to real-time systems. Networking is the first technology that drove the real-time trend. As systems became connected, it became easier to move data between them. In retail we no longer had to mail the daily business report back to corporate each day, as the dial-up modem could transfer the data. That was soon replaced with trickle-polling, when sales transactions were occasionally sent from stores to corporate throughout the day, often through VSAT. Then we got terrestrial networks like DSL and Ethernet that allowed a constant stream of data between stores and corporate. When corporate could see the sales transactions coming from stores, it could better plan for replenishment and promotions. That drove the need for speed into the supply chain and merchandising, but for many years those systems were stymied by the huge volumes of data.
Nordstrom has 150 million SKU/Store combinations when planning (RPAS); The Gap generates 110 million price changes during end-of-season (RPM); Argos executes 1.78 billion calculations each day for replenishment planning (AIP). These areas are now being alleviated by the second technology, storage. The typical laptop disk drive runs at 5,400rpm, with PCs stepping up to 7,200rpm and servers hitting 15,000rpm. But the platters can only spin so fast, so to squeeze more performance we've had to rely on things like disk striping. Then solid state drives (SSDs) were introduced, and prices continue to drop. (Augmenting your hard drive with an SSD is the single best PC upgrade these days.) RAM continues to be expensive, but compressing data in memory has allowed more efficient use. So a few years back, Oracle decided to build a box that incorporated all these advancements to move us closer to real-time. This family of products, often categorized as engineered systems, combines the hardware and software so that they work together to provide better performance. How much better? If Exadata powered a 747, you'd go from New York to Paris in 42 minutes, and it would carry 5,000 passengers. If Exadata powered baseball, games would last only 18 minutes and Boston's Fenway would hold 370,000 fans. The Exa-family enables processing more data in less time. So with faster networks and storage, that brings us to the third and final ingredient. If we continue to process data in traditional ways, we won't be able to take advantage of the faster networks and storage. Enter what Harvard calls "The Sexiest Job of the 21st Century": the data scientist. New technologies like the Hadoop-powered Oracle Big Data Appliance, Oracle Advanced Analytics, and Oracle Endeca Information Discovery change the way in which we organize data. These technologies allow us to extract actionable information from raw data at incredible speeds, often ad-hoc. So the foundation to support the real-time enterprise exists, but how does a retailer begin to take advantage? The most visible way is through real-time marketing, but I'll save that for part 3 and instead begin with improved integrations for the assets you already have in part 2.

    Read the article

  • Java EE 7 Survey Results!

    - by reza_rahman
    On November 8th, the Java EE EG posted a survey to gather broad community feedback on a number of critical open issues. For reference, you can find the original survey here. We kept the survey open for about three weeks until November 30th. To our delight, over 1100 developers took time out of their busy lives to let their voices be heard! The results of the survey were sent to the EG on December 12th. The subsequent EG discussion is available here. The exact summary sent to the EG is available here. We would like to take this opportunity to thank each and every one of the individuals who took the survey. It is very much appreciated, encouraging, and worth its weight in gold. In particular, I tried to capture just some of the high-quality, intelligent, thoughtful and professional comments in the summary to the EG. I highly encourage you to continue to stay involved, perhaps through the Adopt-a-JSR program. We would also like to sincerely thank java.net, JavaLobby, TSS and InfoQ for helping spread the word about the survey. Below is a brief summary of the results...
APIs to Add to Java EE 7 Full/Web Profile The first question asked which of the four new candidate APIs (WebSocket, JSON-P, JBatch and JCache) should be added to the Java EE 7 Full and Web profile respectively. The responses showed significant support for adding all the new APIs to the full profile. Support is relatively the weakest for Batch 1.0, but still good. A lot of folks saw WebSocket 1.0 as a critical technology, with comments such as this one: "A modern web application needs Web Sockets as first class citizens" While it is clearly seen as being important, a number of commenters expressed dissatisfaction with the lack of a higher-level JSON data binding API, as illustrated by this comment: "How come we don't have a Data Binding API for JSON" JCache was also seen as being very important, as expressed with comments like: "JCache should really be that foundational technology on which other specs have no fear to depend on" The results for the Web Profile are not surprising. While there is strong support for adding WebSocket 1.0 and JSON-P 1.0 to the Web Profile, support for adding JCache 1.0 and Batch 1.0 is relatively weak. There was actually significant opposition to adding Batch 1.0 (with 51.8% casting a 'No' vote).
Enabling CDI by Default The second question asked was whether CDI should be enabled in Java EE environments by default. A significant majority of 73.3% of developers supported enabling CDI; only 13.8% opposed. Comments such as these two reflect strong general support for CDI as well as a desire for better Java EE alignment with CDI: "CDI makes Java EE quite valuable!" "Would prefer to unify EJB, CDI and JSF lifecycles" There is, however, a palpable concern around the performance impact of enabling CDI by default, as exemplified by this comment: "Java EE projects in most cases use CDI, hence it is sensible to enable CDI by default when creating a Java EE application. However, there are several issues if CDI is enabled by default: scanning can be slow - not all libs use CDI (hence, scanning is not needed)" Another significant concern appears to be around backwards compatibility and conflict with other JSR 330 implementations like Spring: "I am leaning towards yes, however can easily imagine situations where errors would be caused by automatically activating CDI, especially in cases of backward compatibility where another DI engine (such as Spring and the like) happens to use the same mechanics to inject dependencies and in that case there would be an overlap in injections and probably an uncertain outcome" Some commenters such as this one attempt to suggest solutions to these potential issues: "If you have Spring in use and use javax.inject.Inject then you might get some unexpected behavior that could be equally confusing. I guess there will be a way to switch CDI off. I'm tempted to say yes but am cautious for this reason"
Consistent Usage of @Inject The third question was around using CDI/JSR 330 @Inject consistently vs. allowing JSRs to create their own injection annotations. A slight majority of 53.3% of developers supported using @Inject consistently across JSRs. 28.8% said using custom injection annotations is OK, while 18.0% were not sure. The vast majority of commenters were strongly supportive of CDI and general Java EE alignment with CDI, as illustrated by these comments: "Dependency Injection should be standard from now on in EE. It should use CDI as that is the DI mechanism in EE and is quite powerful. Having a new JSR specific DI mechanism to deal with just means more reflection, more proxies. JSRs should also be constructed to allow some of their objects Injectable. @Inject @TransactionalCache or @Inject @JMXBean etc...they should define the annotations and stereotypes to make their code less procedural. Dog food it. If there is a shortcoming in CDI for a JSR fix it and we will all be grateful" "We're trying to make this a comprehensive platform, right? Injection should be a fundamental part of the platform; everything else should build on the same common infrastructure. Each-having-their-own is just a recipe for chaos and having to learn the same thing 10 different ways"
Expanding the Use of @Stereotype The fourth question was about expanding CDI @Stereotype to cover annotations across Java EE beyond just CDI. A significant majority of 62.3% of developers supported expanding the use of @Stereotype; only 13.3% opposed. A majority of commenters supported the idea as well as the theme of general CDI/Java EE alignment, as expressed in these examples: "Just like defining new types for (compositions of) existing classes, stereotypes can help make software development easier" "This is especially important if many EJB services are decoupled from the EJB component model and can be applied via individual annotations to Java EE components. @Stateless is a nicely compact annotation. Code will not improve if that will have to be applied in the future as @Transactional, @Pooled, @Secured, @Singlethreaded, @...." Some, however, expressed concerns around increased complexity, such as this commenter: "Could be very convenient, but I'm afraid if it wouldn't make some important class annotations less visible"
Expanding Interceptor Use The final set of questions was about expanding interceptors further across Java EE... A very solid 96.3% of developers wanted to expand interceptor use to all Java EE components. 35.7% even wanted to expand interceptors to other Java EE managed classes. Most developers (54.9%) were not sure if there is any place that injection is supported that should not support interceptors. 32.8% thought any place that supports injection should also support interceptors. Only 12.2% were certain that there are places where injection should be supported but not interceptors. The comments reflected the diversity of opinions, generally supportive of interceptors: "I think interceptors are as fundamental as injection and should be available anywhere in the platform" "The whole usage of interceptors still needs to take hold in Java programming, but it is a powerful technology that needs some time in the Sun. Basically it should become part of Java SE, maybe the next step after lambdas?" A distinct chain of thought separated interceptors from filters and listeners: "I think that the Servlet API already provides a rich set of possibilities to hook yourself into different Servlet container events. I don't find a need to 'pollute' the Servlet model with the Interceptors API"
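    For readers who haven't used the feature the fourth question refers to: a stereotype is a single annotation that bundles a set of other annotations. The following is a minimal sketch of what that looks like in CDI; the @BusinessService name is invented for illustration, and @Transactional is assumed here as the kind of behaviour a stereotype might carry in Java EE 7.

        import java.lang.annotation.ElementType;
        import java.lang.annotation.Retention;
        import java.lang.annotation.RetentionPolicy;
        import java.lang.annotation.Target;
        import javax.enterprise.context.RequestScoped;
        import javax.enterprise.inject.Stereotype;
        import javax.transaction.Transactional; // assumed available, as proposed for Java EE 7

        // One annotation that implies a whole bundle of behaviour:
        @Stereotype
        @RequestScoped
        @Transactional
        @Retention(RetentionPolicy.RUNTIME)
        @Target(ElementType.TYPE)
        public @interface BusinessService {
        }

    A bean annotated @BusinessService then gets the scope and transactional behaviour without repeating the individual annotations, which is exactly the compactness the commenters above are asking to extend across the platform.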

    Read the article

  • Restructuring a large Chrome Extension/WebApp

    - by A.M.K
    I have a very complex Chrome Extension that has gotten too large to maintain in its current format. I'd like to restructure it, but I'm 15 and this is the first webapp or extension of its type I've built, so I have no idea how to do it. TL;DR: I have a large/complex webapp I'd like to restructure and I don't know how to do it. Should I follow my current restructure plan (below)? Does that sound like a good starting point, or is there a different approach that I'm missing? Should I not do any of the things I listed? While it isn't relevant to the question, the actual code is on GitHub and the extension is on the webstore. The basic structure is as follows:
index.html <html> <head> <link href="css/style.css" rel="stylesheet" /> <!-- This holds the main app styles --> <link href="css/widgets.css" rel="stylesheet" /> <!-- And this one holds widget styles --> </head> <body class="unloaded"> <!-- Low-level base elements are "hardcoded" here, the unloaded class is used for transitions and is removed on load. i.e: --> <div class="tab-container" tabindex="-1"> <!-- Tab nav --> </div> <!-- Templates for all parts of the application and widgets are stored as elements here. I plan on changing these to <script> elements during the restructure since <template>s need valid HTML. --> <template id="template.toolbar"> <!-- Template content --> </template> <!-- Templates end --> <!-- Plugins --> <script type="text/javascript" src="js/plugins.js"></script> <!-- This contains the code for all widgets, I plan on moving this online and downloading as necessary soon. --> <script type="text/javascript" src="js/widgets.js"></script> <!-- This contains the main application JS. --> <script type="text/javascript" src="js/script.js"></script> </body> </html>
widgets.js (initLog || (window.initLog = [])).push([new Date().getTime(), "A log is kept during page load so performance can be analyzed and errors pinpointed"]); // Widgets are stored in an object and extended (with jQuery, but I'll probably switch to underscore if using Backbone) as necessary var Widgets = { 1: { // Widget ID, this is set here so widgets can be retrieved by ID id: 1, // Widget ID again, this is used after the widget object is duplicated and detached size: 3, // Default size, medium in this case order: 1, // Order shown in "store" name: "Weather", // Widget name interval: 300000, // Refresh interval nicename: "weather", // HTML and JS safe widget name sizes: ["tiny", "small", "medium"], // Available widget sizes desc: "Short widget description", settings: [ { // Widget setting specifications stored as an array of objects. These are used to dynamically generate widget setting popups. type: "list", nicename: "location", label: "Location(s)", placeholder: "Enter a location and press Enter" } ], config: { // Widget settings as stored in the tabs object (see script.js for storage information) size: "medium", location: ["San Francisco, CA"] }, data: {}, // Cached widget data stored locally, this lets it work offline customFunc: function(cb) {}, // Widgets can optionally define custom functions in any part of their object refresh: function() {}, // This fetches data from the web and caches it locally in data, then calls render. It gets called after the page is loaded for faster loads render: function() {} // This renders the widget only using information from data, it's called on page load. } };
script.js (initLog || (window.initLog = [])).push([new Date().getTime(), "These are also at the end of every file"]); // Plugins, extends and globals go here. i.e. Number.prototype.pad = .... var iChrome = function(refresh) { // The main iChrome init, called with refresh when refreshing to not re-run libs iChrome.Status.log("Starting page generation"); // From now on iChrome.Status.log is defined, it's used in place of the initLog iChrome.CSS(); // Dynamically generate CSS based on settings iChrome.Tabs(); // This takes the tabs stored in the storage (see fetching below) and renders all columns and widgets as necessary iChrome.Status.log("Tabs rendered"); // These will be omitted further along in this excerpt, but they're used everywhere // Checks for justInstalled => show getting started are run here /* The main init runs the bare minimum required to display the page, this sets all non-visible or not instantly needed things (such as widget dragging) on a timeout */ iChrome.deferredTimeout = setTimeout(function() { iChrome.deferred(refresh); // Pass refresh along, see above }, 200); }; iChrome.deferred = function(refresh) {}; // This calls modules one after the next in the appropriate order to finish rendering the page iChrome.Search = function() {}; // Modules have a base init function and are camel-cased and capitalized iChrome.Search.submit = function(val) {}; // Methods within modules are camel-cased and not capitalized /* Extension storage is async and fetched at the beginning of plugins.js, it's then stored in a variable that iChrome.Storage processes. The fetcher checks to see if processStorage is defined, if it is it gets called, otherwise settings are left in iChromeConfig */ var processStorage = function() { iChrome.Storage(function() { iChrome.Templates(); // Templates are read from their elements and held in a cache iChrome(); // Init is called }); }; if (typeof iChromeConfig == "object") { processStorage(); }
Objectives of the restructure Memory usage: Chrome apparently has a memory leak in extensions; they're trying to fix it, but memory usage still increases every time the page is loaded. The app also uses a lot on its own. Code readability: At this point I can't follow what's being called in the code. While rewriting the code I plan on properly commenting everything. Module interdependence: Right now modules call each other a lot; AFAIK that's not good at all, since any change you make to one module could affect countless others. Fault tolerance: There's very little fault tolerance or error handling right now. If a widget is causing the rest of the page to stop rendering, the user should at least be able to remove it. Speed is currently not an issue and I'd like to keep it that way.
How I think I should do it The restructure should be done using Backbone.js and events that call modules (i.e. on storage.loaded = init). Modules should each go in their own file; I'm thinking there should be a set of core files that all modules can rely on and call directly, and everything else should be event based. Widget structure should be kept largely the same, but maybe they should also be split into their own files. AFAIK you can't load all templates in a folder, so they need to stay inline. Grunt should be used to merge all modules, plugins and widgets into one file. Templates should also all be precompiled.
Question: Should I follow my current restructure plan? Does that sound like a good starting point, or is there a different approach that I'm missing? Should I not do any of the things I listed? Do applications written with Backbone tend to be more intensive (memory and speed) than ones written in vanilla JS? Also, can I expect to improve this with a proper restructure, or is my current code about as good as can be expected?
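    On the "events that call modules" point: a minimal sketch of what that wiring could look like with a Backbone event bus is below. The event names and module bodies are invented for illustration and are not the extension's actual API.

        // Modules announce what happened instead of calling each other directly.
        var bus = _.extend({}, Backbone.Events);

        var Storage = {
            init: function() {
                chrome.storage.local.get(null, function(items) {
                    bus.trigger("storage:loaded", items);
                });
            }
        };

        var Templates = {
            init: function(config) {
                // compile templates here, then announce readiness
                bus.trigger("templates:ready", config);
            }
        };

        bus.on("storage:loaded", Templates.init);
        bus.on("templates:ready", function(config) {
            iChrome(); // hand over to the existing init
        });

        Storage.init();

    Because modules only know about the bus, a change to Storage can't silently break Templates, which addresses the interdependence objective directly.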

    Read the article

  • Mod_Rewrite w Apache mod_jrun22.so & ColdFusion 9 on cPanel

    - by Eddie B
    How can I utilize mod_rewrite at either the httpd.conf level or per-directory level when mod_jrun22 seems to have short-stopped the rewrite process for ColdFusion pages? I have a ColdFusion 9 based site running on CentOS 5.8 with cPanel. cPanel uses EasyApache 3 to manage virtual host containers, and as such the conf for mod_jrun22.so, /usr/local/apache/conf/includes/pre_main_global.conf, is loaded prior to the main httpd.conf with the domain-specific rules for the container. My assertion is that .cfm pages are failing to be rewritten due to the mod_jrun22.so module having priority in the directive chain. To note, I also have a WordPress blog in the site where the rewrites appear to be working fine. For example, the following code to remove the index file works fine for PHP and fails with CFM ...
.htaccess under /blog/ : This works Options -Indexes -Multiviews <IfModule mod_rewrite.c> RewriteEngine On RewriteBase /blog/ RewriteRule ^index\.php$ - [L] RewriteCond %{REQUEST_FILENAME} !-f RewriteCond %{REQUEST_FILENAME} !-d RewriteRule . /blog/index.php [L] </IfModule>
.htaccess under / : This does not work as expected. Apache serves the page. ASSERT: This would redirect to domain.com/ without index.cfm Options -Indexes -Multiviews <IfModule mod_rewrite.c> RewriteEngine On RewriteBase / RewriteRule ^index\.cfm$ - [L] RewriteCond %{REQUEST_FILENAME} !-f RewriteCond %{REQUEST_FILENAME} !-d RewriteRule . /index.cfm [L] </IfModule>
.htaccess under / : This works. I'm presuming this is working because the redirect is to another .cfm page and a 404 handler in Application.cfc ... Options -Indexes -Multiviews <IfModule mod_rewrite.c> RewriteEngine On RewriteBase / RewriteRule ^.*\.cfm$ - [L] RewriteCond %{REQUEST_FILENAME} !-d RewriteCond %{REQUEST_FILENAME} !-f RewriteCond %{ENV:REDIRECT_STATUS} =404 RewriteRule . /404.cfm$ [L] </IfModule>
I've attempted numerous different methods to rewrite .cfm URLs: adding [PT], [L], [R], [NS], moving the rules to Directory blocks under httpd.conf, all with the same results ... either the rewrite doesn't work or Apache gets stuck in an endless loop. Any help would be greatly appreciated. Below is a single-visit rewrite log snippet for a request to /index.cfm ... the pass-through is taking effect before the rewrite ... cat rewrite_dump_mod | grep index.cfm [perdir /home/foo/public_html/] strip per-dir prefix: /home/foo/public_html/index.cfm -> index.cfm [perdir /home/foo/public_html/] applying pattern '^.*\.cfm$' to uri 'index.cfm' [perdir /home/foo/public_html/] pass through /home/foo/public_html/index.cfm [perdir /home/foo/public_html/] strip per-dir prefix: /home/foo/public_html/index.cfm -> index.cfm [perdir /home/foo/public_html/] applying pattern '^.*\.cfm$' to uri 'index.cfm' [perdir /home/foo/public_html/] pass through /home/foo/public_html/index.cfm
* UPDATE * I've managed to figure this out ... it took a while ... Options -Indexes -Multiviews +FollowSymLinks <IfModule mod_rewrite.c> RewriteEngine On RewriteBase / RewriteCond %{HTTP_HOST} !^www\. RewriteRule ^(.*)$ http://www.%{HTTP_HOST}/$1 [R=301,L] RewriteCond %{THE_REQUEST} ^.*/index\.cfm RewriteRule ^(.*)index.cfm http://%{HTTP_HOST}/$1 [R=301,L] </IfModule>
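    For anyone hitting the same wall, the reason the final ruleset behaves differently is worth spelling out. THE_REQUEST holds the raw request line the client sent (for example "GET /index.cfm HTTP/1.1") and is not updated during internal rewrite passes, so a condition on it fires once per client request instead of again on the rewritten URL. A commented sketch of the same idea (the exact pattern here is illustrative):

        # THE_REQUEST never changes during internal rewrites, so this cannot loop:
        RewriteCond %{THE_REQUEST} ^[A-Z]+\ /(.*/)?index\.cfm
        RewriteRule ^(.*)index\.cfm$ http://%{HTTP_HOST}/$1 [R=301,L]

    By contrast, rules keyed on %{REQUEST_FILENAME} are re-evaluated on every per-directory pass, which is why the earlier attempts either looped or were passed through to the handler first, as the rewrite log shows.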

    Read the article

  • Why am I unable to telnet to a local port that has a listening service?

    - by Skip Huffman
    I suspect this is either a very simple question, or a very complex one. I have a headless server running Ubuntu 10.04 that I can ssh into. I have full root access to the system. I am trying to set up an ssh tunnel to allow me to VNC to the system (but that isn't my question). I have VNC running on port 5903; here is the netstat output for that: Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name tcp 0 0 0.0.0.0:5903 0.0.0.0:* LISTEN 7173/Xtightvnc tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 465/sshd But when I try to telnet to that port, from within the same system and login, I get 'unable to connect' errors: # telnet localhost 5903 Trying ::1... Trying 127.0.0.1... telnet: Unable to connect to remote host: Connection timed out I am able to telnet to port 22 (as a verification): ~# telnet localhost 22 Trying ::1... Connected to localhost. Escape character is '^]'. SSH-2.0-OpenSSH_5.3p1 Debian-3ubuntu7 I have tried to open up any possible ports using ufw (probably in clumsy fashion): # ufw status numbered Status: active To Action From -- ------ ---- [ 1] 5903 ALLOW IN Anywhere [ 2] 22 ALLOW IN Anywhere What else might be blocking this connection locally? Thank you.
Edit: The only reference to port 5903 in iptables -L -n is this: Chain ufw-user-input (1 references) target prot opt source destination ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:5903 ACCEPT udp -- 0.0.0.0/0 0.0.0.0/0 udp dpt:5903 ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:22 ACCEPT udp -- 0.0.0.0/0 0.0.0.0/0 udp dpt:22 ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:8080 ACCEPT udp -- 0.0.0.0/0 0.0.0.0/0 udp dpt:8080 I can post the whole output if that will be useful. hosts.allow and hosts.deny both contain only comments.
Re-Edit: Some other questions pointed me to nmap, so I ran a portscan through that utility: # nmap -v -sT localhost -p1-65535 Starting Nmap 5.00 ( http://nmap.org ) at 2011-11-09 09:58 PST NSE: Loaded 0 scripts for scanning. Warning: Hostname localhost resolves to 2 IPs. Using 127.0.0.1. Initiating Connect Scan at 09:58 Scanning localhost (127.0.0.1) [65535 ports] Discovered open port 22/tcp on 127.0.0.1 Connect Scan Timing: About 18.56% done; ETC: 10:01 (0:02:16 remaining) Connect Scan Timing: About 44.35% done; ETC: 10:00 (0:01:17 remaining) Completed Connect Scan at 10:00, 112.36s elapsed (65535 total ports) Host localhost (127.0.0.1) is up (0.00s latency). Interesting ports on localhost (127.0.0.1): Not shown: 65533 filtered ports PORT STATE SERVICE 22/tcp open ssh 80/tcp closed http Read data files from: /usr/share/nmap Nmap done: 1 IP address (1 host up) scanned in 112.43 seconds Raw packets sent: 0 (0B) | Rcvd: 0 (0B) I think this shows that 5903 is blocked somehow. Which I pretty much knew. The question remains: what is blocking it, and how do I modify that?
Re-re-edit: To check Paul Lathrop's suggested answer, I first verified my IP address with ifconfig: eth0 Link encap:Ethernet HWaddr 02:16:3e:42:28:8f inet addr:10.0.10.3 Bcast:10.0.10.255 Mask:255.255.255.0 Then tried to telnet to 5903 from that address: # telnet 10.0.10.3 5903 Trying 10.0.10.3... telnet: Unable to connect to remote host: Connection timed out No luck.
Re-re-re-re-edit: Ok, I think I have isolated it a bit to vncserver, not the firewall, darn it. I shut off vncserver and had netcat listen on port 5903. My VNC client then was able to establish a connection and sit and wait for a response. Looks like I should be chasing a VNC problem. At least that is progress. Thanks for the help!
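    For anyone else isolating a problem like this, the netcat test mentioned at the end can be made explicit. A sketch (the display number :3 is inferred from port 5903; some netcat builds want nc -l -p 5903 instead of nc -l 5903):

        vncserver -kill :3        # stop Xtightvnc, which owns display :3 / port 5903
        nc -l 5903 &              # listen on the same port with netcat instead
        telnet localhost 5903     # connects -> firewall is fine, suspect the VNC server

    If the netcat connection succeeds where Xtightvnc timed out, the firewall rules are cleared of suspicion and attention moves to how the VNC server is accepting connections.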

    Read the article

  • User Input That Involves A ' ' Causes A Substring Out Of Range Error

    - by Greenhouse Gases
    Hi Stack Overflow people. You have already helped me quite a bit, but near the end of writing this program I have somewhat of a bug. You see, in order to read in city names with a space in them from a text file, I use a '/' that the program then replaces with a ' ' (and when the serializer runs, the opposite happens for the next time the program is run). The problem is that when a user inputs a name to add, search for, or delete that contains a space, for instance 'New York', I get a Debug Assertion Error with a substring out of range expression. I have a feeling it's to do with my correctCase function, or setElementsNull, which looks at the string until it encounters a null element in the array; however, ' ' is not null, so I'm not sure how to fix this, and I'm going a bit insane. Any help would be much appreciated. Here is my code:
// U08221.cpp : main project file. #include "stdafx.h" #include <iostream> #include <string> #include <fstream> #include <cmath> using namespace std; class locationNode { public: string nodeCityName; double nodeLati; double nodeLongi; locationNode* Next; locationNode(string nameOf, double lat, double lon) { this->nodeCityName = nameOf; this->nodeLati = lat; this->nodeLongi = lon; this->Next = NULL; } locationNode() // NULL constructor { } void swapProps(locationNode *node2) { locationNode place; place.nodeCityName = this->nodeCityName; place.nodeLati = this->nodeLati; place.nodeLongi = this->nodeLongi; this->nodeCityName = node2->nodeCityName; this->nodeLati = node2->nodeLati; this->nodeLongi = node2->nodeLongi; node2->nodeCityName = place.nodeCityName; node2->nodeLati = place.nodeLati; node2->nodeLongi = place.nodeLongi; } void modify(string name) { this->nodeCityName = name; } void modify(double latlon, int mod) { switch(mod) { case 2: this->nodeLati = latlon; break; case 3: this->nodeLongi = latlon; break; } } void correctCase() // Correct upper and lower case letters of input { int MAX_SIZE = 35; int firstLetVal = this->nodeCityName[0], letVal; int n = 1; // variable for name index from second letter onwards if((this->nodeCityName[0] >90) && (this->nodeCityName[0] < 123)) // First letter is lower case { firstLetVal = firstLetVal - 32; // Capitalise first letter this->nodeCityName[0] = firstLetVal; } while(this->nodeCityName[n] != NULL) { if((this->nodeCityName[n] >= 65) && (this->nodeCityName[n] <= 90)) { if(this->nodeCityName[n - 1] != 32) { letVal = this->nodeCityName[n] + 32; this->nodeCityName[n] = letVal; } } n++; } } }; Here is the main part of the program: // U08221.cpp : main project file.
#include "stdafx.h" #include "Locations2.h" #include <_iostream> #include <_string> #include <_fstream> #include <_cmath> using namespace std; #define pi 3.14159265358979323846264338327950288 #define radius 6371 #define gig 1073741824 //size of a gigabyte in bytes int n = 0,x, locationCount = 0, MAX_SIZE = 35 , g = 0, i = 0, modKey = 0, xx; string cityNameInput, alter; char targetCity[35], skipKey = ' '; double lat1, lon1, lat2, lon2, dist, dummy, modVal, result; bool acceptedInput = false, match = false, nodeExists = false;// note: addLocation(), set to true to enable user input as opposed to txt file locationNode *temp, *temp2, *example, *seek, *bridge, *start_ptr = NULL; class Menu { int junction; public: /* Convert decimal degrees to radians */ public: void setElementsNull(char cityParam[]) { int y=0; while(cityParam[y] != NULL) { y++; } while(y < MAX_SIZE) { cityParam[y] = NULL; y++; } } void correctCase(string name) // Correct upper and lower case letters of input { int MAX_SIZE = 35; int firstLetVal = name[0], letVal; int n = 1; // variable for name index from second letter onwards if((name[0] >90) && (name[0] < 123)) // First letter is lower case { firstLetVal = firstLetVal - 32; // Capitalise first letter name[0] = firstLetVal; } while(name[n] != NULL) { if((name[n] >= 65) && (name[n] <= 90)) { letVal = name[n] + 32; name[n] = letVal; } n++; } for(n = 0; targetCity[n] != NULL; n++) { targetCity[n] = name[n]; } } bool nodeExistTest(char targetCity[]) // see if entry is present in the database { match = false; seek = start_ptr; int letters = 0, letters2 = 0, x = 0, y = 0; while(targetCity[y] != NULL) { letters2++; y++; } while(x <= locationCount) // locationCount is number of entries currently in list { y=0, letters = 0; while(seek->nodeCityName[y] != NULL) // count letters in the current name { letters++; y++; } if(letters == letters2) // same amount of letters in the name { y = 0; while(y <= letters) // compare each letter against one another { if(targetCity[y] == seek->nodeCityName[y]) { match = true; y++; } else { match = false; y = letters + 1; // no match, terminate comparison } } } if(match) { x = locationCount + 1; //found match so terminate loop } else{ if(seek->Next != NULL) { bridge = seek; seek = seek->Next; x++; } else { x = locationCount + 1; // end of list so terminate loop } } } return match; } double deg2rad(double deg) { return (deg * pi / 180); } /* Convert radians to decimal degrees */ double rad2deg(double rad) { return (rad * 180 / pi); } /* Do the calculation */ double distance(double lat1, double lon1, double lat2, double lon2, double dist) { dist = sin(deg2rad(lat1)) * sin(deg2rad(lat2)) + cos(deg2rad(lat1)) * cos(deg2rad(lat2)) * cos(deg2rad(lon1 - lon2)); dist = acos(dist); dist = rad2deg(dist); dist = (radius * pi * dist) / 180; return dist; } void serialise() { // Serialize to format that can be written to text file fstream outfile; outfile.open("locations.txt",ios::out); temp = start_ptr; do { for(xx = 0; temp->nodeCityName[xx] != NULL; xx++) { if(temp->nodeCityName[xx] == 32) { temp->nodeCityName[xx] = 47; } } outfile << endl << temp->nodeCityName<< " "; outfile<<temp->nodeLati<< " "; outfile<<temp->nodeLongi; temp = temp->Next; }while(temp != NULL); outfile.close(); } void sortList() // do this { int changes = 1; locationNode *node1, *node2; while(changes > 0) // while changes are still being made to the list execute { node1 = start_ptr; node2 = node1->Next; changes = 0; do { xx = 1; if(node1->nodeCityName[0] > node2->nodeCityName[0]) //compare first 
letter of name with next in list { node1->swapProps(node2); // should come after the next in the list changes++; } else if(node1->nodeCityName[0] == node2->nodeCityName[0]) // if same first letter { while(node1->nodeCityName[xx] == node2->nodeCityName[xx]) // check next letter of name { if((node1->nodeCityName[xx + 1] != NULL) && (node2->nodeCityName[xx + 1] != NULL)) // check next letter until not the same { xx++; } else break; } if(node1->nodeCityName[xx] > node2->nodeCityName[xx]) { node1->swapProps(node2); // should come after the next in the list changes++; } } node1 = node2; node2 = node2->Next; // move to next pair in list } while(node2 != NULL); } } void initialise() { cout << "Populating List..."; ifstream inputFile; inputFile.open ("locations.txt", ios::in); char inputName[35] = " "; double inputLati = 0, inputLongi = 0; //temp = new locationNode(inputName, inputLati, inputLongi); do { inputFile.get(inputName, 35, ' '); inputFile >> inputLati; inputFile >> inputLongi; if(inputName[0] == 10 || 13) //remove linefeed from input { for(int i = 0; inputName[i] != NULL; i++) { inputName[i] = inputName[i + 1]; } } for(xx = 0; inputName[xx] != NULL; xx++) { if(inputName[xx] == 47) // if it is a '/' { inputName[xx] = 32; // replace it for a space } } temp = new locationNode(inputName, inputLati, inputLongi); if(start_ptr == NULL){ // if list is currently empty, start_ptr will point to this node start_ptr = temp; } else { temp2 = start_ptr; // We know this is not NULL - list not empty! while (temp2->Next != NULL) { temp2 = temp2->Next; // Move to next link in chain until reach end of list } temp2->Next = temp; } ++locationCount; // increment counter for number of records in list } while(!inputFile.eof()); cout << "Successful!" << endl << "List contains: " << locationCount << " entries" << endl; inputFile.close(); cout << endl << "*******************************************************************" << endl << "DISTANCE CALCULATOR v2.0\tAuthors: Darius Hodaei, Joe Clifton" << endl; } void menuInput() { char menuChoice = ' '; while(menuChoice != 'Q') { // Menu if(skipKey != 'X') // This is set by case 'S' below if a searched term does not exist but wants to be added { cout << endl << "*******************************************************************" << endl; cout << "Please enter a choice for the menu..." 
<< endl << endl; cout << "(P) To print out the list" << endl << "(O) To order the list alphabetically" << endl << "(A) To add a location" << endl << "(D) To delete a record" << endl << "(C) To calculate distance between two points" << endl << "(S) To search for a location in the list" << endl << "(M) To check memory usage" << endl << "(U) To update a record" << endl << "(Q) To quit" << endl; cout << endl << "*******************************************************************" << endl; cin >> menuChoice; if(menuChoice >= 97) { menuChoice = menuChoice - 32; // Turn the lower case letter into an upper case letter } } skipKey = ' '; //Reset skipKey so that it does not skip the menu switch(menuChoice) { case 'P': temp = start_ptr; // set temp to the start of the list do { if (temp == NULL) { cout << "You have reached the end of the database" << endl; } else { // Display details for what temp points to at that stage cout << "Location : " << temp->nodeCityName << endl; cout << "Latitude : " << temp->nodeLati << endl; cout << "Longitude : " << temp->nodeLongi << endl; cout << endl; // Move on to next locationNode if one exists temp = temp->Next; } } while (temp != NULL); break; case 'O': { sortList(); // pass by reference??? cout << "List reordered alphabetically" << endl; } break; case 'A': char cityName[35]; double lati, longi; cout << endl << "Enter the name of the location: "; cin >> cityName; for(xx = 0; cityName[xx] != NULL; xx++) { if(cityName[xx] == 47) // if it is a '/' { cityName[xx] = 32; // replace it for a space } } if(!nodeExistTest(cityName)) { cout << endl << "Please enter the latitude value for this location: "; cin >> lati; cout << endl << "Please enter the longitude value for this location: "; cin >> longi; cout << endl; temp = new locationNode(cityName, lati, longi); temp->correctCase(); //start_ptr allignment if(start_ptr == NULL){ // if list is currently empty, start_ptr will point to this node start_ptr = temp; } else { temp2 = start_ptr; // We know this is not NULL - list not empty! while (temp2->Next != NULL) { temp2 = temp2->Next; // Move to next link in chain until reach end of list } temp2->Next = temp; } ++locationCount; // increment counter for number of records in list cout << "Location sucessfully added to the database! There are " << locationCount << " location(s) stored" << endl; } else { cout << "Node is already present in the list and so cannot be added again" << endl; } break; case 'D': { junction = 0; locationNode *place; cout << "Enter the name of the city you wish to remove" << endl; cin >> targetCity; setElementsNull(targetCity); correctCase(targetCity); for(xx = 0; targetCity[xx] != NULL; xx++) { if(targetCity[xx] == 47) { targetCity[xx] = 32; } } if(nodeExistTest(targetCity)) //if this node does exist { if(seek == start_ptr) // if it is the first in the list { junction = 1; } if(seek->Next == NULL) // if it is last in the list { junction = 2; } switch(junction) // will alter list accordingly dependant on where the searched for link is { case 1: start_ptr = start_ptr->Next; delete seek; --locationCount; break; case 2: place = seek; seek = bridge; seek->Next = NULL; delete place; --locationCount; break; default: bridge->Next = seek->Next; delete seek; --locationCount; break; } cout << endl << "Link deleted. There are now " << locationCount << " locations." 
<< endl; } else { cout << "That entry does not currently exist" << endl << endl << endl; } } break; case 'C': { char city1[35], city2[35]; cout << "Enter the first city name" << endl; cin >> city1; setElementsNull(city1); correctCase(targetCity); if(nodeExistTest(city1)) { lat1 = seek->nodeLati; lon1 = seek->nodeLongi; cout << "Lati = " << seek->nodeLati << endl << "Longi = " << seek->nodeLongi << endl << endl; } cout << "Enter the second city name" << endl; cin >> city2; setElementsNull(city2); correctCase(targetCity); if(nodeExistTest(city2)) { lat2 = seek->nodeLati; lon2 = seek->nodeLongi; cout << "Lati = " << seek->nodeLati << endl << "Longi = " << seek->nodeLongi << endl << endl; } result = distance (lat1, lon1, lat2, lon2, dist); cout << "The distance between these two locations is " << result << " kilometres." << endl; } break; case 'S': { char choice; cout << "Enter search term..." << endl; cin >> targetCity; setElementsNull(targetCity); correctCase(targetCity); if(nodeExistTest(targetCity)) { cout << "Latitude: " << seek->nodeLati << endl << "Longitude: " << seek->nodeLongi << endl; } else { cout << "Sorry, that city is not currently present in the list." << endl << "Would you like to add this city now Y/N?" << endl; cin >> choice; /*while(choice != ('Y' || 'N')) { cout << "Please enter a valid choice..." << endl; cin >> choice; }*/ switch(choice) { case 'Y': skipKey = 'X'; menuChoice = 'A'; break; case 'N': break; default : cout << "Invalid choice" << endl; break; } } break; } case 'M': { cout << "Locations currently stored: " << locationCount << endl << "Memory used for this: " << (sizeof(start_ptr) * locationCount) << " bytes" << endl << endl << "You can store " << ((gig - (sizeof(start_ptr) * locationCount)) / sizeof(start_ptr)) << " more locations" << endl ; break; } case 'U': { cout << "Enter the name of the Location you would like to update: "; cin >> targetCity; setElementsNull(targetCity); correctCase(targetCity); if(nodeExistTest(targetCity)) { cout << "Select (1) to alter City Name, (2) to alter Longitude, (3) to alter Latitude" << endl; cin >> modKey; switch(modKey) { case 1: cout << "Enter the new name: "; cin >> alter; cout << endl; seek->modify(alter); break; case 2: cout << "Enter the new latitude: "; cin >> modVal; cout << endl; seek->modify(modVal, modKey); break; case 3: cout << "Enter the new longitude: "; cin >> modVal; cout << endl; seek->modify(modVal, modKey); break; default: break; } } else cout << "Location not found" << endl; break; } } } } }; int main(array<System::String ^> ^args) { Menu mm; //mm.initialise(); mm.menuInput(); mm.serialise(); }
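    The root cause is almost certainly the extraction operator rather than correctCase itself: cin >> cityName stops at the first whitespace, so "New York" stores only "New" and leaves "York" in the stream for the next read, and the helper loops then walk past the end of the string hunting for a terminator (indexing a std::string beyond its size is undefined behaviour, which is what the debug assertion catches). A minimal sketch of the usual fix, reading whole lines and bounding loops by length():

        #include <iostream>
        #include <string>

        int main() {
            std::string city;
            std::cout << "Enter the name of the location: ";
            std::getline(std::cin >> std::ws, city); // reads "New York" as one string
            // Bound the loop by length() instead of scanning for a NULL element:
            for (std::string::size_type i = 1; i < city.length(); ++i) {
                if (city[i] >= 'A' && city[i] <= 'Z' && city[i - 1] != ' ')
                    city[i] += 'a' - 'A'; // lower-case a capital letter inside a word
            }
            std::cout << "Normalised: " << city << std::endl;
            return 0;
        }

    Replacing the cin >> reads with std::getline, and the while(name[n] != NULL) loops with length()-bounded loops, should remove both the assertion and the need for the '/' placeholder trick at the prompt.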

    Read the article

  • Can't start httpd 2.4.9 with self-signed SSL certificate

    - by Smollet
    I cannot start httpd 2.4.9 (tried the 2.4.x branch too) on CentOS 6.5 with the simplest SSL config possible. The openssl version installed on the machine is OpenSSL 1.0.1e-fips 11 Feb 2013 (I've upgraded it using 'yum update' to the latest patched version as well). I have compiled and installed httpd 2.4.9 using the following commands: ./configure --enable-ssl --with-ssl=/usr/local/ssl/ --enable-proxy=shared --enable-proxy_wstunnel=shared --with-apr=apr-1.5.1/ --with-apr-util=apr-util-1.5.3/ make make install Now I'm generating the default self-signed certificate as described in the CentOS HowTo: openssl genrsa -out ca.key 2048 openssl req -new -key ca.key -out ca.csr openssl x509 -req -days 365 -in ca.csr -signkey ca.key -out ca.crt cp ca.crt /etc/pki/tls/certs cp ca.key /etc/pki/tls/private/ca.key cp ca.csr /etc/pki/tls/private/ca.csr Here is my httpd-ssl.conf file: Listen 443 SSLCipherSuite HIGH:MEDIUM:!aNULL:!MD5 SSLPassPhraseDialog builtin SSLSessionCache "shmcb:/usr/local/apache2/logs/ssl_scache(512000)" SSLSessionCacheTimeout 300 <VirtualHost *:443> SSLEngine on SSLCertificateFile /etc/pki/tls/certs/ca.crt SSLCertificateKeyFile /etc/pki/tls/private/ca.key <FilesMatch "\.(cgi|shtml|phtml|php)$"> SSLOptions +StdEnvVars </FilesMatch> <Directory "/usr/local/apache2/cgi-bin"> SSLOptions +StdEnvVars </Directory> BrowserMatch "MSIE [2-5]" \ nokeepalive ssl-unclean-shutdown \ downgrade-1.0 force-response-1.0 CustomLog "/usr/local/apache2/logs/ssl_request_log" \ "%t %h %{SSL_PROTOCOL}x %{SSL_CIPHER}x \"%r\" %b" </VirtualHost>
When I start httpd using bin/apachectl -k start, I get the following errors in the error_log: [Wed Jun 04 00:29:27.995654 2014] [ssl:info] [pid 24021:tid 139640404293376] AH01887: Init: Initializing (virtual) servers for SSL [Wed Jun 04 00:29:27.995726 2014] [ssl:info] [pid 24021:tid 139640404293376] AH01914: Configuring server 192.168.9.128:443 for SSL protocol [Wed Jun 04 00:29:27.995863 2014] [ssl:debug] [pid 24021:tid 139640404293376] ssl_engine_init.c(312): AH01893: Configuring TLS extension handling [Wed Jun 04 00:29:27.996111 2014] [ssl:debug] [pid 24021:tid 139640404293376] ssl_util_ssl.c(343): AH02412: [192.168.9.128:443] Cert matches for name '192.168.9.128' [subject: CN=192.168.9.128,OU=XXX,O=XXXX,L=XXXX,ST=NRW,C=DE / issuer: CN=192.168.9.128,OU=XXX,O=XXXX,L=XXXX,ST=NRW,C=DE / serial: AF04AF31799B7695 / notbefore: Jun 3 22:26:45 2014 GMT / notafter: Jun 3 22:26:45 2015 GMT] [Wed Jun 04 00:29:27.996122 2014] [ssl:info] [pid 24021:tid 139640404293376] AH02568: Certificate and private key 192.168.9.128:443:0 configured from /etc/pki/tls/certs/ca.crt and /etc/pki/tls/private/ca.key [Wed Jun 04 00:29:27.996209 2014] [ssl:info] [pid 24021:tid 139640404293376] AH01914: Configuring server 192.168.9.128:443 for SSL protocol [Wed Jun 04 00:29:27.996280 2014] [ssl:debug] [pid 24021:tid 139640404293376] ssl_engine_init.c(312): AH01893: Configuring TLS extension handling [Wed Jun 04 00:29:27.996295 2014] [ssl:emerg] [pid 24021:tid 139640404293376] AH02572: Failed to configure at least one certificate and key for 192.168.9.128:443 [Wed Jun 04 00:29:27.996303 2014] [ssl:emerg] [pid 24021:tid 139640404293376] SSL Library Error: error:0906D06C:PEM routines:PEM_read_bio:no start line (Expecting: DH PARAMETERS) -- Bad file contents or format - or even just a forgotten SSLCertificateKeyFile?
[Wed Jun 04 00:29:27.996308 2014] [ssl:emerg] [pid 24021:tid 139640404293376] SSL Library Error: error:0906D06C:PEM routines:PEM_read_bio:no start line (Expecting: EC PARAMETERS) -- Bad file contents or format - or even just a forgotten SSLCertificateKeyFile? [Wed Jun 04 00:29:27.996318 2014] [ssl:emerg] [pid 24021:tid 139640404293376] SSL Library Error: error:140A80B1:SSL routines:SSL_CTX_check_private_key:no certificate assigned [Wed Jun 04 00:29:27.996321 2014] [ssl:emerg] [pid 24021:tid 139640404293376] AH02312: Fatal error initialising mod_ssl, exiting. AH00016: Configuration Failed
I then try to generate the missing DH PARAMETERS and EC PARAMETERS: openssl dhparam -outform PEM -out dhparam.pem 2048 openssl ecparam -out ec_param.pem -name prime256v1 cat dhparam.pem ec_param.pem >> /etc/pki/tls/certs/ca.crt That mitigates the first error, but the next one comes out: [Wed Jun 04 00:34:05.021438 2014] [ssl:info] [pid 24089:tid 140719371077376] AH01887: Init: Initializing (virtual) servers for SSL [Wed Jun 04 00:34:05.021487 2014] [ssl:info] [pid 24089:tid 140719371077376] AH01914: Configuring server 192.168.9.128:443 for SSL protocol [Wed Jun 04 00:34:05.021874 2014] [ssl:debug] [pid 24089:tid 140719371077376] ssl_engine_init.c(312): AH01893: Configuring TLS extension handling [Wed Jun 04 00:34:05.022050 2014] [ssl:debug] [pid 24089:tid 140719371077376] ssl_util_ssl.c(343): AH02412: [192.168.9.128:443] Cert matches for name '192.168.9.128' [subject: CN=192.168.9.128,OU=XXX,O=XXXX,L=XXXX,ST=NRW,C=DE / issuer: CN=192.168.9.128,OU=XXX,O=XXXX,L=XXXX,ST=NRW,C=DE / serial: AF04AF31799B7695 / notbefore: Jun 3 22:26:45 2014 GMT / notafter: Jun 3 22:26:45 2015 GMT] [Wed Jun 04 00:34:05.022066 2014] [ssl:info] [pid 24089:tid 140719371077376] AH02568: Certificate and private key 192.168.9.128:443:0 configured from /etc/pki/tls/certs/ca.crt and /etc/pki/tls/private/ca.key [Wed Jun 04 00:34:05.022285 2014] [ssl:debug] [pid 24089:tid 140719371077376] ssl_engine_init.c(1016): AH02540: Custom DH parameters (2048 bits) for 192.168.9.128:443 loaded from /etc/pki/tls/certs/ca.crt [Wed Jun 04 00:34:05.022389 2014] [ssl:debug] [pid 24089:tid 140719371077376] ssl_engine_init.c(1030): AH02541: ECDH curve prime256v1 for 192.168.9.128:443 specified in /etc/pki/tls/certs/ca.crt [Wed Jun 04 00:34:05.022397 2014] [ssl:info] [pid 24089:tid 140719371077376] AH01914: Configuring server 192.168.9.128:443 for SSL protocol [Wed Jun 04 00:34:05.022464 2014] [ssl:debug] [pid 24089:tid 140719371077376] ssl_engine_init.c(312): AH01893: Configuring TLS extension handling [Wed Jun 04 00:34:05.022478 2014] [ssl:emerg] [pid 24089:tid 140719371077376] AH02572: Failed to configure at least one certificate and key for 192.168.9.128:443 [Wed Jun 04 00:34:05.022488 2014] [ssl:emerg] [pid 24089:tid 140719371077376] SSL Library Error: error:140A80B1:SSL routines:SSL_CTX_check_private_key:no certificate assigned [Wed Jun 04 00:34:05.022491 2014] [ssl:emerg] [pid 24089:tid 140719371077376] AH02312: Fatal error initialising mod_ssl, exiting. AH00016: Configuration Failed
I have tried to generate the simple certificate/key pair exactly as described in the httpd docs. Unfortunately, I still get the exact same errors as above. I've seen a bug report with a similar issue: https://issues.apache.org/bugzilla/show_bug.cgi?id=56410 But the openssl version I have is reported as working there. I've also tried to apply the patch from the report as well as build the latest 2.4.x branch with no success; I get the same errors as above. I have also tried to create a short chain of certificates and set the root CA certificate using the SSLCertificateChainFile directive. That didn't help either; I get the exact same errors as above. I'm not interested in setting up hardened security, etc. The only thing I need is to start httpd with the simplest SSL config possible to continue testing the proxy config for mod_proxy_wstunnel. Has anybody encountered and solved this issue? Is my sequence for creating a self-signed certificate incorrect? I'd appreciate any help very much!
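    Before digging further into mod_ssl, it may be worth ruling out a damaged or mismatched certificate/key pair, especially after appending the DH and EC parameter blocks to ca.crt. A quick diagnostic sketch using only standard openssl subcommands (paths as in the config above):

        openssl x509 -noout -text -in /etc/pki/tls/certs/ca.crt      # does the cert still parse?
        openssl rsa -noout -check -in /etc/pki/tls/private/ca.key    # does the key parse?
        # The pair must belong together; these two digests should match:
        openssl x509 -noout -modulus -in /etc/pki/tls/certs/ca.crt | openssl md5
        openssl rsa -noout -modulus -in /etc/pki/tls/private/ca.key | openssl md5

    If all four commands look sane, the remaining "no certificate assigned" error lines up with the server being configured twice for the same address in the log, so checking for a second SSL-enabled VirtualHost or an included default ssl.conf would be a sensible next step.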

    Read the article

  • Credentials can not be delegated - Alfresco Share

    - by leftcase
    I've hit a brick wall configuring Alfresco 4.0.d on Red Hat 6. I'm using Kerberos authentication; it seems to be working normally, and single sign-on is working on the main alfresco app itself. I've been through the configuration steps to get the share app working, but try as I may, I keep getting this error in catalina.out each time a browser accesses http://server:8080/share, along with a 'Windows Security' password box. WARN [site.servlet.KerberosSessionSetupPrivilegedAction] credentials can not be delegated! Here's what I've done so far: Using AD users and computers, selected the alfrescohttp account, and selected 'trust this user for delegation to any service (Kerberos only)'. Copied /opt/alfresco-4.0.d/tomcat/shared/classes/alfresco/web-extension/share-config-custom.xml.sample to share-config-custom.xml and edited it like this: <config evaluator="string-compare" condition="Kerberos" replace="true"> <kerberos> <password>*****</password> <realm>MYDOMAIN.CO.UK</realm> <endpoint-spn>HTTP/server.mydomain.co.uk@MYDOMAIN.CO.UK</endpoint-spn> <config-entry>ShareHTTP</config-entry> </kerberos> </config> <config evaluator="string-compare" condition="Remote"> <remote> <keystore> <path>alfresco/web-extension/alfresco-system.p12</path> <type>pkcs12</type> <password>alfresco-system</password> </keystore> <connector> <id>alfrescoCookie</id> <name>Alfresco Connector</name> <description>Connects to an Alfresco instance using cookie-based authentication</description> <class>org.springframework.extensions.webscripts.connector.AlfrescoConnector</class> </connector> <endpoint> <id>alfresco</id> <name>Alfresco - user access</name> <description>Access to Alfresco Repository WebScripts that require user authentication</description> <connector-id>alfrescoCookie</connector-id> <endpoint-url>http://localhost:8080/alfresco/wcs</endpoint-url> <identity>user</identity> <external-auth>true</external-auth> </endpoint> </remote> </config> Set up the /etc/krb5.conf file like this: [logging] default = FILE:/var/log/krb5libs.log kdc = FILE:/var/log/krb5kdc.log admin_server = FILE:/var/log/kadmind.log [libdefaults] default_realm = MYDOMAIN.CO.UK default_tkt_enctypes = rc4-hmac default_tgs_enctypes = rc4-hmac forwardable = true proxiable = true [realms] MYDOMAIN.CO.UK = { kdc = mydc.mydomain.co.uk admin_server = mydc.mydomain.co.uk } [domain_realm] .mydc.mydomain.co.uk = MYDOMAIN.CO.UK mydc.mydomain.co.uk = MYDOMAIN.CO.UK /opt/alfresco-4.0.d/java/jre/lib/security/java.login.config is configured like this: Alfresco { com.sun.security.auth.module.Krb5LoginModule sufficient; }; AlfrescoCIFS { com.sun.security.auth.module.Krb5LoginModule required storeKey=true useKeyTab=true keyTab="/etc/alfrescocifs.keytab" principal="cifs/server.mydomain.co.uk"; }; AlfrescoHTTP { com.sun.security.auth.module.Krb5LoginModule required storeKey=true useKeyTab=true keyTab="/etc/alfrescohttp.keytab" principal="HTTP/server.mydomain.co.uk"; }; com.sun.net.ssl.client { com.sun.security.auth.module.Krb5LoginModule sufficient; }; other { com.sun.security.auth.module.Krb5LoginModule sufficient; }; ShareHTTP { com.sun.security.auth.module.Krb5LoginModule required storeKey=true useKeyTab=true keyTab="/etc/alfrescohttp.keytab" principal="HTTP/server.mydomain.co.uk"; }; And finally, the following settings in alfresco-global.conf: authentication.chain=kerberos1:kerberos,alfrescoNtlm1:alfrescoNtlm kerberos.authentication.real=MYDOMAIN.CO.UK kerberos.authentication.user.configEntryName=Alfresco kerberos.authentication.cifs.configEntryName=AlfrescoCIFS
kerberos.authentication.http.configEntryName=AlfrescoHTTP kerberos.authentication.cifs.password=****** kerberos.authentication.http.password=***** kerberos.authentication.defaultAdministratorUserNames=administrator ntlm.authentication.sso.enabled=true As I say, I've hit a brick wall with this, and I'd really appreciate any help you can give me! This question is also posted on the Alfresco forum, but I wondered if any folk here on Server Fault have come across similar implementation challenges?
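    One sanity check worth doing entirely outside Alfresco is to confirm that the HTTP keytab can obtain a ticket on its own; if this fails, no amount of share-config tuning will help. A sketch using the standard MIT Kerberos client tools:

        klist -k /etc/alfrescohttp.keytab                        # list the principals stored in the keytab
        kinit -k -t /etc/alfrescohttp.keytab HTTP/server.mydomain.co.uk
        klist                                                    # a ticket here means keytab, realm and KDC are sound

    With that ruled out, this particular warning often comes down to tickets cached before the AD delegation flag was set, so forcing fresh tickets on the client side after changing the delegation setting is also worth a try.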

    Read the article

  • GRUB-2 Bootloader fails to load for lack of floppy drive. Ubuntu 10.4 & Windows XP

    - by kammer
    2010.07.21, while trying to install Ubuntu 10.04. Hello all, I've been trying to install Ubuntu 10.04 on my Dell workstation and am unable to get the Grub-2 bootloader to load properly. It seems to be failing for lack of a floppy drive on the system, resulting in an error message that reads: error: fd0 cannot get C/H/S values. I've gone through the Grub-2 page at https://help.ubuntu.com/community/Grub2 to no avail, and other sources with similar problems have likewise turned up no solutions. I would certainly appreciate any insight; here's the background: A while back I was trying to install a different version of Linux and had the same problems, then had to set the project aside for a bit. I don't think this has anything to do with Linux or Ubuntu per se, but rather Grub. The system is an old (4-5 year old) Dell workstation that has one drive (128 GB) set up for Windows XP and a second new drive (500 GB) which I installed for Linux. There is a DVD/CD drive, and the system contains no floppy drive at all. In one attempt to get this working I tried modifying the BIOS to indicate there was a floppy drive; this created a failure earlier in the chain, with the BIOS failing to load properly. Not unexpected, just a shot in the dark at that point. At the moment I am considering just running out to buy and install a cheap floppy drive to see if that helps. I'll never use the thing though, so I'd rather find a solution that doesn't require me to spend money on useless hardware. In any case, here's the /boot/grub/grub.cfg contents: # # DO NOT EDIT THIS FILE # # It is automatically generated by /usr/sbin/grub-mkconfig using templates # from /etc/grub.d and settings from /etc/default/grub # ### BEGIN /etc/grub.d/00_header ### if [ -s $prefix/grubenv ]; then load_env fi set default="0" if [ ${prev_saved_entry} ]; then set saved_entry=${prev_saved_entry} save_env saved_entry set prev_saved_entry= save_env prev_saved_entry set boot_once=true fi function savedefault { if [ -z ${boot_once} ]; then saved_entry=${chosen} save_env saved_entry fi } function recordfail { set recordfail=1 if [ -n ${have_grubenv} ]; then if [ -z ${boot_once} ]; then save_env recordfail; fi; fi } insmod ext2 set root='(hd1,1)' search --no-floppy --fs-uuid --set fbebde47-f488-41b0-9480-337802ecb988 if loadfont /usr/share/grub/unicode.pf2 ; then set gfxmode=640x480 insmod gfxterm insmod vbe if terminal_output gfxterm ; then true ; else # For backward compatibility with versions of terminal.mod that don't # understand terminal_output terminal gfxterm fi fi insmod ext2 set root='(hd1,1)' search --no-floppy --fs-uuid --set fbebde47-f488-41b0-9480-337802ecb988 set locale_dir=($root)/boot/grub/locale set lang=en insmod gettext if [ ${recordfail} = 1 ]; then set timeout=-1 else set timeout=10 fi insmod play play 480 440 1 ### END /etc/grub.d/00_header ### ### BEGIN /etc/grub.d/05_debian_theme ### set menu_color_normal=white/black set menu_color_highlight=black/light-gray ### END /etc/grub.d/05_debian_theme ### ### BEGIN /etc/grub.d/10_linux ### menuentry 'Ubuntu, with Linux 2.6.32-21-generic' --class ubuntu --class gnu-linux --class gnu --class os { recordfail insmod ext2 set root='(hd1,1)' search --no-floppy --fs-uuid --set fbebde47-f488-41b0-9480-337802ecb988 linux /boot/vmlinuz-2.6.32-21-generic root=UUID=fbebde47-f488-41b0-9480-337802ecb988 ro quiet splash initrd /boot/initrd.img-2.6.32-21-generic } menuentry 'Ubuntu, with Linux 2.6.32-21-generic (recovery mode)' --class ubuntu --class gnu-linux --class gnu --class os { recordfail insmod
ext2 set root='(hd1,1)' search --no-floppy --fs-uuid --set fbebde47-f488-41b0-9480-337802ecb988 echo 'Loading Linux 2.6.32-21-generic ...' linux /boot/vmlinuz-2.6.32-21-generic root=UUID=fbebde47-f488-41b0-9480-337802ecb988 ro single echo 'Loading initial ramdisk ...' initrd /boot/initrd.img-2.6.32-21-generic } ### END /etc/grub.d/10_linux ### ### BEGIN /etc/grub.d/20_memtest86+ ### menuentry "Memory test (memtest86+)" { insmod ext2 set root='(hd1,1)' search --no-floppy --fs-uuid --set fbebde47-f488-41b0-9480-337802ecb988 linux16 /boot/memtest86+.bin } menuentry "Memory test (memtest86+, serial console 115200)" { insmod ext2 set root='(hd1,1)' search --no-floppy --fs-uuid --set fbebde47-f488-41b0-9480-337802ecb988 linux16 /boot/memtest86+.bin console=ttyS0,115200n8 } ### END /etc/grub.d/20_memtest86+ ### ### BEGIN /etc/grub.d/30_os-prober ### menuentry "Microsoft Windows XP Home Edition (on /dev/sda1)" { insmod ntfs set root='(hd0,1)' search --no-floppy --fs-uuid --set 6ef0d4b4f0d4842d drivemap -s (hd0) ${root} chainloader +1 } ### END /etc/grub.d/30_os-prober ### ### BEGIN /etc/grub.d/40_custom ### # This file provides an easy way to add custom menu entries. Simply type the # menu entries you want to add after this comment. Be careful not to change # the 'exec tail' line above. ### END /etc/grub.d/40_custom ### Thoughts anyone? Thanks in advance.

    Read the article

  • curl can't verify cert using capath, but can with cacert option

    - by phylae
    I am trying to use curl to connect to a site using HTTPS. But curl is failing to verify the SSL cert.

        $ curl --verbose --capath ./certs/ --head https://example.com/
        * About to connect() to example.com port 443 (#0)
        *   Trying 1.1.1.1... connected
        * Connected to example.com (1.1.1.1) port 443 (#0)
        * successfully set certificate verify locations:
        *   CAfile: none
            CApath: ./certs/
        * SSLv3, TLS handshake, Client hello (1):
        * SSLv3, TLS handshake, Server hello (2):
        * SSLv3, TLS handshake, CERT (11):
        * SSLv3, TLS alert, Server hello (2):
        * SSL certificate problem, verify that the CA cert is OK. Details:
        error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed
        * Closing connection #0
        curl: (60) SSL certificate problem, verify that the CA cert is OK. Details:
        error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed
        More details here: http://curl.haxx.se/docs/sslcerts.html

        curl performs SSL certificate verification by default, using a "bundle"
        of Certificate Authority (CA) public keys (CA certs). If the default
        bundle file isn't adequate, you can specify an alternate file
        using the --cacert option.
        If this HTTPS server uses a certificate signed by a CA represented in
        the bundle, the certificate verification probably failed due to a
        problem with the certificate (it might be expired, or the name might
        not match the domain name in the URL).
        If you'd like to turn off curl's verification of the certificate, use
        the -k (or --insecure) option.

    I know about the -k option. But I do actually want to verify the cert. The certs directory has been properly hashed with c_rehash . and it contains:

    - A Verisign intermediate cert
    - Two self-signed certs

    The above site should be verified with the Verisign intermediate cert. When I use the --cacert option instead (and point directly to the Verisign cert) curl is able to verify the SSL cert.

        $ curl --verbose --cacert ./certs/verisign-intermediate-ca.crt --head https://example.com/
        * About to connect() to example.com port 443 (#0)
        *   Trying 1.1.1.1... connected
        * Connected to example.com (1.1.1.1) port 443 (#0)
        * successfully set certificate verify locations:
        *   CAfile: ./certs/verisign-intermediate-ca.crt
            CApath: /etc/ssl/certs
        * SSLv3, TLS handshake, Client hello (1):
        * SSLv3, TLS handshake, Server hello (2):
        * SSLv3, TLS handshake, CERT (11):
        * SSLv3, TLS handshake, Server finished (14):
        * SSLv3, TLS handshake, Client key exchange (16):
        * SSLv3, TLS change cipher, Client hello (1):
        * SSLv3, TLS handshake, Finished (20):
        * SSLv3, TLS change cipher, Client hello (1):
        * SSLv3, TLS handshake, Finished (20):
        * SSL connection using RC4-SHA
        * Server certificate:
        *   subject: C=US; ST=State; L=City; O=Company; OU=ou1; CN=example.com
        *   start date: 2011-04-17 00:00:00 GMT
        *   expire date: 2012-04-15 23:59:59 GMT
        *   common name: example.com (matched)
        *   issuer: C=US; O=VeriSign, Inc.; OU=VeriSign Trust Network; OU=Terms of use at https://www.verisign.com/rpa (c)10; CN=VeriSign Class 3 Secure Server CA - G3
        *   SSL certificate verify ok.
        > HEAD / HTTP/1.1
        > User-Agent: curl/7.19.7 (x86_64-pc-linux-gnu) libcurl/7.19.7 OpenSSL/0.9.8k zlib/1.2.3.3 libidn/1.15
        > Host: example.com
        > Accept: */*
        >
        < HTTP/1.1 404 Not Found
        HTTP/1.1 404 Not Found
        < Cache-Control: must-revalidate,no-cache,no-store
        Cache-Control: must-revalidate,no-cache,no-store
        < Content-Type: text/html;charset=ISO-8859-1
        Content-Type: text/html;charset=ISO-8859-1
        < Content-Length: 1267
        Content-Length: 1267
        < Server: Jetty(7.2.2.v20101205)
        Server: Jetty(7.2.2.v20101205)
        <
        * Connection #0 to host example.com left intact
        * Closing connection #0
        * SSLv3, TLS alert, Client hello (1):

    In addition, if I try hitting one of the sites using a self-signed cert and the --capath option, it also works. (Let me know if I should post an example of that.) This implies that curl is finding the cert directory, and that it is properly hashed. Finally, I am able to verify the SSL cert with openssl, using its -CApath option.

        $ openssl s_client -CApath ./certs/ -connect example.com:443
        CONNECTED(00000003)
        depth=3 /C=US/O=VeriSign, Inc./OU=Class 3 Public Primary Certification Authority
        verify return:1
        depth=2 /C=US/O=VeriSign, Inc./OU=VeriSign Trust Network/OU=(c) 2006 VeriSign, Inc. - For authorized use only/CN=VeriSign Class 3 Public Primary Certification Authority - G5
        verify return:1
        depth=1 /C=US/O=VeriSign, Inc./OU=VeriSign Trust Network/OU=Terms of use at https://www.verisign.com/rpa (c)10/CN=VeriSign Class 3 Secure Server CA - G3
        verify return:1
        depth=0 /C=US/ST=State/L=City/O=Company/OU=ou1/CN=example.com
        verify return:1
        ---
        Certificate chain
         0 s:/C=US/ST=State/L=City/O=Company/OU=ou1/CN=example.com
           i:/C=US/O=VeriSign, Inc./OU=VeriSign Trust Network/OU=Terms of use at https://www.verisign.com/rpa (c)10/CN=VeriSign Class 3 Secure Server CA - G3
        ---
        Server certificate
        -----BEGIN CERTIFICATE-----
        <cert removed>
        -----END CERTIFICATE-----
        subject=/C=US/ST=State/L=City/O=Company/OU=ou1/CN=example.com
        issuer=/C=US/O=VeriSign, Inc./OU=VeriSign Trust Network/OU=Terms of use at https://www.verisign.com/rpa (c)10/CN=VeriSign Class 3 Secure Server CA - G3
        ---
        No client certificate CA names sent
        ---
        SSL handshake has read 1563 bytes and written 435 bytes
        ---
        New, TLSv1/SSLv3, Cipher is RC4-SHA
        Server public key is 2048 bit
        Secure Renegotiation IS NOT supported
        Compression: NONE
        Expansion: NONE
        SSL-Session:
            Protocol  : TLSv1
            Cipher    : RC4-SHA
            Session-ID: D65C4C6D52E183BF1E7543DA6D6A74EDD7D6E98EB7BD4D48450885188B127717
            Session-ID-ctx:
            Master-Key: 253D4A3477FDED5FD1353D16C1F65CFCBFD78276B6DA1A078F19A51E9F79F7DAB4C7C98E5B8F308FC89C777519C887E2
            Key-Arg   : None
            Start Time: 1303258052
            Timeout   : 300 (sec)
            Verify return code: 0 (ok)
        ---
        QUIT
        DONE

    How can I get curl to verify this cert using the --capath option?

    Read the article

  • Dynamic Type to do away with Reflection

    - by Rick Strahl
    The dynamic type in C# 4.0 is a welcome addition to the language. One thing I've been doing a lot with it is to remove explicit Reflection code that's often necessary when you 'dynamically' need to walk an object hierarchy. In the past I've had a number of ReflectionUtils that used string-based expressions to walk an object hierarchy. With the introduction of dynamic, much of the ReflectionUtils code can be removed for cleaner code that runs considerably faster to boot.

    The old way - Reflection

    Here's a really contrived example, but assume for a second you'd want to dynamically retrieve a Page.Request.Url.AbsolutePath based on a Page instance in an ASP.NET Web Page request. The strongly typed version looks like this:

        string path = Page.Request.Url.AbsolutePath;

    Now assume for a second that Page wasn't available as a strongly typed instance and all you had was an object reference to start with, and you couldn't cast it (right, I said this was contrived :-)). If you're using raw Reflection code to retrieve this you'd end up writing 3 sets of Reflection calls using GetValue(). Here's some internal code I use to retrieve property values as part of ReflectionUtils:

        /// <summary>
        /// Retrieve a property value from an object dynamically. This is a simple version
        /// that uses Reflection calls directly. It doesn't support indexers.
        /// </summary>
        /// <param name="instance">Object to make the call on</param>
        /// <param name="property">Property to retrieve</param>
        /// <returns>Object - cast to proper type</returns>
        public static object GetProperty(object instance, string property)
        {
            return instance.GetType()
                           .GetProperty(property, ReflectionUtils.MemberAccess)
                           .GetValue(instance, null);
        }

    If you want more control over properties and want to support both fields and properties as well as array indexers, a little more work is required:

        /// <summary>
        /// Parses Properties and Fields including Array and Collection references.
        /// Used internally for the 'Ex' Reflection methods.
        /// </summary>
        /// <param name="Parent"></param>
        /// <param name="Property"></param>
        /// <returns></returns>
        private static object GetPropertyInternal(object Parent, string Property)
        {
            if (Property == "this" || Property == "me")
                return Parent;

            object result = null;
            string pureProperty = Property;
            string indexes = null;
            bool isArrayOrCollection = false;

            // Deal with Array Property
            if (Property.IndexOf("[") > -1)
            {
                pureProperty = Property.Substring(0, Property.IndexOf("["));
                indexes = Property.Substring(Property.IndexOf("["));
                isArrayOrCollection = true;
            }

            // Get the member
            MemberInfo member = Parent.GetType().GetMember(pureProperty, ReflectionUtils.MemberAccess)[0];
            if (member.MemberType == MemberTypes.Property)
                result = ((PropertyInfo)member).GetValue(Parent, null);
            else
                result = ((FieldInfo)member).GetValue(Parent);

            if (isArrayOrCollection)
            {
                indexes = indexes.Replace("[", string.Empty).Replace("]", string.Empty);

                if (result is Array)
                {
                    int Index = -1;
                    int.TryParse(indexes, out Index);
                    result = CallMethod(result, "GetValue", Index);
                }
                else if (result is ICollection)
                {
                    if (indexes.StartsWith("\""))
                    {
                        // String Index
                        indexes = indexes.Trim('\"');
                        result = CallMethod(result, "get_Item", indexes);
                    }
                    else
                    {
                        // assume numeric index
                        int index = -1;
                        int.TryParse(indexes, out index);
                        result = CallMethod(result, "get_Item", index);
                    }
                }
            }

            return result;
        }

        /// <summary>
        /// Returns a property or field value using a base object and sub members including . syntax.
        /// For example, you can access: oCustomer.oData.Company with (this,"oCustomer.oData.Company")
        /// This method also supports indexers in the Property value such as:
        /// Customer.DataSet.Tables["Customers"].Rows[0]
        /// </summary>
        /// <param name="Parent">Parent object to 'start' parsing from. Typically this will be the Page.</param>
        /// <param name="Property">The property to retrieve. Example: 'Customer.Entity.Company'</param>
        /// <returns></returns>
        public static object GetPropertyEx(object Parent, string Property)
        {
            Type type = Parent.GetType();

            int at = Property.IndexOf(".");
            if (at < 0)
            {
                // Complex parse of the property
                return GetPropertyInternal(Parent, Property);
            }

            // Walk the . syntax - split into current object (Main) and further parsed objects (Subs)
            string main = Property.Substring(0, at);
            string subs = Property.Substring(at + 1);

            // Retrieve the next . section of the property
            object sub = GetPropertyInternal(Parent, main);

            // Now go parse the left over sections
            return GetPropertyEx(sub, subs);
        }

    As you can see there's a fair bit of code involved in retrieving a property or field value reliably, especially if you want to support array indexer syntax. This method is then used by a variety of routines to retrieve individual properties, including one called GetPropertyEx() which can walk the dot syntax hierarchy easily. Anyway, with ReflectionUtils I can retrieve Page.Request.Url.AbsolutePath using code like this:

        string url = ReflectionUtils.GetPropertyEx(Page, "Request.Url.AbsolutePath") as string;

    This works fine, but is bulky to write and of course requires that I use my custom routines. It's also quite slow, as the code in GetPropertyEx does all sorts of string parsing to figure out which members to walk in the hierarchy.

    Enter dynamic - way easier!

    .NET 4.0's dynamic type makes the above really easy. The following code is all that it takes:

        object objPage = Page;    // force to object for contrivance :)
        dynamic page = objPage;   // convert to dynamic from untyped object
        string scriptUrl = page.Request.Url.AbsolutePath;

    The dynamic type assignment in the first two lines turns the strongly typed Page object into a dynamic. The first assignment is just part of the contrived example, to force the strongly typed Page reference into an untyped value to demonstrate the dynamic member access. The next line then just creates the dynamic type from the Page reference, which allows you to access any public properties and methods easily. It also lets you access any child properties as dynamic types, so when you look at IntelliSense while typing Request. you'll see each member come back as a dynamic. In other words, any dynamic value access on an object returns another dynamic object, which is what allows the walking of the hierarchy chain. Note also that the result value doesn't have to be explicitly cast as string in the code above - the compiler is perfectly happy without the cast in this case, inferring the target type based on the type being assigned to. The dynamic conversion automatically handles the cast when making the final assignment, which is nice, making for natural syntax that looks *exactly* like the fully typed syntax, but is completely dynamic.
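    To see the two approaches side by side outside of ASP.NET, here is a minimal, self-contained sketch of the same idea. The FakePage/FakeRequest/FakeUrl types and the property path are invented stand-ins for Page.Request.Url.AbsolutePath, not code from this post:

        using System;

        // Hypothetical stand-in types mimicking the Page.Request.Url shape.
        public class FakeUrl     { public string AbsolutePath { get; set; } }
        public class FakeRequest { public FakeUrl Url { get; set; } }
        public class FakePage    { public FakeRequest Request { get; set; } }

        class WalkDemo
        {
            static void Main()
            {
                object untyped = new FakePage
                {
                    Request = new FakeRequest { Url = new FakeUrl { AbsolutePath = "/default.aspx" } }
                };

                // Reflection-style walk: one GetProperty()/GetValue() pair per hop.
                object current = untyped;
                foreach (string name in "Request.Url.AbsolutePath".Split('.'))
                {
                    current = current.GetType().GetProperty(name).GetValue(current, null);
                }
                Console.WriteLine(current);   // /default.aspx

                // The dynamic equivalent: each member access just returns another dynamic.
                dynamic page = untyped;
                string path = page.Request.Url.AbsolutePath;
                Console.WriteLine(path);      // /default.aspx
            }
        }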
    Note that you can also use indexers in the same natural syntax, so the following also works on the dynamic page instance:

        string scriptUrl = page.Request.ServerVariables["SCRIPT_NAME"];

    The dynamic type is going to make a lot of Reflection code go away, as it's simply so much nicer to be able to use natural syntax to write out code that previously required nasty Reflection syntax. Another interesting thing about the dynamic type is that it actually works considerably faster than Reflection. Check out the following methods that check performance:

        void Reflection()
        {
            Stopwatch stop = new Stopwatch();
            stop.Start();

            for (int i = 0; i < reps; i++)
            {
                // string url = ReflectionUtils.GetProperty(Page,"Title") as string; // "Request.Url.AbsolutePath") as string;
                string url = Page.GetType()
                                 .GetProperty("Title", ReflectionUtils.MemberAccess)
                                 .GetValue(Page, null) as string;
            }

            stop.Stop();
            Response.Write("Reflection: " + stop.ElapsedMilliseconds.ToString());
        }

        void Dynamic()
        {
            Stopwatch stop = new Stopwatch();
            stop.Start();

            dynamic page = Page;
            for (int i = 0; i < reps; i++)
            {
                string url = page.Title; // Request.Url.AbsolutePath;
            }

            stop.Stop();
            Response.Write("Dynamic: " + stop.ElapsedMilliseconds.ToString());
        }

    The dynamic code runs in 4-5 milliseconds, while the Reflection code runs around 200+ milliseconds! There's a bit of overhead in the first dynamic object call, but subsequent calls are blazing fast and performance is actually much better than manual Reflection. Dynamic is definitely a huge win-win situation when you need dynamic access to objects at runtime.

    © Rick Strahl, West Wind Technologies, 2005-2010
    Posted in .NET  CSharp

    Read the article

  • WiX 3 Tutorial: Custom EULA License and MSI localization

    - by Mladen Prajdic
    In this part of the ongoing WiX tutorial series we'll take a look at how to localize your MSI into different languages. We're still the mighty SuperForm: a program that takes care of all your label color needs. :)

    Localizing the MSI

    With WiX 3.0, localizing an MSI is a pretty simple and straightforward process. First let's look at the WiX project Properties->Build. There you can see the "Cultures to build" textbox. Put the specific cultures to build into the textbox, or leave it empty to build all of them. Cultures have to be in the correct culture format, like en-US, en-GB or de-DE.

    Next we have to tell WiX which cultures we actually have in our project. Take a look at the first post in the series about Solution/Project structure and look at the Lang directory in the project structure picture. There we have de-de and en-us subfolders, each with its own localized stuff. In the subfolders, pay attention to the WXL files Loc_de-de.wxl and Loc_en-us.wxl. Each one has a <String Id="LANG"> under the WixLocalization root node. By including the string with id LANG we tell WiX we want that culture built. For English we have <String Id="LANG">1033</String>, for German <String Id="LANG">1031</String> in Loc_de-de.wxl, and for French we'd have to create another file, Loc_fr-FR.wxl, and put in <String Id="LANG">1036</String>.

    WXL files are localization files. Any string we want to localize we have to put in there. To reference it we use the loc keyword like this: !(loc.IdOfTheVariable) => !(loc.MustCloseSuperForm)

    This is our Loc_en-us.wxl. Note that the German wxl has an identical structure, but the values are in German.

        <?xml version="1.0" encoding="utf-8"?>
        <WixLocalization Culture="en-us" xmlns="http://schemas.microsoft.com/wix/2006/localization" Codepage="1252">
          <String Id="LANG">1033</String>
          <String Id="ProductName">SuperForm</String>
          <String Id="LicenseRtf" Overridable="yes">\Lang\en-us\EULA_en-us.rtf</String>
          <String Id="ManufacturerName">My Company Name</String>
          <String Id="AppNotSupported">This application is not supported on your current OS. Minimal OS supported is Windows XP SP2</String>
          <String Id="DotNetFrameworkNeeded">.NET Framework 3.5 is required. Please install the .NET Framework then run this installer again.</String>
          <String Id="MustCloseSuperForm">Must close SuperForm!</String>
          <String Id="SuperFormNewerVersionInstalled">A newer version of !(loc.ProductName) is already installed.</String>
          <String Id="ProductKeyCheckDialog_Title">!(loc.ProductName) setup</String>
          <String Id="ProductKeyCheckDialogControls_Title">!(loc.ProductName) Product check</String>
          <String Id="ProductKeyCheckDialogControls_Description">Please enter the following information to perform the licence check.</String>
          <String Id="ProductKeyCheckDialogControls_FullName">Full Name:</String>
          <String Id="ProductKeyCheckDialogControls_Organization">Organization:</String>
          <String Id="ProductKeyCheckDialogControls_ProductKey">Product Key:</String>
          <String Id="ProductKeyCheckDialogControls_InvalidProductKey">The product key you entered is invalid. Please call user support.</String>
        </WixLocalization>

    As you can see from the file, we can use localization variables in other variables, as we do for the SuperFormNewerVersionInstalled string. The ProductKeyCheckDialog* strings are there to localize a custom dialog for the product key check, which we'll look at in the next post.

    Built-in dialog text localization

    Under the de-de folder there's also the WixUI_de-de.wxl file. This file contains German translations of all the texts in the WiX built-in dialogs. It can be downloaded from the WiX 3.0.5419.0 SourceForge site. Download wix3-sources.zip and go to \src\ext\UIExtension\wixlib. There you'll find all the WiX texts already translated into 12 languages.

    Localizing the custom EULA license

    Here it gets ugly. We can override the default EULA license easily by overriding the WixUILicenseRtf WiX variable like this:

        <WixVariable Id="WixUILicenseRtf" Value="License.rtf" />

    where License.rtf is the name of your custom EULA license file. The downside of this method is that you can only have one license file, which means no localization for it. That's why we need to make a workaround. The license is checked on a dialog named LicenseAgreementDialog. What we have to do is overwrite that dialog and insert the functionality for localization. This is the code for LicenseAgreementDialogOverwritten.wxs, an overwritten LicenseAgreementDialog that supports localization. LicenseAcceptedOverwritten replaces the LicenseAccepted built-in variable.

        <?xml version="1.0" encoding="UTF-8" ?>
        <Wix xmlns="http://schemas.microsoft.com/wix/2006/wi">
          <Fragment>
            <UI>
              <Dialog Id="LicenseAgreementDialogOverwritten" Width="370" Height="270" Title="!(loc.LicenseAgreementDlg_Title)">
                <Control Id="LicenseAcceptedOverwrittenCheckBox" Type="CheckBox" X="20" Y="207" Width="330" Height="18"
                         CheckBoxValue="1" Property="LicenseAcceptedOverwritten" Text="!(loc.LicenseAgreementDlgLicenseAcceptedCheckBox)" />
                <Control Id="Back" Type="PushButton" X="180" Y="243" Width="56" Height="17" Text="!(loc.WixUIBack)" />
                <Control Id="Next" Type="PushButton" X="236" Y="243" Width="56" Height="17" Default="yes" Text="!(loc.WixUINext)">
                  <Publish Event="SpawnWaitDialog" Value="WaitForCostingDlg">CostingComplete = 1</Publish>
                  <Condition Action="disable">
                    <![CDATA[ LicenseAcceptedOverwritten <> "1" ]]>
                  </Condition>
                  <Condition Action="enable">LicenseAcceptedOverwritten = "1"</Condition>
                </Control>
                <Control Id="Cancel" Type="PushButton" X="304" Y="243" Width="56" Height="17" Cancel="yes" Text="!(loc.WixUICancel)">
                  <Publish Event="SpawnDialog" Value="CancelDlg">1</Publish>
                </Control>
                <Control Id="BannerBitmap" Type="Bitmap" X="0" Y="0" Width="370" Height="44" TabSkip="no" Text="!(loc.LicenseAgreementDlgBannerBitmap)" />
                <Control Id="LicenseText" Type="ScrollableText" X="20" Y="60" Width="330" Height="140" Sunken="yes" TabSkip="no">
                  <!-- This is the original line -->
                  <!--<Text SourceFile="!(wix.WixUILicenseRtf=$(var.LicenseRtf))" />-->
                  <!-- To enable EULA localization we change it to this -->
                  <Text SourceFile="$(var.ProjectDir)\!(loc.LicenseRtf)" />
                  <!-- In each of the localization files (wxl) put a line like this:
                       <String Id="LicenseRtf" Overridable="yes">\Lang\en-us\EULA_en-us.rtf</String> -->
                </Control>
                <Control Id="Print" Type="PushButton" X="112" Y="243" Width="56" Height="17" Text="!(loc.WixUIPrint)">
                  <Publish Event="DoAction" Value="WixUIPrintEula">1</Publish>
                </Control>
                <Control Id="BannerLine" Type="Line" X="0" Y="44" Width="370" Height="0" />
                <Control Id="BottomLine" Type="Line" X="0" Y="234" Width="370" Height="0" />
                <Control Id="Description" Type="Text" X="25" Y="23" Width="340" Height="15" Transparent="yes" NoPrefix="yes" Text="!(loc.LicenseAgreementDlgDescription)" />
                <Control Id="Title" Type="Text" X="15" Y="6" Width="200" Height="15" Transparent="yes" NoPrefix="yes" Text="!(loc.LicenseAgreementDlgTitle)" />
              </Dialog>
            </UI>
          </Fragment>
        </Wix>

    Look at the Control with Id "LicenseText" and read the comments. We've changed the original license text source to "$(var.ProjectDir)\!(loc.LicenseRtf)". var.ProjectDir is the directory of the project file. The !(loc.LicenseRtf) is where the magic happens. Scroll up and take a look at the wxl localization file example. We have LicenseRtf declared there, and it's been made overridable so developers can change it if they want. The value of LicenseRtf is the path to our localized EULA, relative to the WiX project directory. With a little hacking we've achieved a fully localizable installer package.

    The final step is to insert the extended LicenseAgreementDialogOverwritten license dialog into the installer GUI chain. This is how it's done, under the <UI> node of course:

        <UI>
          <!-- code to be discussed in later posts -->

          <!-- BEGIN UI LOGIC FOR CLEAN INSTALLER -->
          <Publish Dialog="WelcomeDlg" Control="Next" Event="NewDialog" Value="LicenseAgreementDialogOverwritten">1</Publish>
          <Publish Dialog="LicenseAgreementDialogOverwritten" Control="Back" Event="NewDialog" Value="WelcomeDlg">1</Publish>
          <Publish Dialog="LicenseAgreementDialogOverwritten" Control="Next" Event="NewDialog" Value="ProductKeyCheckDialog">LicenseAcceptedOverwritten = "1" AND NOT OLDER_VERSION_FOUND</Publish>
          <Publish Dialog="InstallDirDlg" Control="Back" Event="NewDialog" Value="ProductKeyCheckDialog">1</Publish>
          <!-- END UI LOGIC FOR CLEAN INSTALLER -->

          <!-- code to be discussed in later posts -->
        </UI>

    For a thing that should be simple for the end developer to do, localization can be a bit advanced for the novice WiXer. I hope this post makes the journey easier, and that future versions of WiX improve this process.

    WiX 3 tutorial by Mladen Prajdic - navigation:
    WiX 3 Tutorial: Solution/Project structure and Dev resources
    WiX 3 Tutorial: Understanding main wxs and wxi file
    WiX 3 Tutorial: Generating file/directory fragments with Heat.exe
    WiX 3 Tutorial: Custom EULA License and MSI localization
    WiX 3 Tutorial: Product Key Check custom action
    WiX 3 Tutorial: Building an updater
    WiX 3 Tutorial: Icons and installer pictures
    WiX 3 Tutorial: Creating a Bootstrapper

    Read the article

  • CodePlex Daily Summary for Wednesday, February 24, 2010

    New Projects

    ADO.Net DataSets to ExtJs.data.Store: A JavaScript (and C#) based project to reduce the amount of client-side code necessary to consume ADO.Net / ASP.Net web services when using ExtJS.
    AMP.Net Wrapper: AMP is a platform to build on-line marketplaces (http://www.poweredbyamp.com). AMP.Net provided Object-Like interaction with AMP's restful service...
    ArkSwitch: ArkSwitch is an easy to use, finger-friendly task manager for Windows Mobile 6.5.3 (with a WM6.5 compatibility mode). It is developed mainly in C#,...
    Biffen: Cinema-booking project in Computer Science at University College Nordjylland, Denmark.
    Braintree Client Library: Client library for integrating with the Braintree Gateway.
    Business Framework: A framework which helps building business applications. It provides business rules, validation rules and a text-based language for writing rules. I...
    Camp Araminta: This project will be used to coordinate development efforts on the Camp Araminta website.
    ChoServiceHost: Simple and easy way to create and host Windows Service Applications in .NET 3.5/Visual Studio 2008
    Delta College Game Development Project: Project site for cs 16 game development class
    DotNetNuke® Labs: DotNetNuke Labs is a collection of "research & development" type projects for the DotNetNuke platform.
    Generic web part for hosting Silverlight content on SharePoint sites (WSS,MOSS): This is a generic web part for hosting Silverlight content on WSS 30 and MOSS 2007 sites. The objective of this web part was to make it easy for us...
    GpTiming: GpTiming is a simple "lab" application related to race events, based on a Domain Model.
    HTML Forms in Windows Forms: As the names suggests this code library is designed to introduce HTML code (primarily form code) into Windows Forms. It was created because standar...
    imgur uploader - .net open source uploader for image sharing site imgur: Imgur uploader strives to be an easy to use uploader for images you would like to share with friends and family. It is written in c#.
    kuuy static system: kuuy static system is a full static publish website system!
    LaTeX Grapher: The goal of this project is to make a tool that facilitates making high quality two dimensional vector graphic function plots with a minimal amount...
    LightREST: A .NET library to consume REST-based HTTP services.
    Machiavelli: Machiavelli is Stackoverflow inspired project that I am working on following Andrew Siemer's article on DotNetSlackers.
    Mover: Mover makes it easier for developers to create programmatic animations in Silverlight. It provides an expressive API to the platform's underlying S...
    MVC Presenter: ASP.NET MVC 2で作るプレゼンビューアー
    nHibernate Attribute mapping: How to use Attibute mapping with a ManyToMany Relationship with nHibernate
    NIPO Data Processing Component Framework: NIPO is a general purpose component framework for data processing applications (that follow the IPO-principle). Its plugin-based architecture makes...
    PowerShell Remote File Explorer: This project intends to develop a Windows forms based file explorer to browse/transfer files over PowerShell 2.0 remoting channel. The file transfe...
    Process Flow Tracking of Biomass Distribution Project (University of Mumbai): At Larsen & Toubro Infotech India Ltd., my team worked on a SCM (Supply Chain Management) based project titled 'Process Flow Tracking of Biomass Di...
    VS2010 Rc1 Fix: Illustrates a fix for working with the ASAP.NET Wizard control with VS2010 RC1
    Yicker: a microblog program devolep by c#.

    New Releases

    ADO.Net DataSets to ExtJs.data.Store: Ext.net: This is the first version of Ext.net. This version contains a single class, Ext.net.Store which extends the Ext.data.Store class to consume ADO.Ne...
    AMP.Net Wrapper: AMP.Net v1.0: Provides abstraction for all the product search functionality offered by AMP.
    ArkSwitch: ArkSwitch legacy versions: Old versions - no need to download them
    ArkSwitch: ArkSwitch v1.1.0: ArkSwitch v1.1.0
    Braintree Client Library: Braintree 1.0.0: Braintree .NET client library 1.0.0
    Business Framework: BusinessFramework preview: Early preview bits. See Rules for a sample.
    Business Framework: Samples: Samples
    CC.Votd: CC.Votd 1.0.10.224: This is the initial release of CC.Votd. Marking as beta since I'm the only one who has used it up to this point.
    ChoServiceHost: ChoServiceHost.msi: Easy way to develop Windows Service applications in .NET 3.5/VS.NET 2008. (Installer)
    ChoServiceHost: ChoServiceHost-Src.zip: Easy way to develop Windows Service applications in .NET 3.5/VS.NET 2008. (Source Files)
    CHS Extranet: Beta 2.4: Beta 2.4 Release: Change Log: Added HTML preview options for XLS, XLSX, DOCX File Changes: ~/MyComputer.aspx ~/mycomputer.css ~/basestyle.css...
    Composure: AvalonDock-55751-VS2010.NET4: This is a "convenience build" of AvalonDock (drop 55751) for Visual Studio 2010 and .NET 4.0. Nothing has been altered in the source code (which ...
    Data Access Component: Version 2.6: Add LINQ support.
    Desktop Google Reader: 1.3 Beta 1: New features: Read it Later included (see http://readitlaterlist.com/) Liking added (working: see number of liking users, see if liking yourself,...
    Explorer Plus: Explorer Plus v0.3: Amazon Locales Added
    Free Silverlight & WPF Chart Control - Visifire: Visifire SL and WPF Charts 3.0.3 Released: Hi, Today we have released the final version of Visifire v3.0.3 which contains the following major features: * DataBinding. * IndicatorEn...
    Generic web part for hosting Silverlight content on SharePoint sites (WSS,MOSS): CTP: The objective of this release was to gather feedback from the wider community. I intend to pursue further development and make fixes wherever appro...
    HTML Forms in Windows Forms: HTMLForms 1.0: First Release.
    imgur uploader - .net open source uploader for image sharing site imgur: Release 2010-02-23-01: This is the first codeplex release! Let mayhem commence...
    Jeremi Stadler: Stick Tops 2.5: Sticktops is a very light program that makes it easy to paste stuff on small notes on the screen. All notes you have is saved on a server so you ca...
    kuuy static system: kss_v1.0beta sql: kss_v1.0beta sql scripts source
    MDownloader: MDownloader-0.15.2.55998: Fixed detecting uploading.com dead links; Added hiding rss entries without files;
    Mover: MoverLib for Silverlight 3: A first version of MoverLib for Silverlight 3.
    nHibernate Attribute mapping: 1.0: Source Code
    nHibernate Attribute mapping: Download 1: Zip file
    NodeXL: Network Overview, Discovery and Exploration for Excel: NodeXL Class Libraries, version 1.0.1.113: The NodeXL class libraries can be used to display network graphs in .NET applications. To include a NodeXL network graph in a WPF desktop or Windo...
    NodeXL: Network Overview, Discovery and Exploration for Excel: NodeXL Excel 2007 Template, version 1.0.1.113: The NodeXL Excel 2007 template displays a network graph using edge and vertex lists stored in an Excel 2007 workbook. What's New: This version inclu...
    OAuthLib: OAuthLib (1.6.0.0): Difference between previous version is as next. 7079 Make it possible to pass factory method of request in ObtainUnauthorizedRequestToken and Reque...
    patterns & practices SharePoint Guidance: SPG2010 Drop 5: SharePoint Guidance Drop Notes Microsoft patterns and practices...
    PowerShell Remote File Explorer: PSRemoteExplorer 0.1: This release is the initial release of PowerShell remote file explorer. This enables the basic functionality of a remote file explorer. This also p...
    Reusable Library: v1.0.3: A collection of reusable abstractions for enterprise application developer.
    SharePoint Outlook Connector: Version 1.0.2.4: Version 1.0.2.4 Minor bugs have been fixed.
    Silverlight Server File Manager: First production release: This release is in production. Release on change set 37268.
    SIMD Detector: 2nd Release: Released C/CLI assembly project for use in CSharp and VB. Tested in CSharp console application. A Windows Form application coming soon. Projects ma...
    Source Analysis Policy: Source Analysis Policy v1.1 SP1: This release contains the compiled, and signed binaries in an installation package. This package also registers the policy with Microsoft Visual St...
    SpecExpress : A Fluent Validation Framework: SpecExpress 1.1: Updates: Added Validation Contexts feature; Fixed bug with handling for Bool Types and Required; MessageStore now allows for overriding individual ...
    VCC: Latest build, v2.1.30223.0: Automatic drop of latest build
    VS2010 Rc1 Fix: RC1Fix01: This is a very simple project implementing a Microsoft Walkthrough at http://msdn.microsoft.com/en-us/library/wdb4eb30%28VS.100%29.aspx and the man...
    WPF AutoComplete TextBox Control: version 1.0: Initial release

    Most Popular Projects

    ASP.NET Ajax Library
    Managed Extensibility Framework
    Accelerators for Microsoft Dynamics CRM
    Windows 7 USB/DVD Download Tool
    DotNetZip Library
    MDownloader
    Virtual Router - Wifi Hot Spot for Windows 7 / 2008 R2
    MFCMAPI
    Droid Explorer
    Useful Sharepoint Designer Custom Workflow Activities

    Most Active Projects

    DinnerNow.net
    Rawr
    BlogEngine.NET
    InfoService
    NB_Store - Free DotNetNuke Ecommerce Catalog Module
    Rapid Entity Framework. (ORM). CTP 2
    SharpMap - Geospatial Application Framework for the CLR
    jQuery Library for SharePoint Web Services
    patterns & practices – Enterprise Library
    Xcoordination Application Space

    Read the article

  • Parallelism in .NET – Part 9, Configuration in PLINQ and TPL

    - by Reed
    Parallel LINQ and the Task Parallel Library contain many options for configuration. Although the default configuration options are often ideal, there are times when customizing the behavior is desirable. Both frameworks provide full configuration support.

    When working with Data Parallelism, there is one primary configuration option we often need to control - the number of threads we want the system to use when parallelizing our routine. By default, PLINQ and the TPL both use the ThreadPool to schedule tasks. Given the major improvements in the ThreadPool in CLR 4, this default behavior is often ideal. However, there are times that the default behavior is not appropriate. For example, if you are working on multiple threads simultaneously, and want to schedule parallel operations from within both threads, you might want to consider restricting each parallel operation to using a subset of the processing cores of the system. Not doing this might over-parallelize your routine, which leads to inefficiencies from having too many context switches.

    In the Task Parallel Library, configuration is handled via the ParallelOptions class. All of the methods of the Parallel class have an overload which accepts a ParallelOptions argument. We configure the Parallel class by setting the ParallelOptions.MaxDegreeOfParallelism property. For example, let's revisit one of the simple data parallel examples from Part 2:

        Parallel.For(0, pixelData.GetUpperBound(0), row =>
        {
            for (int col = 0; col < pixelData.GetUpperBound(1); ++col)
            {
                pixelData[row, col] = AdjustContrast(pixelData[row, col], minPixel, maxPixel);
            }
        });

    Here, we're looping through an image, and calling a method on each pixel in the image. If this was being done on a separate thread, and we knew another thread within our system was going to be doing a similar operation, we likely would want to restrict this to using half of the cores on the system. This could be accomplished easily by doing:

        var options = new ParallelOptions();
        options.MaxDegreeOfParallelism = Math.Max(Environment.ProcessorCount / 2, 1);

        Parallel.For(0, pixelData.GetUpperBound(0), options, row =>
        {
            for (int col = 0; col < pixelData.GetUpperBound(1); ++col)
            {
                pixelData[row, col] = AdjustContrast(pixelData[row, col], minPixel, maxPixel);
            }
        });

    Now, we're restricting this routine to using no more than half the cores in our system. Note that I included a check to prevent a single core system from supplying zero; without this check, we'd potentially cause an exception. I also did not hard code a specific value for the MaxDegreeOfParallelism property. One of our goals when parallelizing a routine is allowing it to scale on better hardware. Specifying a hard-coded value would contradict that goal.

    Parallel LINQ also supports configuration, and in fact, has quite a few more options for configuring the system. The main configuration option we most often need is the same as our TPL option: we need to supply the maximum number of processing threads. In PLINQ, this is done via a new extension method on ParallelQuery<T>: ParallelEnumerable.WithDegreeOfParallelism. Let's revisit our declarative data parallelism sample from Part 6:

        double min = collection.AsParallel().Min(item => item.PerformComputation());

    Here, we're performing a computation on each element in the collection, and saving the minimum value of this operation. If we wanted to restrict this to a limited number of threads, we would add our new extension method:

        int maxThreads = Math.Max(Environment.ProcessorCount / 2, 1);
        double min = collection
                     .AsParallel()
                     .WithDegreeOfParallelism(maxThreads)
                     .Min(item => item.PerformComputation());

    This automatically restricts the PLINQ query to half of the threads on the system.

    PLINQ provides some additional configuration options. By default, PLINQ will occasionally revert to processing a query sequentially rather than in parallel. This occurs because many queries, if parallelized, typically actually cause an overall slowdown compared to a serial processing equivalent. By analyzing the "shape" of the query, PLINQ often decides to run a query serially instead of in parallel. This can occur for (taken from MSDN):

    Queries that contain a Select, indexed Where, indexed SelectMany, or ElementAt clause after an ordering or filtering operator that has removed or rearranged original indices.
    Queries that contain a Take, TakeWhile, Skip, SkipWhile operator and where indices in the source sequence are not in the original order.
    Queries that contain Zip or SequenceEquals, unless one of the data sources has an originally ordered index and the other data source is indexable (i.e. an array or IList(T)).
    Queries that contain Concat, unless it is applied to indexable data sources.
    Queries that contain Reverse, unless applied to an indexable data source.

    If the specific query follows these rules, PLINQ will run the query on a single thread. However, none of these rules look at the specific work being done in the delegates, only at the "shape" of the query. There are cases where running in parallel may still be beneficial, even if the shape is one where it typically parallelizes poorly. In these cases, you can override the default behavior by using the WithExecutionMode extension method. This would be done like so:

        var reversed = collection
                       .AsParallel()
                       .WithExecutionMode(ParallelExecutionMode.ForceParallelism)
                       .Select(i => i.PerformComputation())
                       .Reverse();

    Here, the default behavior would be to not parallelize the query unless collection implemented IList<T>. We can force this to run in parallel by adding the WithExecutionMode extension method in the method chain.

    Finally, PLINQ has the ability to configure how results are returned. When a query is filtering or selecting an input collection, the results will need to be streamed back into a single IEnumerable<T> result. For example, the method above returns a new, reversed collection. In this case, the processing of the collection will be done in parallel, but the results need to be streamed back to the caller serially, so they can be enumerated on a single thread.

    This streaming introduces overhead. IEnumerable<T> isn't designed with thread safety in mind, so the system needs to handle merging the parallel processes back into a single stream, which introduces synchronization issues. There are two extremes of how this could be accomplished, but both extremes have disadvantages.

    The system could watch each thread, and whenever a thread produces a result, take that result and send it back to the caller. This would mean that the calling thread would have access to the data as soon as data is available, which is the benefit of this approach. However, it also means that every item is introducing synchronization overhead, since each item needs to be merged individually.

    On the other extreme, the system could wait until all of the results from all of the threads were ready, then push all of the results back to the calling thread in one shot. The advantage here is that the least amount of synchronization is added to the system, which means the query will, on a whole, run the fastest. However, the calling thread will have to wait for all elements to be processed, so this could introduce a long delay between when a parallel query begins and when results are returned.

    The default behavior in PLINQ is actually between these two extremes. By default, PLINQ maintains an internal buffer, and chooses an optimal buffer size to maintain. Query results are accumulated into the buffer, then returned in the IEnumerable<T> result in chunks. This provides reasonably fast access to the results, as well as good overall throughput, in most scenarios.

    However, if we know the nature of our algorithm, we may decide we would prefer one of the other extremes. This can be done by using the WithMergeOptions extension method. For example, if we know that our PerformComputation() routine is very slow, but also variable in runtime, we may want to retrieve results as they are available, with no buffering. This can be done by changing our above routine to:

        var reversed = collection
                       .AsParallel()
                       .WithExecutionMode(ParallelExecutionMode.ForceParallelism)
                       .WithMergeOptions(ParallelMergeOptions.NotBuffered)
                       .Select(i => i.PerformComputation())
                       .Reverse();

    On the other hand, if we are already on a background thread, and we want to allow the system to maximize its speed, we might want to allow the system to fully buffer the results:

        var reversed = collection
                       .AsParallel()
                       .WithExecutionMode(ParallelExecutionMode.ForceParallelism)
                       .WithMergeOptions(ParallelMergeOptions.FullyBuffered)
                       .Select(i => i.PerformComputation())
                       .Reverse();

    Notice, also, that you can specify multiple configuration options in a parallel query. By chaining these extension methods together, we generate a query that will always run in parallel, and will always complete before making the results available in our IEnumerable<T>.
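    To make the difference between the merge options visible, here is a small self-contained console sketch. SlowSquare is a hypothetical stand-in for an expensive PerformComputation() routine, and the exact timings are machine-dependent:

        using System;
        using System.Diagnostics;
        using System.Linq;
        using System.Threading;

        class MergeOptionsDemo
        {
            static int SlowSquare(int i)
            {
                Thread.Sleep(50);   // stand-in for an expensive computation
                return i * i;
            }

            static void Run(ParallelMergeOptions options)
            {
                var sw = Stopwatch.StartNew();
                var query = Enumerable.Range(0, 16)
                    .AsParallel()
                    .WithExecutionMode(ParallelExecutionMode.ForceParallelism)
                    .WithMergeOptions(options)
                    .Select(SlowSquare);

                foreach (var item in query)
                {
                    // With NotBuffered, the first item tends to arrive almost immediately;
                    // with FullyBuffered, nothing arrives until the whole query completes.
                    Console.WriteLine("{0,14}: got {1,4} at {2} ms", options, item, sw.ElapsedMilliseconds);
                }
            }

            static void Main()
            {
                Run(ParallelMergeOptions.NotBuffered);
                Run(ParallelMergeOptions.FullyBuffered);
            }
        }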

    Read the article

  • Exam 70-480 Study Material: Programming in HTML5 with JavaScript and CSS3

    - by Stacy Vicknair
    Here's a list of sources of information for the different elements that comprise the 70-480 exam:

    General Resources

    http://www.w3schools.com (As pointed out in David Pallmann's blog some of this content is unverified, but it is a decent source of information. For more about when it isn't decent, see http://www.w3fools.com)
    http://www.bloggedbychris.com/2012/09/19/microsoft-exam-70-480-study-guide/ (A guy who did a lot of what I did already; sadly I found this halfway through finishing my resources list. This list is expertly put together, so I would recommend checking it out.)
    http://davidpallmann.blogspot.com/2012/08/microsoft-certification-exam-70-480.html
    http://pluralsight.com/training/Courses (Yes, this isn't free, but if you look at the course listing there is an entire section on HTML5, CSS3 and JavaScript. You can always try the trial!)

    Some of the links I put below will overlap with the other resources above, but I tried to find explanations that looked beneficial to me on links outside those already mentioned.

    Test Breakdown

    Implement and Manipulate Document Structures and Objects (24%)

    Create the document structure.
    o This objective may include but is not limited to: structure the UI by using semantic markup, including for search engines and screen readers (Section, Article, Nav, Header, Footer, and Aside); create a layout container in HTML
    http://www.w3schools.com/html/html5_new_elements.asp

    Write code that interacts with UI controls.
    o This objective may include but is not limited to: programmatically add and modify HTML elements; implement media controls; implement HTML5 canvas and SVG graphics
    http://www.w3schools.com/html/html5_canvas.asp
    http://www.w3schools.com/html/html5_svg.asp

    Apply styling to HTML elements programmatically.
    o This objective may include but is not limited to: change the location of an element; apply a transform; show and hide elements

    Implement HTML5 APIs.
    o This objective may include but is not limited to: implement storage APIs, AppCache API, and Geolocation API
    http://www.w3schools.com/html/html5_geolocation.asp
    http://www.w3schools.com/html/html5_webstorage.asp
    http://www.w3schools.com/html/html5_app_cache.asp

    Establish the scope of objects and variables.
    o This objective may include but is not limited to: define the lifetime of variables; keep objects out of the global namespace; use the "this" keyword to reference an object that fired an event; scope variables locally and globally
    http://robertnyman.com/2008/10/09/explaining-javascript-scope-and-closures/
    http://www.quirksmode.org/js/this.html

    Create and implement objects and methods.
    o This objective may include but is not limited to: implement native objects; create custom objects and custom properties for native objects using prototypes and functions; inherit from an object; implement native methods and create custom methods
    http://www.javascriptkit.com/javatutors/object.shtml
    http://www.crockford.com/javascript/inheritance.html
    http://stackoverflow.com/questions/1635116/javascript-class-method-vs-class-prototype-method
    http://www.javascriptkit.com/javatutors/proto.shtml

    Implement Program Flow (25%)

    Implement program flow.
    o This objective may include but is not limited to: iterate across collections and array items; manage program decisions by using switch statements, if/then, and operators; evaluate expressions
    http://www.javascriptkit.com/jsref/looping.shtml
    http://www.javascriptkit.com/javatutors/varshort.shtml
    http://www.javascriptkit.com/javatutors/switch.shtml

    Raise and handle an event.
    o This objective may include but is not limited to: handle common events exposed by DOM (OnBlur, OnFocus, OnClick); declare and handle bubbled events; handle an event by using an anonymous function
    http://dev.w3.org/2006/webapi/DOM-Level-3-Events/html/DOM3-Events.html
    http://javascript.info/tutorial/bubbling-and-capturing

    Implement exception handling.
    o This objective may include but is not limited to: set and respond to error codes; throw an exception; request for null checks; implement try-catch-finally blocks
    http://www.javascriptkit.com/javatutors/trycatch.shtml

    Implement a callback.
    o This objective may include but is not limited to: receive messages from the HTML5 WebSocket API; use jQuery to make an AJAX call; wire up an event; implement a callback by using anonymous functions; handle the "this" pointer
    http://www.w3.org/TR/2011/WD-websockets-20110419/
    http://www.html5rocks.com/en/tutorials/websockets/basics/
    http://api.jquery.com/jQuery.ajax/

    Create a web worker process.
    o This objective may include but is not limited to: start and stop a web worker; pass data to a web worker; configure timeouts and intervals on the web worker; register an event listener for the web worker; limitations of a web worker
    https://developer.mozilla.org/en-US/docs/DOM/Using_web_workers
    http://www.html5rocks.com/en/tutorials/workers/basics/

    Access and Secure Data (26%)

    Validate user input by using HTML5 elements.
    o This objective may include but is not limited to: choose the appropriate controls based on requirements; implement HTML input types and content attributes (for example, required) to collect user input
    http://diveintohtml5.info/forms.html

    Validate user input by using JavaScript.
    o This objective may include but is not limited to: evaluate a regular expression to validate the input format; validate that you are getting the right kind of data type by using built-in functions; prevent code injection
    http://www.regular-expressions.info/javascript.html
    http://msdn.microsoft.com/en-us/library/66ztdbe6(v=vs.94).aspx
    https://developer.mozilla.org/en-US/docs/JavaScript/Reference/Operators/typeof
    http://blog.stackoverflow.com/2008/06/safe-html-and-xss/
    http://stackoverflow.com/questions/942011/how-to-prevent-javascript-injection-attacks-within-user-generated-html

    Consume data.
    o This objective may include but is not limited to: consume JSON and XML data; retrieve data by using web services; load data or get data from other sources by using XMLHTTPRequest
    http://www.erichynds.com/jquery/working-with-xml-jquery-and-javascript/
    http://www.webdevstuff.com/86/javascript-xmlhttprequest-object.html
    http://www.json.org/
    http://stackoverflow.com/questions/4935632/how-to-parse-json-in-javascript

    Serialize, deserialize, and transmit data.
    o This objective may include but is not limited to: binary data; text data (JSON, XML); implement the jQuery serialize method; Form.Submit; parse data; send data by using XMLHTTPRequest; sanitize input by using URI/form encoding
    http://api.jquery.com/serialize/
    http://www.javascript-coder.com/javascript-form/javascript-form-submit.phtml
    http://stackoverflow.com/questions/327685/is-there-a-way-to-read-binary-data-into-javascript
    https://developer.mozilla.org/en-US/docs/JavaScript/Reference/Global_Objects/encodeURI

    Use CSS3 in Applications (25%)

    Style HTML text properties.
    o This objective may include but is not limited to: apply styles to text appearance (color, bold, italics); apply styles to text font (WOFF and @font-face, size); apply styles to text alignment, spacing, and indentation; apply styles to text hyphenation; apply styles for a text drop shadow
    http://www.w3schools.com/css/css_text.asp
    http://www.w3schools.com/css/css_font.asp
    http://nicewebtype.com/notes/2009/10/30/how-to-use-css-font-face/
    http://webdesign.about.com/od/beginningcss/p/aacss5text.htm
    http://www.w3.org/TR/css3-text/
    http://www.css3.info/preview/box-shadow/

    Style HTML box properties.
    o This objective may include but is not limited to: apply styles to alter appearance attributes (size, border and rounding border corners, outline, padding, margin); apply styles to alter graphic effects (transparency, opacity, background image, gradients, shadow, clipping); apply styles to establish and change an element's position (static, relative, absolute, fixed)
    http://net.tutsplus.com/tutorials/html-css-techniques/10-css3-properties-you-need-to-be-familiar-with/
    http://www.w3schools.com/css/css_image_transparency.asp
    http://www.w3schools.com/cssref/pr_background-image.asp
    http://ie.microsoft.com/testdrive/graphics/cssgradientbackgroundmaker/default.html
    http://www.w3.org/TR/CSS21/visufx.html
    http://www.barelyfitz.com/screencast/html-training/css/positioning/
    http://davidwalsh.name/css-fixed-position

    Create a flexible content layout.
    o This objective may include but is not limited to: implement a layout using a flexible box model; implement a layout using multi-column; implement a layout using position floating and exclusions; implement a layout using grid alignment; implement a layout using regions, grouping, and nesting
    http://www.html5rocks.com/en/tutorials/flexbox/quick/
    http://www.css3.info/preview/multi-column-layout/
    http://msdn.microsoft.com/en-us/library/ie/hh673558(v=vs.85).aspx
    http://dev.w3.org/csswg/css3-grid-layout/
    http://dev.w3.org/csswg/css3-regions/

    Create an animated and adaptive UI.
    o This objective may include but is not limited to: animate objects by applying CSS transitions; apply 3-D and 2-D transformations; adjust UI based on media queries (device adaptations for output formats, displays, and representations); hide or disable controls
    http://www.bloggedbychris.com/2012/09/19/microsoft-exam-70-480-study-guide/

    Find elements by using CSS selectors and jQuery.
    o This objective may include but is not limited to: choose the correct selector to reference an element; define element, style, and attribute selectors; find elements by using pseudo-elements and pseudo-classes (for example, :before, :first-line, :first-letter, :target, :lang, :checked, :first-child)
    http://www.bloggedbychris.com/2012/09/19/microsoft-exam-70-480-study-guide/

    Structure a CSS file by using CSS selectors.
    o This objective may include but is not limited to: reference elements correctly; implement inheritance; override inheritance by using !important; style an element based on pseudo-elements and pseudo-classes (for example, :before, :first-line, :first-letter, :target, :lang, :checked, :first-child)
    http://www.bloggedbychris.com/2012/09/19/microsoft-exam-70-480-study-guide/

    Technorati Tags: 70-480, CSS3, HTML5, HTML, CSS, JavaScript, Certification

    Read the article

  • Using Razor together with ASP.NET Web API

    - by Fredrik N
    On the blog post "If Then, If Then, If Then, MVC" I found the following code example:

        [HttpGet]
        public ActionResult List()
        {
            var list = new[] { "John", "Pete", "Ben" };

            if (Request.AcceptTypes.Contains("application/json"))
            {
                return Json(list, JsonRequestBehavior.AllowGet);
            }

            if (Request.IsAjaxRequest())
            {
                return PartialView("_List", list);
            }

            return View(list);
        }

    The code is an ASP.NET MVC controller action that reuses the same "business" code but returns JSON if the request requires JSON, a partial view when the request is an AJAX request, or a normal ASP.NET MVC View.

    The above code may have several reasons to be changed, and it also does several things; the code is not closed for modification. To extend the code with a new way of presenting the model, the code needs to be modified. So I started to think about how the above code could be rewritten so it would follow the Single Responsibility and open/closed principles. I came up with the following result, with the use of the ASP.NET Web API:

        public String[] Get()
        {
            return new[] { "John", "Pete", "Ben" };
        }

    It just returns the model, nothing more. The code will do one thing and it will do it well. But it will not solve the problem when it comes to returning views. If we use the ASP.NET Web API we can get the result as JSON or XML, but not as a partial view or as an ASP.NET MVC view. Wouldn't it be nice if we could do the following against the Get() method?

        Accept: application/json  ->  JSON will be returned (already part of the Web API)
        Accept: text/html         ->  the model is returned as HTML by using a View

    The best thing: it's possible!

    By using the RazorEngine I created a custom MediaTypeFormatter (RazorFormatter, code at the end of this blog post) and associated it with the media type "text/html". I decided to use convention over configuration to decide which Razor view should be used to render the model.
To register the formatter I added the following code to Global.asax:

        GlobalConfiguration.Configuration.Formatters.Add(new RazorFormatter());

    Here is an example of an ApiController that simply returns a model:

        using System.Web.Http;

        namespace WebApiRazor.Controllers
        {
            public class CustomersController : ApiController
            {
                // GET api/customers
                public Customer Get()
                {
                    return new Customer { Name = "John Doe", Country = "Sweden" };
                }
            }

            public class Customer
            {
                public string Name { get; set; }
                public string Country { get; set; }
            }
        }

    Because I decided to use convention over configuration, I only need to add a view with the same name as the model, Customer.cshtml. Here is the View:

        <!DOCTYPE html>
        <html>
        <head>
            <script src="http://ajax.aspnetcdn.com/ajax/jquery/jquery-1.5.1.min.js"
                    type="text/javascript"></script>
        </head>
        <body>
            <div id="body">
                <section>
                    <div>
                        <hgroup>
                            <h1>Welcome '@Model.Name' to ASP.NET Web API Razor Formatter!</h1>
                        </hgroup>
                    </div>
                    <p>
                        Using the same URL "api/customers" but using AJAX:
                        <button>Press to show content!</button>
                    </p>
                </section>
            </div>
            <script type="text/javascript">
                $("button").click(function () {
                    $.ajax({
                        url: '/api/customers',
                        type: "GET",
                        dataType: "json", // makes jQuery send Accept: application/json
                        success: function (data, status, xhr) { alert(data.Name); },
                        error: function (xhr, status, error) { alert(error); }
                    });
                });
            </script>
        </body>
        </html>

    Now when I open a browser and enter the URL http://localhost/api/customers, the above View is displayed and renders the model the ApiController returns. If I use AJAX against the same ApiController and ask for JSON (the dataType option above makes jQuery send an Accept: application/json header), the ApiController returns the model as JSON instead.
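    To see the content negotiation from outside the browser, one request per Accept header is enough. A minimal console sketch (the host URL is an assumption):

        using System;
        using System.Net.Http;
        using System.Net.Http.Headers;

        class ConnegDemo
        {
            static void Main()
            {
                using (var client = new HttpClient())
                {
                    // Accept: application/json -> handled by the built-in JSON formatter.
                    client.DefaultRequestHeaders.Accept.Add(
                        new MediaTypeWithQualityHeaderValue("application/json"));
                    Console.WriteLine(
                        client.GetStringAsync("http://localhost/api/customers").Result);

                    // Accept: text/html -> handled by the custom RazorFormatter.
                    client.DefaultRequestHeaders.Accept.Clear();
                    client.DefaultRequestHeaders.Accept.Add(
                        new MediaTypeWithQualityHeaderValue("text/html"));
                    Console.WriteLine(
                        client.GetStringAsync("http://localhost/api/customers").Result);
                }
            }
        }

    The same URL returns raw JSON for the first call and the rendered Customer.cshtml markup for the second; only the Accept header differs.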
Here is a part of a really early prototype of the Razor formatter (the code is far from perfect, use it only for testing):

        using System;
        using System.Net.Http.Formatting;

        namespace WebApiRazor.Models
        {
            using System.IO;
            using System.Net;
            using System.Net.Http.Headers;
            using System.Threading.Tasks;

            using RazorEngine;

            public class RazorFormatter : MediaTypeFormatter
            {
                public RazorFormatter()
                {
                    SupportedMediaTypes.Add(new MediaTypeHeaderValue("text/html"));
                    SupportedMediaTypes.Add(new MediaTypeHeaderValue("application/xhtml+xml"));
                }

                //...

                public override Task WriteToStreamAsync(
                    Type type,
                    object value,
                    Stream stream,
                    HttpContentHeaders contentHeaders,
                    TransportContext transportContext)
                {
                    return Task.Factory.StartNew(() =>
                    {
                        // Get the path to the view from the name of the type
                        // (the convention). The lookup itself was elided in the
                        // original prototype; GetViewPath is the hypothetical
                        // helper sketched earlier.
                        var viewPath = GetViewPath(type);

                        // Compile and run the Razor template against the model.
                        var template = File.ReadAllText(viewPath);
                        Razor.Compile(template, type, type.Name);
                        var razor = Razor.Run(type.Name, value);

                        // Write the rendered HTML to the response stream.
                        var buf = System.Text.Encoding.Default.GetBytes(razor);
                        stream.Write(buf, 0, buf.Length);
                        stream.Flush();
                    });
                }
            }
        }

    I will rewrite the code and also make it possible to specify an attribute on the returned model, so the model can decide which view to use when the media type is “text/html”; by default the formatter will keep using the type-name convention. A rough sketch of what such an attribute could look like follows.
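    The attribute does not exist in the post yet; the name RazorViewAttribute and the fallback logic below are assumptions, a minimal sketch of one way the promised feature could work:

        using System;

        // Hypothetical attribute: lets a model type override the view-name convention.
        [AttributeUsage(AttributeTargets.Class)]
        public sealed class RazorViewAttribute : Attribute
        {
            public RazorViewAttribute(string viewName)
            {
                ViewName = viewName;
            }

            public string ViewName { get; private set; }
        }

        public static class ViewNameResolver
        {
            // Prefer the attribute when present, otherwise fall back to the
            // type-name convention (Customer -> "Customer").
            public static string Resolve(Type modelType)
            {
                var attr = (RazorViewAttribute)Attribute.GetCustomAttribute(
                    modelType, typeof(RazorViewAttribute));
                return attr != null ? attr.ViewName : modelType.Name;
            }
        }

    Usage would then be as simple as decorating the model: [RazorView("CustomerDetails")] public class Customer { ... }.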

    Summary

    By using formatters with the ASP.NET Web API we can extend our code to return new formats without making any changes to our ApiControllers. This blog post showed how to extend the Web API to use Razor to format a returned model into HTML.

    If you want to know when I post more blog posts, feel free to follow me on Twitter: @fredrikn

    Read the article

  • CodePlex Daily Summary for Tuesday, December 14, 2010

Popular Releases

    FlickrNet API Library: 3.1.4000: Newest release. Now contains a dedicated Windows Phone 7 DLL as well as all previous DLLs. Also contains Windows Help file documentation as standard.

    mojoPortal: 2.3.5.8: See the release notes on mojoportal.com: http://www.mojoportal.com/mojoportal-2358-released.aspx Note that we have separate deployment packages for .NET 3.5 and .NET 4.0. The deployment package downloads on this page are pre-compiled and ready for production deployment; they contain no C# source code. To download the source code, see the Source Code tab. I recommend getting the latest source code using TortoiseHG; you can get the source code corresponding to this release here.

    Microsoft All-In-One Code Framework: Visual Studio 2010 Code Samples 2010-12-13: Code samples for Visual Studio 2010.

    SuperWebSocket: SuperWebSocket Drop 2: Changes: based on SuperSocket 1.3; supported sub-protocols; supported SSL/TLS encryption (wss) in sync socket mode; fixed some data communication bugs.

    SSH.NET Library: 2010.12.13: Fixes an SFTP issue when you try to upload or download multiple files simultaneously. A usage example can be found here.

    Request Tracker Data Access: 1.0.0.0: First release.

    SQL Monitor: SQL Monitor 2.4: 1. auto-adjust datagrids in query; 2. disable activities-related commands until the activities tab is active.

    SuperSocket, an extensible socket application framework: SuperSocket 1.3 beta 1: SuperSocket 1.3 is built on the .NET 4.0 framework. Bug fixes: fixed a potential bug where the running state wasn't updated after the socket server stopped; fixed a synchronization issue when clearing timed-out sessions; fixed a bug in ArraySegmentList; fixed a bug in getting a configuration value. Third-party library upgrades: upgraded SuperSocket to .NET 4.0; upgraded EntLib 4.1 to 5.0. New features: supported UDP socket; support for custom protocols (can support binary protocols and other complicate...

    Wii Backup Fusion: Wii Backup Fusion 0.9 Beta: - Aqua or brushed metal style for Mac OS X - Shows selection count beside ID - Game list selection mode via settings - Compare Files <-> WBFS game lists - Verify game images/DVD/WBFS - WIT command line for log (via settings) - Cancel possibility for loading games process - Progress infos while loading games - Localization for dates - UTF-8 support - Shortcuts added - View game infos in browser - Transfer infos for log - All transfer routines rewritten - Extract image from image/WBFS - Support...

    .NETTER Code Starter Pack: v1.0.beta: '.NETTER Code Starter Pack' contains a gallery of Visual Studio 2010 solutions leveraging the latest technologies and frameworks based on the Microsoft .NET Framework. Each Visual Studio solution included here is focused on providing a very simple starting point for cutting-edge development technologies and frameworks, using the well-known Northwind database (for database-driven scenarios). The current release of this project includes starter samples for the following technologies: ASP.NET Dynamic...

    NuGet (formerly NuPack): NuGet 1.0 Release Candidate: NuGet is a free, open source, developer-focused package management system for the .NET platform intent on simplifying the process of incorporating third-party libraries into a .NET application during development. This release is a Visual Studio 2010 extension and contains the Package Manager Console and the Add Package Dialog. This new build targets the newer feed (http://go.microsoft.com/fwlink/?LinkID=206669) and package format. See http://nupack.codeplex.com/documentation?title=Nuspe...

    Free Silverlight & WPF Chart Control - Visifire: Visifire Silverlight, WPF Charts v3.6.5 Released: Hi, today we are releasing the final version of Visifire, v3.6.5, with the following new feature: a new property, AutoFitToPlotArea, has been introduced in DataSeries. AutoFitToPlotArea will bring bubbles inside the PlotArea in order to avoid clipping of bubbles in a bubble chart. You can visit the Visifire documentation to know more: http://www.visifire.com/visifirechartsdocumentation.php Also this release includes a few bug fixes: chart threw an exception while adding a new Axis in Chart using Vi...

    PHPExcel: PHPExcel 1.7.5 Production: Donations: donate via PayPal. If you want to, we can also add your name / company on our Donation Acknowledgements page. PEAR channel: we now also have a full PEAR channel! Here's how to use it. New installation: pear channel-discover pear.pearplex.net pear install pearplex/PHPExcel Or if you've already installed PHPExcel before: pear upgrade pearplex/PHPExcel The official page can be found at http://pearplex.net. Want to contribute? Please refer to the Contribute page.

    SwapWin: SwapWin 0.2: Updates: bring all windows that are swapped to the foreground; make the window sent to the primary screen active.

    All-In-One Code Framework (Chinese version): 2010-12-10: The December 2010 release of the Chinese All-In-One Code Framework is out! http://i3.codeplex.com/Project/Download/FileDownload.aspx?ProjectName=1code&DownloadId=128165 This release contains 12 new sample codes covering ASP.NET, WinForm, and Silverlight. Details: http://blog.csdn.net/sjb5201/archive/2010/12/13/6072675.aspx Questions and feedback are welcome on the MSDN forum http://social.msdn.microsoft.com/Forums/zh-CN/codezhchs/threads or via email.

    UOB & ME: UOB_ME 2.5: Latest version.

    AutoLoL: AutoLoL v1.4.3: AutoLoL now supports importing build pages from Mobafire.com as well! Just insert the URL to the build and voila. (For example: http://www.mobafire.com/league-of-legends/build/unforgivens-guide-how-to-build-a-successful-mordekaiser-24061) Stable release of AutoChat (it is still recommended to use it with caution and to read the documentation). It is now possible to associate *.lolm files with AutoLoL to quickly open them. The selected spells are now displayed in the masteries tab for qu...

    SubtitleTools: SubtitleTools 1.2: - Added auto-insertion of the RLE (RIGHT-TO-LEFT EMBEDDING) Unicode character for RTL languages. - Fixed a delete-rows issue.

    PHP Manager for IIS: PHP Manager 1.1 for IIS 7: This is the final stable release of PHP Manager 1.1 for IIS 7. This is a minor incremental release that contains all the functionality available in 53121 plus the additional features listed below: improved detection logic for existing PHP installations (PHP Manager now detects the location of the php.ini file in accordance with the PHP specifications); configuring date.timezone (PHP Manager can automatically set the date.timezone directive, which is required to be set starting from PHP 5.3); ability to ...

    Algorithmia: Algorithmia 1.1: Algorithmia v1.1, released on December 8th, 2010.

    New Projects

    Augmented Reality system in Soccer video: Augmented reality system and camera calibration system for soccer videos, based on homography and vanishing points. Code written in Visual C++.

    Database Schema Provider: Database Schema Provider gets a database schema in a unified format independent of the type of database. It uses the ADO.NET data provider for Entity Framework.

    Dicke Bertha: Many many cool features...

    DNN Bookmark: DNN Bookmarks is a DNN module that aggregates the most popular social bookmarking tools and also allows you to bookmark your DNN web site.

    Dough: Dough is a UI starter kit built using ASP.Net MVC and ExtJS. Its name comes from the concept of Amish friendship bread, a type of bread or cake made from a sourdough starter that is often shared in a manner similar to a chain letter.

    Garra - Gerenciador Financeiro: Garra is a complete financial management system: accounts payable, accounts receivable, investments, etc...

    ghcwp7: ghcwp7

    Hackathon - DotNetNuke Razor User Locator: The DotNetNuke Razor User Locator module demonstrates how Razor can be used to author DNN modules. This module shows where recent users of a web site came from based on their IP address.

    Hackathon. DotNetNuke Razor. Flickr Badge: The Flickr badge desktop module allows you to display image thumbnails from Flickr and preview them inside DotNetNuke or on Flickr (controlled by module settings). Image thumbnails can be loaded by tag, user id, user group id, user set id and more.

    Hackathon: Razor Youtube Gallery: This is a DotNetNuke module which allows a website admin to add several relevant Youtube videos to a pane. The end user watches the selected Youtube video play, while scrolling through thumbnails of other videos to play those without refreshing the page.

    jQuery UI MVC3 Demo: Demo, and possibly a skeleton, for using jQuery UI in MVC3 (currently RC2).

    Microsoft Office Communicator History manager: Saves conversation history on the local workstation (in a folder); the application can then restore it and show it in a simple window, or the user can open the folder and look at it manually.

    MSTest Extensions - Msbuild: This project contains various MSBuild tasks that help with test execution using the Microsoft testing framework.

    RayCharlesTracer: This is a student project for a ray tracer.

    Refunctor: F# interactive inside Reflector.

    Request Tracker Data Access: Best Practical RT (Request Tracker) data access .NET library for the REST interface.

    SurfzApp: An application that does data mining on web resources of interest for Swedish windsurfers...

    Tesseract Solutions Corp. Data Access Base: Tesseract Data Access speeds up data access in .NET projects. Developed in C# on .NET 4. It is a C#, class-based ORM.

    TimBazinga EVoting: Undergrad project - designing an e-voting software system.

    Tiny Library CQRS: Tiny Library CQRS is a small demonstration project which demonstrates the concept of Domain Driven Design and the CQRS architecture pattern. This project relies on the Apworks DDD framework.

    Toptoys: toptoys

    WebGroup: WebGroup makes it easier for your website members to communicate online. It works like Web IM + Forum + Twitter. It can be easily used in your current project. Developed in C#.

    WpfCustomChromeLibrary: WpfCustomChromeLibrary makes it easier to create WPF applications with custom chrome and caption buttons (min/max/close). You'll no longer have to do all the dirty work yourself in each application where you want a custom chrome. It's developed in XAML and C#.

    Read the article

< Previous Page | 107 108 109 110 111 112 113 114 115 116 117  | Next Page >