Search Results

Search found 2966 results on 119 pages for 'procedural generation'.


  • See the exciting new features available for iProcurement and Sourcing with 12.1.3 Rollup Patch 14254641:R12.PRC_PF.B!

    - by user793044
    See the exciting new features available for iProcurement and Sourcing with 12.1.3 Rollup Patch 14254641:R12.PRC_PF.B! Sourcing (Note 1499944.1, "Sourcing New Features From Procurement RUP Family R12.1.3 September Update 2012: Accept Terms and Conditions to Comply With NDA"): suppliers can now accept Terms and Conditions to comply with the buyer's Non-Disclosure Agreements (NDA), and the PDF generation process has been enhanced to generate negotiation PDFs containing large amounts of data faster. iProcurement (Note 1499911.1, "iProcurement New Features From RUP Family R12.1.3 September Update 2012: GL/Accounting Date, PO_CUSTOM_FUNDS_PKG.plb, Price and Supplier Update"): requesters can now specify the GL date (encumbrance date) for each distribution against a line when creating requisitions, enter an Accounting Date on a Procurement Requisition if Dual Budgetary Control is enabled for Purchasing, and choose a Favorite Charge Account to override their default charge account using the Preferences page. Buyers can update the unit price, suggested supplier, and site details while requesting a catalog item (inventory item) that is not linked to a blanket purchase agreement. For new features across all the Procurement product groups, and for information about applying Patch 14254641, see Note 1468883.1.

    Read the article

  • Multiple vulnerabilities in Firefox

    - by Ritwik Ghoshal
    All of the CVEs below affect the Firefox component; Product and Resolution for each: Solaris 10 (SPARC: 145080-13, X86: 145081-12).
    CVE-2012-3982: Denial of service (DoS) vulnerability (CVSSv2 10.0)
    CVE-2012-3983: Denial of service (DoS) vulnerability (CVSSv2 10.0)
    CVE-2012-3986: Permissions, Privileges, and Access Controls vulnerability (CVSSv2 6.4)
    CVE-2012-3988: Resource Management Errors vulnerability (CVSSv2 9.3)
    CVE-2012-3990: Resource Management Errors vulnerability (CVSSv2 10.0)
    CVE-2012-3991: Permissions, Privileges, and Access Controls vulnerability (CVSSv2 9.3)
    CVE-2012-3992: Permissions, Privileges, and Access Controls vulnerability (CVSSv2 5.8)
    CVE-2012-3993: Design Error vulnerability (CVSSv2 9.3)
    CVE-2012-3994: Improper Neutralization of Input During Web Page Generation ('Cross-site Scripting') vulnerability (CVSSv2 4.3)
    CVE-2012-3995: Improper Restriction of Operations within the Bounds of a Memory Buffer vulnerability (CVSSv2 10.0)
    CVE-2012-4179: Resource Management Errors vulnerability (CVSSv2 10.0)
    CVE-2012-4180: Improper Restriction of Operations within the Bounds of a Memory Buffer vulnerability (CVSSv2 10.0)
    CVE-2012-4181: Resource Management Errors vulnerability (CVSSv2 10.0)
    CVE-2012-4182: Resource Management Errors vulnerability (CVSSv2 10.0)
    CVE-2012-4183: Resource Management Errors vulnerability (CVSSv2 10.0)
    CVE-2012-4184: Permissions, Privileges, and Access Controls vulnerability (CVSSv2 9.3)
    CVE-2012-4185: Improper Restriction of Operations within the Bounds of a Memory Buffer vulnerability (CVSSv2 10.0)
    CVE-2012-4186: Improper Restriction of Operations within the Bounds of a Memory Buffer vulnerability (CVSSv2 10.0)
    CVE-2012-4187: Improper Restriction of Operations within the Bounds of a Memory Buffer vulnerability (CVSSv2 10.0)
    CVE-2012-4188: Improper Restriction of Operations within the Bounds of a Memory Buffer vulnerability (CVSSv2 10.0)
    CVE-2012-4192: Permissions, Privileges, and Access Controls vulnerability (CVSSv2 4.3)
    CVE-2012-4193: Design Error vulnerability (CVSSv2 9.3)
    CVE-2012-4194: Improper Neutralization of Input During Web Page Generation ('Cross-site Scripting') vulnerability (CVSSv2 4.3)
    CVE-2012-4195: Permissions, Privileges, and Access Controls vulnerability (CVSSv2 5.1)
    CVE-2012-4196: Permissions, Privileges, and Access Controls vulnerability (CVSSv2 5.0)
    This notification describes vulnerabilities fixed in third-party components that are included in Oracle's product distributions. Information about vulnerabilities affecting Oracle products can be found on the Oracle Critical Patch Updates and Security Alerts page. Note: Solaris 10 patches SPARC: 145080-13 and X86: 145081-12 contain the fixes for all CVEs between Firefox versions 10.0.7 and 10.0.12.

    Read the article

  • For an ORM supporting data validation, should constraints be enforced in the database as well?

    - by Ramnique Singh
    I have always applied constraints at the database level in addition to my (ActiveRecord) models. But I've been wondering whether this is really required. A little background: I recently had to unit test a basic automated timestamp generation method for a model. Normally, the test would create an instance of the model and save it without validation. But there are other required fields that aren't nullable in the table definition, meaning I can't save the instance even if I skip the ActiveRecord validation. So I'm wondering whether I should remove such constraints from the db itself and let the ORM handle them. Possible advantages of skipping constraints in the db, IMO: I can modify a validation rule in the model without having to migrate the database, and I can skip validation in testing. Possible disadvantage: if the ORM validation ever fails or is bypassed, the database will not check the constraints. What do you think? EDIT: In this case I'm using the Yii Framework, which generates the model from the database, hence the validation rules are generated too (though I could always write them myself post-generation).
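    For readers weighing the trade-off, a minimal sketch of the "keep both layers" position, written in C# rather than the poster's Yii/PHP stack and with invented names: the model rule gives friendly errors and is cheap to change, while the schema constraint remains a backstop for anything that bypasses the ORM.

        using System;

        // Hypothetical model-level rule mirroring a schema constraint.
        // The schema would still keep its own backstop, e.g.
        // email VARCHAR(255) NOT NULL, for anything that skips the ORM.
        public class User
        {
            public string Email { get; set; }

            // Application-level rule: friendly message, changeable
            // without a schema migration.
            public bool IsValid(out string error)
            {
                if (string.IsNullOrWhiteSpace(Email))
                {
                    error = "Email is required.";
                    return false;
                }
                error = null;
                return true;
            }
        }

        public static class Demo
        {
            public static void Main()
            {
                var user = new User(); // Email left unset
                if (!user.IsValid(out string error))
                    Console.WriteLine(error); // the model catches it first; the
                                              // NOT NULL constraint would still
                                              // reject a raw INSERT
            }
        }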

    Read the article

  • SOA: Simplifying Cloud, Mobile, and On-premise Integration–Webcast October 24th 2013

    - by JuergenKress
    Proliferation of mobile devices, data explosion, and cloud enablement have caused a dramatic shift in IT. Organizations need to rethink their application infrastructures to accommodate increased processing speeds and heightened security and availability concerns for their applications, all while lowering total cost of ownership. Traditional infrastructures may not be sufficient to accommodate the diversity and complexity of integrations in this new era. Many of today's IT organizations rely on a Service Oriented Architecture (SOA) backbone to keep their businesses running. SOA adoption and acceptance across industries have led to platform maturity at the application layer level. However, we are at the start of an era with a new modus operandi for organizations to thrive and deliver continuously on competitive differentiation. This change is a result of market globalization, the explosion in the number of mobile devices, unparalleled growth in voluminous data, and innovation that crosses organizational boundaries. Social, mobile, and cloud are terms that are revolutionizing the way organizations operate. Oracle SOA Suite is a hot-pluggable software suite to build, deploy and manage Service-Oriented Architectures (SOA). Oracle SOA transforms complex application integration into agile and reusable service-based connectivity by mediating, routing, and managing interactions between services and applications in the enterprise and in the cloud. Oracle SOA Suite's hot-pluggable architecture helps businesses lower upfront costs by allowing maximum re-use of existing IT investments and assets. Join us on this webcast to find out how you can optimize the use of Oracle SOA Suite, simplifying integration, and what the next generation of SOA has to offer you. Agenda: What's new in Oracle SOA; Simplifying integration; Application Integration and SOA; Cloud integration with SOA; Mobile integration leveraging Oracle SOA Suite; Oracle Delivers on Next Generation SOA; Customer examples; Summary and Q&A. Webcast: Thursday October 24th, 2013, 10am CET (8am UTC / 11am EEST). Details at the Registration Page. SOA & BPM Partner Community: for regular information on Oracle SOA Suite, become a member of the SOA & BPM Partner Community; for registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Facebook Wiki Mix Forum Technorati Tags: cloud integration, mobile integration, training, webcast middleware, SOA Community, Oracle SOA, Oracle BPM, Community, OPN, Jürgen Kress

    Read the article

  • Can't get Unity 3D to work in 11.10

    - by pmoseph
    I recently upgraded to 11.10 on my Lenovo ThinkPad T520, and I'm not able to load Unity 3D (I'm not selecting 2D at the login menu either).

        me@mycomp:~$ echo $DESKTOP_SESSION
        ubuntu-2d

    I ran the unity support test below as well.

        me@mycomp:~$ /usr/lib/nux/unity_support_test -p
        Xlib:  extension "GLX" missing on display ":0.0".
        Xlib:  extension "GLX" missing on display ":0.0".
        Xlib:  extension "GLX" missing on display ":0.0".
        Error: unable to create the OpenGL context

    And it looks like I only have one graphics card:

        me@mycomp:~$ lspci | grep VGA
        00:02.0 VGA compatible controller: Intel Corporation 2nd Generation Core Processor Family Integrated Graphics Controller (rev 09)

    Also, Ubuntu lists nothing under the "Additional Drivers" window. Any help would be extremely appreciated as I'm somewhat of a noob. Thanks!

    Edit 1: Here is the output of lshw -C display

        me@mycomp:~$ sudo lshw -C display
          *-display
               description: VGA compatible controller
               product: 2nd Generation Core Processor Family Integrated Graphics Controller
               vendor: Intel Corporation
               physical id: 2
               bus info: pci@0000:00:02.0
               version: 09
               width: 64 bits
               clock: 33MHz
               capabilities: msi pm vga_controller bus_master cap_list rom
               configuration: driver=i915 latency=0
               resources: irq:43 memory:f0000000-f03fffff memory:e0000000-efffffff ioport:5000(size=64)

    Read the article

  • XML Rules Engine and Validation Tutorial with NIEM

    - by drrwebber
    Our new XML Validation Framework tutorial video is now available. See how to easily integrate code-free, adaptive XML validation services into your web services using the Java CAMV validation engine. CAMV allows you to build fault-tolerant content checking with XPath rules that can optionally use SQL data lookups. This can produce warnings as well as error conditions, so you can tailor your validation layer to exactly meet your business application needs. Also covered is developing test suites using Apache Ant scripting of validations, which allows a community to share sets of conformance-checking tests and tools. On the technical XML side, the video introduces XPath validation rules and illustrates the concepts of XML content and structure validation. CAM validation templates allow contextual, parameter-driven, dynamic validation services to be implemented, compared to a static and brittle XSD schema approach. The SQL table lookup and code list validation are discussed and examples presented. Features are highlighted along with a demonstration of the interactive generation of actual live XML data from a SQL data store, followed by validation processing complete with error and warning detection. The presentation provides a primer for developing web service XML validation and integrating it into a SOA approach, along with examples and resources. Alignment with the NIEM IEPD process for interoperable information exchanges is also discussed, along with NIEM rules services. The CAMV engine is a high-performance, scalable Java component for rapidly implementing code-free validation services and methods. CAMV is a next-generation WYSIWYG approach that builds on older Schematron-based interpretative runtime tools and provides a simpler declarative metaphor for rules definition. See: http://www.youtube.com/user/TheCAMeditor
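    To give a flavor of what rule-based XPath content checking looks like, here is a hand-rolled sketch in C# (illustration only: this is not the CAMV engine, which is Java, nor its API; the document and rule are invented) implementing a rule that demands a non-negative Age element:

        using System;
        using System.Xml.Linq;
        using System.Xml.XPath;

        // A single XPath content rule: /Person/Age must exist and be non-negative.
        public static class XPathRuleDemo
        {
            public static void Main()
            {
                var doc = XDocument.Parse(
                    "<Person><Name>Ada</Name><Age>-3</Age></Person>");

                var age = doc.XPathSelectElement("/Person/Age");
                if (age == null)
                    Console.WriteLine("ERROR: Age element is missing.");   // error condition
                else if (int.TryParse(age.Value, out int n) && n < 0)
                    Console.WriteLine("WARNING: Age is negative; check the source data."); // warning
                else
                    Console.WriteLine("Rule passed.");
            }
        }

    The error/warning split mirrors the tailoring the video describes: structural failures are hard errors, while suspect content can be surfaced as a warning without rejecting the document.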

    Read the article

  • Database Insider - December 2012 issue

    - by Javier Puerta
    The December issue of the Database Insider newsletter is now available. (Full newsletter here) Big Data: From Acquisition to Analysis. 2012 will likely be remembered as the year of big data, as a new generation of technologies enables organizations to acquire, organize, and analyze the exponentially growing and typically less-structured data generated from a variety of new sources. Oracle has produced a series of five short videos that offer a quick and compelling high-level introduction to big data. Read More. Total Cost of Ownership Comparison: Oracle Exadata vs. IBM P-Series. Read the research that found that over three years, the IBM hardware running Oracle Database cost 31 percent more in total cost of ownership than Oracle Exadata. Webcast: Oracle Exadata Database Machine X3. Learn about Oracle's next-generation database machine, Oracle Exadata X3, which combines massive memory and low-cost disks to deliver the highest performance at the lowest cost. Available in an eighth-rack configuration, it allows you to start small and grow. Maximum Availability with Oracle GoldenGate. Discover how to eliminate not only unplanned downtime but also planned downtime resulting from database upgrades, migrations, and consolidation. Thursday, December 13, 19:00 CET / 6 pm UK

    Read the article

  • Best way to generate pieces in match-3 games, and then tracking them?

    - by JonLim
    I've been working on a match-3 style game in ActionScript using Flixel, and so far I've been able to build the core mechanics of the game, including board generation, piece generation, piece swapping and movement, and checking algorithms. However, I am now running into issues with clearing out pieces, letting the pieces above fall down, and generating new pieces. The reason I'm running into these issues is that when all of the pieces are generated, the pertinent values (position, sprite ID, and sprite object) are pushed into an array that helps me track everything, all the time. When pieces are moved, I swap the values of the corresponding arrays and life goes on. And that array is the core of my problem: if a row in the middle of the board clears out, ideally all of the pieces above the cleared pieces should fall down to take their place, and new pieces are generated at the top and also fall into place. Except if I try to do that now, all the pieces can fall down, but then I'd have to bump all of their values into the right arrays (oh god my head) and then generate new pieces and fit THOSE into the correct place in the array. Am I overthinking this? Or is there a far better way to track these pieces? Thanks guys!
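    A common alternative, sketched below in C# rather than the poster's ActionScript/Flixel (all names invented): track the board as a single 2D grid indexed by column and row. A clear is just zeroing cells, and "falling" becomes compacting each column and refilling its top, with no per-piece array bookkeeping to patch up afterwards.

        using System;

        // A minimal board-collapse sketch for a match-3 grid (names hypothetical).
        // 0 = empty cell; any other int = a piece/sprite ID.
        public static class Match3Board
        {
            static readonly Random Rng = new Random();

            // Compact each column downward, then refill the vacated top cells.
            public static void CollapseAndRefill(int[,] board)
            {
                int cols = board.GetLength(0), rows = board.GetLength(1);
                for (int x = 0; x < cols; x++)
                {
                    int write = rows - 1;                 // next slot to fill, from the bottom
                    for (int y = rows - 1; y >= 0; y--)   // scan the column upward
                        if (board[x, y] != 0)
                            board[x, write--] = board[x, y]; // surviving piece falls to 'write'
                    while (write >= 0)                    // remaining cells at the top are empty
                        board[x, write--] = Rng.Next(1, 6); // new random piece ID 1..5
                }
            }
        }

    On this layout the render position of a piece falls out of its (x, y) index, so sprites can be looked up per cell instead of being shuffled between parallel arrays.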

    Read the article

  • MySQL December Webinars

    - by Bertrand Matthelié
    We'll be running 3 webinars next week and hope many of you will be able to join us: MySQL Replication: Simplifying Scaling and HA with GTIDs Wednesday, December 12, at 15.00 Central European TimeJoin the MySQL replication developers for a deep dive into the design and implementation of Global Transaction Identifiers (GTIDs) and how they enable users to simplify MySQL scaling and HA. GTIDs are one of the most significant new replication capabilities in MySQL 5.6, making it simple to track and compare replication progress between the master and slave servers. Register Now MySQL 5.6: Building the Next Generation of Web/Cloud/SaaS/Embedded Applications and Services Thursday, December 13, at 9.00 am Pacific Time As the world's most popular web database, MySQL has quickly become the leading cloud database, with most providers offering MySQL-based services. Indeed, built to deliver web-based applications and to scale out, MySQL's architecture and features make the database a great fit to deliver cloud-based applications. In this webinar we will focus on the improvements in MySQL 5.6 performance, scalability, and availability designed to enable DBA and developer agility in building the next generation of web-based applications. Register Now Getting the Best MySQL Performance in Your Products: Part IV, Partitioning Friday, December 14, at 9.00 am Pacific Time We're adding Partitioning to our extremely popular "Getting the Best MySQL Performance in Your Products" webinar series. Partitioning can greatly increase the performance of your queries, especially when doing full table scans over large tables. Partitioning is also an excellent way to manage very large tables. It's one of the best ways to build higher performance into your product's embedded or bundled MySQL, and particularly for hardware-constrained appliances and devices. Register Now We have live Q&A during all webinars so you'll get the opportunity to ask your questions!
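    As background for the GTID session, a minimal my.cnf sketch of the settings MySQL 5.6 requires before GTID-based replication can be enabled; this is an assumption-laden outline, not a complete replication setup, and values beyond these depend on your topology:

        # my.cnf sketch: prerequisites for gtid_mode=ON in MySQL 5.6
        # (apply on the master and on every slave)
        [mysqld]
        log-bin                  = mysql-bin
        log-slave-updates        = true
        gtid-mode                = ON
        enforce-gtid-consistency = true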

    Read the article

  • OSI Model

    - by kaleidoscope
    The Open System Interconnection Reference Model (OSI Reference Model or OSI Model) is an abstract description for layered communications and computer network protocol design. In its most basic form, it divides network architecture into seven layers which, from top to bottom, are the Application, Presentation, Session, Transport, Network, Data Link, and Physical Layers. It is therefore often referred to as the OSI Seven Layer Model. A layer is a collection of conceptually similar functions that provides services to the layer above it and receives services from the layer below it. Description of the OSI layers:

    Layer 1: Physical Layer
    · Defines the electrical and physical specifications for devices; in particular, the relationship between a device and a physical medium.
    · Establishment and termination of a connection to a communications medium.
    · Participation in the process whereby the communication resources are effectively shared among multiple users.
    · Modulation, or conversion between the representation of digital data in user equipment and the corresponding signals transmitted over a communications channel.

    Layer 2: Data Link Layer
    · Provides the functional and procedural means to transfer data between network entities.
    · Detects and possibly corrects errors that may occur in the Physical Layer. The error check is performed using a Frame Check Sequence (FCS).
    · The destination address is then examined to see whether the host needs to process the rest of the frame itself or pass it on to another host.
    · The layer is divided into two sublayers: the Media Access Control (MAC) layer and the Logical Link Control (LLC) layer.
    · The MAC sublayer controls how a computer on the network gains access to the data and permission to transmit it.
    · The LLC sublayer controls frame synchronization, flow control and error checking.

    Layer 3: Network Layer
    · Provides the functional and procedural means of transferring variable-length data sequences from a source to a destination via one or more networks.
    · Performs network routing functions, and might also perform fragmentation and reassembly, and report delivery errors.
    · Routers operate at this layer, sending data throughout the extended network and making the Internet possible.

    Layer 4: Transport Layer
    · Provides transparent transfer of data between end users, providing reliable data transfer services to the upper layers.
    · Controls the reliability of a given link through flow control, segmentation/de-segmentation, and error control.
    · The Transport Layer can keep track of the segments and retransmit those that fail.

    Layer 5: Session Layer
    · Controls the dialogues (connections) between computers.
    · Establishes, manages and terminates the connections between the local and remote application.
    · Provides for full-duplex, half-duplex, or simplex operation, and establishes checkpointing, adjournment, termination, and restart procedures.
    · Implemented explicitly in application environments that use remote procedure calls.

    Layer 6: Presentation Layer
    · Establishes a context between Application Layer entities, in which the higher-layer entities can use different syntax and semantics, as long as the presentation service understands both and the mapping between them. The presentation service data units are then encapsulated into Session Protocol data units and moved down the stack.
    · Provides independence from differences in data representation (e.g., encryption) by translating from application to network format, and vice versa. The presentation layer works to transform data into the form that the application layer can accept. This layer formats and encrypts data to be sent across a network, providing freedom from compatibility problems. It is sometimes called the syntax layer.

    Layer 7: Application Layer
    · This layer interacts with software applications that implement a communicating component.
    · Identifies communication partners, determines resource availability, and synchronizes communication.
    o When identifying communication partners, the application layer determines the identity and availability of communication partners for an application with data to transmit.
    o When determining resource availability, the application layer must decide whether sufficient network resources for the requested communication exist.
    o In synchronizing communication, all communication between applications requires cooperation that is managed by the application layer.

    Technorati Tags: Kunal, OSI, Networking
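    To make the layering concrete, here is a toy sketch in C# (the string "headers" are invented; real protocols use binary headers, but the nesting is the point) showing how each layer encapsulates the payload handed down from the layer above:

        using System;

        // Toy illustration of OSI encapsulation: each layer wraps the payload
        // from the layer above with its own header.
        public static class OsiDemo
        {
            static string Wrap(string header, string payload) => "[" + header + " " + payload + "]";

            public static void Main()
            {
                string appData = "GET /index.html";            // Layer 7: application data
                string segment = Wrap("TCP:80", appData);       // Layer 4: segment
                string packet  = Wrap("IP:10.0.0.2", segment);  // Layer 3: packet
                string frame   = Wrap("MAC:aa-bb-cc", packet);  // Layer 2: frame
                Console.WriteLine(frame);
                // prints: [MAC:aa-bb-cc [IP:10.0.0.2 [TCP:80 GET /index.html]]]
            }
        }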

    Read the article

  • What Counts for a DBA: Skill

    - by drsql
    "Practice makes perfect," right? Well, not exactly. The reality is that this saying is an untrustworthy aphorism. I discovered this in my "younger" days when I was a passionate tennis player, practicing and playing 20+ hours a week. No matter what my passion level was, without some serious coaching (and perhaps a change in dietary habits), my skill level was never going to rise to a level where I could make any money at the sport that involved something other than selling tennis balls at a sporting goods store. My game may have improved with all that practice, but I had too many bad practices to overcome. Practice by itself merely reinforces what we know and what we can figure out naturally. The truth is actually closer to the expression used by Vince Lombardi: "Perfect practice makes perfect." So how do you become skilled as a DBA if practice alone isn't sufficient? Hit the Internet and start searching for SQL training and you can find 100 different sites. There are also hundreds of blogs, magazines, books, and conferences, both onsite and virtual. But then how do you know who is good? Unfortunately, the experience level of the writer is often the worst guide. Some of the best DBAs are frighteningly young, and some got their start back when databases were stored on stacks of paper with little holes in them. As a programmer, is it really so hard to understand normalization? Set-based theory? Query optimization? Indexing and performance tuning? The biggest barrier is often previous knowledge, particularly programming skills cultivated before you get started with SQL. In the world of technology, it is pretty rare for a fresh programmer to gravitate to database programming. Database programming is very unsexy work, because without a UI all you have are a bunch of text strings that you could never impress anyone with. Newbies spend most of their time building UIs or apps with procedural code in C# or VB, scoring obvious, interesting wins. Making matters worse, SQL programming requires mastery of a much different toolset than most any mainstream programming skill. Instead of controlling everything yourself, most of the really difficult work is done by the internals of the engine (written by other non-relational programmers... we just can't get away from them). So is there a golden road to achieving a high skill level? Sadly, with tennis, I am pretty sure I'll never discover it. However, with programming it seems to boil down to practice in applying the appropriate techniques for whatever type of programming you are doing. Can a C# programmer build a great database? As long as they don't treat SQL like C#, absolutely. Same goes for a DBA writing C# code. None of this stuff is rocket science, as long as you learn to understand that different types of programming require different skill sets, and you as a programmer must recognize the difference between one of the procedural languages and SQL and treat them differently. Skill comes from practicing doing things the right way and making "right" a habit.

    Read the article

  • How can I handle a form submit using REAL OOP in PHP

    - by Lorence
    I'm used to Java and creating UML, and I was wondering how PHP can be OOP. Objects live only until the request ends; then they're destroyed. So if I'm using a database, is it useless to create a class and add members (variables) to it? I can't pass the main system object from one page to another, or anything similar, so how can PHP compare to Java? You never do OOP, I mean REAL OOP, not just creating classes; in fact your index will be a procedural file with some object instances, and then what? What about if I make an HTML form and I want to submit the data? I have to call a file which is not a class but a procedural PHP file where I grab the submitted data with POST; from that file you instance a class and do some logic there, but for me that's not pure OOP. Can somebody point me to the right way of doing OOP with a form-submit example? Thanks!

    Read the article

  • Learning Haskell maps, folds, loops and recursion

    - by Darknight
    I've only just dipped my toe into the world of Haskell as part of my journey of programming enlightenment (moving on from procedural to OOP to concurrent to now functional). I've been trying an online Haskell evaluator. However, I'm now stuck on a problem: create a simple function that gives the total sum of an array of numbers. In a procedural language this is easy enough for me, using recursion (C#):

        private int sum(ArrayList x, int i)
        {
            if (x.Count < i + 1) return 0;     // base case: past the end of the list
            return (int)x[i] + sum(x, i + 1);  // current element plus the sum of the rest
        }

    All very fine; however, my failed attempt at Haskell was thus:

        let sum x = x + sum in map sum [1..10]

    This resulted in the following error (from the above-mentioned website):

        Occurs check: cannot construct the infinite type: a = a -> t

    Please bear in mind I've only used Haskell for the last 30 minutes! I'm not looking simply for an answer but for an explanation of it. Thanks in advance.

    Read the article

  • How Best to Replace PL/SQL with C#?

    - by Mike
    Hi, I write a lot of one-off Oracle SQL queries/reports (in Toad), and sometimes they can get complex, involving lots of unions, joins, and subqueries, and/or requiring dynamic SQL and/or procedural logic. PL/SQL is custom-made for handling these situations, but as a language it does not compare to C#; even if it did, its tooling does not, and even if that did, forcing yet another language on the team is something to be avoided whenever possible. Experience has shown me that using SQL for the set-based processing, coupled with C# for the procedural processing, is a powerful combination indeed, and far more readable, maintainable and enhanceable than PL/SQL. So we end up with a number of small C# programs which typically construct a SQL query string piece by piece and/or run several queries and process them as needed. This kind of code could easily be a disaster, but a good developer can make it work out quite well and end up with very readable code. So I don't think it's a bad way to code for smaller DB-focused projects. My main question is: how best to create and package all these little C# programs that run ad hoc queries and reports against the database? Right now I create little report objects in a DLL, developed and tested with NUnit, but then I continue to use NUnit as the GUI to select and run them. NUnit just happens to provide a nice GUI for this kind of thing, even after testing has been completed. I'm also interested in suggestions for reporting apps generally. I feel like there is a missing piece or product. The perfect tool would allow writing and running C# inside of Toad, or SQL inside of Visual Studio, along with simple reporting facilities. All ideas will be appreciated, but let's make the assumption that PL/SQL is not the solution.
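    As an illustration of the pattern described above (set-based work in SQL, procedural work in C#), here is a minimal sketch using the provider-agnostic ADO.NET interfaces; the table, column, and parameter names are placeholders, and in practice an Oracle provider such as ODP.NET would supply the concrete connection type.

        using System;
        using System.Data;

        // Sketch of the "SQL for sets, C# for procedure" split the post describes.
        public static class AdHocReport
        {
            public static void Run(IDbConnection conn, string region)
            {
                using (var cmd = conn.CreateCommand())
                {
                    // Set-based work stays in SQL...
                    cmd.CommandText =
                        "SELECT customer_id, total FROM orders WHERE region = :region";
                    var p = cmd.CreateParameter();
                    p.ParameterName = ":region";
                    p.Value = region;
                    cmd.Parameters.Add(p);

                    // ...procedural work (formatting, branching, report rules)
                    // stays in C#.
                    using (var rdr = cmd.ExecuteReader())
                        while (rdr.Read())
                            Console.WriteLine($"{rdr.GetInt32(0)}: {rdr.GetDecimal(1):C}");
                }
            }
        }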

    Read the article

  • Understanding Basic Prototyping & Updating Key/Value pairs

    - by JordanD
    First-time poster, long-time lurker. I'm trying to learn some more advanced features of JavaScript, and have two objectives based on the pasted code below: (1) I would like to add methods to a parent class in a specific way (by invoking prototype); (2) I intend to update the declared key/value pairs each time I make an associated method call. execMTAction as seen in TheSuper will execute each function call, regardless; this is by design. Here is the code:

        function TheSuper() {
            this.options = {componentType: "UITabBar", componentName: "Visual Browser",
                            componentMethod: "select", componentValue: null};
            execMTAction(this.options.componentType, this.options.componentName,
                         this.options.componentMethod, this.options.componentValue);
        };

        TheSuper.prototype.tapUITextView = function(val1, val2) {
            this.options = {componentType: "UITextView", componentName: val1,
                            componentMethod: "entertext", componentValue: val2};
        };

    I would like to execute something like this (very simple):

        theSuper.executeMTAction();
        theSuper.tapUITextView("a", "b");

    Unfortunately I am unable to overwrite the "this.options" in the parent, and the .tapUITextView method throws an error saying it cannot find executeMTAction. All I want to do, like I said, is to update the parameters in the parent, then have executeMTAction run each time I make any method call. That's it. Any thoughts? I understand this is basic, but I'm coming from a long-time procedural career and JavaScript seems to have this weird confluence of OO/procedural that I'm having a bit of difficulty with. Thanks for any input!

    Read the article

  • Oracle OpenWorld Preview: Real World Perspectives from Oracle WebCenter Customers

    - by Christie Flanagan
    If you frequent the Oracle WebCenter blog you've probably read a lot about the customer experience revolution over the last few months. An important aspect of the customer experience revolution is the increasing role that peers play in influencing how others perceive a product, brand or solution, simply by sharing their own, real-world experiences. Think about it: who do you trust more, marketers and sales people pitching polished messages, or peers with similar roles and similar challenges to the ones you face in your business every day? With this spirit in mind, this polished marketer personally invites you to hear directly from Oracle WebCenter customers about their real-life experiences during our customer panel sessions at Oracle OpenWorld next week. If you're currently using WebCenter, thinking about it, or just want to find out more about best practices in social business, next-generation portals, enterprise content management or web experience management, be sure to attend these sessions:

    CON8899 - Becoming a Social Business: Stories from the Front Lines of Change
    Wednesday, Oct 3, 11:45 AM - 12:45 PM - Moscone West - 3000
    Priscilla Hancock - Vice President/CIO, University of Louisville
    Kellie Christensen - Director of Information Technology, Banner Engineering
    What does it really mean to be a social business? How can you change your organization to embrace social approaches? What pitfalls do you need to avoid? In this lively panel discussion, customer and industry thought leaders in social business explore these topics and more as they share their stories of the good, the bad, and the ugly that can happen when embracing social methods and technologies to improve business success. Using moderated questions and open Q&A from the audience, the panel discusses vital topics such as the critical factors for success, the major issues to avoid, how to gain senior executive support for social efforts, how to handle undesired behavior, and how to measure business impact. This session will take a thought-provoking look at becoming a social business from the inside.

    CON8900 - Building Next-Generation Portals: An Interactive Customer Panel Discussion
    Wednesday, Oct 3, 5:00 PM - 6:00 PM - Moscone West - 3000
    Roberts Wayne - Director, IT, Canadian Partnership Against Cancer
    Mike Beattie - VP Application Development, Aramark Uniform Services
    John Chen - Utilities Services Manager 6, Los Angeles Department of Water & Power
    Jörg Modlmayr - Head of Product Management, Siemens AG
    Social and collaborative technologies have changed how people interact, learn, and collaborate, and providing a modern, social Web presence is imperative to remain competitive in today's market. Can your business benefit from a more collaborative and interactive portal environment for employees, customers, and partners? Attend this session to hear from Oracle WebCenter Portal customers as they share their strategies and best practices for providing users with a modern experience that adapts to their needs and includes personalized access to content in context. The panel also addresses how customers have benefited from creating next-generation portals by migrating from older portal technologies to Oracle WebCenter Portal.

    CON8898 - Land Mines, Potholes, and Dirt Roads: Navigating the Way to ECM Nirvana
    Thursday, Oct 4, 12:45 PM - 1:45 PM - Moscone West - 3001
    Stephen Madsen - Senior Management Consultant, Alberta Agriculture and Rural Development
    Himanshu Parikh - Sr. Director, Enterprise Architecture & Middleware, Ross Stores, Inc.
    Ten years ago, people were predicting that by this time in history we'd be some kind of utopian paperless society. As we all know, we're not there yet, but are we getting closer? What is keeping companies from driving down the road to enterprise content management bliss? Most people understand that using ECM as a central platform enables organizations to expedite document-centric processes, but most business processes in organizations are still heavily paper-based. Many of these processes could be automated and improved with an ECM platform infrastructure. In this panel discussion, you'll hear from Oracle WebCenter customers that have already solved some of these challenges as they share their strategies for success and roads to avoid along your journey.

    CON8897 - Using Web Experience Management to Drive Online Marketing Success
    Thursday, Oct 4, 2:15 PM - 3:15 PM - Moscone West - 3001
    Blane Nelson - Chief Architect, Ancestry.com
    Mike Remedios - CIO, Arbonne
    Caitlin Scanlon - Product Manager, Monster Worldwide
    Every year, the online channel becomes more imperative for driving organizational top-line revenue, but for many companies, mastering how to best market their products and services in a fast-evolving online world with high customer expectations for personalized experiences can be a complex proposition. Come to this panel discussion, and hear directly from customers on how they are succeeding today by using Web experience management to drive marketing success, using capabilities such as targeting and optimization, user-generated content, mobile site publishing, and site visitor personalization to deliver engaging online experiences.

    Your Handy Guide to WebCenter at Oracle OpenWorld
    Want a quick and easy guide to all the keynotes, demos, hands-on labs and WebCenter sessions you definitely don't want to miss at Oracle OpenWorld? Download this handy guide, Focus on WebCenter. More helpful links:
    * Oracle OpenWorld
    * Oracle Customer Experience Summit @ OpenWorld
    * Oracle OpenWorld on Facebook
    * Oracle OpenWorld on Twitter
    * Oracle OpenWorld on LinkedIn
    * Oracle OpenWorld Blog

    Read the article

  • EBS Techstack Sessions at OAUG/Collaborate 2010

    - by Steven Chan
    We have a large contingent of E-Business Suite Applications Technology Group staff rolling out to the OAUG/Collaborate 2010 conference in Las Vegas next week. Our Applications Technology Group staff will be appearing as guest speakers or full speakers at the following E-Business Suite technology stack related sessions:

    Database Special Interest Group
    Sunday, April 18, 11:00 AM, Breakers F
    SIG Leaders: Michael Brown, Colibri; Sandra Vucinic, Vlad Group
    Guest Speaker: Steven Chan
    Covering upcoming and past database desupport dates, database support policies as they apply to E-Business Suite environments, and general Q&A

    E-Business Suite Technology Stack Special Interest Group
    Sunday, April 18, 3:00 PM, Breakers F
    SIG Leaders: Elke Phelps, Paul Jackson, Humana
    Guest Speaker: Steven Chan
    Covering the latest EBS technology stack certifications, roadmap, desupport notices, upgrade options for Discoverer, OID, SSO, Portal, and general Q&A

    E-Business Suite Applications Technology Roadmap & Vision
    Monday, April 19, 8:00 AM, South Seas G
    Oracle Speaker: Uma Prabhala
    Latest developments for SOA, AOL, OAF, Web ADI, SES, AMP, ACMP, security, and other technologies

    Oracle E-Business Suite Applications Strategy and General Manager Update
    Monday, April 19, 2:30 PM, Mandalay Bay Ballroom D
    Oracle Speaker: Cliff Godwin
    Update on the entire Oracle E-Business Suite product line. The session covers the value delivered by the current release of Oracle E-Business Suite applications, the momentum, and how Oracle E-Business Suite applications integrate into Oracle's overall applications strategy

    10 Things You Can Do Today to Prepare for the Next Generation Applications
    Tuesday, April 20, 8:00 AM, South Seas F
    Oracle Speaker: Nadia Bendjedou
    "Common sense" and "practical" steps that can be taken today to increase the value of your Oracle Applications (E-Business Suite, PeopleSoft, Siebel, and JDE) investments by using the latest Oracle solutions and technologies

    Reducing TCO using Oracle E-Business Suite Management Packs
    Tuesday, April 20, 10:30 AM, South Seas E
    Oracle Speaker: Angelo Rosado
    Learn how you can reduce the Total Cost of Ownership by implementing Application Management Pack (AMP) and Application Change Management Pack (ACP) for E-Business Suite 11i, R12, and R12.1. AMP is Oracle's next-generation system manageability product offering that provides a centralized platform to manage and maintain EBS. ACP is Oracle's offering to monitor and manage E-Business Suite changes in the areas of E-Business Suite customizations, patches, and functional setups.

    E-Business Suite Upgrade Special Interest Group
    Tuesday, April 20, 3:15 PM, South Seas E
    SIG Leaders: John Stouffer; Sandra Vucinic, Vlad Group
    Guest Speaker: Steven Chan
    Participating in general Q&A

    E-Business Suite Technology Essentials: Using the Latest Oracle Technologies with E-Business Suite
    Wednesday, April 21, 8:00 AM, South Seas H
    Oracle Speaker: Lisa Parekh
    Oracle continues to build new functionality into the Oracle Database, Fusion Middleware, and Enterprise Manager. Come see how you can enhance the value of E-Business Suite for your users and lower your costs of ownership by utilizing the latest features of these Oracle technologies with E-Business Suite. Learn about the latest advanced E-Business Suite topologies and features, including new options for security, performance, third-party integration, SOA, virtualization, clouds, systems management, and much more

    How to Leverage the New E-Business Suite R12.1 Solutions Without Upgrading your 11.5.10 Environment
    Wednesday, April 21, 10:30 AM, South Seas E
    Oracle Speaker: Nadia Bendjedou
    Learn how you can use the latest E-Business Suite 12.1 standalone solutions without upgrading from your E-Business Suite 11.5.10 environment

    Web 2.0 User Experience and Oracle Fusion Middleware Integration with Oracle E-Business Suite
    Wednesday, April 21, 4:00 PM, South Seas F
    Oracle Speaker: Padmaprabodh Ambale
    See the next-generation Oracle E-Business Suite OA Framework improvements that will provide new rich interactions in components such as LOV, Tables and Attachments. See new components like the Rich Container that allows any Web 2.0 content like Flash or OBIEE to be embedded in OA Framework pages.

    Advanced Technology Deployment Architectures for E-Business Suite
    Wednesday, April 21, 2:15 PM, South Seas E
    Oracle Speaker: Steven Chan
    Learn how to take advantage of the latest version of Oracle Fusion Middleware with Oracle E-Business Suite. Learn how to utilize identity management systems and LDAP directories. In addition, come to this session for answers about advanced network deployments involving reverse proxy servers, load balancers, and DMZs, and to see how you can benefit from virtualization and new system management capabilities.

    Upgrading to Oracle E-Business Suite 12.1 - Best Practices
    Thursday, April 22, 11:00 AM, South Seas E
    Oracle Speakers: Lester Gutierrez, Udayan Parvate
    Fundamentals of upgrading to Release 12.1, including the technology stack components and differences, the upgrade path from various releases of Oracle E-Business Suite, upgrade steps, monitoring the upgrade, hints and tips for minimizing downtime, and upgrade best practices for making the upgrade to Release 12.1 a success.

    We look forward to seeing you there!

    Read the article

  • To ORM or Not to ORM. That is the question…

    - by Patrick Liekhus
    UPDATE: Thanks for the feedback and comments. I have adjusted my table below with your recommendations; I had missed a point or two.

    I wanted to do a series on creating an entire project using the EDMX XAF code generation and the SpecFlow BDD Easy Test tools discussed in my earlier posts, but I thought it would be appropriate to start with a simple comparison and the reasoning behind why I chose these tools. Let's start by defining the term ORM, or Object-Relational Mapping. According to Wikipedia it is defined as follows: Object-relational mapping (ORM, O/RM, and O/R mapping) in computer software is a programming technique for converting data between incompatible type systems in object-oriented programming languages. This creates, in effect, a "virtual object database" that can be used from within the programming language. Why should you care? Basically, it allows you to map the business objects in your code to their persistence layer behind them. And better yet, why would you want to do this? Let me outline it in the following points: Development speed: no more mapping repetitive query results to object members by hand; once the map is created, the code is rendered for you. Persistence portability: the ORM knows how to emit the SQL syntax specific to the persistence engine you choose, whether that is SQL Server, Oracle or another database. Standard/boilerplate code is simplified: the basic CRUD operations are consistent and use database metadata for basic operations. So how does this help? Well, let's compare some of the ORM tools that I have used and/or researched. I have been interested in ORM for some time now. My ORM of choice for a long time was NHibernate, and I still believe it has a strong case in some business situations. However, you have to take business considerations into account, along with the law of diminishing returns. Because of these two factors, my recent activity and experience has been around DevExpress eXpress Persistent Objects (XPO). The primary reason for this is that DevExpress provides the eXpress Application Framework (XAF), which sits on top of XPO. With this added value, the data model can be created (either database first or code first) and the Web and Windows clients can be generated from these maps. While out of the box they provide some simple list and detail screens, you can very easily extend and modify these to your liking. DevExpress has done a tremendous job of providing enough framework while also staying out of the way when you need to extend it. This sounds worse than it really is. What I mean by this is that if you choose to follow the DevExpress coding style and recommendations, the hooks and extension points provided allow you to do some pretty heavy lifting while not worrying about the basics.

    I have put together a list of the top features that I have used to compare the limited list of ORMs that I have exposure to. Again, the biggest selling point in my opinion is that XPO is just as solid as any of the other ORMs, but with the added layer of XAF they become unstoppable. Couple that with the EDMX modeling tools and code generation, and it becomes a no-brainer.

    Designer features, and which of the compared tools support them (the tools being Entity Framework; NHibernate; Fluent w/ NHibernate; Telerik OpenAccess; DevExpress XPO; DevExpress XPO/XAF plus Liekhus Tools):
    - Uses XML to map relationships: NHibernate only
    - Visual class designer interface: Entity Framework; DevExpress XPO/XAF plus Liekhus Tools
    - Management integrated w/ Visual Studio: Entity Framework; Telerik OpenAccess; DevExpress XPO/XAF plus Liekhus Tools
    - Supports schema first approach: Entity Framework; Telerik OpenAccess; DevExpress XPO/XAF plus Liekhus Tools
    - Supports model first approach: Entity Framework; Telerik OpenAccess; DevExpress XPO; DevExpress XPO/XAF plus Liekhus Tools
    - Supports code first approach: all six tools
    - Attribute driven coding style: Entity Framework; Fluent w/ NHibernate; DevExpress XPO; DevExpress XPO/XAF plus Liekhus Tools

    I have a very small team and limited resources with a lot of responsibilities. In order to keep up with our customers, we must rely on tools like these. We use the EDMX tool so that we can create a visual representation of the applications with our customers. Second, we rely on the code generation so that we can focus on the business problems at hand and not on whether a field is mapped correctly. This keeps us from requiring as many junior-level developers on our team. I have also worked on multiple teams that believed in writing their own "framework". In my experience and opinion this is not the route to take unless you have a team dedicated to supporting just the framework. Each time I have worked on custom frameworks, the framework eventually becomes old, outdated and full of "performance" enhancements specific to one or two requirements. With an ORM, there are a lot of people smarter than me working on the bigger issues of persistence and performance. Again, my recommendation would be to use an available framework and get to work on your business domain problems. If your coding is not making money for you, why are you working on it? Do you really need to be writing query-result-to-object-member code again and again? Thanks
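    To make the "attribute driven coding style" row concrete, here is a small hypothetical sketch in C#; the attribute names are invented for illustration and are not the actual XPO, Entity Framework, or OpenAccess APIs:

        using System;

        // Invented mapping attributes, for illustration only (not a real ORM's API).
        [AttributeUsage(AttributeTargets.Class)]
        public class TableAttribute : Attribute
        {
            public TableAttribute(string name) => Name = name;
            public string Name { get; }
        }

        [AttributeUsage(AttributeTargets.Property)]
        public class ColumnAttribute : Attribute
        {
            public ColumnAttribute(string name) => Name = name;
            public string Name { get; }
        }

        // The mapping lives on the class itself instead of in a separate XML
        // file, which is the trade-off the feature list above is comparing.
        [Table("CUSTOMERS")]
        public class Customer
        {
            [Column("CUSTOMER_ID")] public int Id { get; set; }
            [Column("FULL_NAME")]   public string Name { get; set; }
        }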

    Read the article

  • SPARC M7 Chip - 32 cores - Mind Blowing performance

    - by Angelo-Oracle
    The M7 Chip. Oracle just announced its next-generation processor at the Hot Chips HC26 conference. As the tech lead in our Systems Division's partner group, I had a front-row seat to the extraordinary price-performance advantage of Oracle's current T5 and M6 based systems. Partner after partner tested these systems and was impressed with their performance. Just read some of the quotes to see what our partners have been saying about our hardware. We just announced our next-generation processor, the M7. It has 32 cores (up from 16 cores in the T5 and 12 cores in the M6). Built on 20 nm technology, this is our most advanced processor, with more cores than anything else in the industry today. Since the Sun acquisition, Oracle has released 5 processors in 4 years, and this is the 6th.

    The S4 Core. The M7 is built on the foundation of the S4 core, our next-generation core technology. Like its predecessor, the S4 has 8 dynamic threads. It increases the frequency while maintaining the pipeline depth. Each core has its own fine-grained power estimator that keeps the core within its power envelope at 250-nanosecond granularity. Each core also includes Software in Silicon features for application acceleration support, features to improve Application Data Integrity with almost no performance loss, and support for using part of the virtual address to store metadata. User-level synchronization instructions are also part of the S4 core. Each core has a 16 KB instruction and a 16 KB data L1 cache.

    The Core Clusters. The cores on the M7 chip are organized in sets of 4-core clusters that share L2 caches. All four cores in a cluster share 256 KB of 4-way set associative L2 instruction cache, with over 1/2 TB/s of throughput. Each pair of cores shares 256 KB of 8-way set associative L2 data cache, again with over 1/2 TB/s of throughput. With this innovative core cluster architecture, the M7 doubles core execution bandwidth to maximize per-thread performance.

    The Chip. Each M7 chip has 8 of these core clusters. The chip has 64 MB of on-chip L3 cache, shared among all the cores and partitioned into 8 x 8 MB chunks, each an 8-way set associative cache. The aggregate bandwidth of the L3 cache is over 1.6 TB/s. Each chip has 4 DDR4 memory controllers and can support up to 16 DDR4 DIMMs, allowing for 2 TB of RAM per chip. The chip also includes 4 internal links of PCIe Gen3 I/O controllers. Each chip has 7 coherence links, allowing 8 of these chips to be connected together gluelessly; 32 of these chips can also be connected in an SMP configuration. A potential system with 32 chips would have 1024 cores, 8192 threads and 64 TB of RAM.

    Software in Silicon. The M7 chip has many application accelerators built into silicon. These features will be exposed to our software partners through the SPARC Accelerator Program. The M7 has built-in logic to decompress data at the speed of memory access, which means applications can work directly on compressed data in memory, increasing effective data access rates. The VA Masking feature allows part of the virtual address to be used to store metadata.

    Realtime Application Data Integrity. The Realtime Application Data Integrity feature helps applications guard against invalid or stale memory references and buffer overflows. The first 4 bits of a pointer can be used to store a version number, and this version number is also maintained in the memory and cache lines. When a pointer accesses memory, the hardware checks that the two versions match; a SEGV signal is raised on a mismatch. This feature can be used by the database, applications and the OS.

    M7 Database In-Memory Query Accelerator. The M7 chip also includes in-silicon query engines that accelerate tasks operating on in-memory columnar vectors. The Oracle In-Memory option stores data in column format, and the M7 query engine can speed up in-memory format conversion, value and range comparisons, and set membership lookups. The engine can work on compressed data, so it not only accelerates query performance but also increases the effective memory bandwidth for queries.

    SPARC Accelerated Program. At the Hot Chips conference we also introduced the SPARC Accelerated Program to give our partners and third-party developers access to all the goodness of the M7's SPARC application acceleration features. Please get in touch with us if you are interested in knowing more about this program.
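    As a software analogy for the version-matching rule described above (on the M7 the check happens in hardware on every access; all names below are invented), consider a heap whose handles carry a 4-bit version tag:

        using System;
        using System.Collections.Generic;

        // Toy analogy of ADI: a handle carries a 4-bit version in its high bits,
        // and each allocation remembers the version it was created with.
        public class VersionedHeap
        {
            private readonly List<(int Version, byte[] Data)> _slots = new List<(int, byte[])>();

            public long Alloc(int size, int version)
            {
                _slots.Add((version & 0xF, new byte[size]));
                // pack: [4-bit version | slot index]
                return ((long)(version & 0xF) << 60) | (uint)(_slots.Count - 1);
            }

            public byte[] Deref(long handle)
            {
                int version = (int)((handle >> 60) & 0xF);
                int index = (int)(handle & 0xFFFFFFFF);
                if (_slots[index].Version != version)      // mismatch => SEGV on M7
                    throw new AccessViolationException("version mismatch (stale or forged pointer)");
                return _slots[index].Data;
            }
        }

        public static class AdiDemo
        {
            public static void Main()
            {
                var heap = new VersionedHeap();
                long p = heap.Alloc(64, version: 3);
                heap.Deref(p);                                   // ok: versions match
                long stale = ((long)4 << 60) | (p & 0xFFFFFFFF); // wrong version tag
                try { heap.Deref(stale); }
                catch (AccessViolationException e) { Console.WriteLine(e.Message); }
            }
        }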

    Read the article

  • Nginx and client certificates from hierarchical OpenSSL-based certification authorities

    - by Fmy Oen
    I'm trying to set up root certification authority, subordinate certification authority and to generate the client certificates signed by any of this CA that nginx 0.7.67 on Debian Squeeze will accept. My problem is that root CA signed client certificate works fine while subordinate CA signed one results in "400 Bad Request. The SSL certificate error". Step 1: nginx virtual host configuration: server { server_name test.local; access_log /var/log/nginx/test.access.log; listen 443 default ssl; keepalive_timeout 70; ssl_protocols SSLv3 TLSv1; ssl_ciphers AES128-SHA:AES256-SHA:RC4-SHA:DES-CBC3-SHA:RC4-MD5; ssl_certificate /etc/nginx/ssl/server.crt; ssl_certificate_key /etc/nginx/ssl/server.key; ssl_client_certificate /etc/nginx/ssl/client.pem; ssl_verify_client on; ssl_session_cache shared:SSL:10m; ssl_session_timeout 5m; location / { proxy_pass http://testsite.local/; } } Step 2: PKI infrastructure organization for both root and subordinate CA (based on this article): # mkdir ~/pki && cd ~/pki # mkdir rootCA subCA # cp -v /etc/ssl/openssl.cnf rootCA/ # cd rootCA/ # mkdir certs private crl newcerts; touch serial; echo 01 > serial; touch index.txt; touch crlnumber; echo 01 > crlnumber # cp -Rvp * ../subCA/ Almost no changes was made to rootCA/openssl.cnf: [ CA_default ] dir = . # Where everything is kept ... certificate = $dir/certs/rootca.crt # The CA certificate ... private_key = $dir/private/rootca.key # The private key and to subCA/openssl.cnf: [ CA_default ] dir = . # Where everything is kept ... certificate = $dir/certs/subca.crt # The CA certificate ... private_key = $dir/private/subca.key # The private key Step 3: Self-signed root CA certificate generation: # openssl genrsa -out ./private/rootca.key -des3 2048 # openssl req -x509 -new -key ./private/rootca.key -out certs/rootca.crt -config openssl.cnf Enter pass phrase for ./private/rootca.key: You are about to be asked to enter information that will be incorporated into your certificate request. What you are about to enter is what is called a Distinguished Name or a DN. There are quite a few fields but you can leave some blank For some fields there will be a default value, If you enter '.', the field will be left blank. ----- Country Name (2 letter code) [AU]: State or Province Name (full name) [Some-State]: Locality Name (eg, city) []: Organization Name (eg, company) [Internet Widgits Pty Ltd]: Organizational Unit Name (eg, section) []: Common Name (eg, YOUR name) []:rootca Email Address []: Step 4: Subordinate CA certificate generation: # cd ../subCA # openssl genrsa -out ./private/subca.key -des3 2048 # openssl req -new -key ./private/subca.key -out subca.csr -config openssl.cnf Enter pass phrase for ./private/subca.key: You are about to be asked to enter information that will be incorporated into your certificate request. What you are about to enter is what is called a Distinguished Name or a DN. There are quite a few fields but you can leave some blank For some fields there will be a default value, If you enter '.', the field will be left blank. 
----- Country Name (2 letter code) [AU]: State or Province Name (full name) [Some-State]: Locality Name (eg, city) []: Organization Name (eg, company) [Internet Widgits Pty Ltd]: Organizational Unit Name (eg, section) []: Common Name (eg, YOUR name) []:subca Email Address []: Please enter the following 'extra' attributes to be sent with your certificate request A challenge password []: An optional company name []: Step 5: Subordinate CA certificate signing by root CA certificate: # cd ../rootCA/ # openssl ca -in ../subCA/subca.csr -extensions v3_ca -config openssl.cnf Using configuration from openssl.cnf Enter pass phrase for ./private/rootca.key: Check that the request matches the signature Signature ok Certificate Details: Serial Number: 1 (0x1) Validity Not Before: Feb 4 10:49:43 2013 GMT Not After : Feb 4 10:49:43 2014 GMT Subject: countryName = AU stateOrProvinceName = Some-State organizationName = Internet Widgits Pty Ltd commonName = subca X509v3 extensions: X509v3 Subject Key Identifier: C9:E2:AC:31:53:81:86:3F:CD:F8:3D:47:10:FC:E5:8E:C2:DA:A9:20 X509v3 Authority Key Identifier: keyid:E9:50:E6:BF:57:03:EA:6E:8F:21:23:86:BB:44:3D:9F:8F:4A:8B:F2 DirName:/C=AU/ST=Some-State/O=Internet Widgits Pty Ltd/CN=rootca serial:9F:FB:56:66:8D:D3:8F:11 X509v3 Basic Constraints: CA:TRUE Certificate is to be certified until Feb 4 10:49:43 2014 GMT (365 days) Sign the certificate? [y/n]:y 1 out of 1 certificate requests certified, commit? [y/n]y ... # cd ../subCA/ # cp -v ../rootCA/newcerts/01.pem certs/subca.crt Step 6: Server certificate generation and signing by root CA (for nginx virtual host): # cd ../rootCA # openssl genrsa -out ./private/server.key -des3 2048 # openssl req -new -key ./private/server.key -out server.csr -config openssl.cnf Enter pass phrase for ./private/server.key: You are about to be asked to enter information that will be incorporated into your certificate request. What you are about to enter is what is called a Distinguished Name or a DN. There are quite a few fields but you can leave some blank For some fields there will be a default value, If you enter '.', the field will be left blank. ----- Country Name (2 letter code) [AU]: State or Province Name (full name) [Some-State]: Locality Name (eg, city) []: Organization Name (eg, company) [Internet Widgits Pty Ltd]: Organizational Unit Name (eg, section) []: Common Name (eg, YOUR name) []:test.local Email Address []: Please enter the following 'extra' attributes to be sent with your certificate request A challenge password []: An optional company name []: # openssl ca -in server.csr -out certs/server.crt -config openssl.cnf Step 7: Client #1 certificate generation and signing by root CA: # openssl genrsa -out ./private/client1.key -des3 2048 # openssl req -new -key ./private/client1.key -out client1.csr -config openssl.cnf Enter pass phrase for ./private/client1.key: You are about to be asked to enter information that will be incorporated into your certificate request. What you are about to enter is what is called a Distinguished Name or a DN. There are quite a few fields but you can leave some blank For some fields there will be a default value, If you enter '.', the field will be left blank. 
-----
Country Name (2 letter code) [AU]:
State or Province Name (full name) [Some-State]:
Locality Name (eg, city) []:
Organization Name (eg, company) [Internet Widgits Pty Ltd]:
Organizational Unit Name (eg, section) []:
Common Name (eg, YOUR name) []:Client #1
Email Address []:

Please enter the following 'extra' attributes to be sent with your certificate request
A challenge password []:
An optional company name []:

# openssl ca -in client1.csr -out certs/client1.crt -config openssl.cnf

Step 8: Converting the Client #1 certificate to PKCS#12 format:

# openssl pkcs12 -export -out certs/client1.p12 -inkey private/client1.key -in certs/client1.crt -certfile certs/rootca.crt

Step 9: Client #2 certificate generation and signing by the subordinate CA:

# cd ../subCA/
# openssl genrsa -out ./private/client2.key -des3 2048
# openssl req -new -key ./private/client2.key -out client2.csr -config openssl.cnf
Enter pass phrase for ./private/client2.key:
You are about to be asked to enter information that will be incorporated into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank.
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [AU]:
State or Province Name (full name) [Some-State]:
Locality Name (eg, city) []:
Organization Name (eg, company) [Internet Widgits Pty Ltd]:
Organizational Unit Name (eg, section) []:
Common Name (eg, YOUR name) []:Client #2
Email Address []:

Please enter the following 'extra' attributes to be sent with your certificate request
A challenge password []:
An optional company name []:

# openssl ca -in client2.csr -out certs/client2.crt -config openssl.cnf

Step 10: Converting the Client #2 certificate to PKCS#12 format:

# openssl pkcs12 -export -out certs/client2.p12 -inkey private/client2.key -in certs/client2.crt -certfile certs/subca.crt

Step 11: Passing the server certificate and private key to nginx (performed with OS superuser privileges):

# cd ../rootCA/
# cp -v certs/server.crt /etc/nginx/ssl/
# cp -v private/server.key /etc/nginx/ssl/

Step 12: Passing the root and subordinate CA certificates to nginx (performed with OS superuser privileges):

# cat certs/rootca.crt > /etc/nginx/ssl/client.pem
# cat ../subCA/certs/subca.crt >> /etc/nginx/ssl/client.pem

The client.pem file looks like this:

# cat /etc/nginx/ssl/client.pem
-----BEGIN CERTIFICATE-----
MIID6TCCAtGgAwIBAgIJAJ/7VmaN048RMA0GCSqGSIb3DQEBBQUAMFYxCzAJBgNV
BAYTAkFVMRMwEQYDVQQIEwpTb21lLVN0YXRlMSEwHwYDVQQKExhJbnRlcm5ldCBX
aWRnaXRzIFB0eSBMdGQxDzANBgNVBAMTBnJvb3RjYTAeFw0xMzAyMDQxMDM1NTda
...
-----END CERTIFICATE-----
Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number: 1 (0x1)
...
-----BEGIN CERTIFICATE-----
MIID4DCCAsigAwIBAgIBATANBgkqhkiG9w0BAQUFADBWMQswCQYDVQQGEwJBVTET
MBEGA1UECBMKU29tZS1TdGF0ZTEhMB8GA1UEChMYSW50ZXJuZXQgV2lkZ2l0cyBQ
dHkgTHRkMQ8wDQYDVQQDEwZyb290Y2EwHhcNMTMwMjA0MTA0OTQzWhcNMTQwMjA0
...
-----END CERTIFICATE-----

It looks like everything is working fine:

# service nginx reload
Reloading nginx configuration: Enter PEM pass phrase:
nginx.
#

Step 13: Installing the *.p12 certificates in the browser (Firefox in my case) gives the problem I've mentioned above: Client #1 = 200 OK, Client #2 = 400 Bad Request / The SSL certificate error. Any ideas what I should do?
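One additional sanity check, added here as an editorial suggestion rather than something from the original question: each client certificate can be verified against the chain with openssl before involving nginx at all. A minimal sketch, assuming it is run from the ~/pki directory created in Step 2:

# Client #1 was signed directly by the root CA, so the root alone should suffice:
openssl verify -CAfile rootCA/certs/rootca.crt rootCA/certs/client1.crt

# Client #2 needs the subordinate CA supplied as an untrusted intermediate:
openssl verify -CAfile rootCA/certs/rootca.crt -untrusted subCA/certs/subca.crt subCA/certs/client2.crt

If the second command fails, the chain itself is broken; if it succeeds, the problem lies in how nginx is told to build and verify the chain.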
Update 1: Results of SSL connection test attempts:

# openssl s_client -connect test.local:443 -CAfile ~/pki/rootCA/certs/rootca.crt -cert ~/pki/rootCA/certs/client1.crt -key ~/pki/rootCA/private/client1.key -showcerts
Enter pass phrase for tmp/testcert/client1.key:
CONNECTED(00000003)
depth=1 C = AU, ST = Some-State, O = Internet Widgits Pty Ltd, CN = rootca
verify return:1
depth=0 C = AU, ST = Some-State, O = Internet Widgits Pty Ltd, CN = test.local
verify return:1
---
Certificate chain
 0 s:/C=AU/ST=Some-State/O=Internet Widgits Pty Ltd/CN=test.local
   i:/C=AU/ST=Some-State/O=Internet Widgits Pty Ltd/CN=rootca
-----BEGIN CERTIFICATE-----
MIIDpjCCAo6gAwIBAgIBAjANBgkqhkiG9w0BAQUFADBWMQswCQYDVQQGEwJBVTET
MBEGA1UECBMKU29tZS1TdGF0ZTEhMB8GA1UEChMYSW50ZXJuZXQgV2lkZ2l0cyBQ
dHkgTHRkMQ8wDQYDVQQDEwZyb290Y2EwHhcNMTMwMjA0MTEwNjAzWhcNMTQwMjA0
...
-----END CERTIFICATE-----
 1 s:/C=AU/ST=Some-State/O=Internet Widgits Pty Ltd/CN=rootca
   i:/C=AU/ST=Some-State/O=Internet Widgits Pty Ltd/CN=rootca
-----BEGIN CERTIFICATE-----
MIID6TCCAtGgAwIBAgIJAJ/7VmaN048RMA0GCSqGSIb3DQEBBQUAMFYxCzAJBgNV
BAYTAkFVMRMwEQYDVQQIEwpTb21lLVN0YXRlMSEwHwYDVQQKExhJbnRlcm5ldCBX
aWRnaXRzIFB0eSBMdGQxDzANBgNVBAMTBnJvb3RjYTAeFw0xMzAyMDQxMDM1NTda
...
-----END CERTIFICATE-----
---
Server certificate
subject=/C=AU/ST=Some-State/O=Internet Widgits Pty Ltd/CN=test.local
issuer=/C=AU/ST=Some-State/O=Internet Widgits Pty Ltd/CN=rootca
---
Acceptable client certificate CA names
/C=AU/ST=Some-State/O=Internet Widgits Pty Ltd/CN=rootca
/C=AU/ST=Some-State/O=Internet Widgits Pty Ltd/CN=subca
---
SSL handshake has read 3395 bytes and written 2779 bytes
---
New, TLSv1/SSLv3, Cipher is AES256-SHA
Server public key is 2048 bit
Secure Renegotiation IS supported
Compression: zlib compression
Expansion: zlib compression
SSL-Session:
    Protocol  : TLSv1
    Cipher    : AES256-SHA
    Session-ID: 15BFC2029691262542FAE95A48078305E76EEE7D586400F8C4F7C516B0F9D967
    Session-ID-ctx:
    Master-Key: 23246CF166E8F3900793F0A2561879E5DB07291F32E99591BA1CF53E6229491FEAE6858BFC9AACAF271D9C3706F139C7
    Key-Arg   : None
    PSK identity: None
    PSK identity hint: None
    SRP username: None
    TLS session ticket:
    0000 - c2 5e 1d d2 b5 6d 40 23-b2 40 89 e4 35 75 70 07   .^...m@#[email protected].
    0010 - 1b bb 2b e6 e0 b5 ab 10-10 bf 46 6e aa 67 7f 58   ..+.......Fn.g.X
    0020 - cf 0e 65 a4 67 5a 15 ba-aa 93 4e dd 3d 6e 73 4c   ..e.gZ....N.=nsL
    0030 - c5 56 f6 06 24 0f 48 e6-38 36 de f1 b5 31 c5 86   .V..$.H.86...1..
    ...
    0440 - 4c 53 39 e3 92 84 d2 d0-e5 e2 f5 8a 6a a8 86 b1   LS9.........j...

    Compression: 1 (zlib compression)
    Start Time: 1359989684
    Timeout   : 300 (sec)
    Verify return code: 0 (ok)
---

Everything seems fine with Client #2 and the root CA certificate as well, but the request still returns a 400 Bad Request error:

# openssl s_client -connect test.local:443 -CAfile ~/pki/rootCA/certs/rootca.crt -cert ~/pki/subCA/certs/client2.crt -key ~/pki/subCA/private/client2.key -showcerts
Enter pass phrase for tmp/testcert/client2.key:
CONNECTED(00000003)
depth=1 C = AU, ST = Some-State, O = Internet Widgits Pty Ltd, CN = rootca
verify return:1
depth=0 C = AU, ST = Some-State, O = Internet Widgits Pty Ltd, CN = test.local
verify return:1
...
Compression: 1 (zlib compression)
Start Time: 1359989989
Timeout   : 300 (sec)
Verify return code: 0 (ok)
---
GET / HTTP/1.0

HTTP/1.1 400 Bad Request
Server: nginx/0.7.67
Date: Mon, 04 Feb 2013 15:00:43 GMT
Content-Type: text/html
Content-Length: 231
Connection: close

<html>
<head><title>400 The SSL certificate error</title></head>
<body bgcolor="white">
<center><h1>400 Bad Request</h1></center>
<center>The SSL certificate error</center>
<hr><center>nginx/0.7.67</center>
</body>
</html>
closed

Verification fails with the Client #2 certificate and the subordinate CA certificate:

# openssl s_client -connect test.local:443 -CAfile ~/pki/subCA/certs/subca.crt -cert ~/pki/subCA/certs/client2.crt -key ~/pki/subCA/private/client2.key -showcerts
Enter pass phrase for tmp/testcert/client2.key:
CONNECTED(00000003)
depth=1 C = AU, ST = Some-State, O = Internet Widgits Pty Ltd, CN = rootca
verify error:num=19:self signed certificate in certificate chain
verify return:0
...
Compression: 1 (zlib compression)
Start Time: 1359990354
Timeout   : 300 (sec)
Verify return code: 19 (self signed certificate in certificate chain)
---
GET / HTTP/1.0

HTTP/1.1 400 Bad Request
...

I still get the 400 Bad Request error with concatenated CA certificates and Client #2 (while everything is still OK with Client #1):

# cat certs/rootca.crt ../subCA/certs/subca.crt > certs/concatenatedca.crt
# openssl s_client -connect test.local:443 -CAfile ~/pki/rootCA/certs/concatenatedca.crt -cert ~/pki/subCA/certs/client2.crt -key ~/pki/subCA/private/client2.key -showcerts
Enter pass phrase for tmp/testcert/client2.key:
CONNECTED(00000003)
depth=1 C = AU, ST = Some-State, O = Internet Widgits Pty Ltd, CN = rootca
verify return:1
depth=0 C = AU, ST = Some-State, O = Internet Widgits Pty Ltd, CN = test.local
verify return:1
---
...
Compression: 1 (zlib compression)
Start Time: 1359990772
Timeout   : 300 (sec)
Verify return code: 0 (ok)
---
GET / HTTP/1.0

HTTP/1.1 400 Bad Request
...

Update 2: I've managed to recompile nginx with debugging enabled.
Here is the relevant part of the debug trace for a successful connection by Client #1:

2013/02/05 14:08:23 [debug] 38701#0: *119 accept: <MY IP ADDRESS> fd:3
2013/02/05 14:08:23 [debug] 38701#0: *119 event timer add: 3: 60000:2856497512
2013/02/05 14:08:23 [debug] 38701#0: *119 kevent set event: 3: ft:-1 fl:0025
2013/02/05 14:08:23 [debug] 38701#0: *119 malloc: 28805200:660
2013/02/05 14:08:23 [debug] 38701#0: *119 malloc: 28834400:1024
2013/02/05 14:08:23 [debug] 38701#0: *119 posix_memalign: 28860000:4096 @16
2013/02/05 14:08:23 [debug] 38701#0: *119 http check ssl handshake
2013/02/05 14:08:23 [debug] 38701#0: *119 https ssl handshake: 0x16
2013/02/05 14:08:23 [debug] 38701#0: *119 SSL server name: "test.local"
2013/02/05 14:08:23 [debug] 38701#0: *119 SSL_do_handshake: -1
2013/02/05 14:08:23 [debug] 38701#0: *119 SSL_get_error: 2
2013/02/05 14:08:23 [debug] 38701#0: *119 SSL handshake handler: 0
2013/02/05 14:08:23 [debug] 38701#0: *119 verify:1, error:0, depth:1, subject:"/C=AU/ST=Some-State/O=Internet Widgits Pty Ltd/CN=rootca", issuer:"/C=AU/ST=Some-State/O=Internet Widgits Pty Ltd/CN=rootca"
2013/02/05 14:08:23 [debug] 38701#0: *119 verify:1, error:0, depth:0, subject:"/C=AU/ST=Some-State/O=Internet Widgits Pty Ltd/CN=Client #1", issuer:"/C=AU/ST=Some-State/O=Internet Widgits Pty Ltd/CN=rootca"
2013/02/05 14:08:23 [debug] 38701#0: *119 SSL_do_handshake: 1
2013/02/05 14:08:23 [debug] 38701#0: *119 SSL: TLSv1, cipher: "AES256-SHA SSLv3 Kx=RSA Au=RSA Enc=AES(256) Mac=SHA1"
2013/02/05 14:08:23 [debug] 38701#0: *119 http process request line
2013/02/05 14:08:23 [debug] 38701#0: *119 SSL_read: -1
2013/02/05 14:08:23 [debug] 38701#0: *119 SSL_get_error: 2
2013/02/05 14:08:23 [debug] 38701#0: *119 http process request line
2013/02/05 14:08:23 [debug] 38701#0: *119 SSL_read: 1
2013/02/05 14:08:23 [debug] 38701#0: *119 SSL_read: 524
2013/02/05 14:08:23 [debug] 38701#0: *119 SSL_read: -1
2013/02/05 14:08:23 [debug] 38701#0: *119 SSL_get_error: 2
2013/02/05 14:08:23 [debug] 38701#0: *119 http request line: "GET / HTTP/1.1"

And here is the corresponding part for the failed connection by Client #2:

2013/02/05 13:51:34 [debug] 38701#0: *112 accept: <MY_IP_ADDRESS> fd:3
2013/02/05 13:51:34 [debug] 38701#0: *112 event timer add: 3: 60000:2855488975
2013/02/05 13:51:34 [debug] 38701#0: *112 kevent set event: 3: ft:-1 fl:0025
2013/02/05 13:51:34 [debug] 38701#0: *112 malloc: 28805200:660
2013/02/05 13:51:34 [debug] 38701#0: *112 malloc: 28834400:1024
2013/02/05 13:51:34 [debug] 38701#0: *112 posix_memalign: 28860000:4096 @16
2013/02/05 13:51:34 [debug] 38701#0: *112 http check ssl handshake
2013/02/05 13:51:34 [debug] 38701#0: *112 https ssl handshake: 0x16
2013/02/05 13:51:34 [debug] 38701#0: *112 SSL server name: "test.local"
2013/02/05 13:51:34 [debug] 38701#0: *112 SSL_do_handshake: -1
2013/02/05 13:51:34 [debug] 38701#0: *112 SSL_get_error: 2
2013/02/05 13:51:34 [debug] 38701#0: *112 SSL handshake handler: 0
2013/02/05 13:51:34 [debug] 38701#0: *112 SSL_do_handshake: -1
2013/02/05 13:51:34 [debug] 38701#0: *112 SSL_get_error: 2
2013/02/05 13:51:34 [debug] 38701#0: *112 SSL handshake handler: 0
2013/02/05 13:51:34 [debug] 38701#0: *112 verify:0, error:20, depth:1, subject:"/C=AU/ST=Some-State/O=Internet Widgits Pty Ltd/CN=subca", issuer:"/C=AU/ST=Some-State/O=Internet Widgits Pty Ltd/CN=rootca"
2013/02/05 13:51:34 [debug] 38701#0: *112 verify:0, error:27, depth:1, subject:"/C=AU/ST=Some-State/O=Internet Widgits Pty Ltd/CN=subca", issuer:"/C=AU/ST=Some-State/O=Internet Widgits Pty Ltd/CN=rootca"
2013/02/05 13:51:34 [debug] 38701#0: *112 verify:1, error:27, depth:0, subject:"/C=AU/ST=Some-State/O=Internet Widgits Pty Ltd/CN=Client #2", issuer:"/C=AU/ST=Some-State/O=Internet Widgits Pty Ltd/CN=subca"
2013/02/05 13:51:34 [debug] 38701#0: *112 SSL_do_handshake: 1
2013/02/05 13:51:34 [debug] 38701#0: *112 SSL: TLSv1, cipher: "AES256-SHA SSLv3 Kx=RSA Au=RSA Enc=AES(256) Mac=SHA1"
2013/02/05 13:51:34 [debug] 38701#0: *112 http process request line
2013/02/05 13:51:34 [debug] 38701#0: *112 SSL_read: 1
2013/02/05 13:51:34 [debug] 38701#0: *112 SSL_read: 524
2013/02/05 13:51:34 [debug] 38701#0: *112 SSL_read: -1
2013/02/05 13:51:34 [debug] 38701#0: *112 SSL_get_error: 2
2013/02/05 13:51:34 [debug] 38701#0: *112 http request line: "GET / HTTP/1.1"

So I'm getting OpenSSL error #20 and then #27. According to the verify documentation:

20 X509_V_ERR_UNABLE_TO_GET_ISSUER_CERT_LOCALLY: unable to get local issuer certificate
   The issuer certificate could not be found: this occurs if the issuer certificate of an untrusted certificate cannot be found.

27 X509_V_ERR_CERT_UNTRUSTED: certificate not trusted
   The root CA is not marked as trusted for the specified purpose.
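A hedged note on a likely culprit, added editorially rather than established by the question itself: nginx's ssl_verify_depth directive defaults to 1, which is too shallow for a chain of client certificate, subordinate CA, and root CA, and a too-small depth produces exactly this error 20 at depth 1. A minimal sketch of the change to try in the server block from Step 1:

    ssl_client_certificate /etc/nginx/ssl/client.pem;
    ssl_verify_client      on;
    ssl_verify_depth       2;    # default is 1; a leaf signed by subca needs depth 2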


  • Generating EF Code First model classes from an existing database

    - by Jon Galloway
Entity Framework Code First is a lightweight way to "turn on" data access for a simple CLR class. As the name implies, the intended use is that you're writing the code first and thinking about the database later. However, I really like how Entity Framework Code First works, and I want to use it in existing projects and projects with pre-existing databases. For example, MVC Music Store comes with a SQL Express database that's pre-loaded with a catalog of music (including genres, artists, and songs), and while it may eventually make sense to load that seed data from a different source, for the MVC 3 release we wanted to keep using the existing database. While I'm not getting the full benefit of Code First - writing code which drives the database schema - I can still benefit from the simplicity of the lightweight code approach.

Scott Guthrie blogged about how to use Entity Framework with an existing database, looking at how you can override the Entity Framework Code First conventions so that it can work with a database which was created following other conventions. That gives you the information you need to create the model classes manually. However, it turns out that with Entity Framework 4 CTP 5, there's a way to generate the model classes from the database schema. Once the grunt work is done, of course, you can go in and modify the model classes as you'd like, but you can save the time and frustration of figuring out things like mapping SQL database types to .NET types. Note that this template requires Entity Framework 4 CTP 5 or later. You can install EF 4 CTP 5 here.

Step One: Generate an EF Model from your existing database

The code generation system in Entity Framework works from a model. You can add a model to your existing project and delete it when you're done, but I think it's simpler to just spin up a separate project to generate the model classes. When you're done, you can delete the project without affecting your application, or you may choose to keep it around in case you have other database schema updates which require model changes.

I chose to add the Model classes to the Models folder of a new MVC 3 application. Right-click the folder and select "Add / New Item..."

Next, select ADO.NET Entity Data Model from the Data Templates list, and name it whatever you want (the name is unimportant).

Next, select "Generate from database." This is important - it's what kicks off the next few steps, which read your database's schema.

Now it's time to point the Entity Data Model Wizard at your existing database. I'll assume you know how to find your database - if not, I covered that a bit in the MVC Music Store tutorial section on Models and Data. Select your database, uncheck the "Save entity connection settings in Web.config" option (since we won't be using them within the application), and click Next.

Now you can select the database objects you'd like modeled. I just selected all tables and clicked Finish.

And there's your model. If you want, you can make additional changes here before going on to generate the code.

Step Two: Add the DbContext Generator

Like most code generation systems in Visual Studio lately, Entity Framework uses T4 templates which allow for some control over how the code is generated. K. Scott Allen wrote a detailed article on T4 Templates and the Entity Framework on MSDN recently, if you'd like to know more. Fortunately for us, there's already a template that does just what we need without any customization.
Right-click a blank space in the Entity Framework model surface and select "Add Code Generation Item..." Select the Code group in the Installed Templates section and pick the ADO.NET DbContext Generator. If you don't see this listed, make sure you've got EF 4 CTP 5 installed and that you're looking at the Code templates group. Note that the DbContext Generator template is similar to the EF POCO template which came out last year, but with "fix up" code (unnecessary in EF Code First) removed.

As soon as you do this, you'll see two terrifying Security Warnings - unless you click the "Do not show this message again" checkbox the first time. They will also be displayed (twice) every time you rebuild the project, so I checked the box, and no immediate harm befell my computer (fingers crossed!).

Here's the payoff: two templates (filenames ending with .tt) have been added to the project, and they've generated the code I needed.

The "MusicStoreEntities.Context.tt" template built a DbContext class which holds the entity collections, and the "MusicStoreEntities.tt" template built a separate class for each table I selected earlier. We'll customize them in the next step. I recommend copying all the generated .cs files into your application at this point, since accidentally rebuilding the generation project will overwrite your changes if you leave them there.

Step Three: Modify and use your POCO entity classes

Note: I made a bunch of tweaks to my POCO classes after they were generated. You don't have to do any of this, but I think it's important that you can - they're your classes, and EF Code First respects that. Modify them as you need for your application, or don't.

The Context class derives from DbContext, which is what turns on the EF Code First features. It holds a DbSet for each entity. Think of DbSet as a simple List, but with Entity Framework features turned on.

//------------------------------------------------------------------------------
// <auto-generated>
//     This code was generated from a template.
//
//     Changes to this file may cause incorrect behavior and will be lost if
//     the code is regenerated.
// </auto-generated>
//------------------------------------------------------------------------------

namespace EF_CodeFirst_From_Existing_Database.Models
{
    using System;
    using System.Data.Entity;

    public partial class Entities : DbContext
    {
        public Entities()
            : base("name=Entities")
        {
        }

        public DbSet<Album> Albums { get; set; }
        public DbSet<Artist> Artists { get; set; }
        public DbSet<Cart> Carts { get; set; }
        public DbSet<Genre> Genres { get; set; }
        public DbSet<OrderDetail> OrderDetails { get; set; }
        public DbSet<Order> Orders { get; set; }
    }
}

It's a pretty lightweight class as generated, so I just took out the comments, set the namespace, removed the constructor, and formatted it a bit. Done. If I wanted, though, I could have added or removed DbSets, overridden conventions, etc.

using System.Data.Entity;

namespace MvcMusicStore.Models
{
    public class MusicStoreEntities : DbContext
    {
        public DbSet<Album> Albums { get; set; }
        public DbSet<Genre> Genres { get; set; }
        public DbSet<Artist> Artists { get; set; }
        public DbSet<Cart> Carts { get; set; }
        public DbSet<Order> Orders { get; set; }
        public DbSet<OrderDetail> OrderDetails { get; set; }
    }
}

Next, it's time to look at the individual classes. Some of mine were pretty simple - for the Cart class, I just needed to remove the header and clean up the namespace.

//------------------------------------------------------------------------------
// <auto-generated>
//     This code was generated from a template.
//
//     Changes to this file may cause incorrect behavior and will be lost if
//     the code is regenerated.
// </auto-generated>
//------------------------------------------------------------------------------

namespace EF_CodeFirst_From_Existing_Database.Models
{
    using System;
    using System.Collections.Generic;

    public partial class Cart
    {
        // Primitive properties
        public int RecordId { get; set; }
        public string CartId { get; set; }
        public int AlbumId { get; set; }
        public int Count { get; set; }
        public System.DateTime DateCreated { get; set; }

        // Navigation properties
        public virtual Album Album { get; set; }
    }
}

I did a bit more customization on the Album class. Here's what was generated:

//------------------------------------------------------------------------------
// <auto-generated>
//     This code was generated from a template.
//
//     Changes to this file may cause incorrect behavior and will be lost if
//     the code is regenerated.
// </auto-generated>
//------------------------------------------------------------------------------

namespace EF_CodeFirst_From_Existing_Database.Models
{
    using System;
    using System.Collections.Generic;

    public partial class Album
    {
        public Album()
        {
            this.Carts = new HashSet<Cart>();
            this.OrderDetails = new HashSet<OrderDetail>();
        }

        // Primitive properties
        public int AlbumId { get; set; }
        public int GenreId { get; set; }
        public int ArtistId { get; set; }
        public string Title { get; set; }
        public decimal Price { get; set; }
        public string AlbumArtUrl { get; set; }

        // Navigation properties
        public virtual Artist Artist { get; set; }
        public virtual Genre Genre { get; set; }
        public virtual ICollection<Cart> Carts { get; set; }
        public virtual ICollection<OrderDetail> OrderDetails { get; set; }
    }
}

I removed the header, changed the namespace, and removed some of the navigation properties. One nice thing about EF Code First is that you don't have to have a property for each database column or foreign key. In the Music Store sample, for instance, we build the app up using Code First and start with just a few columns, adding in fields and navigation properties as the application needs them. EF Code First handles the columns we've told it about and doesn't complain about the others. Here's the basic class:

using System.ComponentModel;
using System.ComponentModel.DataAnnotations;
using System.Web.Mvc;
using System.Collections.Generic;

namespace MvcMusicStore.Models
{
    public class Album
    {
        public int AlbumId { get; set; }
        public int GenreId { get; set; }
        public int ArtistId { get; set; }
        public string Title { get; set; }
        public decimal Price { get; set; }
        public string AlbumArtUrl { get; set; }
        public virtual Genre Genre { get; set; }
        public virtual Artist Artist { get; set; }
        public virtual List<OrderDetail> OrderDetails { get; set; }
    }
}

It's my class, not Entity Framework's, so I'm free to do what I want with it.
I added a bunch of MVC 3 annotations for scaffolding and validation support, as shown below:

using System.ComponentModel;
using System.ComponentModel.DataAnnotations;
using System.Web.Mvc;
using System.Collections.Generic;

namespace MvcMusicStore.Models
{
    [Bind(Exclude = "AlbumId")]
    public class Album
    {
        [ScaffoldColumn(false)]
        public int AlbumId { get; set; }

        [DisplayName("Genre")]
        public int GenreId { get; set; }

        [DisplayName("Artist")]
        public int ArtistId { get; set; }

        [Required(ErrorMessage = "An Album Title is required")]
        [StringLength(160)]
        public string Title { get; set; }

        [Required(ErrorMessage = "Price is required")]
        [Range(0.01, 100.00, ErrorMessage = "Price must be between 0.01 and 100.00")]
        public decimal Price { get; set; }

        [DisplayName("Album Art URL")]
        [StringLength(1024)]
        public string AlbumArtUrl { get; set; }

        public virtual Genre Genre { get; set; }
        public virtual Artist Artist { get; set; }
        public virtual List<OrderDetail> OrderDetails { get; set; }
    }
}

The end result was that I had working EF Code First model code for the finished application. You can follow along through the tutorial to see how I built up to the finished model classes, starting with simple 2-3 property classes and building up to the full working schema. Thanks to Diego Vega (on the Entity Framework team) for pointing me to the DbContext template.
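To round out the walkthrough (an editorial addition, not from the original post): once the context and entity classes are in place, using them is plain LINQ over the DbSets. A minimal usage sketch, assuming the MusicStoreEntities class above and a connection string named "MusicStoreEntities" in the configuration (the name-matching convention EF Code First uses by default):

using System;
using System.Linq;
using MvcMusicStore.Models;

class Program
{
    static void Main()
    {
        using (var db = new MusicStoreEntities())
        {
            // Query over the Albums DbSet; lazy loading via the virtual
            // navigation properties works when needed.
            var cheapAlbums = db.Albums
                                .Where(a => a.Price < 10m)
                                .OrderBy(a => a.Title)
                                .ToList();

            foreach (var album in cheapAlbums)
                Console.WriteLine("{0} ({1:C})", album.Title, album.Price);
        }
    }
}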


  • Creating ASP.NET MVC Negotiated Content Results

    - by Rick Strahl
In a recent ASP.NET MVC application I'm involved with, we had a late-in-the-process request to handle content negotiation: returning output based on the HTTP Accept header of the incoming HTTP request. This is standard behavior in ASP.NET Web API, but ASP.NET MVC doesn't support this functionality directly out of the box. Another reason this came up in discussion is last week's announcements of ASP.NET vNext, which seem to indicate that ASP.NET Web API is not going to be ported to the cloud version of vNext, but rather be replaced by a combined version of MVC and Web API. While it's not clear what new API features will show up in this new framework, it's pretty clear that the ASP.NET MVC style syntax will be the new standard for all of the new combined HTTP processing framework.

Why negotiated content?

Content negotiation is one of the key features of Web API even though it's such a relatively simple thing. But it's also something that's missing in MVC, and once you get used to automatically having your content returned based on Accept headers, it's hard to go back to manually having to create separate methods for different output types, as you've had to with Microsoft server technologies all along (yes, yes, I know other frameworks - including my own - have done this for years, but for in-the-box features this is relatively new from Web API).

As a quick review, Accept header content negotiation works off the request's HTTP Accept header:

POST http://localhost/mydailydosha/Editable/NegotiateContent HTTP/1.1
Content-Type: application/json
Accept: application/json
Host: localhost
Content-Length: 76
Pragma: no-cache

{ ElementId: "header", PageName: "TestPage", Text: "This is a nice header" }

If I make this request I would expect to get back a JSON result based on my application/json Accept header. To request XML I'd just change the Accept header:

Accept: text/xml

and now I'd expect the response to come back as XML. Now this only works with media types that the server can process. In my case here I need to handle JSON, XML, HTML (using Views) and plain text. HTML results might need more than just a data return - you also probably need to specify a View to render the data into, either by specifying the view explicitly or by using some sort of convention that can automatically locate a view to match. Today ASP.NET MVC doesn't support this sort of automatic content switching out of the box.

Unfortunately, in my application scenario we have an application that started out primarily with an AJAX backend that was implemented with JSON only. So there are lots of JSON results like this:

[Route("Customers")]
public ActionResult GetCustomers()
{
    return Json(repo.GetCustomers(), JsonRequestBehavior.AllowGet);
}

These work fine, but they are of course JSON specific. Then a couple of weeks ago, a requirement came in that an old desktop application needs to also consume this API, and it has to use XML to do it because there's no JSON parser available for it. Ooops - stuck with JSON in this case. While it would have been easy to add XML-specific methods, I figured it's easier to add basic content negotiation. And that's what I show in this post.

Missteps - IResultFilter, IActionFilter

My first attempt at this was to use IResultFilter or IActionFilter, which look like they would be ideal to modify result content after it's been generated, using OnResultExecuted() or OnActionExecuted().
Filters are great because they can look globally at all controller methods or at individual methods that are marked up with the filter's attribute. But it turns out these filters don't work for raw POCO result values from action methods. What we wanted to do for API calls is get back to using plain .NET types as results rather than result actions. That is, you write a method that doesn't return an ActionResult, but a standard .NET type, like this:

public Customer UpdateCustomer(Customer cust)
{
    // … do stuff to customer :-)
    return cust;
}

Unfortunately, both OnResultExecuted and OnActionExecuted receive an MVC ContentResult instance from the POCO object. MVC basically takes any non-ActionResult return value and turns it into a ContentResult by converting the value using .ToString(). Ugh. The ContentResult itself doesn't contain the original value, which is lost, AFAIK, with no way to retrieve it. So there's no way to access the raw customer object in the example above. Bummer.

Creating a NegotiatedResult

This leaves mucking around with custom ActionResults. ActionResults are MVC's standard way to return action method results - you basically specify that you would like to render your result in a specific format. Common ActionResults are ViewResults (ie. View(vn, model)), JsonResult, RedirectResult, etc. They work and are fairly effective, and they work fairly well for testing too, as this is the 'standard' interface to return results from actions. The problem with this is mainly that you're explicitly saying that you want a specific result output type. This works well for many things, but sometimes you do want your result to be negotiated.

My first crack at this solution is to create a simple ActionResult subclass that looks at the Accept header and, based on that, writes the output. I need to support JSON and XML content and HTML as well as text - so effectively four media types: application/json, text/xml, text/html and text/plain. Everything else is passed through as ContentResult - which effectively returns whatever .ToString() returns.

Here's what the NegotiatedResult usage looks like:

public ActionResult GetCustomers()
{
    return new NegotiatedResult(repo.GetCustomers());
}

public ActionResult GetCustomer(int id)
{
    return new NegotiatedResult("Show", repo.GetCustomer(id));
}

There are two overloads of this method - one that returns just the raw result value and a second version that accepts an optional view name. The second version returns the Razor view specified only if text/html is requested - otherwise the raw data is returned. This is useful in applications where you have an HTML front end that can also double as an API endpoint using the same model data you send to the View. For the application I mentioned above, this was another actual use case we needed to address, so this was a welcome side effect of creating a custom ActionResult.

There's also an extension method that directly attaches a Negotiated() method to the controller using the same syntax:

public ActionResult GetCustomers()
{
    return this.Negotiated(repo.GetCustomers());
}

public ActionResult GetCustomer(int id)
{
    return this.Negotiated("Show", repo.GetCustomer(id));
}

Using either of these mechanisms now allows you to return JSON, XML, HTML or plain text results depending on the Accept header sent. Send application/json and you get just the Customer JSON data. Ditto for text/xml and XML data.
Pass text/html for the Accept header and the "Show.cshtml" Razor view is rendered, passing the result model data and producing final HTML output. While this isn't as clean as passing just POCO objects back as I had intended originally, this approach fits better with how MVC action methods are intended to be used, and we get the bonus of being able to specify a View to render (optionally) for HTML.

How does it work

An ActionResult implementation is pretty straightforward. You inherit from ActionResult and implement the ExecuteResult method to send your output to the ASP.NET output stream. Custom ActionResults are an easy way to effectively do post-processing on ASP.NET MVC controller actions just before the content is sent to the output stream, assuming your specific action result was used. Here's the full code to the NegotiatedResult class (you can also check it out on GitHub):

/// <summary>
/// Returns a content negotiated result based on the Accept header.
/// Minimal implementation that works with JSON and XML content,
/// can also optionally return a view with HTML.
/// </summary>
/// <example>
/// // model data only
/// public ActionResult GetCustomers()
/// {
///     return new NegotiatedResult(repo.Customers.OrderBy( c=> c.Company) )
/// }
/// // optional view for HTML
/// public ActionResult GetCustomers()
/// {
///     return new NegotiatedResult("List", repo.Customers.OrderBy( c=> c.Company) )
/// }
/// </example>
public class NegotiatedResult : ActionResult
{
    /// <summary>
    /// Data stored to be 'serialized'. Public
    /// so it's potentially accessible in filters.
    /// </summary>
    public object Data { get; set; }

    /// <summary>
    /// Optional name of the HTML view to be rendered
    /// for HTML responses
    /// </summary>
    public string ViewName { get; set; }

    public static bool FormatOutput { get; set; }

    static NegotiatedResult()
    {
        FormatOutput = HttpContext.Current.IsDebuggingEnabled;
    }

    /// <summary>
    /// Pass in data to serialize
    /// </summary>
    /// <param name="data">Data to serialize</param>
    public NegotiatedResult(object data)
    {
        Data = data;
    }

    /// <summary>
    /// Pass in data and an optional view for HTML views
    /// </summary>
    /// <param name="data"></param>
    /// <param name="viewName"></param>
    public NegotiatedResult(string viewName, object data)
    {
        Data = data;
        ViewName = viewName;
    }

    public override void ExecuteResult(ControllerContext context)
    {
        if (context == null)
            throw new ArgumentNullException("context");

        HttpResponseBase response = context.HttpContext.Response;
        HttpRequestBase request = context.HttpContext.Request;

        // Look for specific content types
        if (request.AcceptTypes.Contains("text/html"))
        {
            response.ContentType = "text/html";

            if (!string.IsNullOrEmpty(ViewName))
            {
                var viewData = context.Controller.ViewData;
                viewData.Model = Data;

                var viewResult = new ViewResult
                {
                    ViewName = ViewName,
                    MasterName = null,
                    ViewData = viewData,
                    TempData = context.Controller.TempData,
                    ViewEngineCollection = ((Controller)context.Controller).ViewEngineCollection
                };
                viewResult.ExecuteResult(context.Controller.ControllerContext);
            }
            else
                response.Write(Data);
        }
        else if (request.AcceptTypes.Contains("text/plain"))
        {
            response.ContentType = "text/plain";
            response.Write(Data);
        }
        else if (request.AcceptTypes.Contains("application/json"))
        {
            using (JsonTextWriter writer = new JsonTextWriter(response.Output))
            {
                var settings = new JsonSerializerSettings();
                if (FormatOutput)
                    settings.Formatting = Newtonsoft.Json.Formatting.Indented;

                JsonSerializer serializer = JsonSerializer.Create(settings);
                serializer.Serialize(writer, Data);
                writer.Flush();
            }
        }
        else if (request.AcceptTypes.Contains("text/xml"))
        {
            response.ContentType = "text/xml";
            if (Data != null)
            {
                using (var writer = new XmlTextWriter(response.OutputStream, new UTF8Encoding()))
                {
                    if (FormatOutput)
                        writer.Formatting = System.Xml.Formatting.Indented;

                    XmlSerializer serializer = new XmlSerializer(Data.GetType());
                    serializer.Serialize(writer, Data);
                    writer.Flush();
                }
            }
        }
        else
        {
            // just write data as a plain string
            response.Write(Data);
        }
    }
}

/// <summary>
/// Extends Controller with a Negotiated() ActionResult that does
/// basic content negotiation based on the Accept header.
/// </summary>
public static class NegotiatedResultExtensions
{
    /// <summary>
    /// Return content-negotiated content of the data based on the Accept header.
    /// Supports:
    ///    application/json - using JSON.NET
    ///    text/xml   - Xml as XmlSerializer XML
    ///    text/html  - as text, or an optional View
    ///    text/plain - as text
    /// </summary>
    /// <param name="controller"></param>
    /// <param name="data">Data to return</param>
    /// <returns>serialized data</returns>
    /// <example>
    /// public ActionResult GetCustomers()
    /// {
    ///     return this.Negotiated( repo.Customers.OrderBy( c=> c.Company) )
    /// }
    /// </example>
    public static NegotiatedResult Negotiated(this Controller controller, object data)
    {
        return new NegotiatedResult(data);
    }

    /// <summary>
    /// Return content-negotiated content of the data based on the Accept header.
    /// Supports:
    ///    application/json - using JSON.NET
    ///    text/xml   - Xml as XmlSerializer XML
    ///    text/html  - as text, or an optional View
    ///    text/plain - as text
    /// </summary>
    /// <param name="controller"></param>
    /// <param name="viewName">Name of the View to use when Accept is text/html</param>
    /// <param name="data">Data to return</param>
    /// <returns>serialized data</returns>
    /// <example>
    /// public ActionResult GetCustomers()
    /// {
    ///     return this.Negotiated("List", repo.Customers.OrderBy( c=> c.Company) )
    /// }
    /// </example>
    public static NegotiatedResult Negotiated(this Controller controller, string viewName, object data)
    {
        return new NegotiatedResult(viewName, data);
    }
}

Output Generation - JSON and XML

Generating output for XML and JSON is simple - you use the desired serializer and off you go. Using XmlSerializer and JSON.NET, it's just a handful of lines each to generate serialized output directly into the HTTP output stream. Please note this implementation uses JSON.NET for its JSON generation rather than the default JavaScriptSerializer that MVC uses, which I feel is an additional bonus to implementing this custom action. I'd already been using a custom JsonNetResult class previously, but now this is just rolled into this custom ActionResult. Just keep in mind that JSON.NET outputs slightly different JSON for certain things like collections, for example, so behavior may change. One addition to this implementation might be a flag to allow switching the JSON serializer.

Html View Generation

HTML view generation actually turned out to be easier than anticipated. Initially I used my generic ASP.NET ViewRenderer class that can render MVC views from any ASP.NET application. However, it turns out that since we are executing inside of an active MVC request, there's an easier way: we can simply create a custom ViewResult, populate its members, and then execute it.
The code in the text/html handling branch that renders the view is simply this:

response.ContentType = "text/html";

if (!string.IsNullOrEmpty(ViewName))
{
    var viewData = context.Controller.ViewData;
    viewData.Model = Data;

    var viewResult = new ViewResult
    {
        ViewName = ViewName,
        MasterName = null,
        ViewData = viewData,
        TempData = context.Controller.TempData,
        ViewEngineCollection = ((Controller)context.Controller).ViewEngineCollection
    };
    viewResult.ExecuteResult(context.Controller.ControllerContext);
}
else
    response.Write(Data);

which is a neat and easy way to render a Razor view, assuming you have an active controller that's ready for rendering. Sweet - dependency removed, which makes this class self-contained without any external dependencies other than JSON.NET.

Summary

While this isn't exactly a new topic, it's the first time I've actually delved into this with MVC. I've been doing content negotiation with Web API and, prior to that, with my REST library; this is just the first time it's come up as an issue in MVC. But as I have worked through this, I find that having a way to specify both HTML Views *and* JSON and XML results from a single controller certainly is appealing to me in many situations, as in this particular application we are returning identical data models for each of these operations. Rendering content-negotiated views is something that I hope ASP.NET vNext will provide natively in the combined MVC and Web API model, but we'll see how this actually will be implemented. In the meantime, having a custom ActionResult that provides this functionality is a workable and easily adaptable way of handling this going forward. Whatever ends up happening in ASP.NET vNext, the abstraction can probably be changed to support the native features of the future. Anyway, I hope some of you found this useful, if not for direct integration then as insight into some of the rendering logic that MVC uses to get output into the HTTP stream…

Related Resources
Latest Version of NegotiatedResult.cs on GitHub
Understanding Action Controllers
Rendering ASP.NET Views To String

© Rick Strahl, West Wind Technologies, 2005-2014
Posted in MVC  ASP.NET  HTTP
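For completeness, here is a way to exercise such an endpoint from the client side; this is an editorial example, not from the post, and it assumes the GetCustomers action above is routed at /Customers. Any HTTP client that sets the Accept header will trigger the negotiation:

using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

public static class NegotiationDemo
{
    public static async Task ShowBothFormatsAsync()
    {
        using (var client = new HttpClient { BaseAddress = new Uri("http://localhost/") })
        {
            // Ask for JSON first...
            client.DefaultRequestHeaders.Accept.Clear();
            client.DefaultRequestHeaders.Accept.Add(
                new MediaTypeWithQualityHeaderValue("application/json"));
            Console.WriteLine(await client.GetStringAsync("Customers"));

            // ...then request the same resource as XML.
            client.DefaultRequestHeaders.Accept.Clear();
            client.DefaultRequestHeaders.Accept.Add(
                new MediaTypeWithQualityHeaderValue("text/xml"));
            Console.WriteLine(await client.GetStringAsync("Customers"));
        }
    }
}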


  • CodePlex Daily Summary for Friday, March 09, 2012

Popular Releases

SSH.NET Library: 2012.3.9: There are still a few outstanding issues I wanted to include in this release, but since it's been a while and there are a few new features already, I decided to create a new release now. New features: SOCKS4, SOCKS5 and HTTP proxy support when connecting to a remote server (for Silverlight, only an IP address can be used for the server address when using a proxy); dynamic port forwarding support using the ForwardedPortDynamic class; a new ShellStream class to work with the SSH shell; support for mu...

fnr.exe - Find And Replace Tool: 1.0: You can read all about the new features here. Summary: preview matches; stats; file errors for read/write; support for regular expressions; fixed a bug that required you to press Enter to continue after running fnr.exe from the command line; context menu to display the containing folder or open the file; double-click on a results row to open the file (similar to double-clicking in Windows Explorer); binary detection - skips files that are binary.

Test Case Import Utilities for Visual Studio 2010 and Visual Studio 11 Beta: V1.2 RTM: This release (V1.2 RTM) includes: support for connecting to Hosted Team Foundation Server Preview; support for connecting to Team Foundation Server 11 Beta; a fix to the issue with the read-only attribute being set for LinksMapping-ReportFile, which may have led to problems when saving the report file; a fix to the issue with "related links" not being set properly in certain conditions; a fix to ensure that the tool works fine when the Excel file contains rich text data. Note: data is still imported in pl...

Audio Pitch & Shift: Audio Pitch And Shift 3.5.0: Module formats (mod, xm, it, etc.) support.

callisto: callisto 2.0.19: BUG FIX: the Autorun.load() function in scripting now has a sandboxed path (thanks Mikey!). BUG FIX: the UserObject.Name property now allows full 20-byte string replacements. FEATURE REQUEST: File.* script functions now allow file extensions.

DotNetNuke® Community Edition CMS: 06.01.04: Major highlights: fixed an issue with loading the splash page skin on the login, privacy and terms-of-use pages; fixed an issue when searching for words with special characters in them; fixed a redirection issue when the user does not have permission to access a resource; fixed an issue when clearing the cache using the ClearHostCache() function; fixed an issue when displaying the site structure in the link-to-page feature; fixed an issue when inline editing the title of modules; fixed an issue with ...

Mayhem: Mayhem Developer Preview: This is the developer preview of Mayhem. Enjoy!

Magelia WebStore Open-source Ecommerce software: Magelia WebStore 1.2: Medium-trust compliant, with lots of small changes for medium-trust compliance; full refactoring of user management; refactoring of Client (Magelia.WebStore.Client no longer references Magelia.WebStore.Services.Contract); refactoring of the category page (multi-parent categories added, copy-category feature added); refactoring of the catalog page (copy-catalog feature added); variant management improvements (ability to define a default variant for a variable product, ability to ord...

Delta Engine: Delta Engine Beta Preview v0.9.4: v0.9.4 is the release for February 2012, but it was delayed till 2012-03-07 until content generation worked much better for v0.9.4. The main improvements were done on the server side (content generation and improved build support for iOS and Android).
v0.9.4 is also the first version everyone can use to deploy their application onto all supported platforms; see Marketplace Licensing for details: http://deltaengine.net/Marketplace. Documentation for this version can be found at: http://help.de...

PDFsharp - A .NET library for processing PDF: PDFsharp and MigraDoc Foundation 1.32: PDFsharp and MigraDoc Foundation 1.32 is a stable version that fixes a few bugs that were found with version 1.31. Version 1.32 includes solutions for Visual Studio 2010 only (but it should be possible to add the project files to existing solutions for VS 2005 or VS 2008). Users of VS 2005 or VS 2008 can still download version 1.31 with the solutions for those versions, which allow them to easily try the samples that are included. While it may create smaller PDF files than version 1.30 because...

Terminals: Version 2.0 - Release: Changes since version 1.9a: new artwork; improved usability in the Organize Favorites window; improved usability of imports/exports and scans; a large number of fixes; improvements in single-instance mode. Compared to the November beta 4, this corrects: new application icons; no longer shows logon error codes; fixed the command-line arguments exception in single-instance mode; fixed detaching of tabs; improved usability in the detached window; fixed option settings for Capture Manager; fixed system tray noti...

AutoLoL: AutoLoL v2.1.5: Updated version of AutoLoL that works with the Fiora patch.

MFCMAPI: March 2012 Release: Build 15.0.0.1032. Full release notes at SGriffin's blog. If you just want to run MFCMAPI or MrMAPI, get the executables. If you want to debug them, get the symbol files and the source. The 64-bit builds will only work on a machine with Outlook 2010 64-bit installed. All other machines should use the 32-bit builds, regardless of the operating system. Facebook Badge

Simple Injector: Simple Injector v1.4.1: This release adds two small improvements to SimpleInjector.Extensions.dll. No changes have been made to the core library. New features and improvements in this release for SimpleInjector.Extensions.dll: the RegisterManyForOpenGeneric extension methods now accept non-generic decorators, as long as they implement the given open generic service type; GetTypesToRegister methods were added to the OpenGenericBatchRegistrationExtensions class, which allows customizing the behavior. Note that the...

SQL Scriptz Runner: Application: Scriptz Runner source code and application.

PowerGUI Visual Studio Extension: PowerGUI VSX 1.5.2: Added support for PowerGUI 3.2.

VidCoder: 1.3.1: Updated the HandBrake core to the 0.9.6 release (svn 4472). Removed the erroneous "None" container choice. Changed some logic and help text to stop assuming you have to pick the VIDEO_TS folder for a DVD scan. This should make previewing DVD titles on the Queue Multiple Titles window possible when you've picked the root DVD directory.

Google Books Downloader for Windows: Google Books Downloader: Google Books Downloader 1.8.

ExtAspNet: ExtAspNet v3.1.0: ExtAspNet - a set of professional ASP.NET 2.0 controls based on ExtJS, with built-in AJAX support. ExtAspNet aims to provide ASP.NET controls that require no hand-written JavaScript, no CSS, no UpdatePanel, no ViewState and no WebServices. Supported browsers: IE 7.0, Firefox 3.6, Chrome 3.0, Opera 10.5, Safari 3.0+. License: Apache License 2.0 (Apache). Site: http://extasp.net/ Forum: http://bbs.extasp.net/ Project: http://extaspnet.codeplex.com/ Blog: http://sanshi.cnblogs.com/ Changelog: +2012-03-04 v3.1.0 ...

AcDown Anime & Comic Downloader: AcDown v3.9.1:
AcDown is a lightweight (about 1 MB) downloader for anime and comic sites, supporting Acfun, Bilibili, YouTube, SF and a number of other sites, and it integrates with the AcPlay player. It is written in C# and requires .NET Framework 2.0; 32-bit and 64-bit Windows XP/Vista/7/8 are supported (note: on Windows XP, install .NET Framework 2.0 (x86) first). ...

New Projects

Angry Birds in 1 Hour: This is a simple "Angry Birds" clone on Windows Phone 7, written in just one hour.

ascent: Ascent Capture, a capture product.

ascentexpress: ascentexpress

ASP.NET MVVM Excalibur: The ASP.NET MVVM Excalibur project. This is Web Forms based and has a new binding expression similar to WPF MVVM.

Azure Virtual Directory: A program (or Windows service) that registers a virtual directory on your local machine that is actually a gateway into an Azure Blob service. This allows you to browse, create, modify and delete files directly in Windows Explorer, through a command prompt, or with any software able to do so (as if it were writing to the local machine). This is not a directory that backs up to Azure, but rather one that exists *only* on Azure. Developed in C#.

ClipFlair: ClipFlair - Foreign Language Learning through Interactive Revoicing and Captioning of Clips.

CloudSpotter: CloudSpotter is a Windows Azure sample application that can be used for demo purposes or for learning the basic concepts of cloud application development. CloudSpotter makes it possible to convert webcam-based cloud pictures into time-lapse video footage.

Composing Wcf: A basic library providing a service host and service behavior capable of utilizing MEF for runtime composition of WCF SOAP and REST web services. The library provides composing hosts and host factories for the standard ServiceHost types, as well as WebServiceHost (RESTful).

convert digit to word upto thousand: Converts digits to words, up to thousands.

DAL Generator using Database Application Block 5 and T4 Template: T4 template code for generating a data access layer for normal CRUD operations using the Repository pattern. Database Application Block 5 features are used for generating database calls and automatic mapping with DTOs.

euler 12 problem: euler 12 problem
euler 14 problem: euler 14 problem
euler 19 problem: euler 19 problem
euler 28: euler 28
euler 30: euler 30
euler 36 problem: euler 36 problem
euler 45: euler 45 problem
euler 52 problem: euler 52 problem
euler21: euler 21
euler22: euler 22 problem
euler23: euler 23
euler29: euler 29 problem

eVet: eVet is a guidance project based on the fictional scenario of a veterinary system used to monitor pets' medical history. It will be based on Azure and leverage the worker role and SQL Azure database to illustrate a multi-tenant cloud-based pet management system. The ORM layer will be NHibernate and it will be based on the repository design pattern. If you want to help and learn Azure at the same time, I am looking for: designers (CSS3, HTML 5, JavaScript) and web developers (ASP.NET ...

Firemap: Generates an HTML page which displays key performance statistics of chosen computers.

FolderHiderNet: FolderHiderNet is a simple application developed in C# that lets users easily hide and unhide folders on their Windows systems. It can be used on USB devices.

Game of Life for Windows Phone: This is an XNA implementation of Conway's Game of Life for Windows Phone. The game is a grid of cells that live and die based on a simple set of rules.
The player can arrange the live and dead cells and start/stop the cell generation to see how the cells are interrelated. Features: save and load games; start and stop generations; adjust generation speed; clear the grid; generate a random grid; sample shapes preloaded as saved games. For more information on Conway's Game of Life... (a minimal sketch of the generation rule appears right after this list).

Gamoliyas: Gamoliyas is an open-source John Conway's Game of Life game totally written in DHTML (JavaScript, CSS and HTML). Uses mouse and keyboard. Very configurable. This cross-platform and cross-browser game was tested under BeOS, Linux, *BSD, Windows and others.

Gembed: Transforms a URL into embed code using JavaScript. It is developed using jQuery, jQuery templates and JavaScript. Any contribution would be really appreciated.

GIFT: gift app

ImageLoader iOS: ImageLoader is developed for iOS and can be used on iPhone and iPad. It tries to make it easy for applications to support image downloading and caching. It downloads image files from URLs (it depends on ASIHttpRequest) and caches the images in local files.

MakkysStackOverflow: Learning how to build a stackoverflow-like site.

mysimpleproject: This is my test project.

NWN Hak Merging Utility: A mostly automated hak-merging utility for NWN .hak files.

Oasis Text: Oasis Text is a simple, free text editor for Windows. It is written in C# and built with the ScintillaNET editing component. It is a work in progress and is free and open source software.

Opds4Net: A .NET library for the Open Publication Distribution System (OPDS) Catalog protocol, a syndication format for electronic publications based on Atom. This project was created to simplify the process of creating an OPDS catalog in .NET and to standardize the resulting OPDS with the least effort.

Pratiques: A place to manage practices.

scooby: This is a scooby dooby doo project.

StaffKey: Study project. Allows launching a web server from a USB key.

SugataTools: SugataTools are the helper classes that I usually use in my projects.

testtom03082012hg05: testtom03082012hg05
testtom03082012tfs01: testtom03082012tfs01
testtom03082012tfs02: testtom03082012tfs02

TNTSerializer: A simple serializer which is faster than any other serializer, does not require ISerializable, uses generic cached reflection wrappers (fast), should serialize any structure with no special markup required, can handle common attributes, and handles optional parameters.

Wholemy.LinkedLists: Wholemy linked list implementations.

workApp: workApp

WPF Yahoo Stock API: A WPF application using PRISM & MVVM to display stock details using the Yahoo API (YPL).
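Two of the projects above implement Conway's Game of Life, whose generation rule is mentioned but not spelled out: a live cell survives with two or three live neighbors, and a dead cell becomes live with exactly three. A minimal C# sketch of one generation step (an editorial illustration, not code from either project):

// One generation step on a fixed-size grid (cells beyond the edge count as dead).
static bool[,] Step(bool[,] grid)
{
    int rows = grid.GetLength(0), cols = grid.GetLength(1);
    var next = new bool[rows, cols];

    for (int r = 0; r < rows; r++)
        for (int c = 0; c < cols; c++)
        {
            // Count the live neighbors among the eight surrounding cells.
            int live = 0;
            for (int dr = -1; dr <= 1; dr++)
                for (int dc = -1; dc <= 1; dc++)
                {
                    if (dr == 0 && dc == 0) continue;
                    int nr = r + dr, nc = c + dc;
                    if (nr >= 0 && nr < rows && nc >= 0 && nc < cols && grid[nr, nc])
                        live++;
                }

            // Survival with 2 or 3 neighbors; birth with exactly 3.
            next[r, c] = grid[r, c] ? (live == 2 || live == 3) : live == 3;
        }

    return next;
}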


  • apache2 doesn't start with location

    - by Geod24
I have a small domain which I use only for personal purposes. I'm the main user and have at most 3-4 users at the same time. I use apache2 with Passenger to serve Redmine. So I start with an empty apache2:

root@xxxxx:/home/# service apache2 start
[ ok ] Starting web server: apache2.
root@xxxxx:/home/# a2dissite
Your choices are:
Which site(s) do you want to disable (wildcards ok)?

Then I enable my site and restart (not reload) apache2:

root@xxxxx:/home/# a2ensite 200-redmine
Enabling site 200-redmine.
To activate the new configuration, you need to run:
  service apache2 reload
root@xxxxx:/home/# service apache2 restart
[FAIL] Restarting web server: apache2 failed!
[warn] The apache2 instance did not start within 20 seconds. Please read the log files to discover problems ... (warning).
root@xxxxx:/home/# service apache2 restart
[FAIL] Restarting web server: apache2 failed!
[warn] There are processes named 'apache2' running which do not match your pid file which are left untouched in the name of safety, Please review the situation by hand. ... (warning).
root@xxxxx:/home/# pidof apache2
20948

Here's my 200-redmine.conf:

PerlLoadModule Apache::Redmine
<VirtualHost *:80>
    ServerName redmine.xxxxx.xxx
    DocumentRoot /var/www/redmine/public/
    ErrorLog ${APACHE_LOG_DIR}/redmine.error.log
    CustomLog ${APACHE_LOG_DIR}/redmine.access.log common
    MaxRequestLen 20971520
    <Directory "/var/www/redmine/public/">
        Options Indexes ExecCGI FollowSymLinks
        Order allow,deny
        Allow from all
        AllowOverride all
    </Directory>
    SetEnv GIT_PROJECT_ROOT /opt/git/
    SetEnv GIT_HTTP_EXPORT_ALL
    ScriptAlias /git/ /usr/lib/git-core/git-http-backend/
    <Location /git>
        PerlAuthenHandler Apache::Authn::Redmine::authen_handler
        PerlAccessHandler Apache::Authn::Redmine::access_handler
        AuthType Basic
        Require valid-user
        AuthName "Redmine Git Repository"
        RedmineDSN "DBI:mysql:database=redmine;host=localhost:3306"
        RedmineDbUser "redmine"
        RedmineDbPass "password"
        RedmineCacheCredsMax 50
    </Location>
</VirtualHost>

Now if I comment out the ScriptAlias and Location stuff, it works! In addition, starting the server with 200-redmine disabled and then enabling it works, but apache2 will die randomly, and the Location block doesn't work. The logs show nothing:
The logs show nothing: root@xxxxx:/home/# ll /var/log/apache2/ total 8 drwxr-xr-x 2 root root 4096 Oct 30 07:52 coredump -rw-r--r-- 1 root root 0 Nov 4 02:39 default.access.log -rw-r--r-- 1 root root 2356 Nov 4 02:39 default.error.log -rw-r--r-- 1 root root 0 Nov 4 02:39 other_vhosts_access.log -rw-r--r-- 1 root root 0 Nov 4 02:39 redmine.access.log -rw-r--r-- 1 root root 0 Nov 4 02:39 redmine.error.log root@xxxxx:/home/# ll /var/log/apache2/coredump/ total 0 root@xxxxx:/home/# cat /var/log/apache2/default.error.log [ 2013-11-04 02:39:36.0130 21471/7fcf090f4740 agents/Watchdog/Main.cpp:452 ]: Options: { 'analytics_log_user' => 'nobody', 'default_group' => 'nogroup', 'default_python' => 'python', 'default_ruby' => '/usr/bin/ruby', 'default_user' => 'nobody', 'log_level' => '0', 'max_instances_per_app' => '0', 'max_pool_size' => '6', 'passenger_root' => '/usr/lib/ruby/vendor_ruby/phusion_passenger/locations.ini', 'pool_idle_time' => '300', 'temp_dir' => '/tmp', 'union_station_gateway_address' => 'gateway.unionstationapp.com', 'union_station_gateway_port' => '443', 'user_switching' => 'true', 'web_server_pid' => '21470', 'web_server_type' => 'apache', 'web_server_worker_gid' => '33', 'web_server_worker_uid' => '33' } [ 2013-11-04 02:39:36.0255 21474/7f9a99fda740 agents/HelperAgent/Main.cpp:597 ]: PassengerHelperAgent online, listening at unix:/tmp/passenger.1.0.21470/generation-0/request [ 2013-11-04 02:39:36.0507 21479/7f8316b0f740 agents/LoggingAgent/Main.cpp:330 ]: PassengerLoggingAgent online, listening at unix:/tmp/passenger.1.0.21470/generation-0/logging [ 2013-11-04 02:39:36.0511 21471/7fcf090f4740 agents/Watchdog/Main.cpp:635 ]: All Phusion Passenger agents started! [ 2013-11-04 02:39:36.3158 21495/7fba6f686740 agents/Watchdog/Main.cpp:452 ]: Options: { 'analytics_log_user' => 'nobody', 'default_group' => 'nogroup', 'default_python' => 'python', 'default_ruby' => '/usr/bin/ruby', 'default_user' => 'nobody', 'log_level' => '0', 'max_instances_per_app' => '0', 'max_pool_size' => '6', 'passenger_root' => '/usr/lib/ruby/vendor_ruby/phusion_passenger/locations.ini', 'pool_idle_time' => '300', 'temp_dir' => '/tmp', 'union_station_gateway_address' => 'gateway.unionstationapp.com', 'union_station_gateway_port' => '443', 'user_switching' => 'true', 'web_server_pid' => '21491', 'web_server_type' => 'apache', 'web_server_worker_gid' => '33', 'web_server_worker_uid' => '33' } [ 2013-11-04 02:39:36.3304 21498/7f0106d9b740 agents/HelperAgent/Main.cpp:597 ]: PassengerHelperAgent online, listening at unix:/tmp/passenger.1.0.21491/generation-0/request [ 2013-11-04 02:39:36.3522 21503/7f92ad392740 agents/LoggingAgent/Main.cpp:330 ]: PassengerLoggingAgent online, listening at unix:/tmp/passenger.1.0.21491/generation-0/logging [ 2013-11-04 02:39:36.3525 21495/7fba6f686740 agents/Watchdog/Main.cpp:635 ]: All Phusion Passenger agents started! 
And finally:

root@xxxxx:/home/# apache2ctl -t -D DUMP_VHOSTS
VirtualHost configuration:
*:80 is a NameVirtualHost
    default server redmine.xxxx.xxx (/etc/apache2/sites-enabled/200-redmine.conf:5)
    port 80 namevhost redmine.xxxx.xxx (/etc/apache2/sites-enabled/200-redmine.conf:5)
    port 80 namevhost redmine.xxxxx.xxx (/etc/apache2/sites-enabled/200-redmine.conf:5)
root@xxxxx:/home/# uname -a
Linux xxxx.xxx 3.2.0-4-amd64 #1 SMP Debian 3.2.51-1 x86_64 GNU/Linux
root@xxxxx:/home/# dpkg --list | grep apache2
ii apache2                    2.4.6-3                      amd64  Apache HTTP Server
ii apache2-bin                2.4.6-3                      amd64  Apache HTTP Server (binary files and modules)
ii apache2-data               2.4.6-3                      all    Apache HTTP Server (common files)
ii apache2-utils              2.4.6-3                      amd64  Apache HTTP Server (utility programs for web servers)
ii libapache2-mod-fcgid       1:2.3.9-1                    amd64  FastCGI interface module for Apache 2
ii libapache2-mod-passenger   4.0.10-1                     amd64  Rails and Rack support for Apache2
ii libapache2-mod-perl2       2.0.8+httpd24-r1449661-6+b1  amd64  Integration of perl with the Apache2 web server
ii libapache2-mod-perl2-dev   2.0.8+httpd24-r1449661-6     all    Integration of perl with the Apache2 web server - development files
ii libapache2-mod-perl2-doc   2.0.8+httpd24-r1449661-6     all    Integration of perl with the Apache2 web server - documentation
ii libapache2-mod-proxy-html  1:2.4.6-3                    amd64  Transitional package for apache2-bin
ii libapache2-mod-svn         1.7.13-2                     amd64  Apache Subversion server modules for Apache httpd
ii libapache2-reload-perl     0.12-2                       all    module for reloading Perl modules when changed on disk
ii libapache2-svn             1.7.13-2                     all    Apache Subversion server modules for Apache httpd (dummy package)
root@xxxxx:/home/# a2dismod
Your choices are: access_compat alias auth_basic authn_core authn_file authz_core authz_host authz_svn authz_user autoindex dav dav_svn deflate dir env fcgid filter mime mpm_event negotiation passenger perl proxy proxy_http rewrite setenvif status
Which module(s) do you want to disable (wildcards ok)?
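A couple of hedged notes, added editorially rather than taken from the question. First, the dpkg output shows Apache 2.4.6 while the vhost uses 2.2-style access control (Order/Allow), which only keeps working through mod_access_compat; the native 2.4 form of the Directory block would be:

    <Directory "/var/www/redmine/public/">
        Options Indexes ExecCGI FollowSymLinks
        AllowOverride all
        Require all granted
    </Directory>

Second, when a restart hangs like this, Passenger's agent processes are a common reason old apache2 processes stay behind, so it is worth checking what is still running before restarting:

    ps aux | grep -E 'apache2|Passenger'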

